US20120281759A1 - Power efficient motion estimation techniques for video encoding - Google Patents

Power efficient motion estimation techniques for video encoding

Info

Publication number
US20120281759A1
Authority
US
United States
Prior art keywords
motion
block
reference frame
motion vector
slice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/001,037
Inventor
Lidong Xu
Yi-Jen Chiu
Hong Jiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of US20120281759A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: … using predictive coding
    • H04N 19/503: … using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/53: Multi-resolution motion estimation; Hierarchical motion estimation
    • H04N 19/10: … using adaptive coding
    • H04N 19/102: … characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/109: Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N 19/169: … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: … the coding unit being an image region, e.g. an object
    • H04N 19/174: … the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N 19/176: … the region being a block, e.g. a macroblock
    • H04N 19/187: … the coding unit being a scalable video layer
    • H04N 19/30: … using hierarchical techniques, e.g. scalability
    • H04N 19/33: … using hierarchical techniques, e.g. scalability in the spatial domain
    • H04N 19/37: … using hierarchical techniques with arrangements for assigning different transmission priorities to video input data or to video coded data
    • H04N 19/57: Motion estimation characterised by a search window with variable size or shape
    • H04N 19/573: Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N 19/60: … using transform coding
    • H04N 19/61: … using transform coding in combination with predictive coding

Definitions

  • Motion estimation (ME) in video coding may be used to improve video compression performance by removing or reducing temporal redundancy among video frames.
  • Traditional motion estimation may be performed at an encoder within a specified search window in reference frames. This may allow determination of a motion vector that minimizes the sum of absolute differences (SAD) between the input block and a reference block in a reference frame.
  • the motion vector (MV) information can then be transmitted to a decoder for motion compensation.
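As a concrete illustration of the search just described, a minimal exhaustive (full-search) block matcher can be sketched as follows. The function name, argument layout, and use of plain nested lists for frames are illustrative assumptions, not part of the patent:

```python
def full_search(cur, ref, bx, by, M, N, W, H):
    """Exhaustive motion search: find the motion vector (dx, dy) inside a
    W x H search window centered on the block position that minimizes the
    sum of absolute differences (SAD).  `cur` and `ref` are 2D lists of
    pixel values; (bx, by) is the top-left corner of the current M x N block.
    Hypothetical helper, for illustration only."""
    rows, cols = len(ref), len(ref[0])
    best_mv, best_sad = None, float("inf")
    for dy in range(-H // 2, H // 2 + 1):
        for dx in range(-W // 2, W // 2 + 1):
            rx, ry = bx + dx, by + dy
            if rx < 0 or ry < 0 or rx + M > cols or ry + N > rows:
                continue  # candidate block falls outside the reference frame
            cost = sum(abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
                       for j in range(N) for i in range(M))
            if cost < best_sad:
                best_mv, best_sad = (dx, dy), cost
    return best_mv, best_sad
```

For a 48×40 window this evaluates up to 49×41 candidate positions per block, which is why larger windows are costly in computation and power.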
  • FIG. 4 depicts a flow diagram of a manner to determine whether to use HME or MRME to determine motion vectors for blocks.
  • Known solutions may use only one reference frame instead of multiple reference motion estimation (MRME) to reduce coding complexity, or use hierarchical motion estimation (HME) to deal with high motion when the search window is very limited. It is known that using only one reference frame may not be sufficient to achieve high coding gain, and that HME may not be efficient for some blocks of the pictures.
  • ME acquires the predictions of a current encoding block from reference frames.
  • ME includes a motion search within a search window centered by the search center in reference frames.
  • higher coding gain may be achieved when a larger search window is used.
  • However, using a larger search window greatly increases the encoding complexity, which may be power inefficient.
  • a search window may be very small due to the very limited on-chip memory size.
  • Hierarchical motion estimation (HME) can be utilized to extend the search range of a small window by downscaling reference frames and applying the small search window at the coarse level. But for blocks with small inter-frame motion, the small search window may already be sufficient, and as a result HME may waste power on small inter-frame motion blocks.
  • multiple reference motion estimation can be applied for blocks with small inter-frame motion so that motion search will be performed on multiple available reference frames to determine the prediction for the current encoding block.
  • the reference index will be transmitted to the decoder side for motion compensation.
  • motion search on the nearest reference frame is enough and may save power associated with searching multiple frames.
  • Control can be based on slice level control and/or macroblock (MB) level control.
  • a slice is one or more contiguous macroblocks. The order of macroblocks within a slice can be from left-to-right and top-to-bottom.
  • The MBs can be divided into three categories: high motion MBs, low motion MBs, and other MBs. If there are too many high motion MBs in a current slice, experimental results show that MRME does not benefit the coding gain of the whole slice.
  • In slice level control, it is decided to use only one reference frame to encode blocks of a current slice when there are too many high motion MBs in the current slice.
  • other numbers of reference frames can be used such as two or three. Setting a number of reference frames to one allows the encoder to look for a motion vector in a single reference frame.
  • the selected reference frame can be the closest frame in time (either forward or backward) or a frame that is similar to the current frame but not closest in time (e.g., harmonic motion).
  • the reference index is not encoded into the bitstream because the decoder is aware of the reference frame to use when no reference index is provided.
  • motion search on a single reference frame may be sufficient, and as a result, motion estimation using multiple reference frames may waste power for these blocks. Not encoding the reference index may save bits and may improve the coding gain.
  • the number of reference frames for this slice can be set to any number such as a maximum allowed reference frames.
  • H.264 allows up to 16 reference frames.
  • In MB level control, it is decided whether to perform MRME or HME for an MB and its subblocks, based on the motion vector (MV) of the MB determined using HME. If the MB is high motion, HME is performed for the MB and its subblocks. If the MB is low motion, MRME is performed for the MB and its subblocks; but if the low motion MB is in a high motion slice, then one reference frame can be used to encode the low motion MB and its subblocks. If the MB is neither high motion nor low motion, HME, MRME, or a combination of the two can be applied to the MB and its subblocks. A combination of HME and MRME means an extended search range in multiple available reference frames.
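The MB level decision above can be sketched as a small dispatch function. The threshold names (th_high, th_low), the returned (number of reference frames, use-HME flag) tuple, and the treatment of the "neither" case as a combination are assumptions for illustration:

```python
def mb_level_control(cmv, th_high, th_low, slice_num_ref, max_ref):
    """Decide HME vs. MRME for one macroblock from its coarse-level motion
    vector cmv = (mv_x, mv_y).  Returns (mb_num_ref, use_hme).
    Assumes th_low <= th_high; all names are illustrative."""
    mv_x, mv_y = cmv
    if abs(mv_x) > th_high or abs(mv_y) > th_high:
        return 1, True                      # high motion: HME, one reference
    if abs(mv_x) <= th_low and abs(mv_y) <= th_low:
        # low motion: MRME, unless the slice was limited to one reference
        return (1 if slice_num_ref == 1 else max_ref), False
    # neither high nor low: combination (extended range, multiple references)
    return slice_num_ref, True
```

The "combination" branch here simply keeps the slice-level reference count while enabling the extended HME search range, one plausible reading of the patent's wording.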
  • MRME applies motion search on multiple available reference frames to acquire prediction for a current encoding block and the reference index will be transmitted to the decoder side for correct motion compensation.
  • a reference picture buffer stores decoded reference pictures. Those reference frames could be selected as reference frames in MRME.
  • a reference picture can be chosen based on the closest temporal distance or picture-level similarity measurement. Multiple reference motion estimation may increase the encoder complexity.
  • Adaptive control of HME and MRME for encoding the current block can greatly reduce coding power while also achieving a high coding efficiency gain.
  • FIG. 1 depicts an example of hierarchical motion estimation (HME).
  • a current frame and a reference frame are downscaled to coarse level frames by a factor of S.
  • The width and height of the downscaled frames are 1/S of the original dimensions, so a downscaled frame has 1/(S*S) as many pixels as the original frame.
  • the original frames are called fine level frames.
  • S may be set to 1.5, 1.6, 2, 4, 8, or other values.
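As a sketch of the downscaling step, assuming an integer factor S and simple block averaging (the patent does not prescribe a resampling filter, and S may be fractional in practice):

```python
def downscale(frame, S):
    """Downscale a frame by an integer factor S with block averaging, so
    the coarse frame is 1/S of the original in each dimension.  Simple
    illustrative filter; real encoders may use better ones."""
    rows, cols = len(frame) // S, len(frame[0]) // S
    return [[sum(frame[y * S + j][x * S + i]
                 for j in range(S) for i in range(S)) // (S * S)
             for x in range(cols)]
            for y in range(rows)]
```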
  • Motion estimation can be performed for each M ⁇ N block in the downscaled, coarse level current frame. Motion estimation is performed within the small search window having a size of W ⁇ H in the downscaled, coarse level reference frame.
  • the upscaled motion vector (mvx*S, mvy*S) is used as the search center and motion search is performed on fine level frames within the W ⁇ H search window to find a fine level motion vector.
  • the upscaled MV (mvx*S, mvy*S) can be used as the search center for the pixels in a (M*S) ⁇ (N*S) block in a fine level current frame.
  • the largest motion that can be searched is W/2 in the positive/negative horizontal direction and H/2 in positive/negative vertical direction.
  • With HME, the largest motion that can be searched is (S+1)W/2 in the horizontal direction and (S+1)H/2 in the vertical direction.
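The range arithmetic can be checked numerically. The helper below is a hypothetical illustration of the (S+1)W/2 formula, combining the coarse-level reach (S*W/2 after upscaling) with the fine-level refinement (W/2):

```python
def hme_search_range(W, H, S):
    """Maximum displacement reachable by two-level HME: the coarse search
    covers +/- W/2 coarse pixels, i.e. +/- S*W/2 fine pixels after
    upscaling, and the fine refinement adds another +/- W/2."""
    return ((S + 1) * W // 2, (S + 1) * H // 2)
```

With, e.g., W=48, H=40 and S=4, a single-level search reaches only +/-24 pixels horizontally, while HME reaches +/-120.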
  • the downscaled frame can be downscaled more than once.
  • the search window can be applied to one or more of the downscaled pictures.
  • motion estimation can be performed in multiple reference frames and the reference index and motion vectors of the best predictions are transmitted to the decoder for motion compensation.
  • the number of maximum allowed reference frames can be signaled in a slice header. If the maximum allowed reference is one, then the reference index for each block may not be encoded if the decoder knows that the reference index is 0 for those blocks. Otherwise, the reference index is encoded.
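The signaling rule above can be expressed as a tiny helper; the function name and the None-for-inferred convention are assumptions for illustration:

```python
def ref_idx_to_encode(slice_num_ref, ref_idx):
    """Whether the per-block reference index must be written to the
    bitstream: when only one reference frame is allowed for the slice,
    the decoder can infer index 0 and no index bits are sent."""
    if slice_num_ref == 1:
        return None          # not encoded; decoder assumes reference index 0
    return ref_idx           # otherwise the index is written to the bitstream
```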
  • MRME may reduce the prediction errors, but encoding bits for the reference index is non-negligible overhead. MRME may use more overhead than HME because it must inform the decoder which reference frames were used in encoding.
  • An adaptive control mechanism can be used to adaptively decide whether to use HME or MRME to encode a block. Adaptive control can greatly reduce the coding complexity and improve the coding gain. For example, if the true block motion is very small, performing general ME within a W×H search window on the fine level may be enough to obtain a motion vector. In such a case, HME is not necessary for these small motion blocks and MRME can be used instead.
  • FIG. 2 shows the block diagram of a video encoding system.
  • Adaptive ME Control block informs Motion Estimation block to perform MRME or to use motion vectors from Coarse Level ME block based on whether a slice or macro block is high or low motion.
  • The Motion Estimation block supports MRME and traditional fine level motion search around a search center given by the predicted MV (pmv), within a W×H search window.
  • Coarse Level ME block determines motion vectors based on HME.
  • HME can be utilized for large motions and adaptive control of HME and MRME is applied to reduce the coding complexity.
  • The total number of macro blocks in the input slice is total_MBs and there are N high motion macroblocks in the input slice. If N/total_MBs is larger than a predefined threshold TH_0, the input slice is regarded as a high motion slice.
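The slice classification just described can be sketched as follows, with illustrative names (coarse_mvs holds one coarse-level motion vector per macroblock; th_high is the high-motion threshold):

```python
def slice_level_control(coarse_mvs, th0, th_high, num_available_refs):
    """Slice level control: if the fraction of high motion MBs exceeds
    TH_0, restrict the whole slice to a single reference frame; otherwise
    allow all M available reference frames.  Names are illustrative."""
    n_high = sum(1 for mv_x, mv_y in coarse_mvs
                 if abs(mv_x) > th_high or abs(mv_y) > th_high)
    return 1 if n_high / len(coarse_mvs) > th0 else num_available_refs
```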
  • MRME may not provide an appreciable gain to the video coding system.
  • the bit overhead for encoding a reference index may even degrade the coding gain.
  • The ME is limited to the reference frame nearest in time or in picture-level similarity, and in the slice header the allowed number of reference frames is signaled as 1.
  • the reference index will not be encoded, thereby saving bandwidth.
  • the reference index is transmitted with a slice to a decoder or could be based on a common understanding between the encoder and decoder so no index is transmitted.
  • the number of reference frames used to determine motion vectors for blocks in the slice, slice_num_ref can be set to M, where M is the number of available frames in the reference buffer of the encoder.
  • M is the number of available frames in the reference buffer of the encoder.
  • the maximum allowed reference frames is signaled in a sequence parameter header.
  • FIG. 3 shows a flow diagram of a manner to determine whether to use HME or MRME to determine motion vectors for blocks of an input slice of a P or B picture.
  • Block 302 includes performing HME on the slice to determine coarse motion vectors.
  • HME techniques described with regard to FIG. 1 can be used to determine motion vectors for each of the macro blocks in a slice.
  • An input slice is downscaled to a coarse level slice and the downscaled coarse level reference frames are available in the reference buffer.
  • motion estimation is performed within a W ⁇ H search window to acquire its motion vector (mvx(i), mvy(i)).
  • the block can be 4 ⁇ 4 and the search window can be 48 ⁇ 40.
  • This motion vector can be used as a search center for the pixels in the corresponding (M*S) ⁇ (N*S) block, e.g., 16 ⁇ 16 block, in the frame of original dimensions.
  • Block 304 includes determining a number N of high motion macro blocks in the slice.
  • Block 306 includes determining if a slice is a high motion slice.
  • a high motion slice can be one where a certain number of macro blocks in a slice are high motion.
  • Block 306 can include determining whether N/total_MBs > TH_0, where total_MBs is a total number of macro blocks in a slice and TH_0 is a threshold value. If the slice is not a high motion slice, then block 308 follows block 306. If the slice is a high motion slice, then block 310 follows block 306.
  • Block 308 includes setting a number of reference frames used to determine motion vectors for macro blocks in the current slice to M, the number of available frames in the reference buffer of the encoder.
  • Block 310 includes setting a number of reference frames used to determine motion vectors for macro blocks in the current slice to one.
  • the number of reference frames can be more than one.
  • Block 312 includes determining motion vectors for all MBs and subblocks in the slice in a manner described with regard to FIG. 5.
  • FIG. 4 depicts a flow diagram of a manner to determine whether to use HME or MRME to determine motion vectors for blocks based on macro blocks.
  • Block 402 includes determining whether a macro block is a high motion macro block.
  • For example, a macro block may be a high motion macro block if abs(mv_x)>TH_1 or abs(mv_y)>TH_1, where mv_x and mv_y represent motion vector components determined in a manner similar to that of block 302.
  • TH_2 is no bigger than TH_1.
  • Block 404 includes determining whether a macro block is a low motion macro block.
  • Block 406 includes setting a number of reference frames to one and indicating that the motion vector of the current macro block is to be determined using MRME.
  • Variable mb_num_ref can be used to represent a number of reference frames used to determine motion vectors of the current block.
  • Variable mb_HME_flag represents whether HME or MRME is used to determine motion vectors for the current block.
  • Block 408 includes setting a number of reference frames to the number of reference frames used for the slice of the current macro block and indicating that the motion vector of the current macro block is to be determined using MRME.
  • the number of reference frames used for the slice of the current macro block can be determined based on the flow diagram of FIG. 3 .
  • the number of reference frames could increase as long as power/complexity is not a concern.
  • Block 410 includes setting a number of reference frames to one and indicating that the motion vector of the current macro block is to be determined using HME. Accordingly, motion vectors mv_x and mv_y determined in a manner similar to that of block 302 can be used as the motion vectors for the current macro block.
  • Block 412 includes determining motion vectors for the current macro block and subblocks using either HME or MRME. Determination of motion vectors for the current macro block and subblocks can be made in a manner described with regard to FIG. 5 .
  • In some embodiments, motion vector determinations are made based on both slice level and macro block level control. In that case, the process of FIG. 3 precedes the process of FIG. 4, block 312 is not performed, and motion vectors are instead determined in block 412.
  • FIG. 5 depicts a manner of determining motion vectors for macro blocks.
  • Block 502 includes, in the nearest single reference frame, performing general ME within a W×H search window centered on the predicted motion vector (pmv) for a macro block and its subblocks. Determination of vector pmv is based on the well-known techniques described in ITU-T Series H: Audiovisual and Multimedia Systems: Infrastructure of audiovisual services, Coding of moving video: Advanced video coding for generic audiovisual services (March 2009), section 8.4, "Inter prediction process," which uses the median of the motion vectors of three macro blocks neighboring the macro block of interest.
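The core of that median prediction can be sketched as follows. This is a simplification: the standard's availability checks and reference-index special cases are omitted, and the function name is illustrative:

```python
def median_mv_prediction(mv_a, mv_b, mv_c):
    """Component-wise median of the motion vectors of three neighbouring
    macroblocks (left, top, top-right), the core of the H.264 median
    motion vector predictor (special cases omitted)."""
    def med(a, b, c):
        return sorted((a, b, c))[1]
    return (med(mv_a[0], mv_b[0], mv_c[0]),
            med(mv_a[1], mv_b[1], mv_c[1]))
```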
  • Block 504 includes determining whether the macro block is a high motion macro block. Determination of whether the macro block is high motion can be made in a manner similar to that in block 402 . If the macro block is a high motion macro block, then block 506 follows block 504 . If the macro block is not a high motion macro block, then block 508 follows block 504 .
  • Block 506 includes, in the nearest reference frame, applying ME within a W ⁇ H search window centered by the upscaled coarse level motion vector (cmv) for a macro block and its subblocks.
  • Vector cmv is derived from a coarse level motion vector.
  • Block 506 may determine a MV based on the ME centered from the upscaled cmv.
  • Block 508 includes determining whether the macro block is a low motion macro block. Determination of whether the macro block is low motion can be made in a manner similar to that in block 404 . If the macro block is a low motion macro block, then block 510 follows block 508 . If the macro block is not a low motion macro block, then block 512 follows block 508 .
  • Block 510 includes, in available reference frames (other than the nearest single reference frame), performing general ME within a W ⁇ H search window centered by the predicted motion vector (pmv) for a macro block and its subblocks.
  • The vector pmv is determined using the well-known H.264 section 8.4 "Inter prediction process." Note that vector pmv may be different for each of the reference frames.
  • Block 512 includes selecting motion vectors determined for the macro block based on the lowest rate-distortion cost.
  • Blocks 502 and 510 provide multiple predicted motion vectors and blocks 502 and 506 provide predicted motion vectors and a single coarse motion vector.
  • the motion vector and reference index with the minimum rate-distortion (RD) cost can be used to encode the current block.
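Selection by minimum rate-distortion cost is commonly a Lagrangian comparison of distortion plus weighted rate. The candidate tuple layout and the lambda weighting below are illustrative assumptions, not the patent's specification:

```python
def select_best_candidate(candidates, lam):
    """Return the candidate with minimum Lagrangian RD cost D + lam * R.
    Each candidate is (mv, ref_idx, distortion, rate_bits); this tuple
    layout is assumed for illustration."""
    return min(candidates, key=lambda c: c[2] + lam * c[3])
```

Note how the winner can flip with lambda: a candidate with lower distortion but a higher bit cost (e.g., an extra reference index) only wins when rate is weighted lightly.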
  • Adaptive control can be used to select which of the blocks is to be performed to reduce the ME complexity while maintaining the coding gain. For example, if the current MB is a high motion MB, block 506 will also be performed for this MB and its subblocks and block 510 will be skipped for this MB and its subblocks. In addition, if only a single reference frame is to be used, block 510 will also be skipped for this MB and its subblocks. If the MB is a low motion MB, block 510 will be performed for this MB and its subblocks and block 506 will be skipped for this MB and its subblocks.
  • both blocks 506 and 510 will be skipped for this MB and its subblocks.
  • the ME complexity can be greatly reduced and the high coding gain can still be achieved.
  • FIG. 6 depicts a system in accordance with an embodiment.
  • System 600 may include host system 602 and display 622 .
  • Computer system 600 can be implemented in a handheld personal computer, mobile telephone, set top box, or any computing device.
  • Host system 602 may include chipset 605 , processor 610 , host memory 612 , storage 614 , graphics subsystem 615 , and radio 620 .
  • Chipset 605 may provide intercommunication among processor 610 , host memory 612 , storage 614 , graphics subsystem 615 , and radio 620 .
  • chipset 605 may include a storage adapter (not depicted) capable of providing intercommunication with storage 614 .
  • the storage adapter may be capable of communicating with storage 614 in conformance with any of the following protocols: Small Computer Systems Interface (SCSI), Fibre Channel (FC), and/or Serial Advanced Technology Attachment (S-ATA).
  • graphics subsystem 615 may perform encoding of video with motion vector and reference frame information for motion estimation based on techniques described herein. Encoded video can be transmitted from system 600 to a video decoder.
  • Processor 610 may be implemented as Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, multi-core, or any other microprocessor or central processing unit.

Abstract

Adaptive control can use hierarchical motion estimation (HME) and/or multiple reference motion estimation (MRME) for the motion estimation of current encoding blocks. Both HME and MRME are allowed in the motion estimation to achieve a high coding gain. Control consists of slice level control and macroblock (MB) level control. A slice is one or more contiguous macroblocks. In slice level control, whether to use only one reference frame or multiple reference frames to code the current slice is decided based on the motion vectors obtained in coarse level motion estimation. In MB level control, whether to perform MRME or HME for the MB and its subblocks is decided based on the coarse level motion vectors of the MB.

Description

    RELATED ART
  • H.264, also known as Advanced Video Coding (AVC) and MPEG-4 Part 10, is an ITU-T/ISO video compression standard that is expected to be widely adopted by the industry. The H.264 standard was prepared by the Joint Video Team (JVT), which consists of ITU-T SG16 Q.6, known as VCEG (Video Coding Experts Group), and ISO/IEC JTC1/SC29/WG11, known as MPEG (Moving Picture Experts Group). H.264 is designed for applications in the areas of Digital TV broadcast (DTV), Direct Broadcast Satellite (DBS) video, Digital Subscriber Line (DSL) video, Interactive Storage Media (ISM), Multimedia Messaging (MMM), Digital Terrestrial TV Broadcast (DTTB), and Remote Video Surveillance (RVS).
  • Motion estimation (ME) in video coding may be used to improve video compression performance by removing or reducing temporal redundancy among video frames. For encoding an input block, traditional motion estimation may be performed at an encoder within a specified search window in reference frames. This may allow determination of a motion vector that minimizes the sum of absolute differences (SAD) between the input block and a reference block in a reference frame. The motion vector (MV) information can then be transmitted to a decoder for motion compensation.
  • Where original input frames are not available at the decoder, ME at the decoder can be performed using the reconstructed reference frames. When encoding a predicted frame (P frame), there may be multiple reference frames in a forward reference buffer. When encoding a bi-predictive frame (B frame), there may be multiple reference frames in the forward reference buffer and at least one reference frame in a backward reference buffer. For B frame encoding, mirror ME or projective ME may be performed to determine the MV. For P frame encoding, projective ME may be performed to determine the MV.
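The SAD-based block-matching search described above can be illustrated with a minimal Python sketch. The function names, the list-of-lists frame representation, and the exhaustive full-search strategy are illustrative assumptions, not part of any standard; real encoders use fast search patterns and hardware SAD units.

```python
def sad(cur, ref, bx, by, rx, ry, bsize):
    """Sum of absolute differences between the bsize x bsize block at
    (bx, by) in the current frame and the block at (rx, ry) in the
    reference frame. Frames are 2-D lists of pixel values."""
    total = 0
    for j in range(bsize):
        for i in range(bsize):
            total += abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
    return total

def full_search(cur, ref, bx, by, bsize, search_range):
    """Exhaustive search within a +/-search_range window: return the
    motion vector (dx, dy) minimizing SAD, together with its cost."""
    h, w = len(ref), len(ref[0])
    best_mv, best_cost = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = bx + dx, by + dy
            if rx < 0 or ry < 0 or rx + bsize > w or ry + bsize > h:
                continue  # candidate block lies outside the reference frame
            cost = sad(cur, ref, bx, by, rx, ry, bsize)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```

The returned (dx, dy) is the motion vector that would be transmitted to the decoder for motion compensation.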
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts an example of hierarchical motion estimation (HME).
  • FIG. 2 shows the block diagram of a video encoding system.
  • FIG. 3 shows a flow diagram of a manner to determine whether to use HME or MRME to determine motion vectors for blocks of an input slice of a P or B picture.
  • FIG. 4 depicts a flow diagram of a manner to determine whether to use HME or MRME to determine motion vectors for blocks.
  • FIG. 5 depicts a manner of determining motion vectors for macro blocks.
  • FIG. 6 depicts a system in accordance with an embodiment.
  • DETAILED DESCRIPTION
  • Known solutions may use only one reference frame instead of multiple reference motion estimation (MRME) to reduce the coding complexity, or use hierarchical motion estimation (HME) to deal with high motion when the search window is very limited. Using only one reference frame may not be sufficient to achieve a high coding gain, and HME may not be efficient for some blocks of a picture.
  • ME acquires the predictions of a current encoding block from reference frames. ME includes a motion search within a search window centered on a search center in the reference frames. Generally, a higher coding gain may be achieved when a larger search window is used. On the other hand, a larger search window greatly increases the encoding complexity, which may be power inefficient. In addition, in a hardware implementation, the search window may be very small due to the limited on-chip memory size. In various embodiments, hierarchical motion estimation (HME) can be utilized to extend the search range of a small window by downscaling the reference frames and searching them with the small window. But for blocks with small inter-frame motion, the small search window may be sufficient, and as a result, HME may waste power on such blocks. To achieve a higher coding gain, multiple reference motion estimation (MRME) can instead be applied to blocks with small inter-frame motion, so that the motion search is performed over multiple available reference frames to determine the prediction for the current encoding block. The reference index is then transmitted to the decoder side for motion compensation. For some high motion blocks, a motion search on the nearest reference frame is sufficient, which saves the power associated with searching multiple frames.
  • Various embodiments adaptively control whether to use HME and/or MRME for encoding a current block, which can greatly save coding power while also achieving high coding gains. Control can be based on slice level control and/or macroblock (MB) level control. A slice is one or more contiguous macroblocks; the order of macroblocks within a slice can be from left to right and top to bottom. Based on motion vectors determined using HME, the MBs can be divided into three categories: high motion MBs, low motion MBs, and other MBs. Experimental results show that if there are too many high motion MBs in a current slice, MRME will not benefit the coding gain of the whole slice. In slice level control, it is decided to use only one reference frame to encode blocks of a current slice when there are too many high motion MBs in the current slice. However, other numbers of reference frames can be used, such as two or three. Setting the number of reference frames to one allows the encoder to look for a motion vector in a single reference frame. The selected reference frame can be the closest frame in time (either forward or backward) or a frame that is similar to the current frame but not closest in time (e.g., harmonic motion). For all blocks in this slice, the reference index is not encoded into the bitstream, because the decoder is aware of which reference frame to use when no reference index is provided. In addition, a motion search on a single reference frame may be sufficient, and as a result, motion estimation using multiple reference frames may waste power for these blocks. Not encoding the reference index may save bits and may improve the coding gain.
  • For a non-high-motion slice, the number of reference frames for the slice can be set to any number, such as the maximum number of allowed reference frames. For example, H.264 allows up to 16 reference frames.
  • In MB level control, it is decided whether to perform MRME or HME for an MB and its subblocks based on the motion vector (MV) of the MB determined using HME. If the MV determined using HME indicates the MB is high motion, HME is performed for the MB and its subblocks. If the MV indicates the MB is low motion, MRME is performed for the MB and its subblocks. But if the low motion MB is in a high motion slice, then one reference frame can be used to encode the low motion MB and its subblocks. If the MB is neither high motion nor low motion, HME, MRME, or a combination of HME and MRME can be applied to the MB and its subblocks. A combination of HME and MRME means an extended search range over multiple available reference frames.
  • MRME applies a motion search over multiple available reference frames to acquire the prediction for a current encoding block, and the reference index is transmitted to the decoder side for correct motion compensation. A reference picture buffer stores decoded reference pictures, which can be selected as reference frames in MRME. A reference picture can be chosen based on the closest temporal distance or a picture-level similarity measurement. Multiple reference motion estimation may increase the encoder complexity.
  • Adaptive control of HME and MRME for encoding the current block can greatly reduce the coding power while also achieving a high coding efficiency gain.
  • FIG. 1 depicts an example of hierarchical motion estimation (HME). Before motion estimation, a current frame and a reference frame are downscaled to coarse level frames by a factor of S. In other words, the width and height of the downscaled frames are each 1/S of the original dimensions, so the downscaled area is 1/(S*S) of the original frame size. The original frames are called fine level frames. Generally, S may be set to 1.5, 1.6, 2, 4, 8, or other values. Motion estimation can be performed for each M×N block in the downscaled, coarse level current frame, within a small search window of size W×H in the downscaled, coarse level reference frame. After the coarse level motion vector (mvx, mvy) is obtained, the upscaled motion vector (mvx*S, mvy*S) is used as the search center, and a motion search is performed on the fine level frames within a W×H search window to find a fine level motion vector. The upscaled MV (mvx*S, mvy*S) serves as the search center for the pixels of the corresponding (M*S)×(N*S) block in the fine level current frame. For a W×H search window, the largest motion that can be searched is W/2 in the positive/negative horizontal direction and H/2 in the positive/negative vertical direction. After HME, the largest motion that can be searched is (S+1)*W/2 in the horizontal direction and (S+1)*H/2 in the vertical direction. A frame can be downscaled more than once. For example, an original picture of 120×120 pixels can be downscaled with S=2 to 60×60 pixels and then downscaled again to 30×30 pixels. The search window can be applied to one or more of the downscaled pictures.
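The two-level coarse-then-fine procedure of FIG. 1 can be sketched in Python as follows. This is a simplified illustration under stated assumptions: the `downscale` helper uses plain average pooling (real encoders typically use polyphase filters), `_sad_search` is a naive SAD search, and all function names are hypothetical.

```python
def _sad_search(cur, ref, bx, by, bsize, cx, cy, win):
    """SAD block search over a (2*win+1)^2 window centered at offset (cx, cy)."""
    h, w = len(ref), len(ref[0])
    best_mv, best_cost = None, None
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            rx, ry = bx + cx + dx, by + cy + dy
            if rx < 0 or ry < 0 or rx + bsize > w or ry + bsize > h:
                continue  # candidate block lies outside the reference frame
            cost = sum(abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
                       for j in range(bsize) for i in range(bsize))
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (cx + dx, cy + dy)
    return best_mv, best_cost

def downscale(frame, s):
    """Average-pool by integer factor s: width and height each shrink by s."""
    h, w = len(frame) // s, len(frame[0]) // s
    return [[sum(frame[y*s + j][x*s + i] for j in range(s) for i in range(s)) // (s*s)
             for x in range(w)] for y in range(h)]

def hme(cur, ref, bx, by, bsize, win, s):
    """Two-level HME: search the downscaled frames first, then refine at
    full resolution around the upscaled coarse motion vector."""
    cur_c, ref_c = downscale(cur, s), downscale(ref, s)
    (cmx, cmy), _ = _sad_search(cur_c, ref_c, bx // s, by // s, bsize // s, 0, 0, win)
    # The upscaled coarse MV (cmx*s, cmy*s) becomes the fine level search center.
    return _sad_search(cur, ref, bx, by, bsize, cmx * s, cmy * s, win)
```

With window radius win at both levels, this reaches motions up to (S+1)*win per axis, matching the (S+1)*W/2 range stated above.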
  • HME can greatly reduce the ME complexity while achieving the same motion range. Complexity may refer to processor computational cycles in software or to the hardware gate count of a chip. Let the complexity of performing M×N block ME within a W×H search window be C. Then the complexity of HME for encoding the whole fine level frame is approximately K*C + S*S*K*C = (S*S+1)*K*C, where K is the number of M×N blocks at the coarse level. But if a search window of ((S+1)W)×((S+1)H) is used at the fine level, the ME complexity for encoding the whole fine level frame is approximately (S+1)*(S+1)*S*S*K*C, which is many times that of HME, because per-block search complexity scales with the search window area.
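The complexity comparison can be checked numerically. The sketch below assumes a unit cost C = 1 per M×N block search in a W×H window, with K chosen arbitrarily for illustration:

```python
def hme_complexity(K, S):
    # Coarse level ME on K blocks, plus fine level refinement on S*S*K
    # blocks, each within the same W x H window of unit cost C = 1.
    return K + S * S * K          # = (S*S + 1) * K * C

def full_window_complexity(K, S):
    # Fine level ME over S*S*K blocks, each within an ((S+1)W) x ((S+1)H)
    # window whose area is (S+1)^2 times that of W x H.
    return (S + 1) * (S + 1) * S * S * K

K, S = 100, 4
print(hme_complexity(K, S))          # 1700 block-search units
print(full_window_complexity(K, S))  # 40000 block-search units
```

For S = 4 the single-level large-window search costs roughly 23 times more than HME for the same motion range.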
  • In some embodiments, a flag mb_HME_flag=1 is set to tell the encoder to apply HME to determine a motion vector for the macro block.
  • In some advanced video coding standards, e.g., H.264/AVC, motion estimation can be performed over multiple reference frames, and the reference index and motion vectors of the best predictions are transmitted to the decoder for motion compensation. The maximum number of allowed reference frames can be signaled in a slice header. If the maximum allowed reference count is one, then the reference index for each block need not be encoded, because the decoder knows that the reference index is 0 for those blocks. Otherwise, the reference index is encoded. MRME may reduce the prediction errors, but the bits used to encode the reference index are a non-negligible overhead. MRME may incur more overhead than HME by informing the decoder which reference frames are used in encoding.
  • Experimental results show that HME has visible gains in coding efficiency when the inter-frame block motion is large, whereas MRME has visible gains in coding efficiency when the inter-frame block motion is relatively small. Accordingly, an adaptive control mechanism can be used to decide whether to use HME or MRME to encode a block. Adaptive control can greatly reduce the coding complexity and improve the coding gain. For example, if the true block motion is very small, performing general ME within a W×H search window at the fine level may be enough to obtain a motion vector. In that case, HME is not necessary for these small motion blocks and MRME can be used instead.
  • FIG. 2 shows the block diagram of a video encoding system. The Adaptive ME Control block informs the Motion Estimation block whether to perform MRME or to use motion vectors from the Coarse Level ME block, based on whether a slice or macroblock is high or low motion. The Motion Estimation block supports MRME as well as the traditional fine level motion search around a search center given by the predicted MV (pmv) within a W×H search window.
  • The Coarse Level ME block determines motion vectors based on HME. HME can be utilized for large motions, and adaptive control of HME and MRME is applied to reduce the coding complexity.
  • After performing coarse level ME for all M×N blocks in a coarse level slice, slice level control is performed based on the obtained coarse level motion vectors. For S=4 and an M×N=4×4 block, the corresponding block at the fine level is a 16×16 macroblock. A fine level MB can be defined as a high motion MB if its coarse level MV satisfies "abs(mv_x)>=TH1 ∥ abs(mv_y)>=TH1", where TH1 may be a predefined constant. Suppose that the total number of macroblocks in the input slice is total_MBs and there are N high motion macroblocks in the input slice. If N/total_MBs is larger than a predefined threshold TH0, the input slice is regarded as a high motion slice.
  • For a high motion slice, MRME may not provide an appreciable gain to the video coding system. In addition, the bit overhead for encoding a reference index may even degrade the coding gain. Accordingly, for a high motion slice, the ME is limited to the reference frame nearest in time or in picture-level similarity, and in the slice header the allowed number of reference frames is signaled to be 1. For all blocks in this slice, the reference index will not be encoded, thereby saving bandwidth. The reference frame can be identified by an index transmitted with the slice, or by a common understanding between the encoder and decoder so that no index is transmitted.
  • For a non-high-motion slice, the number of reference frames used to determine motion vectors for blocks in the slice, slice_num_ref, can be set to M, where M is the number of available frames in the reference buffer of the encoder. In H.264/AVC, the maximum number of allowed reference frames is signaled in the sequence parameter set.
  • FIG. 3 shows a flow diagram of a manner to determine whether to use HME or MRME to determine motion vectors for blocks of an input slice of a P or B picture.
  • Block 302 includes performing HME on the slice to determine coarse motion vectors. For example, HME techniques described with regard to FIG. 1 can be used to determine motion vectors for each of the macro blocks in a slice. An input slice is downscaled to a coarse level slice and the downscaled coarse level reference frames are available in the reference buffer. Then, for the ith M×N block in the current coarse level slice, motion estimation is performed within a W×H search window to acquire its motion vector (mvx(i), mvy(i)). For example, the block can be 4×4 and the search window can be 48×40. This motion vector can be used as a search center for the pixels in the corresponding (M*S)×(N*S) block, e.g., 16×16 block, in the frame of original dimensions.
  • Block 304 includes determining a number N of high motion macro blocks in the slice. An MB is regarded as a high motion MB if abs(mv_x)>=TH1 ∥ abs(mv_y)>=TH1, where ∥ represents a logical OR, mv_x is the scaled-up motion vector in the x direction, mv_y is the scaled-up motion vector in the y direction, and TH1 is a threshold.
  • Block 306 includes determining if a slice is a high motion slice. For example, a high motion slice can be one where a certain number of macro blocks in a slice are high motion. Block 306 can include determining whether N/total_MBs>TH0, where total_MBs is a total number of macro blocks in a slice and TH0 is a threshold value. If the slice is not a high motion slice, then block 308 follows block 306. If the slice is a high motion slice, then block 310 follows block 306.
  • Block 308 includes setting a number of reference frames used to determine motion vectors for macro blocks in the current slice to M, the number of available frames in the reference buffer of the encoder.
  • Block 310 includes setting a number of reference frames used to determine motion vectors for macro blocks in the current slice to one. However, the number of reference frames can be more than one.
  • Block 312 includes determining motion vectors for all MB and subblocks in the slice in a manner described with regard to FIG. 5.
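The slice level control of FIG. 3 (blocks 304 through 310) can be sketched as follows. The function names and the tuple representation of coarse motion vectors are illustrative assumptions; thresholds TH0 and TH1 are the predefined constants described above.

```python
def is_high_motion_slice(coarse_mvs, th1, th0):
    """Blocks 304/306: count high motion MBs from their coarse level motion
    vectors (scaled up by S) and flag the slice as high motion when their
    fraction of total_MBs exceeds TH0."""
    high = sum(1 for (mvx, mvy) in coarse_mvs
               if abs(mvx) >= th1 or abs(mvy) >= th1)
    return high / len(coarse_mvs) > th0

def slice_num_ref(coarse_mvs, th1, th0, available_refs):
    """Blocks 308/310: signal one reference frame for a high motion slice,
    or all M available reference frames otherwise."""
    return 1 if is_high_motion_slice(coarse_mvs, th1, th0) else available_refs
```

For a high motion slice the returned value of 1 also implies that no reference index needs to be encoded for the slice's blocks.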
  • FIG. 4 depicts a flow diagram of a manner to determine, on a per-macroblock basis, whether to use HME or MRME to determine motion vectors for blocks.
  • Block 402 includes determining whether a macro block is a high motion macro block. A macro block may be a high motion macro block if abs(mv_x)>TH2 and abs(mv_y)>TH2, where mv_x and mv_y represent motion vectors determined in a manner similar to that of block 302. Here, TH2 is no bigger than TH1.
  • Block 404 includes determining whether a macro block is a low motion macro block. A macro block may be a low motion macro block if abs(mv_x)<=TH2 and abs(mv_y)<=TH2, where mv_x and mv_y represent motion vectors determined in a manner similar to that of block 302.
  • Block 406 includes setting a number of reference frames to one and indicating that the motion vector of the current macro block is to be determined using MRME. Variable mb_num_ref can be used to represent the number of reference frames used to determine motion vectors of the current block. Variable mb_HME_flag represents whether HME or MRME is used to determine motion vectors for the current block.
  • Block 408 includes setting a number of reference frames to the number of reference frames used for the slice of the current macro block and indicating that the motion vector of the current macro block is to be determined using MRME. For example, the number of reference frames used for the slice of the current macro block can be determined based on the flow diagram of FIG. 3. The number of reference frames could increase as long as power/complexity is not a concern.
  • Block 410 includes setting a number of reference frames to one and indicating that the motion vector of the current macro block is to be determined using HME. Accordingly, motion vectors mv_x and mv_y determined in a manner similar to that of block 302 can be used as the motion vectors for the current macro block.
  • Block 412 includes determining motion vectors for the current macro block and subblocks using either HME or MRME. Determination of motion vectors for the current macro block and subblocks can be made in a manner described with regard to FIG. 5.
  • In some embodiments, determinations of the motion vector are made based on both the slice level and the macro block level. In that case, block 312 is not performed; the process of FIG. 3 precedes the process of FIG. 4, and motion vectors are instead determined in block 412.
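The MB level control of FIG. 4 can be sketched as a mapping from a macroblock's upscaled coarse level motion vector to the pair (mb_num_ref, mb_HME_flag). The function name and the fall-through handling of "neither high nor low motion" MBs (a combined HME/MRME search per block 412) are illustrative assumptions.

```python
def mb_level_control(mv, th2, slice_is_high_motion, slice_refs):
    """FIG. 4 decision for one macroblock.
    mv: upscaled coarse level motion vector (mv_x, mv_y).
    Returns (mb_num_ref, mb_hme_flag)."""
    mvx, mvy = mv
    if abs(mvx) > th2 and abs(mvy) > th2:
        return 1, True            # block 410: high motion MB, HME on nearest reference
    if abs(mvx) <= th2 and abs(mvy) <= th2:
        if slice_is_high_motion:
            return 1, False       # block 406: low motion MB in a high motion slice
        return slice_refs, False  # block 408: low motion MB, MRME over slice references
    # Neither clearly high nor low motion: combined HME/MRME search of FIG. 5,
    # modeled here as HME with all of the slice's reference frames available.
    return slice_refs, True
```

The returned flag corresponds to mb_HME_flag and the count to mb_num_ref as described for blocks 406 through 410.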
  • FIG. 5 depicts a manner of determining motion vectors for macro blocks.
  • Block 502 includes, in the nearest single reference frame, performing general ME within a W×H search window centered on the predicted motion vector (pmv) for a macro block and its subblocks. Determination of vector pmv is based on the well-known techniques described in ITU-T Series H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS: Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services (March 2009), section 8.4, "Inter prediction process," which uses the median of the motion vectors of three macro blocks neighboring the macro block of interest.
  • Block 504 includes determining whether the macro block is a high motion macro block. Determination of whether the macro block is high motion can be made in a manner similar to that in block 402. If the macro block is a high motion macro block, then block 506 follows block 504. If the macro block is not a high motion macro block, then block 508 follows block 504.
  • Block 506 includes, in the nearest reference frame, applying ME within a W×H search window centered on the upscaled coarse level motion vector (cmv) for a macro block and its subblocks. Vector cmv is derived from a coarse level motion vector. Block 506 may determine an MV from the ME centered on the upscaled cmv.
  • Block 508 includes determining whether the macro block is a low motion macro block. Determination of whether the macro block is low motion can be made in a manner similar to that in block 404. If the macro block is a low motion macro block, then block 510 follows block 508. If the macro block is not a low motion macro block, then block 512 follows block 508.
  • Block 510 includes, in the available reference frames (other than the nearest single reference frame), performing general ME within a W×H search window centered on the predicted motion vector (pmv) for a macro block and its subblocks. The vector pmv is determined using the well-known H.264 section 8.4 "Inter prediction process." Note that vector pmv may be different for each of the reference frames.
  • Block 512 includes selecting motion vectors determined for the macro block based on the lowest rate-distortion cost. Blocks 502 and 510 provide multiple predicted motion vectors and blocks 502 and 506 provide predicted motion vectors and a single coarse motion vector. The motion vector and reference index with the minimum rate-distortion (RD) cost can be used to encode the current block.
  • Adaptive control can be used to select which of the blocks is to be performed to reduce the ME complexity while maintaining the coding gain. For example, if the current MB is a high motion MB, block 506 will also be performed for this MB and its subblocks and block 510 will be skipped for this MB and its subblocks. In addition, if only a single reference frame is to be used, block 510 will also be skipped for this MB and its subblocks. If the MB is a low motion MB, block 510 will be performed for this MB and its subblocks and block 506 will be skipped for this MB and its subblocks. If the MB is neither a high motion MB nor a low motion MB, both blocks 506 and 510 will be skipped for this MB and its subblocks. With this MB level control, the ME complexity can be greatly reduced and the high coding gain can still be achieved.
  • FIG. 6 depicts a system in accordance with an embodiment. System 600 may include host system 602 and display 622. Computer system 600 can be implemented in a handheld personal computer, mobile telephone, set top box, or any computing device. Host system 602 may include chipset 605, processor 610, host memory 612, storage 614, graphics subsystem 615, and radio 620. Chipset 605 may provide intercommunication among processor 610, host memory 612, storage 614, graphics subsystem 615, and radio 620. For example, chipset 605 may include a storage adapter (not depicted) capable of providing intercommunication with storage 614. For example, the storage adapter may be capable of communicating with storage 614 in conformance with any of the following protocols: Small Computer Systems Interface (SCSI), Fibre Channel (FC), and/or Serial Advanced Technology Attachment (S-ATA).
  • In various embodiments, graphics subsystem 615 may perform encoding of video with motion vector and reference frame information for motion estimation based on techniques described herein. Encoded video can be transmitted from system 600 to a video decoder.
  • Processor 610 may be implemented as Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors, multi-core, or any other microprocessor or central processing unit.
  • Host memory 612 may be implemented as a volatile memory device such as but not limited to a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM). Storage 614 may be implemented as a non-volatile storage device such as but not limited to a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device.
  • Graphics subsystem 615 may perform processing of images such as still or video for display. An analog or digital interface may be used to communicatively couple graphics subsystem 615 and display 622. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 615 could be integrated into processor 610 or chipset 605. Graphics subsystem 615 could be a stand-alone card communicatively coupled to chipset 605.
  • Radio 620 may include one or more radios capable of transmitting and receiving signals in accordance with applicable wireless standards such as but not limited to any version of IEEE 802.11 and IEEE 802.16.
  • The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another embodiment, the graphics and/or video functions may be implemented by a general purpose processor, including a multi-core processor. In a further embodiment, the functions may be implemented in a consumer electronics device.
  • Embodiments of the present invention may be implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term “logic” may include, by way of example, software or hardware and/or combinations of software and hardware.
  • Embodiments of the present invention may be provided, for example, as a computer program product which may include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines such as a computer, network of computers, or other electronic devices, may result in the one or more machines carrying out operations in accordance with embodiments of the present invention. A machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (Compact Disc-Read Only Memories), magneto-optical disks, ROMs (Read Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read Only Memories), EEPROMs (Electrically Erasable Programmable Read Only Memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions.
  • The drawings and the foregoing description give examples of the present invention. Although depicted as a number of disparate functional items, those skilled in the art will appreciate that one or more of such elements may well be combined into single functional elements. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of the present invention, however, is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of the invention is at least as broad as given by the following claims.

Claims (20)

1. A computer-implemented method comprising:
determining a scaled-down motion vector for a block based on a scaled-down reference frame;
determining whether to apply a hierarchical motion estimation (HME) technique or multiple reference motion estimation (MRME) technique based in part on one or more of whether a slice of the block is a high motion slice or whether the block is a high motion block, wherein whether a slice is high motion or whether the block is high motion depends in part on the determined scaled-down motion vector;
determining a motion vector for the block based on the determined motion estimation technique; and
transmitting encoded video including the determined motion vector to a video decoder.
2. The method of claim 1, wherein:
when a slice of the block is a low motion slice, a number of reference frames used in determining a motion vector for the block is set to a number of reference frames in the motion buffer and transmitting encoded video includes specifying indexes of the reference frames and
when a slice of the block is a high motion slice, a number of reference frames used in determining a motion vector for the block is set to one and transmitting encoded video includes not specifying an index of the reference frame.
3. The method of claim 1, wherein when the block is a high motion block, determining a motion vector for the block based on the determined motion estimation technique comprises:
determining a predicted motion vector based on a nearest reference frame;
determining a motion vector based on the nearest reference frame; and
selecting a motion vector and reference frame based on a lowest rate-distortion from among the predicted motion vector based on a nearest reference frame and the motion vector based on the nearest reference frame.
4. The method of claim 3, wherein the determining a predicted motion vector based on a nearest reference frame comprises determining median motion vectors of macro blocks neighboring the block of interest.
5. The method of claim 3, wherein the determining a motion vector based on the nearest reference frame comprises applying motion estimation within a search window centered in an up-scaled version of the scaled-down motion vector.
6. The method of claim 1, wherein when the block is a low motion block, determining a motion vector for the block based on the determined motion estimation technique comprises:
determining a predicted motion vector based on a nearest reference frame;
determining at least one motion vector based on at least one available reference frame other than the nearest reference frame; and
selecting a motion vector and reference frame based on a lowest rate-distortion from among the predicted motion vector based on a nearest reference frame and the at least one motion vector based on at least one available reference frame other than the nearest reference frame.
7. The method of claim 6, wherein the determining a predicted motion vector based on a nearest reference frame comprises determining median motion vectors of macro blocks neighboring the block of interest.
8. The method of claim 6, wherein the determining at least one motion vector based on at least one available reference frame other than the nearest reference frame comprises determining median motion vectors of macro blocks neighboring the block of interest.
9. An apparatus comprising:
a coarse level motion estimator to determine scaled-down motion vectors for a block based on a scaled-down version of a reference frame;
an adaptive motion estimation control block to
determine whether a slice is a high motion slice based in part on the scaled-down motion vectors and whether blocks of the slice are high motion based in part on the scaled-down motion vectors and
determine whether to determine motion vectors for the block based on a hierarchical motion estimation (HME) technique or multiple reference motion estimation (MRME) based on one or more of whether the slice is a high motion slice and the block of the slice is high motion; and
a motion estimation block to selectively determine motion vectors for the block in response to a request from the adaptive motion estimation control block.
10. The apparatus of claim 9, wherein when the block is a high motion block, the motion estimation block is to:
determine a predicted motion vector based on a nearest reference frame;
determine a motion vector based on the nearest reference frame; and
select a motion vector and reference frame based on a lowest rate-distortion from among the predicted motion vector based on a nearest reference frame and the motion vector based on the nearest reference frame.
11. The apparatus of claim 10, wherein to determine a predicted motion vector based on a nearest reference frame, the motion estimation block is to determine median motion vectors of macro blocks neighboring the block of interest.
12. The apparatus of claim 10, wherein to determine a motion vector based on the nearest reference frame, the motion estimation block is to apply motion estimation within a search window centered in an up-scaled version of a scaled-down motion vector.
13. The apparatus of claim 9, wherein when the block is a low motion block, the motion estimation block is to:
determine a predicted motion vector based on a nearest reference frame;
determine at least one motion vector based on at least one available reference frame other than the nearest reference frame; and
select a motion vector and reference frame based on a lowest rate-distortion from among the predicted motion vector based on a nearest reference frame and the at least one motion vector based on at least one available reference frame other than the nearest reference frame.
14. The apparatus of claim 13, wherein to determine a predicted motion vector based on a nearest reference frame, the motion estimation block is to:
determine median motion vectors of macro blocks neighboring the block of interest.
15. The apparatus of claim 13, wherein to determine at least one motion vector based on at least one available reference frame other than the nearest reference frame, the motion estimation block is to:
determine median motion vectors of macro blocks neighboring the block of interest.
16. The apparatus of claim 13, wherein:
when a slice of the block is a low motion slice, a number of reference frames used to selectively determine motion vectors for the block is set to a number of reference frames in the motion buffer and
when a slice of the block is a high motion slice, a number of reference frames used to selectively determine motion vectors for the block is set to one.
17. The apparatus of claim 13, wherein a high motion slice comprises a slice with more than a predetermined number of high motion blocks.
18. A system comprising:
a display device;
a wireless network interface; and
a computer system configured to:
determine a scaled-down motion vector for a block based on a scaled-down reference frame,
determine whether to apply a hierarchical motion estimation (HME) technique or multiple reference motion estimation (MRME) technique based in part on one or more of whether a slice of the block is a high motion slice or whether the block is a high motion block, wherein whether a slice is high motion or whether the block is high motion depends in part on the determined scaled-down motion vector, and
determine a motion vector for the block based on the determined motion estimation technique.
19. The system of claim 18, wherein:
when a slice of the block is a low motion slice, a number of reference frames used to determine a motion vector for the block is set to a number of reference frames in the motion buffer and
when a slice of the block is a high motion slice, a number of reference frames used to determine a motion vector for the block is set to one.
20. The system of claim 18, wherein:
when the block is a high motion block, to determine a motion vector for the block based on the determined motion estimation technique, the computer system is to:
determine a predicted motion vector based on a nearest reference frame;
determine a motion vector based on the nearest reference frame; and
select a motion vector and reference frame based on a lowest rate-distortion from among the predicted motion vector based on a nearest reference frame and the motion vector based on the nearest reference frame;
when the block is a low motion block, to determine a motion vector for the block based on the determined motion estimation technique, the computer system is to:
determine a predicted motion vector based on a nearest reference frame,
determine at least one motion vector based on at least one available reference frame other than the nearest reference frame, and
select a motion vector and reference frame based on a lowest rate-distortion from among the predicted motion vector based on a nearest reference frame and the at least one motion vector based on at least one available reference frame other than the nearest reference frame.
US13/001,037 2010-03-31 2010-03-31 Power efficient motion estimation techniques for video encoding Abandoned US20120281759A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/071451 WO2011120221A1 (en) 2010-03-31 2010-03-31 Power efficient motion estimation techniques for video encoding

Publications (1)

Publication Number Publication Date
US20120281759A1 true US20120281759A1 (en) 2012-11-08

Family

ID=44711304

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/001,037 Abandoned US20120281759A1 (en) 2010-03-31 2010-03-31 Power efficient motion estimation techniques for video encoding
US13/638,628 Active 2032-12-27 US9591326B2 (en) 2010-03-31 2010-03-31 Power efficient motion estimation techniques for video encoding

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/638,628 Active 2032-12-27 US9591326B2 (en) 2010-03-31 2010-03-31 Power efficient motion estimation techniques for video encoding

Country Status (5)

Country Link
US (2) US20120281759A1 (en)
KR (1) KR101390620B1 (en)
CN (1) CN102918839B (en)
TW (1) TWI451766B (en)
WO (1) WO2011120221A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130022126A1 (en) * 2010-03-31 2013-01-24 Lidong Xu Power Efficient Motion Estimation Techniques for Video Encoding
US20160127741A1 (en) * 2013-01-30 2016-05-05 Atul Puri Content adaptive prediction distance analyzer and hierarchical motion estimation system for next generation video coding
US20200068214A1 (en) * 2018-08-27 2020-02-27 Ati Technologies Ulc Motion estimation using pixel activity metrics

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
US9532048B2 (en) 2012-03-15 2016-12-27 Intel Corporation Hierarchical motion estimation employing nonlinear scaling and adaptive source block size
EP2831811A4 (en) * 2012-03-28 2016-02-17 Intel Corp Content aware selective adjusting of motion estimation
CN102685371B (en) * 2012-05-22 2015-04-08 大连理工大学 Digital video image stabilization method based on multi-resolution block matching and PI (Portion Integration) control
US10075712B2 (en) 2014-11-20 2018-09-11 Hfi Innovation Inc. Method of motion vector and block vector resolution control
CN105306953A (en) * 2015-12-10 2016-02-03 腾讯科技(深圳)有限公司 Image coding method and device
WO2019001741A1 (en) 2017-06-30 2019-01-03 Huawei Technologies Co., Ltd. Motion vector refinement for multi-reference prediction
US10542277B2 (en) 2017-10-24 2020-01-21 Arm Limited Video encoding
CN117834906A (en) * 2019-03-08 2024-04-05 华为技术有限公司 Motion vector refined search area
TWI768324B (en) 2020-04-16 2022-06-21 瑞昱半導體股份有限公司 Image processing method and image processing device
CN113542743A (en) * 2020-04-22 2021-10-22 瑞昱半导体股份有限公司 Image processing method and image processing apparatus

Citations (3)

Publication number Priority date Publication date Assignee Title
US6549575B1 (en) * 1996-11-07 2003-04-15 International Business Machines Corporation. Efficient, flexible motion estimation architecture for real time MPEG2 compliant encoding
US20040218676A1 (en) * 2003-05-01 2004-11-04 Samsung Electronics Co., Ltd. Method of determining reference picture, method of compensating for motion and apparatus therefor
US20050074064A1 (en) * 2003-10-04 2005-04-07 Samsung Electronics Co., Ltd. Method for hierarchical motion estimation

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
KR100243430B1 (en) * 1997-12-31 2000-02-01 구자홍 Method of adaptive quantization control
US6721454B1 (en) * 1998-10-09 2004-04-13 Sharp Laboratories Of America, Inc. Method for automatic extraction of semantically significant events from video
EP1592248B1 (en) * 2004-04-30 2006-09-20 Matsushita Electric Industrial Co., Ltd. Motion vector estimation employing adaptive temporal prediction
JP5342777B2 (en) * 2004-09-29 2013-11-13 トムソン リサーチ ファンディング コーポレイション RRU video encoding and decoding method and apparatus
JP2006174415A (en) * 2004-11-19 2006-06-29 Ntt Docomo Inc Image decoding apparatus, image decoding program, image decoding method, image encoding apparatus, image encoding program, and image encoding method
US20070121728A1 (en) * 2005-05-12 2007-05-31 Kylintv, Inc. Codec for IPTV
US9113147B2 (en) * 2005-09-27 2015-08-18 Qualcomm Incorporated Scalability techniques based on content information
US8406303B2 (en) * 2005-12-15 2013-03-26 Analog Devices, Inc. Motion estimation using prediction guided decimated search
CN100471275C (en) 2006-09-08 2009-03-18 清华大学 Motion estimating method for H.264/AVC coder
US8379734B2 (en) * 2007-03-23 2013-02-19 Qualcomm Incorporated Methods of performing error concealment for digital video
WO2011120221A1 (en) * 2010-03-31 2011-10-06 Intel Corporation Power efficient motion estimation techniques for video encoding

Cited By (5)

Publication number Priority date Publication date Assignee Title
US20130022126A1 (en) * 2010-03-31 2013-01-24 Lidong Xu Power Efficient Motion Estimation Techniques for Video Encoding
US9591326B2 (en) * 2010-03-31 2017-03-07 Intel Corporation Power efficient motion estimation techniques for video encoding
US20160127741A1 (en) * 2013-01-30 2016-05-05 Atul Puri Content adaptive prediction distance analyzer and hierarchical motion estimation system for next generation video coding
US10284852B2 (en) * 2013-01-30 2019-05-07 Intel Corporation Content adaptive prediction distance analyzer and hierarchical motion estimation system for next generation video coding
US20200068214A1 (en) * 2018-08-27 2020-02-27 Ati Technologies Ulc Motion estimation using pixel activity metrics

Also Published As

Publication number Publication date
CN102918839A (en) 2013-02-06
CN102918839B (en) 2016-05-18
WO2011120221A1 (en) 2011-10-06
US9591326B2 (en) 2017-03-07
TW201204055A (en) 2012-01-16
KR20130001303A (en) 2013-01-03
KR101390620B1 (en) 2014-04-30
US20130022126A1 (en) 2013-01-24
TWI451766B (en) 2014-09-01

Similar Documents

Publication Publication Date Title
US9591326B2 (en) Power efficient motion estimation techniques for video encoding
US9942572B2 (en) Content adaptive fusion filtering of prediction signals for next generation video coding
US8472525B2 (en) Method and apparatus for encoding/decoding motion vector
US8477847B2 (en) Motion compensation module with fast intra pulse code modulation mode decisions and methods for use therewith
CN106713915B (en) Method for encoding video data
US8908763B2 (en) Fragmented reference in temporal compression for video coding
KR102121558B1 (en) Method of stabilizing video image, post-processing device and video encoder including the same
EP3662663A1 (en) Intra merge prediction
US20220116648A1 (en) Encoder, a decoder and corresponding methods
US20090141808A1 (en) System and methods for improved video decoding
US20120027092A1 (en) Image processing device, system and method
US20080247465A1 (en) Method and System for Mapping Motion Vectors between Different Size Blocks
EP2687016A1 (en) Low memory access motion vector derivation
CN116405696A (en) Encoder, decoder and corresponding methods for merge mode
US20070133689A1 (en) Low-cost motion estimation apparatus and method thereof
US20140354771A1 (en) Efficient motion estimation for 3d stereo video encoding
EP3896974A1 (en) Video encoder, video decoder, and corresponding method
US9197892B2 (en) Optimized motion compensation and motion estimation for video coding
EP4037320A1 (en) Boundary extension for video coding
KR20170007665A (en) Rate control encoding method using skip mode information and therefore encoding device
JP2006246277A (en) Re-encoding apparatus, re-encoding method, and re-encoding program
EP1683361B1 (en) Power optimized collocated motion estimation method
US20110051815A1 (en) Method and apparatus for encoding data and method and apparatus for decoding data
US8897368B2 (en) Image coding device, image coding method, image coding integrated circuit and image coding program
WO2023056360A1 (en) Method, apparatus and medium for video processing

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION