WO2010069113A1 - Video processing method and apparatus with residue prediction


Info

Publication number
WO2010069113A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2008/073599
Other languages
French (fr)
Inventor
Kai Zhang
Li Zhang
Si Wei Ma
Wen Gao
Shaw Min Lei
Original Assignee
Mediatek Singapore Pte. Ltd.
Application filed by Mediatek Singapore Pte. Ltd.
Priority to EP08878857.5A (EP2380354A4)
Priority to PCT/CN2008/073599 (WO2010069113A1)
Priority to US13/119,757 (US9088797B2)
Publication of WO2010069113A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques


Abstract

A video processing apparatus with residue prediction includes a motion estimation/compensation unit to determine a matching block of a reference video frame, obtain a motion vector of a current block of a current video frame relative to the matching block, and acquire neighboring reconstructed pixels adjacent to the current block and corresponding pixels adjacent to the matching block through motion vector alignment. Additionally, a pseudo-residue generating unit constructs pseudo residues according to the neighboring reconstructed pixels and the corresponding pixels, an arithmetic unit generates first-order residues by subtracting the matching block from the current block, and a residue-predicting unit derives second-order residues and corresponding information according to the pseudo residues and the first-order residues. Moreover, a post-processing unit derives a reconstructed current block according to the second-order residues and the corresponding information.

Description

VIDEO PROCESSING METHOD AND APPARATUS WITH
RESIDUE PREDICTION
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] The invention relates to video processing, and more particularly, to a video processing method and apparatus with residue prediction.
Description of the Related Art
[0002] With the rapid development of video processing, including decoding and encoding technology, higher compression ratios are being demanded so that video can be stored and broadcast more efficiently. Generally, a video sequence is composed of a series of video frames containing spatial and temporal redundancies, which may be encoded by many block-based video-coding standards, e.g., MPEG-1/2/4, H.264, etc., without significantly sacrificing video quality.
[0003] As for the H.264 standard, spatial correlations between adjacent pixels or blocks may be removed by introducing intra-frame prediction methods, which allow a current block to be predicted from reconstructed neighboring pixels of previous blocks within a current video frame. Due to intra-frame prediction, each block is reconstructed for encoding subsequent neighboring blocks. Further, inter-frame prediction (i.e., motion compensation prediction) has been adopted to reduce temporal redundancies between successive video frames by using motion vectors, which indicate the displacement of a moving object from a current block in the current video frame to a corresponding displaced block in a reference video frame. The difference between the current block and the corresponding displaced block is referred to as residues. The reconstructed video frame is used for intra-frame prediction of subsequent neighboring blocks within the current video frame and inter-frame prediction of subsequent video frames.
[0004] In most cases, intra-frame prediction is selected only when a scene change occurs or significant motion exists. However, intra-frame prediction possesses some merits in image regions with high geometric features and provides error resilience. In some research studies, motion compensation has been combined with intra-frame prediction. C. Chen and K. Pang, "Hybrid Coders with Motion Compensation," Multidimensional Systems and Signal Processing, May 1992, describes that some spatial correlations remain among motion compensation residues. Also, B. Tao and M. Orchard, "Gradient-Based Residual Variance Modeling and Its Applications to Motion-Compensated Video Coding," IEEE Transactions on Image Processing, Jan. 2001, mentions that the spatial correlations show some geometric features. Further, the method described by K. Andersson in "Combined Intra Inter Prediction Coding Mode," VCEG-AD11, 30th VCEG meeting, Oct. 2006, proposes a direct combination of intra-frame prediction and inter-frame prediction. Another method, disclosed by S. Chen and L. Yu in "Re-prediction in Inter-prediction of H.264," VCEG-AG20, 33rd VCEG meeting, Oct. 2007, uses the residues of neighboring blocks in a current video frame to predict the motion compensation residues of a current block within the current video frame. Nevertheless, no apparent improvement in coding efficiency is reported in the prior art.
[0005] Therefore, it is crucial to provide an innovative algorithmic technique for video coding that utilizes residual correlations between neighboring blocks to predict the current block, thereby improving coding efficiency or enhancing video quality.
BRIEF SUMMARY OF THE INVENTION
[0006] A video processing apparatus with residue prediction of a current video frame spatially partitioned into a plurality of blocks is disclosed. The video processing apparatus comprises a motion estimation/compensation unit, a pseudo-residue generating unit, a first arithmetic unit, a residue-predicting unit and a post-processing unit. The motion estimation/compensation unit determines a matching block of a reference video frame according to a current block of the current video frame, obtains a motion vector of the current block describing motion relative to the matching block, acquires neighboring reconstructed pixels adjacent to the current block, and retrieves corresponding pixels adjacent to the matching block by aligning the neighboring reconstructed pixels with the motion vector. The pseudo-residue generating unit constructs a set of pseudo residues according to the neighboring reconstructed pixels in the current video frame and the corresponding pixels in the reference video frame. The first arithmetic unit generates first-order residues by subtracting the matching block from the current block. The residue-predicting unit is coupled to the pseudo-residue generating unit for employing the set of pseudo residues to predict the first-order residues and derive second-order residues and corresponding residue prediction information for the current block. The post-processing unit is coupled to the residue-predicting unit for deriving a reconstructed current block according to the second-order residues and the corresponding residue prediction information for encoding subsequent blocks within the current video frame.
[0007] According to another embodiment of the invention, a video processing method with residue prediction of a current video frame spatially partitioned into a plurality of blocks is provided, comprising: determining a matching block of a reference video frame according to a current block of the current video frame; obtaining a motion vector of the current block describing motion relative to the matching block; acquiring neighboring reconstructed pixels adjacent to the current block; retrieving corresponding pixels adjacent to the matching block by aligning the neighboring reconstructed pixels with the motion vector; constructing a set of pseudo residues according to the neighboring reconstructed pixels in the current video frame and the corresponding pixels in the reference video frame; generating first-order residues by subtracting the matching block from the current block; employing the set of pseudo residues to predict the first-order residues; deriving second-order residues and corresponding residue prediction information for the current block; and deriving a reconstructed current block according to the second-order residues and the corresponding residue prediction information for encoding subsequent blocks within the current video frame.
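The end-to-end flow summarized in paragraphs [0006] and [0007] can be illustrated with a short sketch. The following Python/NumPy fragment is an illustrative reading only: the exhaustive MSE block search, the fixed 4x4 block size, the use of the top border alone for the pseudo residues, and the vertical-only prediction of the first-order residues are simplifying assumptions introduced here, not the patented implementation, and all function names are hypothetical.

```python
import numpy as np

BS = 4  # assumed block size; the patent allows 16x16 down to 4x4 partitions


def find_matching_block(cur, ref, y, x, search=2):
    """Exhaustive MSE block matching: return the motion vector (dy, dx)
    pointing from the current block to the best matching block."""
    block = cur[y:y + BS, x:x + BS].astype(np.int32)
    best_mse, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 1 or rx < 0 or ry + BS > ref.shape[0] or rx + BS > ref.shape[1]:
                continue  # keep the block and its top border inside the frame
            cand = ref[ry:ry + BS, rx:rx + BS].astype(np.int32)
            mse = float(np.mean((block - cand) ** 2))
            if best_mse is None or mse < best_mse:
                best_mse, best_mv = mse, (dy, dx)
    return best_mv


def pseudo_residues(cur_rec, ref_rec, y, x, mv):
    """Pseudo residues: reconstructed pixels just above the current block
    minus the motion-aligned pixels just above the matching block."""
    dy, dx = mv
    a = cur_rec[y - 1, x:x + BS].astype(np.int32)                     # region set A
    a_ref = ref_rec[y + dy - 1, x + dx:x + dx + BS].astype(np.int32)  # region set A'
    return a - a_ref


def encode_block(cur, cur_rec, ref_rec, y, x):
    """First-order and second-order residues for one block, assuming a
    vertical prediction of the first-order residues from the pseudo residues."""
    mv = find_matching_block(cur, ref_rec, y, x)
    dy, dx = mv
    block = cur[y:y + BS, x:x + BS].astype(np.int32)
    match = ref_rec[y + dy:y + dy + BS, x + dx:x + dx + BS].astype(np.int32)
    r1 = block - match                        # first-order residues
    q = pseudo_residues(cur_rec, ref_rec, y, x, mv)
    prediction_set = np.tile(q, (BS, 1))      # vertical propagation (assumed)
    r2 = r1 - prediction_set                  # second-order residues
    return mv, r1, r2


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (16, 16), dtype=np.uint8)
    cur = np.roll(ref, shift=1, axis=1)       # frame k: 1-pixel horizontal shift of frame k-1
    # the original frame stands in for its own reconstruction in this demo
    mv, r1, r2 = encode_block(cur, cur, ref, y=4, x=4)
    print(mv, int(np.abs(r1).sum()), int(np.abs(r2).sum()))
```

The detailed description below defines the pseudo residues precisely, including the full L-shaped border regions and the remaining prediction modes of FIG. 3.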
[0008] According to still another embodiment of the invention, a video decoder with residue prediction for decoding a bit-stream into video frames partitioned into a plurality of blocks is provided. The video decoder comprises a decoding unit, an inverse quantization and discrete cosine transform (IQ/IDCT) unit, a pseudo-residue generating unit, a residue predicting unit, an arithmetic unit, a motion compensation unit, and a reconstruction unit. The decoding unit receives and decodes the bit-stream for generating inter mode information, residue prediction information and corresponding residual data. The IQ/IDCT unit generates reconstructed second-order residues from the residual data for reconstructing a current block of a current video frame. The pseudo-residue generating unit provides pseudo residues for reconstructing the current block. The residue predicting unit derives a prediction set of the current block according to the residue prediction information from the decoding unit and the pseudo residues from the pseudo-residue generating unit. The arithmetic unit outputs first-order residues for reconstructing the current block by adding the reconstructed second-order residues to the prediction set. The motion compensation unit acquires corresponding pixels adjacent to a matching block of a reference video frame for the current block according to the inter mode information. The reconstruction unit combines the first-order residues with the corresponding pixels to generate a reconstructed current block.
[0009] A detailed description is given in the following embodiments with reference to the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein: [0010] FIG. 1 is a block diagram illustrating a video processing apparatus with residue prediction for video coding according to one embodiment of the invention; [0011] FIG. 2 illustrates an exemplary process for generating pseudo residues for residue prediction in a video processing apparatus according to the invention; [0012] FIG. 3 illustrates four prediction modes for residue prediction in accordance with one embodiment of the invention;
[0013] FIG. 4 is a block diagram of a decoding unit according to one embodiment of the invention; and
[0014] FIG. 5 is a flowchart illustrating a video processing method with residue prediction according to one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0015] The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
[0016] FIG. 1 is a block diagram illustrating a video processing apparatus 10 with residue prediction for video coding according to one embodiment of the invention. The video processing apparatus 10 receives a current video frame 102 spatially partitioned into a plurality of independent blocks. Each partitioned block may be a 16x16 macroblock or sub-partitioned into block sizes of 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4. The video processing apparatus 10 comprises a motion estimation/compensation (ME/MC) unit 110, a pseudo-residue generating unit 112, a first arithmetic unit 114, a residue-predicting unit 116 and a post-processing unit 118. Residue prediction for video coding is described in detail in the following with reference to FIGs. 2 and 3.
[0017] FIG. 2 illustrates an exemplary process for generating pseudo residues for residue prediction in a video processing apparatus according to the invention. As shown in FIG. 2, it is assumed that the video frame k is the current video frame 102 being processed and the video frame k-1 is a previous reference video frame. Note that a future reference video frame may be provided for predicting the video frame k according to another embodiment. After block search, a matching block B' of the video frame k-1 is determined according to a matching method for predicting the current block B of the video frame k. Some matching methods, such as the mean squared error (MSE) matching method, may be used to determine the similarity between the current block B and candidate blocks in the video frame k-1. A motion vector MV(Vx^B, Vy^B) is then calculated to represent the displacement between the current block B and the matching block B'. Thus, first-order residues R_k are obtained by subtracting the pixel values within the matching block B' (denoted by Ŝ_k-1,B') from those within the current block B (denoted by S_k,B). In some embodiments, certain regions of one-pixel width adjacent to the left, the top-left, the top-right, or the top borders of the current block B are defined. According to this embodiment, such regions are referred to as a region set A having neighboring reconstructed pixels. By aligning the neighboring reconstructed pixels with the motion vector MV(Vx^B, Vy^B), corresponding pixels in a region set A' adjacent to the matching block B' are localized. That is, the corresponding pixels in the region set A' are located with motion alignment of the neighboring reconstructed pixels in the region set A. Pseudo residues Q_k are subsequently derived by subtracting the pixel values within the region set A' (denoted by Ŝ_k-1,A') from those within the region set A (denoted by Ŝ_k,A). It is noted that S indicates the data required to be encoded and Ŝ indicates the data being reconstructed.
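With a hat marking reconstructed data, the relations of this paragraph, together with the second-order residues produced later by the second arithmetic unit 132, can be written compactly. The symbols P(Q_k) for the prediction set and R'_k for the second-order residues are editorial shorthand introduced here; the original text refers to them only as the prediction set 136 and the second-order residues 106.

```latex
% S = data to be encoded, \hat{S} = reconstructed data,
% A' = region set A displaced by the motion vector MV(V_x^B, V_y^B).
\begin{align*}
  R_k  &= S_{k,B} - \hat{S}_{k-1,B'}        && \text{(first-order residues)} \\
  Q_k  &= \hat{S}_{k,A} - \hat{S}_{k-1,A'}  && \text{(pseudo residues)} \\
  R'_k &= R_k - P(Q_k)                      && \text{(second-order residues)}
\end{align*}
```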
[0018] During operation, when residue prediction is activated, the ME/MC unit 110 receives the current video frame 102, e.g., video frame k, and a reference video frame 104, e.g., video frame k-1, to determine the matching block B' of the video frame k-1 according to the current block B of the video frame k. As described above, by motion-aligning with the current block B, the region sets A and A' (respectively denoted by Ŝ_k,A and Ŝ_k-1,A') are provided to the pseudo-residue generating unit 112 for generating the pseudo residues Q_k. The first arithmetic unit 114 then performs inter-frame prediction to acquire the first-order residues R_k by subtracting the matching block B' (denoted by Ŝ_k-1,B') from the current block B (denoted by S_k,B). Thereafter, the residue-predicting unit 116, coupled to the pseudo-residue generating unit 112, employs the pseudo residues Q_k to predict the first-order residues R_k and thereby derive second-order residues 106 and corresponding residue prediction information 108 for the current block B.
[0019] More specifically, the residue-predicting unit 116 comprises a determination unit 134 and a second arithmetic unit 132. The determination unit 134 determines a prediction set 136 in response to the pseudo residues Q_k for the first-order residues R_k. Some prediction modes may be employed to predict the first-order residues R_k. FIG. 3 illustrates four prediction modes, namely Mode 0, Mode 1, Mode 2 and Mode 3, for residue prediction in accordance with one embodiment of the invention. As shown in FIG. 3, the prediction direction corresponding to each mode is a vertical prediction (Mode 0), a horizontal prediction (Mode 1), a DC prediction (Mode 2) and a diagonal prediction (Mode 3). According to another embodiment, not using directional residue prediction may also be treated as a special mode. The determination unit 134 may select one prediction mode to generate the prediction set 136 with respect to the pseudo residues Q_k. The second arithmetic unit 132 is coupled to the determination unit 134 for providing the second-order residues 106 according to the prediction set 136 and the first-order residues R_k. According to the embodiment of FIG. 1, the second-order residues 106 are calculated as the difference between the first-order residues R_k for the current block and the prediction set 136. In detail, the second arithmetic unit 132 simply subtracts the prediction set 136 from the first-order residues R_k to generate the second-order residues 106. In other embodiments, the second-order residues 106 may be adjusted by further processing steps including, but not limited to, offset processing, weighting and filtering.
[0020] Note that the determination unit 134 may jointly optimize all candidate motion vectors for the current block B and all prediction modes for determining the second-order residues 106 and the corresponding residue prediction information 108 for the current block B. For example, the optimal prediction set is the one that minimizes the energy of the second-order residues 106 in a joint optimization process.
[0021] Referring to FIG. 1, when residue prediction is applied, a switch SW_A is set to an input contact N1, thereby causing the second-order residues 106 to be inputted to a further encoding unit 120. When no residue prediction is expected, the switch SW_A is set to an input contact N2, which causes the first-order residues R_k to be directly encoded.
[0022] According to one embodiment, the encoding unit 120 comprises a discrete cosine transform and quantization (DCT/Q) unit 122 and an entropy coding unit 124. The DCT/Q unit 122 transforms and quantizes the first-order residues R_k or the second-order residues 106 for the current video frame 102 and yields quantized DCT values with respect to the current video frame 102. The entropy coding unit 124 applies entropy coding, such as a variation of run length coding, to the quantized DCT values and the corresponding residue prediction information 108 to generate an output bit-stream 126. In addition, some information regarding inter-frame prediction for the first-order residues R_k (not shown) is also entropy coded via the entropy coding unit 124.
The bit-stream 126 may be stored, further processed or provided to a decoding unit. The decoding unit employs the information in the bit-stream 126 to reconstruct the original video frames. The decoding process of the decoding unit is described below in detail with reference to FIG. 4.
[0023] When no residue prediction is used, another switch SW_B is set to an input contact N4 to transmit a reconstructed current block generated from an inverse quantization and DCT (IQ/IDCT) unit 128 to the post-processing unit 118. Alternatively, when residue prediction is applied, the IQ/IDCT unit 128 outputs reconstructed second-order residues to a third arithmetic unit 130. The third arithmetic unit 130 then generates a reconstructed current block according to the reconstructed second-order residues and the prediction set 136. The switch SW_B is then set to an input contact N3, which causes the reconstructed current block from the third arithmetic unit 130 to be inputted to the post-processing unit 118.
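To make the directional residue prediction of paragraphs [0019] and [0020] concrete, the sketch below builds a prediction set from border pseudo residues for each of the four modes of FIG. 3 and keeps the mode whose second-order residues have the lowest energy. The propagation rule assumed for each direction (H.264-intra-style copying of the border values), the 4x4 block size, and the exhaustive mode loop are assumptions made for illustration; the patent does not specify how the pseudo residues are mapped onto the block.

```python
import numpy as np

BS = 4  # assumed block size


def prediction_set(q_top, q_left, q_tl, mode):
    """Build a prediction of the first-order residues from the border
    pseudo residues for one of the four modes of FIG. 3. The propagation
    rules are assumptions (H.264-intra style), not taken from the patent."""
    if mode == 0:                                            # Mode 0: vertical
        return np.tile(q_top, (BS, 1)).astype(float)
    if mode == 1:                                            # Mode 1: horizontal
        return np.tile(q_left.reshape(BS, 1), (1, BS)).astype(float)
    if mode == 2:                                            # Mode 2: DC
        return np.full((BS, BS), float(np.concatenate([q_top, q_left]).mean()))
    if mode == 3:                                            # Mode 3: diagonal (down-right, assumed)
        border = np.concatenate(([q_tl], q_top)).astype(float)
        pred = np.empty((BS, BS))
        for i in range(BS):
            for j in range(BS):
                k = j - i
                pred[i, j] = border[k] if k >= 0 else q_left[-k - 1]
        return pred
    raise ValueError("unknown mode")


def best_mode(r1, q_top, q_left, q_tl):
    """Select the mode whose prediction minimizes the energy of the
    second-order residues (the criterion of paragraph [0020])."""
    best = None
    for mode in range(4):
        r2 = r1 - prediction_set(q_top, q_left, q_tl, mode)
        energy = float((r2 ** 2).sum())
        if best is None or energy < best[0]:
            best = (energy, mode, r2)
    return best[1], best[2]


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    q_top = rng.integers(-4, 5, BS)       # pseudo residues above the block
    q_left = rng.integers(-4, 5, BS)      # pseudo residues left of the block
    q_tl = 1                              # top-left corner pseudo residue
    r1 = np.tile(q_top, (BS, 1)) + rng.integers(-1, 2, (BS, BS))  # roughly "vertical" residues
    mode, r2 = best_mode(r1, q_top, q_left, q_tl)
    print("selected mode:", mode, "energy:", float((r2 ** 2).sum()))
```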
[0024] Furthermore, as shown in FIG. 1, the post-processing unit 118 comprises a de-blocking unit 140 and a memory unit 142. The de-blocking unit 140 alleviates the discontinuity artifacts around the boundaries of the reconstructed current block and generates a reconstructed current video frame when all blocks of the current video frame 102 are processed. The memory unit 142 is coupled to the de-blocking unit 140 for storing the reconstructed current block and the reconstructed current video frame, respectively provided for encoding subsequent blocks within the current video frame 102 and a next incoming video frame. The reconstructed current video frame may also be outputted to a video display unit (not shown) for display.
[0025] In accordance with one embodiment of the invention, the video processing apparatus 10 further comprises an intra predicting unit 138 capable of performing directional residue prediction on residual data for intra-frame prediction of the current block. The intra predicting unit 138 acquires the neighboring reconstructed pixels adjacent to the current block within the current video frame 102. The intra predicting unit 138 then performs intra-frame prediction on the current block to generate a pattern block 144 according to the neighboring reconstructed pixels. The intra predicting unit 138 also defines corresponding pixels 148 adjacent to the neighboring reconstructed pixels for generation of pseudo residues. Similarly, the first arithmetic unit 114 accordingly generates the first-order residues R_k by subtracting the pattern block 144 from the current block S_k,B, as shown in FIG. 1. Afterwards, the pseudo-residue generating unit 112 constructs another set of pseudo residues Q_k according to the neighboring reconstructed pixels and the corresponding pixels 148 in the current video frame 102. Consequently, the residue-predicting unit 116 predicts the first-order residues R_k according to this other set of pseudo residues Q_k and derives the second-order residues 106 and the corresponding residue prediction information 108 for the current block.
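Paragraph [0025] can be read in code as follows. This is a loosely constrained sketch: the DC intra prediction used for the pattern block 144, the choice of the row immediately above the neighboring reconstructed pixels as the "corresponding pixels 148", and the vertical residue prediction are all assumptions; the patent leaves the intra prediction mode and the exact definition of the corresponding pixels open.

```python
import numpy as np

BS = 4  # assumed block size


def intra_residue_prediction(cur_rec, cur, y, x):
    """Intra-path sketch of paragraph [0025]: pattern block by DC intra
    prediction, pseudo residues from the reconstructed top neighbors minus
    the row just above them (assumed to be the corresponding pixels 148),
    and a vertical prediction of the first-order residues."""
    top = cur_rec[y - 1, x:x + BS].astype(np.int32)    # neighboring reconstructed pixels
    left = cur_rec[y:y + BS, x - 1].astype(np.int32)
    dc = int((top.sum() + left.sum()) // (2 * BS))
    pattern = np.full((BS, BS), dc, dtype=np.int32)    # pattern block 144 (DC, assumed)
    block = cur[y:y + BS, x:x + BS].astype(np.int32)
    r1 = block - pattern                               # first-order residues
    above = cur_rec[y - 2, x:x + BS].astype(np.int32)  # assumed corresponding pixels 148
    q = top - above                                    # pseudo residues for the intra path
    r2 = r1 - np.tile(q, (BS, 1))                      # second-order residues (vertical, assumed)
    return r1, r2


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frame = rng.integers(0, 256, (16, 16), dtype=np.uint8)
    r1, r2 = intra_residue_prediction(frame, frame, y=4, x=4)
    print(int(np.abs(r1).sum()), int(np.abs(r2).sum()))
```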
[0026] It is further noted that the corresponding residue prediction information 108 comprises prediction parameters respectively from the ME/MC unit 110 and the intra predicting unit 138, block size information of each partitioned block, and mode information indicating a prediction direction of each partitioned block.
[0027] FIG. 4 is a block diagram of an exemplary decoding unit 40. The decoding unit 40 performs a reverse process for video coding with residue prediction and comprises an entropy decoding unit 424, an inverse quantization and DCT (IQ/IDCT) unit 428, a residue predicting unit 416, an intra predicting unit 438, a motion compensation (MC) unit 410, a reconstruction unit 420 and a post-processing unit 418.
[0028] In the decoding process, the bit-stream 126 is inputted to the entropy decoding unit 424. The incoming bit-stream 126, as encoded by the entropy coding unit 124 in FIG. 1, specifies the information of each video frame and thus determines whether inter-frame or intra-frame prediction is to be applied. Specifically, the entropy decoding unit 424 decodes the bit-stream 126 to generate intra mode information for intra-frame prediction, inter mode information for inter-frame prediction, residue prediction information for residue prediction, and corresponding residual data. When no residue prediction is applied, the IQ/IDCT unit 428 receives the residual data and outputs a reconstructed block to a switch SW_C, which is set to an input contact N6. The reconstructed block is subsequently inputted to the reconstruction unit 420. Further, when the output bit-stream 126 is encoded with residue prediction, the IQ/IDCT unit 428 outputs reconstructed second-order residues to a fourth arithmetic unit 430. The residue predicting unit 416 derives a prediction set 436 according to the residue prediction information and pseudo residues Q_k from the pseudo-residue generating unit 412. Operations of generating the pseudo residues Q_k with respect to the intra predicting unit 438 and the MC unit 410, and of deriving the prediction set 436, are stated in the aforementioned embodiments of FIGs. 1 and 2, and hence, further description thereof is omitted for brevity. Next, the fourth arithmetic unit 430 outputs first-order residues by adding the reconstructed second-order residues to the prediction set 436. The switch SW_C is set to an input contact N5, which causes the first-order residues from the fourth arithmetic unit 430 to be inputted to the reconstruction unit 420. According to an embodiment, when inter-frame prediction is applied, the MC unit 410 acquires corresponding pixels adjacent to a matching block of a reference video frame for the current block according to the inter mode information. Thus, the reconstruction unit 420 combines the first-order residues with the corresponding pixels to generate a reconstructed current block. Consequently, the post-processing unit 418 stores and performs post-processing on the reconstructed current block output from the reconstruction unit 420.
[0029] FIG. 5 is a flowchart illustrating a video processing method with residue prediction according to one embodiment of the invention. As mentioned above, the video processing method is provided for residue prediction of a current video frame with motion alignment. The current video frame is spatially partitioned into a plurality of independent blocks.
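Before the method of FIG. 5 is walked through step by step, the two additions that close the loop in paragraph [0028] can be summarized in a short sketch. The 4x4 block size, 8-bit samples, the clipping step, and the use of the motion-compensated matching block as the "corresponding pixels" combined by the reconstruction unit 420 are assumptions made for illustration.

```python
import numpy as np


def decode_block(rec_r2, prediction_set, matching_block):
    """Decoder-side reconstruction of paragraph [0028]: first-order
    residues are recovered by adding the reconstructed second-order
    residues to the prediction set, and the result is combined with the
    motion-compensated block. Clipping to 8 bits is an assumption."""
    r1 = rec_r2.astype(np.int32) + prediction_set.astype(np.int32)  # fourth arithmetic unit 430
    rec = matching_block.astype(np.int32) + r1                      # reconstruction unit 420
    return np.clip(rec, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    matching = rng.integers(0, 256, (4, 4), dtype=np.uint8)
    prediction = rng.integers(-3, 4, (4, 4))
    rec_r2 = rng.integers(-2, 3, (4, 4))
    print(decode_block(rec_r2, prediction, matching))
```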
[0030] Referring to FIG. 5, a matching block of a reference video frame is determined according to a current block of the current video frame (step S502). From the aforementioned embodiment, the matching block B' in FIG. 2 is searched for and selected to most closely match the current block B of the current video frame k. Then, a motion vector describing the location of the matching block B' relative to the current block B is obtained (step S504), as represented in FIG. 2 by the arrow MV(Vx^B, Vy^B).
[0031] After neighboring reconstructed pixels adjacent to the current block are acquired, corresponding pixels adjacent to the matching block are subsequently retrieved by aligning the neighboring reconstructed pixels with the motion vector (step S506). In accordance with the neighboring reconstructed pixels in the current video frame and the corresponding motion-aligned pixels in the reference video frame, a set of pseudo residues is constructed (step S508). A subtraction operation is carried out between the matching block and the current block to generate first-order residues (step S510). The set of pseudo residues is then employed to predict the first-order residues, thereby deriving second-order residues and corresponding residue prediction information for the current block (step S512). As a result, a reconstructed current block is generated in accordance with the second-order residues and the corresponding residue prediction information for encoding subsequent blocks within the current video frame (step S514). After all blocks of the current video frame are completely processed, a reconstructed current video frame is accordingly determined for transmission, storage or display.
[0032] The invention provides significant improvement over the prior art by introducing directional residue prediction to video frames coded with inter-frame prediction or intra-frame prediction; a considerable bit-rate saving is therefore achieved, by attenuating the energy of the residual data after inter-frame or intra-frame prediction, without sacrificing video quality.
[0033] While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. Those who are skilled in this technology can still make various alterations and modifications without departing from the scope and spirit of this invention. Therefore, the scope of the present invention shall be defined and protected by the following claims and their equivalents.

Claims

1. A video processing apparatus with residue prediction of a current video frame spatially partitioned into a plurality of blocks, comprising: a motion estimation/compensation unit for determining a matching block of a reference video frame according to a current block of the current video frame, obtaining a motion vector of the current block describing motion relative to the matching block, acquiring neighboring reconstructed pixels adjacent to the current block, and retrieving corresponding pixels adjacent to the matching block by aligning the neighboring reconstructed pixels with the motion vector; a pseudo-residue generating unit for constructing a set of pseudo residues according to the neighboring reconstructed pixels in the current video frame and the corresponding pixels in the reference video frame; a first arithmetic unit for generating first-order residues by subtracting the matching block from the current block; a residue-predicting unit coupled to the pseudo-residue generating unit for employing the set of pseudo residues to predict the first-order residues and derive second-order residues and corresponding residue prediction information for the current block; and a post-processing unit coupled to the residue-predicting unit for deriving a reconstructed current block according to the second-order residues and the corresponding residue prediction information for encoding subsequent blocks within the current video frame.
2. The video processing apparatus as claimed in claim 1, wherein the post- processing unit comprises: a de-blocking unit for de-blocking the current video frame; and a memory unit coupled to the de-blocking unit for storing the reconstructed current block and the reconstructed current video frame respectively provided for encoding subsequent blocks within the current video frame and a next incoming video frame.
3. The video processing apparatus as claimed in claim 1, wherein the residue-predicting unit comprises: a determination unit for determining a prediction set in response to the set of pseudo residues for the first-order residues; and a second arithmetic unit coupled to the determination unit for providing the second-order residues according to the prediction set and the first-order residues.
4. The video processing apparatus as claimed in claim 3, wherein the prediction set is subtracted from the first-order residues to generate the second-order residues.
5. The video processing apparatus as claimed in claim 1, further comprising: an intra predicting unit for acquiring the neighboring reconstructed pixels adjacent to the current block within the current video frame, performing intra-frame prediction on the current block to generate a pattern block according to the neighboring reconstructed pixels, and defining corresponding pixels adjacent to the neighboring reconstructed pixels, wherein the first arithmetic unit generates the first-order residues by subtracting the pattern block from the current block, the pseudo-residue generating unit constructs another set of pseudo residues according to the neighboring reconstructed pixels and the corresponding pixels in the current video frame, and the residue-predicting unit predicts the first-order residues according to the another set of pseudo residues and derives the second-order residues and the corresponding residue prediction information for the current block.
6. The video processing apparatus as claimed in claim 5, wherein the corresponding residue prediction information comprises prediction parameters respectively from the motion estimation/compensation unit and the intra predicting unit, block size information of each partitioned block, and mode information indicating a prediction direction of each partitioned block.
7. The video processing apparatus as claimed in claim 6, wherein the prediction direction includes, but is not limited to, a vertical prediction, a horizontal prediction, a DC prediction and a diagonal prediction.
8. The video processing apparatus as claimed in claim 1, wherein the neighboring reconstructed pixels define at least one region of one-pixel width adjacent to the left, the top-left, the top-right, or the top borders of the current block.
9. The video processing apparatus as claimed in claim 6, wherein all candidate motion vectors for the current block and the prediction direction are jointly optimized for determining the second-order residues and corresponding residue prediction information for the current block.
10. The video processing apparatus as claimed in claim 1, further comprising an encoding unit for generating an output bit-stream according to the second-order residues and the corresponding residue prediction information.
11. A video processing method with residue prediction of a current video frame spatially partitioned into a plurality of blocks, comprising: determining a matching block of a reference video frame according to a current block of the current video frame; obtaining a motion vector of the current block describing motion relative to the matching block; acquiring neighboring reconstructed pixels adjacent to the current block; retrieving corresponding pixels adjacent to the matching block by aligning the neighboring reconstructed pixels with the motion vector; constructing a set of pseudo residues according to the neighboring reconstructed pixels in the current video frame and the corresponding pixels in the reference video frame; generating first-order residues by subtracting the matching block from the current block; employing the set of pseudo residues to predict the first-order residues; deriving second-order residues and corresponding residue prediction information for the current block; and deriving a reconstructed current block according to the second-order residues and the corresponding residue prediction information for encoding subsequent blocks within the current video frame.
12. The video processing method as claimed in claim 11, further comprising: generating a reconstructed current video frame when all blocks of the current video frame are processed; and storing the reconstructed current block and the reconstructed current video frame respectively for encoding subsequent blocks within the current video frame and a next incoming video frame.
13. The video processing method as claimed in claim 11, wherein the step of employing the set of pseudo residues comprises: determining a prediction set in response to the set of pseudo residues for the first-order residues; and providing the second-order residues according to the prediction set and the first-order residues.
14. The video processing method as claimed in claim 13, wherein the prediction set is subtracted from the first-order residues to generate the second-order residues.
15. The video processing method as claimed in claim 11, further comprising: acquiring the neighboring reconstructed pixels adjacent to the current block within the current video frame; performing intra-frame prediction on the current block to generate a pattern block according to the neighboring reconstructed pixels; and defining corresponding pixels adjacent to the neighboring reconstructed pixels, wherein the first-order residues are generated by subtracting the pattern block from the current block, another set of pseudo residues is constructed according to the neighboring reconstructed pixels and the corresponding pixels in the current video frame, the first-order residues are predicted according to the another set of pseudo residues, and the second-order residues and the corresponding residue prediction information are derived for the current block.
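Claims 5 and 15 describe the intra-frame variant, in which both the pattern block and the pseudo residues are formed inside the current frame. The sketch below shows one possible reading for the vertical direction only, for a block at least two rows below the frame border: the reconstructed row just above the block serves as the neighboring pixels, and the row above that serves as their corresponding pixels. This pairing is an illustrative assumption, not the patent's definition.

import numpy as np

def encode_block_residues_intra(cur, top, left, size):
    # Vertical intra pattern block: repeat the reconstructed row just
    # above the current block.
    above = cur[top - 1, left:left + size].astype(np.int32)
    pattern = np.tile(above, (size, 1))
    cur_blk = cur[top:top + size, left:left + size].astype(np.int32)
    first_order = cur_blk - pattern

    # Pseudo residues within the current frame: the neighboring
    # reconstructed row minus the row adjacent to it.
    pseudo = above - cur[top - 2, left:left + size].astype(np.int32)
    prediction_set = np.tile(pseudo, (size, 1))

    second_order = first_order - prediction_set
    return first_order, prediction_set, second_order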
16. The video processing method as claimed in claim 15, wherein the corresponding residue prediction information comprises prediction parameters for each partitioned block, block size information of each partitioned block, and mode information indicating a prediction direction of each partitioned block.
17. The video processing method as claimed in claim 16, wherein the prediction direction comprises a vertical prediction, a horizontal prediction, a DC prediction and a diagonal prediction.
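To make the prediction directions named in claims 7 and 17 concrete, the routine below builds a prediction set for the first-order residues from one-pixel-wide pseudo residues given as a top row and a left column of length size. The mode names and, in particular, the DC and diagonal rules are examples of how such directions could be realized, not the patent's definitions.

import numpy as np

def build_prediction_set(pseudo_top, pseudo_left, size, mode):
    if mode == "vertical":        # each column repeats the top pseudo residue
        return np.tile(pseudo_top.astype(np.int32), (size, 1))
    if mode == "horizontal":      # each row repeats the left pseudo residue
        return np.tile(pseudo_left.astype(np.int32).reshape(size, 1), (1, size))
    if mode == "dc":              # flat prediction from the mean pseudo residue
        dc = int(round((pseudo_top.sum() + pseudo_left.sum()) / (2.0 * size)))
        return np.full((size, size), dc, dtype=np.int32)
    if mode == "diagonal":        # 45-degree propagation of the top pseudo residues
        out = np.empty((size, size), dtype=np.int32)
        for r in range(size):
            for c in range(size):
                out[r, c] = pseudo_top[min(c + r, size - 1)]
        return out
    raise ValueError("unknown prediction direction: " + mode)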
18. The video processing method as claimed in claim 11, wherein the neighboring reconstructed pixels define at least one region of one-pixel width adjacent to the left, the top-left, the top-right, or the top borders of the current block.
19. The video processing method as claimed in claim 16, wherein all candidate motion vectors for the current block and the prediction direction are jointly optimized for determining the second-order residues and the corresponding residue prediction information for the current block.
20. The video processing method as claimed in claim 11, further comprising: generating an output bit-stream according to the second-order residues and the corresponding residue prediction information.
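Claims 9 and 19 recite a joint optimization over the candidate motion vectors and the residue-prediction direction. The self-contained sketch below shows one way such a joint search could look; the sum of absolute second-order residues is only a stand-in for whatever rate-distortion cost an actual encoder would use, and all names are illustrative.

import numpy as np

def joint_search(cur, ref, top, left, size, candidate_mvs):
    # Assumes the block and its motion-shifted counterpart keep at least
    # a one-pixel margin inside both frames.
    best = (None, None, None)  # (cost, motion vector, prediction direction)
    cur_blk = cur[top:top + size, left:left + size].astype(np.int32)
    for dy, dx in candidate_mvs:
        match = ref[top + dy:top + dy + size, left + dx:left + dx + size].astype(np.int32)
        first_order = cur_blk - match
        # One-pixel-wide pseudo residues above and to the left of the block.
        p_top = cur[top - 1, left:left + size].astype(np.int32) \
              - ref[top + dy - 1, left + dx:left + dx + size].astype(np.int32)
        p_left = cur[top:top + size, left - 1].astype(np.int32) \
               - ref[top + dy:top + dy + size, left + dx - 1].astype(np.int32)
        for mode, pred in (("vertical", np.tile(p_top, (size, 1))),
                           ("horizontal", np.tile(p_left.reshape(size, 1), (1, size)))):
            cost = int(np.abs(first_order - pred).sum())
            if best[0] is None or cost < best[0]:
                best = (cost, (dy, dx), mode)
    return best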
21. A video decoder with residue prediction for decoding a bit-stream into video frames partitioned into a plurality of blocks, comprising: a decoding unit, receiving and decoding the bit-stream, generating inter mode information, residue prediction information and corresponding residual data; an inverse quantization and inverse discrete cosine transform (IQ/IDCT) unit, generating reconstructed second-order residues from the residual data for reconstructing a current block of a current video frame; a pseudo-residue generating unit, generating pseudo residues for reconstructing the current block; a residue predicting unit, deriving a prediction set of the current block according to the residue prediction information from the decoding unit and the pseudo residues from the pseudo-residue generating unit; an arithmetic unit, outputting first-order residues for reconstructing the current block by adding the reconstructed second-order residues to the prediction set; a motion compensation unit, acquiring corresponding pixels adjacent to a matching block of a reference video frame for the current block according to the inter mode information; and a reconstruction unit, combining the first-order residues with the corresponding pixels to generate a reconstructed current block.
22. The video decoder as claimed in claim 21, wherein the pseudo-residue generating unit generates the pseudo residues according to neighboring reconstructed pixels adjacent to the current block in the current video frame and the corresponding pixels in the reference video frame.
23. The video decoder as claimed in claim 22, wherein the neighboring reconstructed pixels define at least one region of one-pixel width adjacent to the left, the top-left, the top-right, or the top borders of the current block.
24. The video decoder as claimed in claim 21, further comprising: a post-processing unit, storing and performing post-processing on the reconstructed current block output from the reconstruction unit.
25. The video decoder as claimed in claim 21, further comprising: an intra predicting unit, acquiring neighboring reconstructed pixels adjacent to the current block within the current video frame according to intra mode information from the decoding unit and defining corresponding pixels adjacent to the neighboring reconstructed pixels.
26. The video decoder as claimed in claim 21, wherein the residue prediction information indicates whether the residue prediction is performed on each partitioned block and a prediction direction of each partitioned block.
27. The video decoder as claimed in claim 26, wherein the prediction direction includes, but is not limited to, a vertical prediction, a horizontal prediction, a DC prediction and a diagonal prediction.
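For the decoder of claims 21 to 27, the reconstruction path mirrors the encoder: the prediction set is rebuilt from pseudo residues that the decoder can compute itself, added to the decoded second-order residues to recover the first-order residues, and the result is combined with the motion-compensated matching block. The sketch below is only an illustration under the same assumptions as the earlier ones (integer-pel motion, 8-bit grayscale NumPy frames, blocks away from the frame border, vertical direction only); the identifiers are not taken from the patent.

import numpy as np

def decode_block(ref, rec, top, left, size, mv, second_order, mode):
    # rec holds the already-reconstructed part of the current frame.
    dy, dx = mv
    match = ref[top + dy:top + dy + size, left + dx:left + dx + size].astype(np.int32)

    # Pseudo residues from reconstructed border pixels, formed exactly as
    # the encoder formed them.
    p_top = rec[top - 1, left:left + size].astype(np.int32) \
          - ref[top + dy - 1, left + dx:left + dx + size].astype(np.int32)
    if mode == "vertical":
        prediction_set = np.tile(p_top, (size, 1))
    else:                      # no residue prediction for this block
        prediction_set = np.zeros((size, size), dtype=np.int32)

    # First-order residues, then the reconstructed block itself.
    first_order = second_order.astype(np.int32) + prediction_set
    block = np.clip(first_order + match, 0, 255).astype(np.uint8)
    rec[top:top + size, left:left + size] = block
    return block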

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP08878857.5A EP2380354A4 (en) 2008-12-19 2008-12-19 Video processing method and apparatus with residue prediction
PCT/CN2008/073599 WO2010069113A1 (en) 2008-12-19 2008-12-19 Video processing method and apparatus with residue prediction
US13/119,757 US9088797B2 (en) 2008-12-19 2008-12-19 Video processing method and apparatus with residue prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2008/073599 WO2010069113A1 (en) 2008-12-19 2008-12-19 Video processing method and apparatus with residue prediction

Publications (1)

Publication Number Publication Date
WO2010069113A1 true WO2010069113A1 (en) 2010-06-24

Family

ID=42268260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/073599 WO2010069113A1 (en) 2008-12-19 2008-12-19 Video processing method and apparatus with residue prediction

Country Status (3)

Country Link
US (1) US9088797B2 (en)
EP (1) EP2380354A4 (en)
WO (1) WO2010069113A1 (en)


Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
FR2980068A1 (en) * 2011-09-13 2013-03-15 Thomson Licensing METHOD FOR ENCODING AND RECONSTRUCTING A BLOCK OF PIXELS AND CORRESPONDING DEVICES
US11039138B1 (en) * 2012-03-08 2021-06-15 Google Llc Adaptive coding of prediction modes using probability distributions
WO2014075236A1 (en) * 2012-11-14 2014-05-22 Mediatek Singapore Pte. Ltd. Methods for residual prediction with pseudo residues in 3d video coding
KR102517104B1 (en) 2016-02-17 2023-04-04 삼성전자주식회사 Method and apparatus for processing image in virtual reality system
US10356439B2 (en) * 2017-06-29 2019-07-16 Intel Corporation Flexible frame referencing for display transport
CN109889836B (en) * 2019-02-28 2020-09-25 武汉随锐亿山科技有限公司 Method for optimizing energy efficiency of wireless video receiver
WO2020224581A1 (en) 2019-05-05 2020-11-12 Beijing Bytedance Network Technology Co., Ltd. Chroma deblocking harmonization for video coding
EP4320866A1 (en) * 2021-04-09 2024-02-14 InterDigital CE Patent Holdings, SAS Spatial illumination compensation on large areas
CN117939163A (en) * 2024-03-22 2024-04-26 广东工业大学 Compressed sensing video reconstruction method and system based on double-flow feature extraction


Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US6888894B2 (en) * 2000-04-17 2005-05-03 Pts Corporation Segmenting encoding system with image segmentation performed at a decoder and encoding scheme for generating encoded data relying on decoder segmentation
KR100888962B1 (en) * 2004-12-06 2009-03-17 엘지전자 주식회사 Method for encoding and decoding video signal
KR101246915B1 (en) * 2005-04-18 2013-03-25 삼성전자주식회사 Method and apparatus for encoding or decoding moving picture
WO2008056934A1 (en) * 2006-11-07 2008-05-15 Samsung Electronics Co., Ltd. Method of and apparatus for video encoding and decoding based on motion estimation
CN101159875B (en) * 2007-10-15 2011-10-05 浙江大学 Double forecast video coding/decoding method and apparatus

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
EP1478189A2 (en) * 2003-05-16 2004-11-17 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding image using image residue prediction
WO2007078111A1 (en) * 2005-12-30 2007-07-12 Samsung Electronics Co., Ltd. Image encoding and/or decoding system, medium, and method
WO2008100022A1 (en) * 2007-02-14 2008-08-21 Samsung Electronics Co., Ltd. Video encoding method and apparatus and video decoding method and apparatus using residual resizing
CN101325713A (en) * 2007-06-11 2008-12-17 三星电子株式会社 Method and apparatus for encoding and decoding image by using inter color compensation

Non-Patent Citations (1)

Title
See also references of EP2380354A4 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN104782128A (en) * 2012-11-14 2015-07-15 联发科技(新加坡)私人有限公司 Method and apparatus for residual prediction in three-dimensional video coding
CN104782128B (en) * 2012-11-14 2017-10-24 寰发股份有限公司 Method and its device for three-dimensional or multidimensional view Video coding

Also Published As

Publication number Publication date
EP2380354A1 (en) 2011-10-26
EP2380354A4 (en) 2015-12-23
US9088797B2 (en) 2015-07-21
US20110170606A1 (en) 2011-07-14

Similar Documents

Publication Publication Date Title
CN111602399B (en) Improved decoder-side motion vector derivation
US9088797B2 (en) Video processing method and apparatus with residue prediction
US8306120B2 (en) Method and apparatus for predicting motion vector using global motion vector, encoder, decoder, and decoding method
EP1958448B1 (en) Multi-dimensional neighboring block prediction for video encoding
US8503532B2 (en) Method and apparatus for inter prediction encoding/decoding an image using sub-pixel motion estimation
WO2019204297A1 (en) Limitation of the mvp derivation based on decoder-side motion vector derivation
KR101521336B1 (en) Template matching for video coding
EP3603060A1 (en) Decoder-side motion vector derivation
TWI597974B (en) Dynamic image prediction decoding method and dynamic image prediction decoding device
US20070098067A1 (en) Method and apparatus for video encoding/decoding
WO2010001917A1 (en) Image processing device and method
TWI621351B (en) Image prediction decoding device, image prediction decoding method and image prediction decoding program
US9438925B2 (en) Video encoder with block merging and methods for use therewith
US8699576B2 (en) Method of and apparatus for estimating motion vector based on sizes of neighboring partitions, encoder, decoding, and decoding method
CA2706711C (en) Method and apparatus for selecting a coding mode
WO2010035735A1 (en) Image processing device and method
WO2015057570A1 (en) Multi-threaded video encoder
JP2009260421A (en) Moving image processing system, encoding device, encoding method, encoding program, decoding device, decoding method and decoding program
Stankowski et al. Analysis of the Complexity of the HEVC Motion Estimation
EP3994881A1 (en) Motion compensation using combined inter and intra prediction
US20160156905A1 (en) Method and system for determining intra mode decision in h.264 video coding
WO2011099242A1 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
JP2012054618A (en) Moving image encoding apparatus and encoding method, and moving image decoding apparatus and decoding method
WO2023205283A1 (en) Methods and devices for enhanced local illumination compensation
KR20120079561A (en) Apparatus and method for intra prediction encoding/decoding based on selective multi-path predictions

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 08878857; Country of ref document: EP; Kind code of ref document: A1)
WWE WIPO information: entry into national phase (Ref document number: 13119757; Country of ref document: US)
REEP Request for entry into the European phase (Ref document number: 2008878857; Country of ref document: EP)
WWE WIPO information: entry into national phase (Ref document number: 2008878857; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)