WO2007029919A1 - Method and apparatus for encoding and decoding video signal according to directional intra-residual prediction - Google Patents
- Publication number
- WO2007029919A1 (PCT/KR2006/002869)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- residual data
- prediction
- residual
- directional
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/39—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability involving multiple description coding [MDC], i.e. with separate layers being structured as independently decodable descriptions of input picture data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Abstract
A method and apparatus for encoding and decoding a video signal according to directional intra-residual prediction. The video encoding method of the present invention includes calculating first residual data by performing directional intra-prediction on a first block of a base layer with reference to a second block of the base layer, calculating second residual data by performing directional intra-prediction on a third block of an enhancement layer that corresponds to the first block of the base layer with reference to a fourth block of the enhancement layer that corresponds to the second block of the base layer, and encoding the third block according to the directional intra-residual prediction by obtaining third residual data that is a difference between the first residual data and the second residual data.
Description
METHOD AND APPARATUS FOR ENCODING AND DECODING VIDEO SIGNAL ACCORDING TO DIRECTIONAL
INTRA-RESIDUAL PREDICTION
Technical Field
[1] Methods and apparatuses consistent with the present invention relate to video encoding and decoding and, more particularly, to encoding and decoding a video signal according to a directional intra-residual prediction.
Background Art
[2] Since multimedia data that includes text, moving pictures (hereinafter referred to as
'video') and audio is typically large, mass storage media and wide bandwidths are required for storing and transmitting the data. Accordingly, compression coding techniques are required to transmit the multimedia data. Among multimedia compression methods, video compression methods can be classified into lossy/lossless compression, intraframe/interframe compression, and symmetric/asymmetric compression, depending on whether source data is lost, whether compression is independently performed for respective frames, and whether the same time is required for compression and reconstruction, respectively. In the case where frames have diverse resolutions, the corresponding compression is called scalable compression.
[3] The purpose of conventional video coding is to transmit information that is optimized to a given transmission rate. However, in a network video application such as Internet streaming video, the performance of the network is not constant but varies according to circumstances, and thus flexible coding is required in addition to coding optimized to the specified transmission rate.
[4] Scalability refers to the ability of a decoder to selectively decode a base layer and an enhancement layer according to processing conditions and network conditions. In particular, fine granularity scalability (FGS) methods encode the base layer and the enhancement layer, and the enhancement layer may not be transmitted or decoded depending on the network transmission efficiency or the state of the decoder side. Accordingly, data can be properly transmitted according to the network transmission rate.
[5] FIG. 1 illustrates an example of a scalable video codec using a multilayer structure.
In this video codec, the base layer is in the Quarter Common Intermediate Format (QCIF) at 15 Hz (frame rate), the first enhancement layer is in the Common Intermediate Format (CIF) at 30 Hz, and the second enhancement layer is in the SD (Standard Definition) format at 60 Hz. If a CIF 0.5 Mbps stream is required, the bit stream is truncated to obtain a bit rate of 0.5 Mbps based on the first enhancement layer, which has the CIF format, a frame rate of 30 Hz and a bit rate of 0.7 Mbps. In this way, spatial, temporal and SNR scalability can be obtained.
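A minimal sketch of this layer selection and truncation is given below. The layer table, the base-layer and SD bit rates, and the function name are illustrative assumptions; only the CIF 30 Hz / 0.7 Mbps figures come from the example above.

```python
# A minimal sketch of selecting and truncating a scalable stream for a target
# bit rate. Only the CIF 30 Hz / 0.7 Mbps layer matches the text; the other
# rates are assumed for illustration.

LAYERS = [  # (name, format, frame_rate_hz, bit_rate_mbps)
    ("base",          "QCIF", 15, 0.2),   # assumed base-layer rate
    ("enhancement-1", "CIF",  30, 0.7),
    ("enhancement-2", "SD",   60, 1.5),   # assumed second-enhancement rate
]

def truncate_for_target(target_format, target_mbps):
    """Pick the layer that offers the requested format, then truncate its
    fine-granularity enhancement data down to the target bit rate."""
    for name, fmt, fps, mbps in LAYERS:
        if fmt == target_format:
            return {"layer": name, "format": fmt, "fps": fps,
                    "bit_rate_mbps": min(mbps, target_mbps)}
    raise ValueError("no layer offers format " + target_format)

if __name__ == "__main__":
    # The CIF 0.5 Mbps case from the text: start from the 0.7 Mbps CIF layer
    # and truncate to 0.5 Mbps.
    print(truncate_for_target("CIF", 0.5))
```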
[6] As shown in FIG. 1, frames (e.g., 10, 20 and 30) of respective layers, which have the same temporal position, have images similar to one another. Accordingly, a method of predicting the texture of the current layer and encoding the difference between the predicted value and the actual texture value of the current layer has been proposed. In the Scalable Video Model 3.0 of ISO/IEC 21000-13 Scalable Video Coding (hereinafter referred to as 'SVM 3.0'), such a method is called intra-BL prediction.
[7] According to SVM 3.0, in addition to an inter-prediction and a directional intra-prediction used for prediction of blocks or macroblocks that constitute the current frame in the existing H.264, a method of predicting the current block by using the correlation between the current block and a corresponding lower-layer block has been adopted. This prediction method is called an 'intra-BL prediction', and a mode for performing an encoding using such a prediction method is called an 'intra-BL mode'.
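To make the intra-BL idea concrete, the sketch below predicts a current-layer block from the co-located base-layer block and keeps only the difference. The pixel-repetition upsampling, the sample values and the function names are assumptions for illustration; SVM 3.0 defines its own upsampling filters.

```python
# A minimal sketch of intra-BL prediction: predict the current-layer block from
# the upsampled co-located base-layer block and keep only the residual.
# Nearest-neighbour upsampling and the sample values are illustrative only.

def upsample2x(block):
    """Nearest-neighbour 2x upsampling of a 2-D list of samples."""
    out = []
    for row in block:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def intra_bl_residual(current_block, base_block):
    """Residual of the current block against the upsampled base-layer block."""
    prediction = upsample2x(base_block)
    return [[c - p for c, p in zip(crow, prow)]
            for crow, prow in zip(current_block, prediction)]

if __name__ == "__main__":
    base = [[10, 12],
            [14, 16]]                    # co-located 2x2 base-layer block
    cur = [[11, 11, 13, 13],
           [11, 11, 13, 13],
           [15, 15, 17, 17],
           [15, 15, 17, 17]]             # 4x4 current-layer block
    print(intra_bl_residual(cur, base))  # only small residual values remain
```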
[8] FIG. 2 is a view schematically explaining the above-described three prediction methods. First (①), intra-prediction with respect to a certain macroblock 14 of the current frame 11 is performed; second (②), inter-prediction using a frame 12 that is at a temporal position different from that of the current frame 11 is performed; and third (③), intra-BL prediction is performed using texture data for an area 16 of a base-layer frame 13 that corresponds to the macroblock 14.
Disclosure of Invention
Technical Problem
[9] In the temporal inter-prediction, the compression efficiency is increased by encoding residual data that is obtained as the difference between the result of the prediction and the video to be encoded. In addition, the compression efficiency can be heightened further by reducing the amount of data to be encoded, that is, by obtaining the difference between the residual data of the two layers. Consequently, a method and an apparatus for compressing the residual data in the directional intra-prediction are required.
Technical Solution
[10] Accordingly, the present invention has been made to address the above-mentioned problems occurring in the prior art, and an aspect of the present invention is to reduce the size of data to be encoded by obtaining residual data of an enhancement layer based on directional intra-prediction data of a base layer.
[11] Another aspect of the present invention is to reduce the amount of data to be encoded and to increase the compression efficiency while performing intra-prediction by reducing the size of symbols to be allocated to directional information that exists in
directional intra-prediction data.
[12] Additional aspects of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
[13] In order to accomplish these aspects, there is provided a method of encoding a video signal according to a directional intra-residual prediction, according to the present invention, which includes calculating first residual data by performing directional intra-prediction on a first block of a base layer with reference to a second block of the base layer; calculating second residual data by performing directional intra-prediction on a third block of an enhancement layer that corresponds to the first block of the base layer with reference to a fourth block of the enhancement layer that corresponds to the second block of the base layer; and encoding the third block according to the directional intra-residual prediction by obtaining third residual data that is a difference between the first residual data and the second residual data.
[14] In another aspect of the present invention, there is provided a method of decoding a video signal according to a directional intra-residual prediction, which includes extracting third residual data that is directional intra-residual prediction data on a third block of an enhancement layer from an enhancement-layer residual stream; extracting first residual data that is the result of performing directional intra-prediction on a first block of a base layer corresponding to the third block from a base-layer residual stream; calculating second residual data that is the result of performing directional intra-prediction on the third block by adding the third residual data and the first residual data; and restoring the third block using the second residual data.
[15] In still another aspect of the present invention, there is provided a video encoder for encoding a video signal according to a directional intra-residual prediction, which includes a base-layer intra-prediction unit calculating first residual data by performing directional intra-prediction on a first block of a base layer with reference to a second block of the base layer; an enhancement-layer intra-prediction unit calculating second residual data by performing directional intra-prediction on a third block of an enhancement layer that corresponds to the first block of the base layer with reference to a fourth block of the enhancement layer that corresponds to the second block of the base layer; and a residual encoding unit encoding the third block according to the directional intra-residual prediction by obtaining third residual data that is a difference between the first residual data and the second residual data.
[16] In still another aspect of the present invention, there is provided a video decoder for decoding a video signal according to a directional intra-residual prediction, which includes a residual decoding unit extracting third residual data that is directional intra-residual prediction data on a third block of an enhancement layer from an enhancement-
layer residual stream; a base-layer residual decoding unit extracting first residual data that is the result of performing directional intra-prediction on a first block of a base layer corresponding to the third block from a base-layer residual stream; an enhancement-layer residual decoding unit calculating second residual data that is the result of performing directional intra-prediction on the third block by adding the third residual data and the first residual data; and an enhancement-layer decoding unit restoring the third block using the second residual data.
Description of Drawings
[17] The above and other aspects of the present invention will become more apparent from the following detailed description of exemplary embodiments taken in conjunction with the accompanying drawings, in which:
[18] FIG. 1 is a view illustrating an example of a scalable video codec using a multilayer structure;
[19] FIG. 2 is a view schematically explaining three prediction methods;
[20] FIG. 3 is a view explaining a process of obtaining a difference between residual data generated by performing intra-prediction on an enhancement layer and a base layer, respectively;
[21] FIG. 4 is a view illustrating a residual difference mechanism of directional intra-prediction according to an exemplary embodiment of the present invention;
[22] FIGS. 5A and 5B are views explaining existing intra-prediction directions and extended intra-prediction directions according to an exemplary embodiment of the present invention;
[23] FIG. 6 is a view explaining relations among blocks which are referred to based on the extended intra-prediction according to an exemplary embodiment of the present invention;
[24] FIG. 7 is a view explaining a process of decoding video data according to directional intra-residual prediction according to an exemplary embodiment of the present invention;
[25] FIG. 8 is a flowchart illustrating an encoding process according to directional intra-residual prediction according to an exemplary embodiment of the present invention;
[26] FIG. 9 is a flowchart illustrating a decoding process according to directional intra-residual prediction according to an exemplary embodiment of the present invention;
[27] FIG. 10 is a block diagram illustrating the construction of a video encoder according to an exemplary embodiment of the present invention; and
[28] FIG. 11 is a block diagram illustrating the construction of a video decoder according to an exemplary embodiment of the present invention.
Mode for Invention
[29] Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. The aspects and features of the present invention and methods for achieving the aspects and features will become apparent by referring to the exemplary embodiments to be described in detail with reference to the accompanying drawings. However, the present invention is not limited to the exemplary embodiments disclosed hereinafter, but can be implemented in diverse forms. The matters defined in the description, such as the detailed construction and elements, are nothing but specific details provided to assist those of ordinary skill in the art in a comprehensive understanding of the invention, and the present invention is only defined within the scope of the appended claims. In the entire description of the present invention, the same drawing reference numerals are used for the same elements across various figures.
[30] Exemplary embodiments of the present invention will be described with reference to the accompanying drawings illustrating block diagrams and flowcharts for explaining a method and apparatus for encoding and decoding a video signal according to directional intra-residual prediction according to the present invention. It will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks.
[31] FIG. 3 is a view explaining a process of obtaining a difference between residual data generated by performing directional intra-prediction on an enhancement layer and on a base layer, respectively. Residual data (Rb) 102, generated by performing directional intra-prediction on a base layer, contains the difference between the predicted block and the original block 101 to be encoded. In the case of the directional intra-prediction, the residual data (Rb) 102 includes information on the directionality to be referred to.
[32] Residual data (Rc) 112 is generated by performing directional intra-prediction on a block 111 of an enhancement layer. The residual data (Rc) 112 includes directional information required to refer to a block located in a specified direction for the directional intra-prediction. A decoder side performs a block restoration by selecting a block or pixel to be referred to according to such directional information.
[33] In the base layer and the enhancement layer, there is a high possibility that the residual data according to the intra-prediction of the base layer is similar to the residual data according to the intra-prediction of the enhancement layer. Accordingly, the coding efficiency can be improved by encoding residual prediction data (R) 120 obtained from the difference between the residual data Rb of the base layer and the residual data Rc of the enhancement layer, rather than by encoding the residual data Rc obtained by performing the directional intra-prediction on the block of the enhancement layer as it is.
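The following sketch restates this residual-of-residual idea in code: Rb and Rc are the intra-prediction residuals of the base and enhancement layers, and only their difference R is passed on for encoding. The helper names and sample block values are illustrative assumptions.

```python
# A minimal sketch of the idea in FIG. 3: rather than encoding the
# enhancement-layer residual Rc directly, encode R = Rc - Rb.
# All block values below are illustrative assumptions.

def residual(block, prediction):
    """Element-wise difference between an original block and its prediction."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(block, prediction)]

def residual_prediction(rc, rb):
    """Directional intra-residual prediction data: R = Rc - Rb."""
    return [[c - b for c, b in zip(crow, brow)]
            for crow, brow in zip(rc, rb)]

if __name__ == "__main__":
    base_block, base_pred = [[20, 22], [24, 26]], [[19, 21], [23, 25]]
    enh_block, enh_pred = [[40, 44], [48, 52]], [[39, 43], [47, 51]]

    rb = residual(base_block, base_pred)  # base-layer intra residual (Rb)
    rc = residual(enh_block, enh_pred)    # enhancement-layer intra residual (Rc)
    r = residual_prediction(rc, rb)       # data actually encoded (R)
    print(rb, rc, r)                      # R is near zero when Rb and Rc are similar
```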
[34] FIG. 4 is a view illustrating a residual difference mechanism of a directional intra-prediction according to an exemplary embodiment of the present invention. In a specified frame or slice 150 of the enhancement layer, a block or pixel 151 to be encoded exists. Also, a block or pixel 141 exists in a frame 140 of the base layer. Residual data (Rb) 143 is generated by performing the directional intra-prediction on the block or pixel 141 of the base layer with reference to a block (or pixel) 142. In this case, the residual data includes directional information 145 for referring to the block 142.
[35] Meanwhile, residual data (Rc) 153 is generated by performing the directional intra-prediction on the block or pixel 151 of the enhancement layer that corresponds to the block 141 of the base layer with reference to a block 152 that corresponds to the block 142 of the base layer. The residual data includes directional information 155 for referring to the block 152. Here, the directional information 155 whereby the block 151 refers to the block 152 is the same as or similar to the directional information 145 whereby the block 141 refers to the block 142. This is because their relative locations are the same or similar to each other. Also, there is a high probability that their texture residuals are similar to each other. In order to remove this redundancy, directional intra-residual prediction may be performed by obtaining residual prediction data (R) 156 that is the difference between the residual data (Rc) 153 and the residual data (Rb) 143. In the case of the directional information, a directional intra-residual prediction can be performed by obtaining the difference 168 between the directional information 145 according to the intra-prediction of the base layer and the directional information 155 according to the intra-prediction of the enhancement layer.
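The same differencing can be applied to the direction indices themselves, as described above for items 145, 155 and 168. In the sketch below, the H.264-style mode numbering and the function names are assumptions used only for illustration.

```python
# A minimal sketch of differencing the directional information in FIG. 4: the
# enhancement-layer direction is sent as a delta against the base-layer
# direction, which is zero whenever the two layers chose the same direction.
# The H.264-style mode numbering (0-8, mode 2 = DC) is assumed for illustration.

def direction_delta(base_mode, enh_mode):
    """Difference (item 168) between enhancement- and base-layer directions."""
    return enh_mode - base_mode

def restore_direction(base_mode, delta):
    """Decoder side: recover the enhancement-layer direction from the delta."""
    return base_mode + delta

if __name__ == "__main__":
    base_dir = 0   # e.g. vertical prediction chosen in the base layer
    enh_dir = 0    # the co-located enhancement-layer block usually agrees
    delta = direction_delta(base_dir, enh_dir)
    assert restore_direction(base_dir, delta) == enh_dir
    print("direction delta to encode:", delta)  # 0 -> very cheap to code
```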
[36] In performing the directional intra-residual prediction as illustrated in FIGS. 3 and
4, a multilayer intra-prediction may be performed by a residual prediction of a directional intra-prediction mode. It can be recognized using a residual prediction flag whether the enhancement layer will refer to the directional intra-prediction information
of the base layer. Also, it can be recognized using a base-layer flag (blflag) whether the direction of the base layer has been reused in the enhancement layer. For example, if the base-layer flag (blflag) is '1', the directional information of the base layer can be reused. If the directional information of the base layer is different from the directional information of the enhancement layer, it can be used after the directional information is adjusted according to a qpel flag.
[37] In this case, the residual prediction flag used in the temporal inter-prediction and the qpel flag can be used.
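A minimal sketch of how these flags might steer the decoder's choice of directional information is given below. The flag names follow the text, but the decision logic, the offset parameter and the function name are assumptions for illustration, not normative syntax.

```python
# A sketch of flag-driven direction selection at the decoder. The flag names
# come from the text; this exact decision logic is an illustrative assumption.

def select_direction(residual_prediction_flag, blflag, qpel_adjust,
                     base_dir, coded_dir):
    """Return the prediction direction to use for the enhancement-layer block."""
    if not residual_prediction_flag:
        # No residual prediction: the direction is coded independently.
        return coded_dir
    if blflag:
        # The base-layer direction is reused as-is.
        return base_dir
    # The layers disagree: adjust the base-layer direction (here by a signalled
    # offset standing in for the qpel-flag-driven adjustment described above).
    return base_dir + qpel_adjust

if __name__ == "__main__":
    print(select_direction(True, True, 0, base_dir=4, coded_dir=0))   # -> 4
    print(select_direction(True, False, 1, base_dir=4, coded_dir=0))  # -> 5
    print(select_direction(False, False, 0, base_dir=4, coded_dir=7)) # -> 7
```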
[38] On the other hand, the directions of the existing intra-prediction can be extended in order to perform the directional intra-residual prediction as illustrated in FIGS. 3 and 4. In this case, a more accurate directional prediction is performed, and the difference between the directional predictions in the enhancement layer that refers to the corresponding directional prediction becomes small, so that the encoding efficiency of the directional intra-prediction result can be heightened.
[39] FIGS. 5A and 5B are views explaining existing intra-prediction directions and extended intra-prediction directions according to an exemplary embodiment of the present invention, respectively.
[40] The directional intra-prediction proposed in the H.264 specifications has 9 intra-prediction directions, comprising 8 directions as illustrated in the drawing plus DC. The extended directional intra-prediction proposed according to the exemplary embodiment of the present invention has 7 additional intra-prediction directions, and thus the total number of intra-prediction directions becomes 16. By adding information on the intra-BL 4x4 mode to the 16 directions, the number of intra-prediction directions becomes 17 in total. According to the extended intra-prediction proposed according to the exemplary embodiment of the present invention, information that can hardly be represented by the existing directionality is represented through the extended directionality, and thus the performance of the intra-prediction is improved. As a result, the intra-prediction can be applied in the case where the intra-BL for the base layer fails to have a high compression rate due to the difference in resolution or quantization size between the base layer and the enhancement layer.
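The mode counting above can be tabulated as in the sketch below; the labels of the added intermediate modes are hypothetical, since the drawing that defines them is not reproduced here.

```python
# A minimal sketch of the extended mode table: 9 H.264 intra modes (8 directions
# plus DC), 7 added intermediate directions, and one intra-BL mode -> 17 modes.
# The names given to the 7 added modes are hypothetical placeholders.

H264_MODES = ["vertical", "horizontal", "DC", "diagonal-down-left",
              "diagonal-down-right", "vertical-right", "horizontal-down",
              "vertical-left", "horizontal-up"]             # 9 modes (0-8)

EXTENDED_MODES = ["intermediate-%d" % i for i in range(7)]  # 7 added directions

ALL_DIRECTIONAL_MODES = H264_MODES + EXTENDED_MODES         # 16 modes
ALL_MODES = ALL_DIRECTIONAL_MODES + ["intra-BL-4x4"]        # 17 modes in total

if __name__ == "__main__":
    print(len(H264_MODES), len(ALL_DIRECTIONAL_MODES), len(ALL_MODES))  # 9 16 17
```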
[41] FIG. 6 is a view explaining relations among blocks that are referred to based on the extended intra-prediction as described above according to an exemplary embodiment of the present invention. Reference numeral 170 shows blocks that are referred to for the intra-prediction in the conventional H.264. According to the extended intra-prediction, adjacent blocks indicated by reference numeral 180 are referred to according to the extended intra-prediction directions as shown in FIG. 5B. In this case, it is necessary to apply weights to adjacent pixels. Blocks 181, 182, 183, 184, 185, 186, and 187 show the relations among the adjacent pixels that are referred to during
the extended intra-prediction. The blocks as illustrated in FIG. 6 include subblocks.
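For a direction that falls between two of the original prediction directions, the reference sample can be formed as a weighted combination of the two adjacent reference pixels, which is the weighting that FIG. 6 alludes to. The 1:1 weights, the integer rounding and the function name in the sketch below are assumptions for illustration.

```python
# A minimal sketch of forming a reference sample for an intermediate ("third")
# direction by weighting the two adjacent reference pixels. The weights and the
# integer rounding used here are illustrative assumptions.

def weighted_reference(pixel_a, pixel_b, w_a=1, w_b=1):
    """Weighted average of the two reference pixels adjacent to the new direction."""
    return (w_a * pixel_a + w_b * pixel_b + (w_a + w_b) // 2) // (w_a + w_b)

if __name__ == "__main__":
    # Reference pixels lying on the two neighbouring H.264 directions.
    p_dir1, p_dir2 = 100, 110
    print(weighted_reference(p_dir1, p_dir2))         # 105: equal weighting
    print(weighted_reference(p_dir1, p_dir2, 3, 1))   # 103: biased towards direction 1
```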
[42] FIG. 7 is a view explaining a process of decoding video data according to a directional intra-residual prediction according to an exemplary embodiment of the present invention. Residual prediction data (R) 256 and residual data (Rb) 243 are included in an enhancement-layer bitstream and a base-layer bitstream, respectively. (R) 256 includes the result of subtracting the residual data of the base layer from the residual data of the enhancement layer according to the directional intra-residual prediction. Also, (R) 256 includes a difference value 268 between the directionality of the enhancement layer and the directionality 245 of the base layer. Residual data (Rc) 253 for the directional intra-prediction on the enhancement layer can be restored by adding (Rb) 243 and (R) 256. The residual data 253 also includes information on the directionality 255. A block 241 of a base-layer frame 240 can be restored by performing the decoding in accordance with the typical directional intra-prediction using (Rb) 243. The block 241 refers to a block 242. A block 251 of an enhancement-layer frame 250 can be restored through a restoration process using (Rc) 253. The block 251 refers to a block 252.
[43] FIG. 8 is a flowchart illustrating an encoding process according to a directional intra-residual prediction according to an exemplary embodiment of the present invention.
[44] First, directional intra-prediction is performed on the base layer (S301). That is, as illustrated in FIG. 4, the directional intra-prediction is performed on the first block (141 in FIG. 4) of the base layer with reference to the second block (142 in FIG. 4) in the same frame as the first block of the base layer. Then, the residual data Rb (143 in FIG. 4) is calculated as the result of the prediction (S302).
[45] Meanwhile, a directional intra-prediction is performed on the enhancement layer
(S303). That is, the directional intra-prediction is performed on the third block (151 in FIG. 4) of the enhancement layer that corresponds to the first block (141 in FIG. 4) of the base layer with reference to the fourth block (152 in FIG. 4) of the enhancement layer that corresponds to the second block (142 in FIG. 4) of the base layer. Then, the residual data Rc (153 in FIG. 4) is calculated as the result of the prediction (S304).
[46] The directional intra-residual prediction data R (156 in FIG. 4) on the enhancement layer is generated by calculating Rc - Rb (S305). Then, the residual data R is encoded and then transmitted to the decoder side (S306).
[47] The above-described extended directional intra-prediction can be performed based on the third direction that exists between two adjacent directions used for the conventional directional intra-prediction.
[48] FIG. 9 is a flowchart illustrating a decoding process according to a directional intra-residual prediction according to an exemplary embodiment of the present
invention. The decoding process will now be explained with reference to FIGS. 7 and 9.
[49] The residual data R (256 in FIG. 7) that is the result of the directional intra-residual prediction is decoded (S321). Also, the residual data Rb (243 in FIG. 7) that is the result of the intra-prediction performed on the block (241 in FIG. 7) of the base layer, which the block (251 in FIG. 7) to be finally restored through the residual data R refers to, is extracted (S322). Then, the residual data Rc (253 in FIG. 7) that is the result of the intra-prediction on the enhancement layer is calculated by adding Rb and R (S324). Then, the data of the enhancement layer is restored using Rc (S325).
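The steps S321 to S325 can be condensed into the following sketch: the extracted residual R is added back to the base-layer residual Rb to recover Rc, and the enhancement-layer block is then restored from its directional prediction. The predictor block, the sample values and the helper names are illustrative assumptions.

```python
# A minimal sketch of the decoding flow in FIG. 9 (S321-S325).
# Sample values and helper names are illustrative assumptions.

def block_add(a, b):
    """Element-wise sum of two equally sized 2-D blocks."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def decode_directional_intra_residual(r, rb, reference_prediction):
    rc = block_add(rb, r)                              # S324: Rc = Rb + R
    third_block = block_add(reference_prediction, rc)  # S325: restore the block
    return third_block

if __name__ == "__main__":
    r = [[0, 0], [0, 0]]                   # S321: decoded residual prediction data R
    rb = [[1, 1], [1, 1]]                  # S322: extracted base-layer residual Rb
    reference_pred = [[59, 61], [63, 65]]  # prediction from the reference block
    print(decode_directional_intra_residual(r, rb, reference_pred))
    # -> [[60, 62], [64, 66]]: the restored enhancement-layer block
```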
[50] As described above, the residual data can be precisely predicted by performing the extended directional intra-prediction based on the third direction that exists between two adjacent directions used for the conventional directional intra-prediction.
[51] FIG. 10 is a block diagram illustrating the construction of a video encoder according to an exemplary embodiment of the present invention.
[52] Referring to FIG. 10, the video encoder 300 includes an enhancement-layer intra-prediction unit 320 for generating a residual stream for the enhancement-layer data, a residual encoding unit 330, a quantization unit 340, an entropy coding unit 350, a base-layer intra-prediction unit 310 for generating a residual stream for the base-layer data, a base-layer quantization unit 345, and a base-layer entropy coding unit 355.
[53] Referring to FIG. 4, the base-layer intra-prediction unit 310 performs the directional intra-prediction on the first block (141 in FIG. 4) of the base layer with reference to the second block (142 in FIG. 4) in the same frame as the first block of the base layer, so that the residual data Rb (143 in FIG. 4) is generated. This residual data is encoded through the base-layer quantization unit 345 and the base-layer entropy coding unit 355, and then the encoded residual data is transmitted to the decoder side.
[54] Meanwhile, the enhancement-layer intra-prediction unit 320 performs the directional intra-prediction on the third block (151 in FIG. 4) that corresponds to the first block (141 in FIG. 4) of the base layer. In this case, the fourth block (152 in FIG. 4) of the enhancement layer that corresponds to the second block (142 in FIG. 4) becomes the reference block. As the result of performing the directional intra-prediction, the residual data Rc is generated.
[55] The residual encoding unit 330 generates R, which is the result of the directional intra-residual prediction, by obtaining the difference between Rc and Rb. The value R is encoded through the quantization unit 340 and the entropy coding unit 350.
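The data flow through these encoder units can be summarised as in the sketch below; the quantization and entropy-coding stages are reduced to trivial placeholders since, as noted next, they follow the conventional encoder, and all function names and block values are assumptions.

```python
# A minimal sketch of the data flow through the encoder units of FIG. 10.
# Quantization (units 340/345) and entropy coding (units 350/355) are reduced
# to trivial placeholders; all names and sample values are illustrative.

def intra_prediction_residual(block, reference_prediction):   # units 310/320
    return [[b - p for b, p in zip(rb, rp)]
            for rb, rp in zip(block, reference_prediction)]

def residual_encoding(rc, rb):                                 # unit 330
    return [[c - b for c, b in zip(rc_row, rb_row)]
            for rc_row, rb_row in zip(rc, rb)]

def quantize(block, step=1):                                   # units 340/345
    return [[v // step for v in row] for row in block]

def entropy_code(block):                                       # units 350/355
    return bytes(v & 0xFF for row in block for v in row)

if __name__ == "__main__":
    rb = intra_prediction_residual([[30, 31], [32, 33]], [[29, 30], [31, 32]])
    rc = intra_prediction_residual([[60, 62], [64, 66]], [[59, 61], [63, 65]])
    base_stream = entropy_code(quantize(rb))                   # base-layer stream
    enh_stream = entropy_code(quantize(residual_encoding(rc, rb)))
    print(base_stream, enh_stream)   # the enhancement-layer payload is all zeros
```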
[56] Since the quantization process and the entropy coding as illustrated in FIG. 10 have also been used in the conventional video encoder, the detailed explanation thereof will be omitted.
[57] In the case of applying the above-described extended directional intra-prediction, the enhancement-layer intra-prediction unit 320 and the base-layer intra-prediction unit 310 can perform the directional intra-prediction based on the third direction that exists between two adjacent directions used for the conventional directional intra-prediction.
[58] FIG. 11 is a block diagram illustrating the construction of a video decoder according to an exemplary embodiment of the present invention.
[59] Referring to FIG. 11, the video decoder 600 includes a residual decoding unit 610 for restoring the enhancement-layer residual stream to the enhancement-layer video data, an enhancement-layer residual decoding unit 620, and an enhancement-layer decoding unit 640. The video decoder also includes a base-layer residual decoding unit 630 and a base-layer decoding unit 650.
[60] The residual decoding unit 610 extracts the residual data R (256 in FIG. 7) that is the directional intra-residual prediction data on the third block (251 in FIG. 7) of the enhancement layer. The base-layer residual decoding unit 630 extracts the residual data Rb, which is the result of performing the directional intra-prediction on the first block (241 in FIG. 7) corresponding to the third block, from the base-layer residual stream.
[61] The enhancement-layer residual decoding unit 620 calculates the residual data Rc that is the result of performing the directional intra-prediction on the third block (251 in FIG. 7) by adding R and Rb. The calculated residual data is input to the enhancement-layer decoding unit 640 so as to be restored to the video data.
[62] The base-layer decoding unit 650 also restores the video data using the residual data Rb.
[63] Since the restoration process as illustrated in FIG. 11 has also been used in the conventional video decoder, the detailed explanation thereof will be omitted.
[64] In the case of applying the above-described extended directional intra-prediction, the enhancement-layer decoding unit 640 and the base-layer decoding unit 650 can restore the video data based on the third direction that exists between the two adjacent directions used for the conventional directional intra-prediction.
Industrial Applicability
[65] As described above, according to the exemplary embodiments of the present invention, the decoding can be efficiently performed without changing the multi-loop decoding process.
[66] Also, in the case of performing the directional intra-prediction on the enhancement layer, the size of the coded symbols allocated to the directional information can be markedly reduced, and the directional information can be adjusted with reference to the directional information of the base layer.
[67] The residual prediction flag and the base-layer flag currently used in the temporal inter-prediction can also be used in the directional intra-prediction.
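As a hedged illustration of the signalling idea in paragraphs [66] and [67], the directional information of the enhancement layer may be coded as a reuse flag plus, where needed, a small adjustment relative to the base-layer direction. The flag and field names below are purely illustrative assumptions and do not correspond to actual syntax elements of any standard.

```python
def encode_direction(base_dir: int, enh_dir: int) -> dict:
    """Signal the enhancement-layer direction relative to the base-layer direction."""
    if enh_dir == base_dir:
        return {"reuse_base_dir_flag": 1}                            # no direction bits needed
    return {"reuse_base_dir_flag": 0, "dir_delta": enh_dir - base_dir}

def decode_direction(base_dir: int, symbols: dict) -> int:
    """Recover the enhancement-layer direction from the signalled symbols."""
    if symbols["reuse_base_dir_flag"]:
        return base_dir
    return base_dir + symbols["dir_delta"]

assert decode_direction(4, encode_direction(4, 4)) == 4   # direction reused
assert decode_direction(4, encode_direction(4, 6)) == 6   # small adjustment sent
```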
[68] The exemplary embodiments of the present invention have been described for illustrative purposes, and those skilled in the art will appreciate that various modifications, additions and substitutions are possible without departing from the scope and spirit of the invention as disclosed in the accompanying claims. Therefore, the scope of the present invention is defined by the appended claims and their legal equivalents.
Claims
[1] A method of encoding a video signal according to directional intra-residual prediction, comprising: calculating first residual data by performing directional intra-prediction on a first block of a base layer with reference to a second block of the base layer; calculating second residual data by performing directional intra-prediction on a third block of an enhancement layer that corresponds to the first block of the base layer with reference to a fourth block of the enhancement layer that corresponds to the second block of the base layer; and encoding the third block according to the directional intra-residual prediction by obtaining third residual data that is a difference between the first residual data and the second residual data.
[2] The method of claim 1, wherein the encoding of the third block comprises encoding of the third residual data.
[3] The method of claim 1, wherein the first residual data is a difference value between the first block and the second block, and comprises directional information of the second block.
[4] The method of claim 1, wherein the second residual data is a difference value between the third block and the fourth block, and comprises directional information of the fourth block.
[5] The method of claim 1, wherein the third residual data comprises a difference between directional information included in the first residual data and directional information included in the second residual data.
[6] The method of claim 1, wherein the third residual data comprises information on a residual prediction mode that indicates the directional intra-residual prediction.
[7] The method of claim 1, wherein the third residual data comprises flag information that indicates whether to reuse directional information included in the first residual data.
[8] The method of claim 1, wherein the third residual data comprises flag information that indicates whether to adjust and use directional information included in the first residual data.
[9] The method of claim 1, wherein at least one of the second block and the fourth block exists in a third direction which is between a first direction and a second direction, which are adjacent to each other, for use in the directional intra-prediction.
[10] The method of claim 9, wherein the first and second directions are determined according to the H.264 intra-prediction standard.
[11] A method of decoding a video signal according to directional intra-residual prediction, comprising: extracting third residual data that is directional intra-residual prediction data on a third block of an enhancement layer from an enhancement-layer residual stream; extracting first residual data that is a result of performing directional intra-prediction on a first block of a base layer corresponding to the third block from a base-layer residual stream; calculating second residual data that is a result of performing directional intra-prediction on the third block by adding the third residual data and the first residual data; and restoring the third block using the second residual data.
[12] The method of claim 11, wherein the first residual data is a difference value between the first block and a second block that the first block refers to according to the directional intra-prediction, and comprises directional information of the second block.
[13] The method of claim 11, wherein the second residual data is a difference value between the third block and a fourth block that the third block refers to according to the directional intra-prediction, and comprises directional information of the fourth block.
[14] The method of claim 11, wherein the third residual data comprises a difference between directional information included in the first residual data and directional information included in the second residual data.
[15] The method of claim 11, wherein the third residual data comprises information on a residual prediction mode that indicates the directional intra-residual prediction.
[16] The method of claim 11, wherein the third residual data comprises flag information that indicates whether to reuse directional information included in the first residual data.
[17] The method of claim 11, wherein the third residual data comprises flag information that indicates whether to adjust and use directional information included in the first residual data.
[18] The method of claim 11, wherein the first residual data and the second residual data are obtained with reference to a second block and a fourth block, respectively, and wherein at least one of the second block and the fourth block exists in a third direction which is between a first direction and a second direction, which are adjacent to each other, for use in the directional intra-prediction.
[19] The method of claim 18, wherein the first and second directions are determined according to the H.264 intra-prediction standard.
[20] A video encoder for encoding a video signal according to a directional intra-residual prediction, the video encoder comprising: a base-layer intra-prediction unit which calculates first residual data by performing directional intra-prediction on a first block of a base layer with reference to a second block of the base layer; an enhancement-layer intra-prediction unit which calculates second residual data by performing directional intra-prediction on a third block of an enhancement layer which corresponds to the first block of the base layer with reference to a fourth block of the enhancement layer which corresponds to the second block of the base layer; and a residual encoding unit which encodes the third block according to the directional intra-residual prediction by obtaining third residual data that is a difference between the first residual data and the second residual data.
[21] The video encoder of claim 20, wherein, in encoding the third block, the residual encoding unit is configured to encode the third residual data.
[22] The video encoder of claim 20, wherein the first residual data is a difference value between the first block and the second block, and comprises directional information of the second block.
[23] The video encoder of claim 20, wherein the second residual data is a difference value between the third block and the fourth block, and comprises directional information of the fourth block.
[24] The video encoder of claim 20, wherein the third residual data comprises a difference between directional information included in the first residual data and directional information included in the second residual data.
[25] The video encoder of claim 20, wherein the third residual data comprises information on a residual prediction mode that indicates the directional intra-residual prediction.
[26] The video encoder of claim 20, wherein the third residual data comprises flag information that indicates whether to reuse directional information included in the first residual data.
[27] The video encoder of claim 20, wherein the third residual data comprises flag information that indicates whether to adjust and use directional information included in the first residual data.
[28] The video encoder of claim 20, wherein at least one of the second block and the fourth block exists in a third direction that is between a first direction and a second direction, which are adjacent to each other, for use in the directional intra-prediction.
[29] The video encoder of claim 28, wherein the first and second directions are determined according to the H.264 intra-prediction standard.
[30] A video decoder for decoding a video signal according to a directional intra-residual prediction, comprising: a residual decoding unit which extracts third residual data that is directional intra-residual prediction data on a third block of an enhancement layer from an enhancement-layer residual stream; a base-layer residual decoding unit which extracts first residual data that is a result of performing directional intra-prediction on a first block of a base layer corresponding to the third block from a base-layer residual stream; an enhancement-layer residual decoding unit which calculates second residual data that is a result of performing directional intra-prediction on the third block by adding the third residual data and the first residual data; and an enhancement-layer decoding unit which restores the third block using the second residual data.
[31] The video decoder of claim 30, wherein the first residual data is a difference value between the first block and a second block that the first block refers to according to the directional intra-prediction, and comprises directional information of the second block.
[32] The video decoder of claim 30, wherein the second residual data is a difference value between the third block and a fourth block that the third block refers to according to the directional intra-prediction, and comprises directional information of the fourth block.
[33] The video decoder of claim 30, wherein the third residual data comprises a difference between directional information included in the first residual data and directional information included in the second residual data.
[34] The video decoder of claim 30, wherein the third residual data comprises information on a residual prediction mode that indicates the directional intra-residual prediction.
[35] The video decoder of claim 30, wherein the third residual data comprises flag information that indicates whether to reuse directional information included in the first residual data.
[36] The video decoder of claim 30, wherein the third residual data comprises flag information that indicates whether to adjust and use directional information included in the first residual data.
[37] The video decoder of claim 30, wherein the first residual data and the second residual data are obtained with reference to the second block and the fourth block, respectively, and wherein at least one of the second block and the fourth block exists in a third direction that is between a first direction and a second direction, which are adjacent to each other, for use in the directional intra-prediction.
[38] The video decoder of claim 37, wherein the first and second directions are determined according to the H.264 intra-prediction standard.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2006800265650A CN101228796B (en) | 2005-07-21 | 2006-07-21 | Method and apparatus for encoding and decoding video signal according to directional intra-residual prediction |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US70103705P | 2005-07-21 | 2005-07-21 | |
US60/701,037 | 2005-07-21 | ||
US70229505P | 2005-07-26 | 2005-07-26 | |
US60/702,295 | 2005-07-26 | ||
KR1020050110927A KR100725407B1 (en) | 2005-07-21 | 2005-11-18 | Method and apparatus for video signal encoding and decoding with directional intra residual prediction |
KR10-2005-0110927 | 2005-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007029919A1 (en) | 2007-03-15 |
Family
ID=37836013
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2006/002869 WO2007029919A1 (en) | 2005-07-21 | 2006-07-21 | Method and apparatus for encoding and decoding video signal according to directional intra-residual prediction |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2007029919A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140140392A1 (en) * | 2012-11-16 | 2014-05-22 | Sony Corporation | Video processing system with prediction mechanism and method of operation thereof |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6614936B1 (en) * | 1999-12-03 | 2003-09-02 | Microsoft Corporation | System and method for robust video coding using progressive fine-granularity scalable (PFGS) coding |
US20040101052A1 (en) * | 2002-09-17 | 2004-05-27 | Lg Electroncis Inc. | Fine granularity scalability encoding/decoding apparatus and method |
EP1442602A1 (en) * | 2001-10-26 | 2004-08-04 | Koninklijke Philips Electronics N.V. | Spatial scalable compression scheme using adaptive content filtering |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8111745B2 (en) | Method and apparatus for encoding and decoding video signal according to directional intra-residual prediction | |
EP2428042B1 (en) | Scalable video coding method, encoder and computer program | |
US8687707B2 (en) | Method and apparatus for encoding/decoding using extended macro-block skip mode | |
JP4991699B2 (en) | Scalable encoding and decoding methods for video signals | |
US20070019726A1 (en) | Method and apparatus for encoding and decoding video signal by extending application of directional intra-prediction | |
KR100888962B1 (en) | Method for encoding and decoding video signal | |
US8155181B2 (en) | Multilayer-based video encoding method and apparatus thereof | |
US8351502B2 (en) | Method and apparatus for adaptively selecting context model for entropy coding | |
KR100885443B1 (en) | Method for decoding a video signal encoded in inter-layer prediction manner | |
US20070086516A1 (en) | Method of encoding flags in layer using inter-layer correlation, method and apparatus for decoding coded flags | |
US10070140B2 (en) | Method and apparatus for quantization matrix signaling and representation in scalable video coding | |
JP2008543160A (en) | Decoding video signals encoded through inter-layer prediction | |
EP1955546A1 (en) | Scalable video coding method and apparatus based on multiple layers | |
WO2006112642A1 (en) | Method and apparatus for adaptively selecting context model for entropy coding | |
US9143797B2 (en) | Lossy data compression with conditional reconstruction refinement | |
EP2816805A1 (en) | Lossy data compression with conditional reconstruction reinfinement | |
US20070160136A1 (en) | Method and apparatus for motion prediction using inverse motion transform | |
US20100303151A1 (en) | Method for decoding video signal encoded using inter-layer prediction | |
WO2007032600A1 (en) | Method and apparatus for encoding and decoding video signal by extending application of directional intra-prediction | |
US20070014351A1 (en) | Method and apparatus for encoding and decoding FGS layer using reconstructed data of lower layer | |
WO2007029919A1 (en) | Method and apparatus for encoding and decoding video signal according to directional intra-residual prediction | |
KR20140088002A (en) | Video encoding and decoding method and apparatus using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | WWE | Wipo information: entry into national phase | Ref document number: 200680026565.0; Country of ref document: CN |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 06823606; Country of ref document: EP; Kind code of ref document: A1 |