EP1839441A2 - Fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking - Google Patents

Fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking

Info

Publication number
EP1839441A2
Authority
EP
European Patent Office
Prior art keywords
deblocking
base layer
data
fgs
enhancement layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06715725A
Other languages
German (de)
English (en)
Inventor
Bae-Keun Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020050011423A (KR100703744B1)
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of EP1839441A2
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/615 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding using motion compensated temporal filtering [MCTF]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/34 Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]

Definitions

  • The present invention relates to a fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking.
  • Because multimedia data is large, a high-capacity storage medium and a wide bandwidth are required to store and transmit it, respectively. Therefore, in order to transmit multimedia data including text, moving pictures (hereinafter referred to as 'video'), and audio, a compression and coding technique must be used. Among methods of compressing multimedia data, video compression methods in particular can be classified into lossy/lossless compression, intra-frame/inter-frame compression, and symmetric/asymmetric compression, according to whether original data is lost, whether data is compressed independently for each frame, and whether the time required for compression is the same as the time required for reconstruction, respectively. Compression in which the resolution of frames varies is classified as scalable compression.
  • Scalability is a technique that uses a base layer and an enhancement layer, and allows a decoder to observe the processing status, network status, and other conditions, and to perform selective decoding with respect to time, space, or the Signal-to-Noise Ratio (SNR). Among scalabilities, Fine Granularity Scalability (FGS) encodes the base layer and the enhancement layer. After the enhancement layer has been encoded, it may not be transmitted or decoded, depending on the transmission efficiency of the network or the status of the decoder. Through FGS, data can be transmitted at a rate suited to the available bit rate.
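The defining property of FGS is that an already-encoded enhancement layer can simply be cut short to fit the channel. The following Python sketch illustrates only that idea; the flat byte layout and the function name are hypothetical, since real streams are packaged as NAL units (discussed later in this document):

```python
# Minimal sketch: an FGS stream modeled as a mandatory base-layer payload plus
# enhancement bytes that may be truncated anywhere to match the available bit
# rate. The byte layout is a stand-in, not the patent's bitstream format.
def truncate_fgs_stream(base: bytes, enhancement: bytes, budget: int) -> bytes:
    if budget < len(base):
        raise ValueError("budget must at least cover the base layer")
    # Decoding still succeeds with the shortened enhancement, just at lower SNR.
    return base + enhancement[: budget - len(base)]
```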
  • SNR Signal to Noise Ratio
  • FGS Fine Granularity Scalability
  • An aspect of the present invention is to provide an encoding and decoding method and apparatus which can perform low-intensity deblocking in video encoding and decoding that supports FGS, thus improving the Peak Signal-to-Noise Ratio (PSNR).
  • PSNR Peak Signal to Noise Ratio
  • Another aspect of the present invention is to provide an encoding and decoding method and apparatus which improve video quality while reducing data loss caused by deblocking.
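PSNR, the quality metric referred to throughout this document, has a standard definition (10 log10 of the squared peak value over the mean squared error). A small reference implementation, assuming 8-bit samples by default:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Standard PSNR in dB; peak is the maximum pixel value (255 for 8-bit video)."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```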
  • An FGS-based video encoding method capable of controlling deblocking comprises the steps of: (a) receiving original data of video and generating a base layer based on the original data; (b) obtaining the difference between the original data and data obtained by reconstructing the base layer and deblocking the reconstructed base layer, thus generating an enhancement layer; (c) generating a reconstructed frame based on data obtained by reconstructing the enhancement layer and data obtained by reconstructing and deblocking the base layer; and (d) deblocking the reconstructed frame at a lower intensity than that of the deblocking performed in step (b) or (c).
  • An FGS-based video decoding method capable of controlling deblocking comprises the steps of: (a) receiving a video stream and extracting a base layer from the video stream; (b) extracting an enhancement layer from the video stream; (c) adding data obtained by reconstructing and deblocking the base layer to data obtained by reconstructing the enhancement layer, thus generating a reconstructed frame; and (d) deblocking the reconstructed frame at a lower intensity than that of the deblocking performed in step (c).
  • An FGS-based video encoder capable of controlling deblocking comprises: a base layer generation unit for generating a base layer based on original data of video; an enhancement layer generation unit for obtaining the difference between the original data and data obtained by reconstructing and deblocking the base layer, thus generating an enhancement layer; a reconstructed frame generation unit for generating a reconstructed frame based on data obtained by reconstructing the enhancement layer and data obtained by reconstructing and deblocking the base layer; and a first deblocking unit for deblocking the reconstructed frame at a lower intensity than that of the deblocking performed by the enhancement layer generation unit or the reconstructed frame generation unit.
  • An FGS-based video decoder capable of controlling deblocking comprises: a base layer extraction unit for extracting a base layer from a received video stream; an enhancement layer extraction unit for extracting an enhancement layer from the received video stream; a reconstructed frame generation unit for adding data obtained by reconstructing and deblocking the base layer to data obtained by reconstructing the enhancement layer, thus generating a reconstructed frame; and a first deblocking unit for deblocking the reconstructed frame at a lower intensity than that of the deblocking performed by the reconstructed frame generation unit.
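The encoding method above composes as plain layer arithmetic. Below is a minimal sketch of steps (a) through (d) under loudly stated assumptions: quantize/dequantize stand in for the transform & quantization units and their inverses, and deblock is a toy smoothing filter, not the patent's actual deblocking unit:

```python
import numpy as np

def quantize(frame, step):            # stand-in for a transform & quantization unit
    return np.round(frame / step)

def dequantize(coeffs, step):         # stand-in for inverse quantization & inverse transform
    return coeffs * step

def deblock(frame, strength):         # toy horizontal smoothing; larger strength = stronger
    k = 2 * strength + 1
    kernel = np.ones(k) / k
    padded = np.pad(frame, ((0, 0), (strength, strength)), mode="edge")
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="valid"), 1, padded)

def encode(original, base_step=16.0, enh_step=4.0):
    base = quantize(original, base_step)                   # (a) base layer
    recon_base = deblock(dequantize(base, base_step), 2)   # reconstruct + deblock base
    enh = quantize(original - recon_base, enh_step)        # (b) residual -> enhancement
    recon = recon_base + dequantize(enh, enh_step)         # (c) reconstructed frame
    reference = deblock(recon, 1)                          # (d) lower-intensity deblocking
    return base, enh, reference
```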
  • FIG. 1 is a diagram showing an apparatus for encoding video that supports FGS according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an apparatus for decoding video that supports FGS according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing an apparatus for encoding video that supports FGS according to another embodiment of the present invention.
  • FIG. 4 is a diagram showing an apparatus for decoding video that supports FGS according to another embodiment of the present invention.
  • FIG. 5 is a flowchart showing a process of encoding the original data of a video according to an embodiment of the present invention.
  • FIG. 6 is a flowchart showing a process of decoding a received video stream according to an embodiment of the present invention.
  • FIG. 7 is a view showing an example of reconstruction results for a base layer and enhancement layers according to an embodiment of the present invention.
  • FIGS. 8A and 8B are graphs showing the degree of improvement of the PSNR according to an embodiment of the present invention.
  • FIGS. 9A and 9B are graphs showing the degree of improvement of the PSNR according to another embodiment of the present invention.
  • The terms 'unit' and 'module', which are used in the exemplary embodiments of the present invention, denote software components, or hardware components such as a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC).
  • FPGA Field-Programmable Gate Array
  • ASIC Application-Specific Integrated Circuit
  • Each module executes certain functions.
  • A module can be implemented to reside in an addressable storage medium, or to run on one or more processors. Therefore, as an example, a module includes various components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, sub-routines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • The functions provided by the components and modules can be combined into a smaller number of components and modules, or can be separated into additional components and modules.
  • Components and modules can be implemented so as to run on one or more central processing units (CPUs) in a device or a secure multimedia card.
  • FIG. 1 is a diagram showing an apparatus for encoding video that supports FGS according to an exemplary embodiment of the present invention.
  • A base layer is generated using an original frame 101.
  • The original frame 101 may be a frame extracted from a group of pictures (GOP), and it may be obtained by performing Motion-Compensated Temporal Filtering (MCTF) on the GOP.
  • MCTF Motion-Compensated Temporal Filtering
  • A transform & quantization unit 201 performs transformation and quantization.
  • As a result, a base layer frame 501 is generated.
  • An enhancement layer denotes data to be added to the base layer.
  • The difference between the original frame and the base layer frame is obtained. The residual data given by this difference is used later, in such a way that a decoder reconstructs the original video data by adding the corresponding residual data to the reconstructed base layer frame.
  • The data available to the decoder is the inversely quantized and inversely transformed version, not the original frame. Accordingly, the base layer frame calculated by the transform & quantization unit 201 is inversely quantized and inversely transformed by an inverse quantization & inverse transform unit 301 in order to reconstruct the base layer frame.
  • The decoder performs deblocking to eliminate the boundaries between the blocks constituting a reconstructed frame; accordingly, deblocking is performed on the reconstructed frame by a deblocking unit 401.
  • The difference between the reconstructed base layer frame 102, calculated by the inverse quantization & inverse transform unit 301, and the original frame 101 is obtained by a subtracter 11.
  • The data obtained using the subtracter 11 is transformed and quantized by a transform & quantization unit 202 in order to generate a first enhancement layer frame 502.
  • To generate a second enhancement layer frame, the first enhancement layer frame is added to the reconstructed base layer frame 102.
  • The first enhancement layer frame is reconstructed using an inverse quantization & inverse transform unit 302, so that a first reconstructed enhancement layer frame 103 is generated.
  • The frames 103 and 102 are added to each other by an adder 12 to generate a new frame 104.
  • The difference between the frame 104 and the original frame 101 is obtained by the subtracter 11.
  • The residual data obtained from this difference is transformed and quantized by a transform & quantization unit 203 to generate a second enhancement layer frame 503.
  • The above process is repeated so that a third enhancement layer frame, a fourth enhancement layer frame, and so on can be successively generated.
  • The base layer frame 501, the first enhancement layer frame 502 and the second enhancement layer frame 503 generated in this way can be transmitted in the form of a Network Abstraction Layer (NAL) unit.
  • NAL unit Network Abstraction Layer unit
  • The decoder can reconstruct data even if part of the received NAL unit is truncated.
  • Deblocking is performed on a reconstructed frame 106 that is obtained by adding the second reconstructed enhancement layer frame 105, reconstructed by an inverse quantization & inverse transform unit 303, to the frame 104 through the adder 12.
  • The deblocking coefficient is decreased when deblocking is performed by a deblocking unit 402.
  • If a high deblocking coefficient were assigned when deblocking is performed by the deblocking unit 402, an over-smoothing problem would occur.
  • Therefore, the deblocking coefficient is set to a low value, such as 1 or 2, for the deblocking unit 402, thus decreasing the degree of deblocking and preventing over-smoothing.
  • The reconstructed frame, deblocked in this way, can be referred to when other frames are generated.
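To make "deblocking at a lower coefficient" concrete, here is a toy boundary smoother whose blend weight scales with the coefficient. It is purely illustrative and is not the filter used by the deblocking units 401 and 402:

```python
import numpy as np

# Toy deblocking along vertical 8x8 block boundaries: the two pixels on either
# side of each boundary are pulled toward their mean, more strongly for larger
# coefficients. A coefficient of 1 or 2 barely alters the frame; 4 smooths hard.
def deblock_boundaries(frame: np.ndarray, coefficient: int,
                       block: int = 8, max_coeff: int = 4) -> np.ndarray:
    out = frame.astype(np.float64).copy()
    w = 0.5 * coefficient / max_coeff          # blend weight toward the boundary mean
    for x in range(block, out.shape[1], block):
        mean = (out[:, x - 1] + out[:, x]) / 2.0
        out[:, x - 1] = (1 - w) * out[:, x - 1] + w * mean
        out[:, x] = (1 - w) * out[:, x] + w * mean
    return out
```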
  • A temporal sub-band picture is generated by performing MCTF on a GOP constituting the video, and original data is extracted from the temporal sub-band picture.
  • The original data is down-sampled from all of the data. If this data is transformed through a Discrete Cosine Transform (DCT) or a wavelet transform, and then quantized and encoded, the base layer is generated.
  • DCT Discrete Cosine Transform
  • The transform & quantization units 201, 202 and 203 of FIG. 1 can perform lossy encoding: part of the original information is lost because it is transformed through a DCT and quantized. Accordingly, this encoding is called lossy encoding.
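The loss comes entirely from rounding the transform coefficients, which a quick round trip demonstrates. The uniform step size below is an arbitrary assumption, not the codec's quantization design:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Forward DCT, uniform quantization, then the decoder-side inverse path.
# The reconstruction differs from the input: the rounding step is where
# lossy encoding loses information.
def lossy_roundtrip(block: np.ndarray, step: float = 10.0) -> np.ndarray:
    coeffs = dctn(block, norm="ortho")          # transform unit
    quantized = np.round(coeffs / step)         # quantization unit (lossy)
    return idctn(quantized * step, norm="ortho")

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
print(np.abs(block - lossy_roundtrip(block)).max())  # nonzero reconstruction error
```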
  • The transform & quantization unit 201 of FIG. 1 is an exemplary embodiment of a base layer generation unit for generating a base layer, and the transform & quantization units 202 and 203 for generating enhancement layers are exemplary embodiments of an enhancement layer generation unit.
  • The reconstructed frames are indicated by reference numerals 102, 104, 106, 103 and 105, and the inverse quantization & inverse transform units 301, 302 and 303 for generating the reconstructed frames are exemplary embodiments of a reconstructed frame generation unit.
  • FIG. 2 is a diagram of an apparatus for decoding video to support FGS according to an exemplary embodiment of the present invention.
  • The base layer frame 501, the first enhancement layer frame 502 and the second enhancement layer frame 503, generated in the process shown in FIG. 1, are received; since these frames are encoded data, they are decoded by inverse quantization & inverse transform units 311, 312 and 313. At this time, a reconstructed base layer frame 111 is obtained through a deblocking unit 411.
  • Frames 111, 112 and 113, which have been decoded and reconstructed, are added to each other by an adder 12.
  • Deblocking is performed on the added frames by a deblocking unit 412 to eliminate the boundaries between blocks.
  • The base layer frame has already been deblocked by the deblocking unit 411, so the coefficient for the deblocking performed by the deblocking unit 412 is decreased to 1 or 2 in this embodiment of the present invention. After deblocking has been completed in this way, a reconstructed original frame is reproduced.
  • The inverse quantization & inverse transform unit 311 of FIG. 2 is an exemplary embodiment of a base layer extraction unit for extracting a base layer.
  • The inverse quantization & inverse transform units 312 and 313 for extracting enhancement layers are exemplary embodiments of an enhancement layer extraction unit.
  • The reconstructed frames are indicated by reference numerals 111, 112 and 113, and the adder 12 for adding the frames to each other is an embodiment of a reconstructed frame generation unit.
  • FGS uses an enhancement layer of a scalable video stream.
  • A NAL unit obtained as a result of FGS can be truncated at a specific point, and frames can be reconstructed using the data existing up to the truncation point.
  • The data that must be transmitted corresponds to the base layer; the enhancement layers can be flexibly transmitted depending on the transmission status of the network. All enhancement layers carry residual data resulting from the difference between them and the base layer (or a reconstructed frame composed of the base layer and a previous enhancement layer).
  • A quantization parameter QPi is a parameter for generating an i-th enhancement layer. As the magnitude of the quantization parameter increases, the step size increases. Therefore, at the time of generating enhancement layers, data can be obtained while the magnitude of the quantization parameter is gradually decreased.
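The text fixes only the direction of the relationship: a larger quantization parameter means a larger step size. As a hedged illustration, the sketch below assumes an H.264/AVC-style mapping, in which the step size doubles every 6 QP values; the QP_i schedule is likewise hypothetical:

```python
# Assumed H.264/AVC-style mapping (not specified by the patent): the
# quantization step size doubles every 6 QP values, with qstep(4) = 1.0.
def qstep(qp: int) -> float:
    return 2.0 ** ((qp - 4) / 6.0)

# Hypothetical QP_i schedule: successive enhancement layers use smaller QPs,
# i.e. finer quantization steps and therefore more faithful residual data.
for i, qp_i in enumerate([30, 24, 18]):
    print(f"enhancement layer {i + 1}: QP={qp_i}, step size={qstep(qp_i):.2f}")
```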
  • The embodiments of FIGS. 1 and 2 perform deblocking at a low intensity when enhancement layers are directly encoded, or when enhancement layers are added to a base layer and decoded, thus reducing information loss caused by excessive deblocking.
  • The FGS described with reference to FIGS. 1 and 2 follows SVM 3.0.
  • An exemplary embodiment for implementing FGS using another method is described below.
  • FIG. 3 is a diagram of an apparatus for encoding video to support FGS according to another embodiment of the present invention. Unlike FIG. 1, a base layer and an enhancement layer are generated, and the enhancement layer is implemented through a bit plane.
  • Original video data is transformed by a transform unit 221.
  • As an example of the transform, a Discrete Cosine Transform (DCT) can be used.
  • A base layer is generated when the data obtained as the result of the DCT is quantized by a quantization unit 222 and the quantized data is encoded by an encoding unit 223 that uses entropy encoding or variable length coding (VLC).
  • VLC variable length coding
  • Since deblocking is performed in a decoder, deblocking is also performed by a deblocking unit 421 in the encoding stage.
  • Residual data, the difference between the deblocked data and the original video data, is then obtained.
  • The residual data is encoded again by an encoding unit 224.
  • MSB Most Significant Bit
  • LSB Least Significant Bit
  • The enhancement layer generated by the encoding unit 224 is transmitted with the base layer.
  • Deblocking is performed by a deblocking unit 422 in order to reconstruct the frame.
  • Since this deblocking is performed by the deblocking unit 422 after the deblocking of the base layer has been performed by the deblocking unit 421, the deblocking coefficient is decreased, thus preventing the occurrence of over-smoothing.
  • FIG. 4 is a view of an apparatus for decoding video to support FGS according to another exemplary embodiment of the present invention. Unlike FIG. 2, a base layer and an enhancement layer are received. Data of the enhancement layer can be partially truncated within one enhancement layer, depending on the receiving capability or decoding capability of the decoding stage (decoder).
  • Both the base layer and the enhancement layer, transmitted in a stream format, are inversely quantized and inversely transformed.
  • The base layer is deblocked by a deblocking unit 431 after passing through an inverse quantization unit 331 and an inverse transform unit 332.
  • The enhancement layer is reconstructed through an inverse quantization unit 335 and an inverse transform unit 336.
  • The reconstructed base layer and enhancement layer are added to each other by an adder 12, so that a single reconstructed frame is created.
  • Finally, deblocking is performed on the reconstructed frame by a deblocking unit 432.
  • FIG. 5 is a flowchart showing a process of encoding the original data of video according to an embodiment of the present invention.
  • MCTF is performed on the original data constituting the video, so that a frame is generated in step S101.
  • The original data may be a GOP composed of a plurality of frames.
  • A motion vector is obtained through motion estimation, and a motion-compensated frame is configured using the motion vector and a reference frame. Further, the difference between the current frame and the motion-compensated frame is obtained to produce a residual frame, thus reducing temporal redundancy.
  • For motion estimation, various methods, such as fixed-size block matching or Hierarchical Variable Size Block Matching (HVSBM), can be used.
  • MCTF is one method of providing temporal scalability; methods of implementing MCTF include a method using a Haar filter, a Motion Adaptive Filtering (MAF) method, and a method using a 5/3 filter.
  • The results calculated by these methods provide temporally scalable video data.
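As a rough illustration of the simplest option, a Haar filter splits each frame pair into a temporal average (low band) and a temporal difference (high band). Motion compensation is omitted for brevity, so this sketch is only the temporal-filtering skeleton of MCTF, and it assumes an even number of frames:

```python
import numpy as np

# Haar temporal decomposition of a frame sequence: low-pass frames form a
# half-rate, temporally scalable sequence; high-pass frames carry the detail
# needed to recover the full frame rate. The transform is perfectly invertible.
def haar_temporal(frames: np.ndarray):
    a = frames[0::2].astype(np.float64)
    b = frames[1::2].astype(np.float64)
    return (a + b) / 2.0, (b - a) / 2.0          # (low band, high band)

def haar_temporal_inverse(low: np.ndarray, high: np.ndarray) -> np.ndarray:
    frames = np.empty((2 * len(low),) + low.shape[1:])
    frames[0::2] = low - high                    # recovers the even frames
    frames[1::2] = low + high                    # recovers the odd frames
    return frames
```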
  • Next, a process of generating base layer data and enhancement layer data is executed.
  • In this process, the data is divided into a base layer and an enhancement layer.
  • The base layer is extracted, through sampling, from the frame on which MCTF has been performed, in step S103.
  • The base layer can be compressed using several schemes. In the case of motion-compensated video encoding, a DCT can be used.
  • The base layer becomes the basis for generating the enhancement layer, so various existing video encoding methods can be used.
  • The base layer can be generated by the transform & quantization units 201, 202 and 203 of FIG. 1, or by the transform unit 221, the quantization unit 222 and the encoding unit 223 of FIG. 3.
  • Residual data, obtained from the difference between the base layer generated in step S103 and the original data generated in step S101, is extracted, so that the enhancement layer is generated in step S105.
  • To generate the enhancement layer, various fine-granular schemes can be used. For example, a wavelet method, a DCT method, or a matching-pursuit-based method can be used. It is well known that, of these methods, the bitplane DCT coding method and the embedded zero-tree wavelet (EZW) method exhibit excellent performance.
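For intuition about why bitplane coding suits FGS, the sketch below sends quantized coefficient magnitudes one bitplane at a time, MSB first, so decoding can stop after any plane. The 8-plane depth and the separate sign handling are assumptions of this illustration:

```python
import numpy as np

# Split coefficient magnitudes into bitplanes, MSB first; decoding may stop
# after any plane, which is exactly the truncation behavior FGS relies on.
def to_bitplanes(coeffs: np.ndarray, n_planes: int = 8):
    mags = np.abs(coeffs).astype(np.uint32)
    return [((mags >> p) & 1) for p in range(n_planes - 1, -1, -1)]

def from_bitplanes(planes, signs: np.ndarray, n_planes: int = 8) -> np.ndarray:
    mags = np.zeros(planes[0].shape, dtype=np.uint32)
    for i, plane in enumerate(planes):               # fewer planes = coarser values
        mags |= plane.astype(np.uint32) << (n_planes - 1 - i)
    return signs * mags.astype(np.int64)

coeffs = np.array([[37, -5], [12, 0]])
planes = to_bitplanes(coeffs)
print(from_bitplanes(planes[:4], np.sign(coeffs)))   # truncated to the 4 MSB planes
```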
  • In step S105, an inverse quantization procedure for inversely quantizing the quantized base layer may further be required.
  • The base layer is reconstructed by the inverse quantization & inverse transform units 301, 302 and 303 of FIG. 1, or by the inverse quantization unit 321 of FIG. 3, as described above.
  • Video data can be obtained by adding the enhancement layer to the base layer that has been inversely quantized; the base layer must be inversely quantized to obtain the residual data, in order to reduce data loss.
  • Deblocking can be performed after inverse quantization has been performed. Deblocking is used to smooth the boundaries between the blocks constituting frames. The difference between the inversely quantized base layer and the original data, on which MCTF was performed in step S101, is obtained, so that the enhancement layer is generated, as described above.
  • One or more enhancement layers may exist. As the number of enhancement layers increases, the unit of FGS is subdivided, thereby improving SNR scalability.
  • The decoder can determine the number of enhancement layers to be received and decoded, depending on its decoding capability or reception capability.
  • If base layer data and enhancement layer data are generated with respect to a single frame, a procedure of adding the base layer data to the enhancement layer data and generating a new reconstructed frame is required in step S110.
  • The reconstructed frame becomes the basis for generating other frames, or is necessary for generating a predictive frame for motion estimation. In this case, since boundaries between blocks exist in the reconstructed frame, deblocking is performed to eliminate them.
  • The reconstructed frame includes the base layer, which was already deblocked in step S105, so deblocking is performed at a low intensity in step S115.
  • If it is assumed that the base layer data is B, the enhancement layer data is E1, E2, ..., En, and the deblocking performed on the base layer data in step S105 is D1, the reconstructed frame F obtained in step S110 can be expressed as F = D1(B) + E1 + E2 + ... + En. Further, the result of the deblocking performed in step S115 is D2(D1(B) + E1 + E2 + ... + En). In this case, the deblocking coefficient df2 of D2 may be set to 1 or 2.
  • FIG. 5 shows that, after the original video data is transformed to provide temporal scalability, the transformed data is divided into base layer data and enhancement layer data to provide SNR scalability.
  • However, this processing sequence need not necessarily be followed.
  • Base layer data and enhancement layer data can be obtained to provide SNR scalability for original video data regardless of whether the corresponding data is used to provide temporal scalability.
  • In addition, a new transform procedure for providing another type of scalability may be conducted.
  • A plurality of schemes may be employed, and the present invention is not limited to these schemes.
  • FIG. 6 is a flowchart showing a process of decoding a received video stream according to an exemplary embodiment of the present invention.
  • A process in which a decoder receives and decodes a video stream is described in the following.
  • The decoder receives the video stream in step S201.
  • The decoder extracts a base layer from the received video stream, and reconstructs the base layer in step S203.
  • The reconstruction of the base layer is performed through inverse quantization and an inverse transform.
  • The reconstructed base layer is deblocked, so that it can be added to the enhancement layers, in step S205.
  • An enhancement layer is extracted from the received video stream, and the extracted enhancement layer is reconstructed in step S210.
  • The reconstruction of the enhancement layer is also performed through inverse quantization and an inverse transform.
  • The base layer, deblocked in step S205, and the enhancement layer, reconstructed in step S210, are added to each other, so that a reconstructed frame is generated in step S220.
  • Deblocking is performed on the reconstructed frame with a deblocking coefficient of 1 or 2 in step S230. Since the base layer has already been deblocked once in step S205, deblocking is performed at a low intensity in step S230 to prevent over-smoothing.
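The whole decoding flow of FIG. 6 reduces to a few lines of layer arithmetic. The sketch below reuses toy stand-ins (dequantize for the inverse quantization and inverse transform, the same toy boundary smoother as earlier), with stream parsing reduced to a dictionary lookup; none of this is the patent's actual implementation:

```python
import numpy as np

def dequantize(coeffs, step):                     # stand-in for the inverse units
    return coeffs * step

def deblock(frame, coefficient, block=8):         # same toy boundary smoother as above
    out = frame.astype(np.float64).copy()
    w = 0.5 * coefficient / 4.0
    for x in range(block, out.shape[1], block):
        mean = (out[:, x - 1] + out[:, x]) / 2.0
        out[:, x - 1] = (1 - w) * out[:, x - 1] + w * mean
        out[:, x] = (1 - w) * out[:, x] + w * mean
    return out

def decode(stream: dict, base_step=16.0, enh_step=4.0):
    base = dequantize(stream["base"], base_step)       # S203: reconstruct base layer
    base = deblock(base, coefficient=4)                # S205: deblock base layer
    enh = dequantize(stream["enhancement"], enh_step)  # S210: reconstruct enhancement
    frame = base + enh                                 # S220: reconstructed frame
    return deblock(frame, coefficient=1)               # S230: low-intensity deblocking
```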
  • FIG. 7 is a diagram showing an example of reconstruction results for a base layer and enhancement layers according to an embodiment of the present invention.
  • FIG. 7 illustrates the generation of a reconstructed frame that has been deblocked by the deblocking unit 402 of FIG. 1, or of a reconstructed frame that has been deblocked by the deblocking unit 412 of FIG. 2. Further, FIG. 7 also illustrates the generation of a reconstructed frame that has been deblocked by the deblocking unit 422 of FIG. 3, or of a reconstructed frame that has been deblocked by the deblocking unit 432 of FIG. 4.
  • A frame 151 denotes a frame obtained by reconstructing the base layer and then deblocking the reconstructed base layer. That is, the frame 151 is obtained by performing deblocking through the deblocking unit 401 of FIG. 1, the deblocking unit 411 of FIG. 2, the deblocking unit 421 of FIG. 3, or the deblocking unit 431 of FIG. 4.
  • Reference numerals 152 and 153 denote frames obtained by reconstructing enhancement layers.
  • The reconstruction of the enhancement layers is performed by the inverse quantization & inverse transform units 302 and 303 of FIG. 1, the inverse quantization & inverse transform units 312 and 313 of FIG. 2, the decoding unit 325 of FIG. 3, or the inverse transform unit 336 of FIG. 4.
  • The reconstructed enhancement layers and the reconstructed base layer, which has been deblocked, are added by an adder to produce a single frame 155.
  • For this frame, deblocking is performed again. As described above, if the deblocking coefficient is decreased before deblocking is performed, over-smoothing may be prevented. Through this process, the original frame 157 is reconstructed.
  • In this case, the deblocking coefficient of the deblocking filter is decreased to 1 or 2 to perform deblocking.
  • Deblocking coefficients ranging up to 4 exist. If the coefficient scale is subdivided and its maximum value is increased to 8 or 16, deblocking is performed using a correspondingly low deblocking coefficient on the finer scale.
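A purely illustrative rendering of that subdivision: a strength on the coarse 0-4 scale is mapped onto a finer 0-16 scale, so low-intensity settings can be chosen in smaller steps. The linear mapping is an assumption:

```python
# Hypothetical linear mapping from a coarse 0-4 coefficient scale to a finer
# 0-16 scale; the relative deblocking strength is preserved, but low-intensity
# settings can now be chosen with finer granularity.
def rescale_coefficient(coeff: int, old_max: int = 4, new_max: int = 16) -> int:
    return round(coeff * new_max / old_max)

print(rescale_coefficient(1), rescale_coefficient(2))  # 4 and 8 on the 0-16 scale
```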
  • Table 1 shows results obtained according to an exemplary embodiment of the present invention.
  • For Table 1, a football moving picture was sampled at frequencies of 7.5 Hz and 15 Hz.
  • Table 1 shows the degree of improvement of the PSNR when the method of decreasing the deblocking coefficient, proposed in the present invention, is applied depending on the bit rate of a network. As shown in Table 1, the degree of improvement of the PSNR is high at a low rate (160 kbps and 192 kbps at 7.5 Hz, and 243 kbps at 15 Hz). The degree of improvement of Table 1 is displayed graphically in FIGS. 8A and 8B.
  • FIG. 8A shows the degree of improvement of PSNR when video, sampled at a frequency of 7.5 Hz in the Quarter Common Intermediate Format (QCIF), is deblocked at a low intensity.
  • FIG. 8B shows the degree of improvement of the PSNR when video, sampled at a frequency of 15 Hz in the QCIF, is deblocked at a low intensity. As shown in the two graphs, the degree of improvement of the PSNR is high when the bit rate is low.
  • QCIF Quarter Common Intermediate Format
  • Table 2 shows results obtained according to an exemplary embodiment of the present invention.
  • Table 2 shows the degree of improvement of the PSNR when the method of decreasing the deblocking coefficient, proposed in the exemplary embodiment of the present invention, is applied depending on the bit rate of a network.
  • The degree of improvement of the PSNR is high at a low rate (588 kbps and 690 kbps at 15 Hz, and 920 kbps and 1124 kbps at 30 Hz).
  • The degree of improvement in Table 2 is displayed graphically in FIGS. 9A and 9B.
  • FIG. 9A shows the degree of improvement of the PSNR when video, sampled at a frequency of 15 Hz in the QCIF, is deblocked at a low intensity.
  • FIG. 9B shows the degree of improvement of the PSNR when video, sampled at a frequency of 30 Hz in the QCIF, is deblocked at a low intensity.
  • The degree of improvement of the PSNR is high when the bit rate is low. FGS is required precisely when the bit rate of a network is low; therefore, as shown in Tables 1 and 2, the method proposed in the present specification yields excellent image quality, since the degree of improvement of the PSNR is high while the bit rate is low.
  • The present invention is advantageous in that it can perform deblocking at a low intensity in video encoding and decoding that support FGS, thus improving the PSNR.
  • The present invention is also advantageous in that it can improve the quality of video while reducing data loss caused by deblocking.

Abstract

A fine granularity scalable video encoding and decoding method and an apparatus capable of controlling deblocking are provided. In the video encoding method, original video data is received and a base layer is generated based on the original data. The difference between the original data and data obtained by reconstructing the base layer and deblocking the reconstructed base layer is obtained, thus generating an enhancement layer. A reconstructed frame is then generated based on data obtained by reconstructing the enhancement layer and data obtained by reconstructing and deblocking the base layer. Finally, the reconstructed frame is deblocked at a lower intensity than that of the deblocking performed in the first two steps.
EP06715725A 2005-01-19 2006-01-17 Fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking Withdrawn EP1839441A2 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US64458205P 2005-01-19 2005-01-19
KR1020050011423A KR100703744B1 (ko) FGS-based video encoding and decoding method and apparatus for controlling deblocking
PCT/KR2006/000168 WO2006078107A2 (fr) Fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking

Publications (1)

Publication Number Publication Date
EP1839441A2 2007-10-03

Family

ID=36692641

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06715725A Withdrawn EP1839441A2 (fr) 2005-01-19 2006-01-17 Procede de codage et decodage video extensible a granularite fine et appareil pouvant commander le deblocage

Country Status (2)

Country Link
EP (1) EP1839441A2 (fr)
WO (1) WO2006078107A2 (fr)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006078107A3 *

Also Published As

Publication number Publication date
WO2006078107A3 (fr) 2006-11-02
WO2006078107A2 (fr) 2006-07-27

Similar Documents

Publication Publication Date Title
US20060159359A1 (en) Fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking
JP5014989B2 (ja) Frame compression method using a base layer, video coding method, frame reconstruction method, video decoding method, video encoder, video decoder, and recording medium
JP5026965B2 (ja) Method and apparatus for predecoding and decoding a bitstream including a base layer
KR100703749B1 (ko) Multi-layer video coding and decoding method using residual re-estimation, and apparatus therefor
JP4891234B2 (ja) Scalable video coding using grid motion estimation/compensation
US8411753B2 (en) Color space scalable video coding and decoding method and apparatus for the same
KR100703724B1 (ko) Apparatus and method for adjusting the bit rate of a scalable bitstream coded on a multi-layer basis
US7889793B2 (en) Method and apparatus for effectively compressing motion vectors in video coder based on multi-layer
JP4922391B2 (ja) Multi-layer based video encoding method and apparatus
US7933456B2 (en) Multi-layer video coding and decoding methods and multi-layer video encoder and decoder
EP1401211A2 (fr) Codage et décodage vidéo multirésolution
JP4844741B2 (ja) Moving picture encoding apparatus and moving picture decoding apparatus, and method and program thereof
US20060013311A1 (en) Video decoding method using smoothing filter and video decoder therefor
US20050152611A1 (en) Video/image coding method and system enabling region-of-interest
WO2006112642A1 (fr) Method and apparatus for adaptively selecting a context model for entropy coding
WO2006080662A1 (fr) Method and apparatus for effectively compressing motion vectors in a multi-layer based video coder
US20060250520A1 (en) Video coding method and apparatus for reducing mismatch between encoder and decoder
WO2006004305A1 (fr) Method and apparatus for implementing motion scalability
JP2008515328A (ja) Video coding and decoding method using inter-layer filtering, and video encoder and decoder
KR20040083450A (ko) 메모리-대역폭 효율적인 파인 그래뉼라 확장성 인코더
WO2006132509A1 (fr) Multi-layer based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
WO2006078107A2 (fr) Fine granularity scalable video encoding and decoding method and apparatus capable of controlling deblocking
WO2006098586A1 (fr) Video encoding/decoding method and apparatus using motion prediction between temporal levels
WO2006028330A1 (fr) Multi-layer video coding and decoding methods, and multi-layer video encoder and decoder

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070704

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20100803