WO2006078115A1 - Video coding method and apparatus for efficiently predicting unsynchronized frames - Google Patents

Video coding method and apparatus for efficiently predicting unsynchronized frames

Info

Publication number
WO2006078115A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
base layer
virtual base
unsynchronized
motion vector
Prior art date
Application number
PCT/KR2006/000192
Other languages
English (en)
Inventor
Sang-Chang Cha
Woo-Jin Han
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020050020810A (KR100703745B1)
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2006078115A1

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Definitions

  • Methods and apparatuses consistent with the present invention relate, in general, to video compression and, more particularly, to efficiently predicting a frame having no corresponding lower layer frame in video frames having a multi-layered structure.
  • The basic principle of compressing data involves removing data redundancy. Spatial redundancy, in which the same color or object is repeated in an image; temporal redundancy, in which adjacent frames in a moving image vary little or the same sound is repeated in audio data; and psycho-visual redundancy, which takes into consideration the fact that human vision and perception are insensitive to high frequencies, are removed so that data can be compressed.
  • Temporal redundancy is removed using temporal filtering based on motion compensation, and spatial redundancy is removed using a spatial transform.
  • The performance of available transmission media differs. Currently used transmission media have various data rates, ranging from that of an ultra high speed communication network, capable of transmitting data at several tens of Mbit/s, to that of a mobile communication network, having a data rate of 384 Kbit/s.
  • Accordingly, a method of transmitting multimedia data at a data rate suited to transmission media having various data rates or to different transmission environments, that is, a scalable video coding method, may be more suitable for a multimedia environment.
  • Such scalable video coding denotes an encoding method of cutting part of a previously compressed bit stream depending on surrounding conditions, such as bit rate, error rate or system resources, thus controlling the resolution, the frame rate and the bit rate of the video.
  • Standardization of scalable video coding is currently under way as Moving Picture Experts Group-21 (MPEG-21) Part 13.
  • Many efforts have been made to realize multi-layered scalability. For example, multiple layers, including a base layer, a first enhancement layer, and a second enhancement layer, are provided, so that the respective layers can be constructed to have different frame rates or different resolutions, such as the Quarter Common Intermediate Format (QCIF), CIF and 2CIF.
  • FIG. 1 is a diagram showing an example of a scalable video codec using a multi-layered structure.
  • In FIG. 1, a base layer is in the Quarter Common Intermediate Format (QCIF) and has a frame rate of 15 Hz, a first enhancement layer is in the Common Intermediate Format (CIF) and has a frame rate of 30 Hz, and a second enhancement layer is in Standard Definition (SD) and has a frame rate of 60 Hz.
  • In addition to inter-prediction and directional intra-prediction, which are used in the existing H.264 method to predict the blocks or macroblocks constituting a current frame, 'Scalable Video Model 3.0 of ISO/IEC 21000-13 Scalable Video Coding' (hereinafter referred to as 'SVM 3.0') additionally adopts a method of predicting a current block using the correlation between the current block and a corresponding lower layer block.
  • Such a prediction method is called 'Intra-BL prediction', and a mode of performing encoding using the Intra-BL prediction is called 'Intra-BL mode'.
  • FIG. 2 is a schematic diagram showing the three prediction methods, which shows a case (1) where intra-prediction is performed with respect to a certain macroblock 14 of a current frame 11, a case (2) where inter-prediction is performed using a frame 12 placed at a temporal location differing from that of the current frame 11, and a case (3) where Intra-BL prediction is performed using the texture data of an area 16 of a base layer frame 13 corresponding to the macroblock 14.

Disclosure of Invention
  • In encoding, an advantageous method is selected from among the three prediction methods.
  • However, a frame 40 having no corresponding lower layer frame may exist. In this case, Intra-BL prediction cannot be used.
  • Such a frame 40 is encoded using only information about the corresponding layer (that is, using inter-prediction and intra-prediction) without using information about a lower layer, so these prediction methods may be somewhat inefficient from the standpoint of encoding performance.
  • The present invention provides a video coding method which can perform Intra-BL prediction even on an unsynchronized frame having no corresponding lower layer frame.
  • the present invention also provides a scheme, which can improve the performance of a multi-layered video codec using the video coding method.
  • a multi-layered video encoding method comprising performing motion estimation by using one of two frames of a lower layer temporally closest to an unsynchronized frame of a current layer as a reference frame; generating a virtual base layer frame at the same temporal location as that of the unsynchronized frame using a motion vector obtained as a result of the motion estimation and the reference frame; subtracting the generated virtual base layer frame from the unsynchronized frame to generate a difference; and encoding the difference.
  • a multi-layered video decoding method comprising the steps of reconstructing a reference frame of two frames of a lower layer, temporally closest to an unsynchronized frame of a current layer, from a lower layer bit stream; generating a virtual base layer frame at the same temporal location as the unsynchronized frame using a motion vector, included in the lower layer bit stream, and the reconstructed reference frame; extracting texture data of the unsynchronized frame from a current layer bit stream, and reconstructing a residual frame from the texture data; and adding the residual frame to the virtual base layer frame.
  • a multi-layered video encoder comprising means for performing motion estimation by using one of two frames of a lower layer temporally closest to an unsynchronized frame of a current layer as a reference frame; means for generating a virtual base layer frame at the same temporal location as that of the unsynchronized frame using a motion vector obtained as a result of the motion estimation and the reference frame; means for subtracting the generated virtual base layer frame from the unsynchronized frame to generate a difference; and means for encoding the difference.
  • a multi-layered video decoder comprising means for reconstructing a reference frame of two frames of a lower layer, temporally closest to an unsynchronized frame of a current layer, from a lower layer bit stream; means for generating a virtual base layer frame at the same temporal location as the unsynchronized frame using a motion vector, included in the lower layer bit stream, and the reconstructed reference frame; means for extracting texture data of the unsynchronized frame from a current layer bit stream, and reconstructing a residual frame from the texture data; and means for adding the residual frame to the virtual base layer frame.
  • FIG. 1 is a diagram showing an example of a scalable video codec using a multi-layered structure;
  • FIG. 2 is a schematic diagram showing three conventional prediction methods;
  • FIG. 3 is a schematic diagram showing the basic concept of Virtual Base-layer Prediction (VBP);
  • FIG. 4 is a diagram showing an example of the implementation of VBP using forward inter-prediction of a base layer;
  • FIG. 5 is a diagram showing an example of the implementation of VBP using backward inter-prediction of a base layer;
  • FIG. 6 is a diagram showing an example of partitions constituting a frame to be inter-predicted;
  • FIG. 7 is a diagram showing an example of partitions having a hierarchical variable size based on H.264;
  • FIG. 8 is a diagram showing an example of partitions constituting a macroblock and motion vectors for respective partitions;
  • FIG. 9 is a diagram showing a motion vector for a specific partition;
  • FIG. 10 is a diagram showing a process of configuring a motion compensated frame;
  • FIG. 11 is a diagram showing a process of generating a virtual base layer frame according to a first exemplary embodiment of the present invention;
  • FIG. 12 is a diagram showing various pixel areas in a virtual base layer frame generated according to the first exemplary embodiment of the present invention;
  • FIGS. 13 and 14 are diagrams showing a process of generating a virtual base layer frame according to a second exemplary embodiment of the present invention;
  • FIG. 15 is a block diagram showing the construction of a video encoder according to an exemplary embodiment of the present invention;
  • FIG. 16 is a block diagram showing the construction of a video decoder according to an exemplary embodiment of the present invention;
  • FIG. 17 is a diagram showing the construction of a system environment in which the video encoder and the video decoder are operated;
  • FIG. 18 is a flowchart showing a video encoding process according to an exemplary embodiment of the present invention; and
  • FIG. 19 is a flowchart showing a video decoding process according to an exemplary embodiment of the present invention.
  • FIG. 3 is a schematic diagram showing the basic concept of Virtual Base-layer Prediction (VBP).
  • In FIG. 3, a current layer Ln has a resolution of CIF and a frame rate of 30 Hz, and a lower layer Ln-1 has a resolution of QCIF and a frame rate of 15 Hz.
  • A current layer frame having no corresponding base layer frame is defined as an 'unsynchronized frame', and a current layer frame having a corresponding base layer frame is defined as a 'synchronized frame'. Since an unsynchronized frame does not have a base layer frame, the present invention proposes a method of generating a virtual base layer frame and utilizing the virtual base layer frame for Intra-BL prediction.
  • As shown in FIG. 3, a method of predicting an unsynchronized frame using a virtual base layer frame is defined as virtual base-layer prediction (hereinafter referred to as 'VBP').
  • The concept of VBP according to the present invention can be applied to two layers having different frame rates. Therefore, VBP can be applied to the case in which a current layer and a lower layer use a hierarchical inter-prediction method, such as Motion Compensated Temporal Filtering (MCTF), as well as the case in which they use a non-hierarchical inter-prediction method (I-B-P coding of an MPEG system codec). Accordingly, when a current layer uses MCTF, the concept of VBP can be applied at the temporal level of the MCTF having a frame rate higher than that of the lower layer.
  • FIGS. 4 and 5 are diagrams showing examples of a method of implementing VBP according to the present invention.
  • A virtual base layer frame B1 is generated using a motion vector between the two frames B0 and B2 of a lower layer closest to an unsynchronized frame A1, and a reference frame of the two frames B0 and B2.
  • FIG. 4 illustrates an example of implementing VBP using forward inter-prediction of a lower layer.
  • In FIG. 4, the frame B2 of the base layer is predicted through forward inter-prediction by using its previous frame B0 as a reference frame. That is, after a forward motion vector mvf is obtained by using the previous frame B0 as a reference frame Fr, the reference frame is motion-compensated using the obtained motion vector, and the frame B2 is inter-predicted using the motion-compensated reference frame.
  • In this case, the virtual base layer frame B1 is generated using the forward motion vector mvf, which is used for inter-prediction in the base layer, and the frame B0, which is used as the reference frame Fr.
  • FIG. 5 illustrates an example of implementing VBP using backward inter-prediction of a base layer.
  • In FIG. 5, the frame B0 of the base layer is predicted through backward inter-prediction by using the subsequent frame B2 as a reference frame. That is, after a backward motion vector mvb is obtained by using the subsequent frame B2 as a reference frame Fr, the reference frame is motion-compensated using the obtained motion vector, and the frame B0 is inter-predicted using the motion-compensated reference frame. In this case, the virtual base layer frame B1 is generated using the backward motion vector mvb and the frame B2, which is used as the reference frame Fr.
  • FIGS. 6 to 12 are diagrams showing the concept of generation of a virtual base layer frame according to a first exemplary embodiment of the present invention.
  • each 'partition' means a unit area used for motion estimation, that is, for searching for a motion vector.
  • the partition may have a fixed size (for example, 4x4, 8x8, or 16x16), as shown in FIG. 6, or may have a variable size, as in the case of the H.264 codec.
  • a single macroblock 25 can be divided into sub-blocks having four modes. That is, the macroblock 25 can be divided once into sub-blocks in 16x16 mode, 8x16 mode, 16x8 mode and 8x8 mode. Each of the sub-blocks having an 8x8 size can be further sub-divided into sub-blocks in 4x8 mode, 8x4 mode and 4x4 mode (if it is not sub-divided, the 8x8 mode is used without change).
  • the selection of a combination of optimal sub-blocks constituting the single macroblock 25 can be performed by selecting a case having a minimum cost among various combinations.
  • As the macroblock 25 is further sub-divided, precise block matching can be realized, while the amount of motion data (motion vectors, sub-block modes, and others) increases in proportion to the number of sub-divisions. Therefore, an optimal point can be found between block matching precision and the amount of motion data; a rate-distortion cost comparison of this kind is sketched below.
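  • The mode decision just described can be illustrated with a small rate-distortion sketch in Python. This is not the codec's actual routine: the helper names, the lambda weight and the per-vector bit estimate are illustrative assumptions, and the matching uses co-located blocks only to keep the example short.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def mode_cost(cur_mb, ref_mb, part_size, lam=10.0, bits_per_mv=12):
    """Illustrative cost for one 16x16 macroblock: matching error plus a
    penalty that grows with the number of partitions (i.e. motion vectors)."""
    h, w = part_size
    distortion = 0
    for y in range(0, 16, h):
        for x in range(0, 16, w):
            distortion += sad(cur_mb[y:y + h, x:x + w], ref_mb[y:y + h, x:x + w])
    n_parts = (16 // h) * (16 // w)
    return distortion + lam * bits_per_mv * n_parts

def select_mode(cur_mb, ref_mb):
    """Pick the partition size with the minimum cost (hierarchical variable
    size block matching simplified to a flat list of candidate sizes)."""
    candidates = [(16, 16), (16, 8), (8, 16), (8, 8), (8, 4), (4, 8), (4, 4)]
    return min(candidates, key=lambda m: mode_cost(cur_mb, ref_mb, m))
```

Finer partitions lower the distortion term but raise the motion-data term, which is exactly the trade-off mentioned above.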
  • one frame is implemented with a set of macroblocks 25 each having the above-described various combinations of partitions, and each partition has a single motion vector.
  • An example of the shape of a partition (indicated by a rectangle), determined by hierarchical variable size block matching in the single macroblock 25, and motion vectors for respective partitions (indicated by arrows) is shown in FIG. 8.
  • A 'partition' in the present invention means a unit of area to which a motion vector is assigned. It should be apparent that the size and shape of a partition can vary according to the type of codec. However, for convenience of description, the frame 50 to be inter-predicted is assumed to have fixed-size partitions, as shown in FIG. 6. Further, in the present specification, reference numeral 50 denotes the frame of a lower layer to be inter-predicted (for example, B2 of FIG. 4 and B0 of FIG. 5), and reference numeral 60 denotes a reference frame (for example, B0 of FIG. 4 and B2 of FIG. 5) used for the inter-prediction.
  • an area in the reference frame 60 corresponding to the partition 1 is an area 1' at a location that is moved away from the location of the partition 1 by the motion vector.
  • a motion compensated frame 70 for the reference frame is generated by duplicating texture data of the area 1' in the reference frame 60 to the location of the partition 1, as shown in FIG. 10.
  • a virtual base layer frame 80 is generated in consideration of the principles of generating the motion compensated frame, as shown in FIG. 11. That is, since a motion vector represents a direction in which a certain object moves in a frame, motion compensation is performed to an extent corresponding to a value obtained by multiplying the motion vector by the ratio of the distance between the reference frame 60 and the location at which the virtual base layer frame 80 is to be generated, to the distance between the reference frame 60 and the frame 50 to be inter-predicted (hereinafter referred to as a 'distance ratio', 0.5 in FIGS. 4 and 5).
  • the virtual base layer frame 80 is filled with texture data in such a way that the area 1' is copied to a location away from the area 1' by -r×mv, where r is the distance ratio and mv is the motion vector.
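  • Written out, the rule above can be restated as follows (this is only a restatement of the text, with t denoting temporal positions): the distance ratio is r = (t_virtual - t_ref) / (t_pred - t_ref), the area 1' lies at (location of partition 1) + mv in the reference frame 60, and its texture is copied to (location of area 1') - r·mv in the virtual base layer frame 80. With the virtual frame midway between the two base layer frames, r = 0.5, as in FIGS. 4 and 5.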
  • the first exemplary embodiment is based on a basic assumption that a motion vector represents the movement of a certain object in a frame, and the movement may be generally continuous in a short time unit, such as a frame interval.
  • the virtual base layer frame 80 generated according to the method of the first exemplary embodiment may include, for example, an unconnected pixel area and a multi-connected pixel area, as shown in FIG. 12.
  • As shown in FIG. 12, since a single-connected pixel area includes only one piece of texture data, there is no problem.
  • a method of processing pixel areas other than the single-connected pixel area may be an issue.
  • a multi-connected pixel may be replaced with a value obtained by averaging a plurality of pieces of texture data at the corresponding location.
  • an unconnected pixel may be replaced with a corresponding pixel value in the frame 50 to be inter-predicted, with a corresponding pixel value in the reference frame 60, or with a value obtained by averaging corresponding pixel values in the frames 50 and 60.
  • It is difficult to expect high performance when an unconnected pixel area or a multi-connected pixel area, rather than a single-connected pixel area, is used for Intra-BL prediction of an unsynchronized frame.
  • However, inter-prediction or directional intra-prediction of the unsynchronized frame, rather than Intra-BL prediction, will be selected as the prediction method for such areas from the standpoint of cost, so it can be expected that a deterioration of performance will not occur.
  • For single-connected pixel areas, Intra-BL prediction will exhibit sufficiently high performance. Accordingly, when the pixel areas are considered over a single frame as a whole, an enhancement of performance can be expected when the first exemplary embodiment is applied; a minimal sketch of this first embodiment is given below.
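  • A minimal Python sketch of the first exemplary embodiment follows, assuming fixed-size partitions and whole-pixel motion vectors; the function and variable names are illustrative and not taken from the codec. Multi-connected pixels are averaged and unconnected pixels fall back to the co-located reference pixels, which is only one of the replacement options listed above.

```python
import numpy as np

def virtual_frame_first_embodiment(ref, mvs, shape, block=8, r=0.5):
    """First exemplary embodiment: forward-warp the reference frame toward
    the temporal position of the virtual base layer frame.

    ref   : reference frame (e.g. B0 in FIG. 4) as a 2-D uint8 array
    mvs   : {(y, x): (dy, dx)} motion vector of each partition of the frame
            to be inter-predicted (top-left corner -> whole-pixel vector)
    shape : shape of the virtual frame (same as ref here)
    r     : distance ratio (0.5 when the virtual frame lies midway)
    """
    h, w = shape
    acc = np.zeros((h, w), dtype=np.float64)   # accumulated copied texture
    cnt = np.zeros((h, w), dtype=np.int32)     # number of copies per pixel

    for (y, x), (dy, dx) in mvs.items():
        sy, sx = y + dy, x + dx                # area 1' in the reference frame
        ty = int(round(sy - r * dy))           # copy destination: shift by -r * mv
        tx = int(round(sx - r * dx))
        inside = (0 <= sy <= h - block and 0 <= sx <= w - block
                  and 0 <= ty <= h - block and 0 <= tx <= w - block)
        if inside:
            acc[ty:ty + block, tx:tx + block] += ref[sy:sy + block, sx:sx + block]
            cnt[ty:ty + block, tx:tx + block] += 1

    virt = np.empty((h, w), dtype=ref.dtype)
    hit = cnt > 0
    virt[hit] = np.round(acc[hit] / cnt[hit])  # multi-connected: average the copies
    virt[~hit] = ref[~hit]                     # unconnected: co-located reference pixels
    return virt
```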
  • FIGS. 13 and 14 are diagrams showing the concept of generation of a virtual base layer frame according to another exemplary embodiment (a second exemplary embodiment) of the present invention.
  • the second exemplary embodiment is proposed to solve the problem whereby an unconnected pixel area and a multi-connected pixel area exist in the virtual base layer frame 80 generated in the first exemplary embodiment.
  • the pattern of partitions of a virtual base layer frame 90 in the second exemplary embodiment uses the pattern of partitions of the base layer frame 50 to be inter-predicted without change.
  • the virtual base layer frame 90 is generated in such a way that texture data of the area 1' in the reference frame 60 is copied to the location of the partition 1, as shown in FIG. 14.
  • If this copying is performed for all of the partitions, the virtual base layer frame 90 is completed. Since the virtual base layer frame 90 generated in this way has the same partition pattern as the base layer frame 50 to be inter-predicted, the virtual base layer frame 90 includes only single-connected pixel areas, without unconnected pixel areas or multi-connected pixel areas. A sketch of this second embodiment follows.
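  • The second exemplary embodiment can be sketched in the same hypothetical style. Following the operation described later for S40 (reading an area spaced from each partition by the motion vector scaled by the distance ratio, and copying it to the partition's own location), every partition of the virtual frame receives exactly one copy; the clamping at the frame border is an assumption made here to keep the sketch simple.

```python
import numpy as np

def virtual_frame_second_embodiment(ref, mvs, shape, block=8, r=0.5):
    """Second exemplary embodiment: keep the partition pattern of the frame
    to be inter-predicted and fill each partition from the reference frame at
    an offset of r * mv, so every pixel is single-connected by construction."""
    h, w = shape
    virt = np.zeros(shape, dtype=ref.dtype)
    for (y, x), (dy, dx) in mvs.items():
        sy = int(round(y + r * dy))            # read location spaced by r * mv
        sx = int(round(x + r * dx))
        sy = max(0, min(sy, h - block))        # clamp the read area to the frame
        sx = max(0, min(sx, w - block))
        virt[y:y + block, x:x + block] = ref[sy:sy + block, sx:sx + block]
    return virt
```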
  • the first and second exemplary embodiments can be independently implemented, but one exemplary embodiment, which combines the exemplary embodiments, can also be considered. That is, the unconnected pixel area of the virtual base layer frame 80 in the first exemplary embodiment is replaced with the corresponding area of the virtual base layer frame 90 obtained in the second exemplary embodiment. Further, the unconnected pixel area and the multi-connected pixel area of the virtual base layer frame 80 in the first exemplary embodiment may be replaced with the corresponding areas of the virtual base layer frame 90 obtained in the second exemplary embodiment.
  • FIG. 15 is a block diagram showing the construction of a video encoder 300 according to an exemplary embodiment of the present invention. In FIG. 15 and FIG. 16, which will be described later, an example in which a single base layer and a single enhancement layer are used is described, but those skilled in the art will appreciate that the present invention can be applied to a lower layer and a current layer even if the number of layers used increases.
  • the video encoder 300 can be divided into an enhancement layer encoder 200 and a base layer encoder 100. First, the construction of the base layer encoder 100 is described.
  • a downsampler 110 downsamples input video at a resolution and a frame rate appropriate to a base layer. From the standpoint of resolution, downsampling can be performed using an MPEG downsampler or a wavelet downsampler. Further, from the standpoint of frame rate, downsampling can be easily performed using a frame skip method, a frame interpolation method, and others.
  • a motion estimation unit 150 performs motion estimation on a base layer frame, and obtains a motion vector mv with respect to each partition constituting the base layer frame.
  • Such motion estimation denotes a procedure of finding an area most similar to each partition of a current frame F c in a reference frame F r , that is, an area having a minimum error, and can be performed using various methods, such as a fixed size block matching method or a hierarchical variable size block matching method.
  • the reference frame Fr can be provided by a frame buffer 180.
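  • As an illustration of the motion estimation step, the fragment below performs a plain fixed-size, full-search block matching with a sum-of-absolute-differences criterion; the search range and block size are arbitrary assumptions, and a real encoder may use hierarchical variable size block matching instead, as noted above.

```python
import numpy as np

def block_match(cur, ref, y, x, block=8, search=8):
    """Return the (dy, dx) displacement into `ref` with the smallest SAD for
    the block x block partition of `cur` whose top-left corner is (y, x)."""
    h, w = ref.shape
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sy, sx = y + dy, x + dx
            if sy < 0 or sx < 0 or sy + block > h or sx + block > w:
                continue                       # candidate falls outside the frame
            cand = ref[sy:sy + block, sx:sx + block].astype(np.int32)
            cost = int(np.abs(target - cand).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

Looping this over every partition of a base layer frame yields a motion field of the form assumed by the virtual-frame sketches above.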
  • the base layer encoder 100 of FIG. 15 adopts a scheme in which a reconstructed frame is used as a reference frame, that is, a closed loop encoding scheme.
  • the encoding scheme is not limited to the closed loop encoding method, and the base layer encoder 100 can adopt an open loop encoding scheme in which an original base layer frame, provided by the downsampler 110, is used as a reference frame.
  • a motion compensation unit 160 performs motion compensation on the reference frame using the obtained motion vector. Further, a subtractor 115 obtains the difference between the current frame Fc of the base layer and the motion compensated reference frame, thus generating a residual frame.
  • a transform unit 120 performs a spatial transform on the generated residual frame and generates a transform coefficient.
  • As the spatial transform method, a Discrete Cosine Transform (DCT) or a wavelet transform is mainly used.
  • When a DCT is used, the transform coefficient denotes a DCT coefficient; when a wavelet transform is used, the transform coefficient denotes a wavelet coefficient.
  • a quantization unit 130 quantizes the transform coefficient generated by the transform unit 120. Quantization is an operation of dividing the DCT coefficient, expressed as an arbitrary real number, into predetermined intervals based on a quantization table representing the intervals as discrete values, and matching the discrete values to corresponding indices. A quantization result value obtained in this way is called a quantized coefficient.
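  • The transform and quantization steps can be pictured with the toy fragment below; a uniform step quantizer stands in for the codec's quantization table, the step size is an arbitrary assumption, and SciPy's dctn/idctn are used only for convenience.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(residual_block, qstep=16.0):
    """Spatial transform of an 8x8 residual block followed by uniform
    quantization; the returned integers are the quantized coefficients."""
    coeff = dctn(residual_block.astype(np.float64), norm="ortho")
    return np.round(coeff / qstep).astype(np.int32)

def dequantize_block(indices, qstep=16.0):
    """Inverse quantization and inverse spatial transform, mirroring the
    roles of the inverse quantization unit 171 and inverse transform unit 172."""
    coeff = indices.astype(np.float64) * qstep
    return idctn(coeff, norm="ortho")
```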
  • An entropy encoding unit 140 performs lossless encoding on the quantized coefficient, generated by the quantization unit 130, and the motion vector, generated by the motion estimation unit 150, thus generating a base layer bit stream.
  • Various lossless encoding methods, such as Huffman coding, arithmetic coding or variable length coding, can be used.
  • an inverse quantization unit 171 performs inverse quantization on the quantized coefficient output from the quantization unit 130.
  • Such an inverse quantization process corresponds to the inverse of the quantization process, and is a process of reconstructing values matching indices, which are generated during the quantization process, from the indices through the use of the quantization table used in the quantization process.
  • An inverse transform unit 172 performs an inverse spatial transform on an inverse quantization result value. This inverse spatial transform is the inverse to the transform process executed by the transform unit 120. In detail, an inverse DCT, an inverse wavelet transform, and others can be used.
  • An adder 125 adds the output value of the motion compensation unit 160 to the output value of the inverse transform unit 172, reconstructs the current frame, and provides the reconstructed current frame to the frame buffer 180.
  • the frame buffer 180 temporarily stores the reconstructed frame and provides the reconstructed frame as a reference frame to perform the inter-prediction on another base layer frame.
  • a virtual frame generation unit 190 generates a virtual base layer frame to perform Intra-BL prediction on an unsynchronized frame of an enhancement layer. That is, the virtual frame generation unit 190 generates the virtual base layer frame using a motion vector, generated between the two base layer frames temporally closest to the unsynchronized frame, and the reference frame of the two frames. For this operation, the virtual frame generation unit 190 receives the motion vector mv from the motion estimation unit 150, and the reference frame F r from the frame buffer 180.
  • the detailed procedure of generating the virtual base layer frame using the motion vector and the reference frame has been described with reference to FIGS. 4 to 14, and therefore detailed descriptions thereof are omitted.
  • the virtual base layer frame generated by the virtual frame generation unit 190 is selectively provided to the enhancement layer encoder 200 through an upsampler 195. Therefore, the upsampler 195 upsamples the virtual base layer frame at the resolution of the enhancement layer when the resolutions of the enhancement layer and the base layer are different. Of course, when the resolutions of the base layer and the enhancement layer are the same, the upsampling process is omitted.
  • If an input frame is an unsynchronized frame, the input frame and the virtual base layer frame provided by the base layer encoder 100 are input to a subtractor 210.
  • The subtractor 210 subtracts the virtual base layer frame from the input frame and generates a residual frame.
  • the residual frame is converted into an enhancement layer bit stream through a transform unit 220, a quantization unit 230, and an entropy encoding unit 240, and the enhancement layer bit stream is output.
  • the functions and operations of the transform unit 220, the quantization unit 230 and the entropy encoding unit 240 are similar to those of the transform unit 120, the quantization unit 130 and the entropy encoding unit 140, and therefore detailed descriptions thereof are omitted.
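  • Put together, the enhancement layer handling of an unsynchronized frame reduces to the sketch below; the bilinear zoom is only a stand-in for whatever upsampling filter the codec actually applies, and the downstream transform/quantization/entropy chain is represented by a placeholder comment.

```python
import numpy as np
from scipy.ndimage import zoom

def predict_unsynchronized(unsync_frame, virtual_base):
    """Intra-BL style prediction of an unsynchronized enhancement layer frame
    from a virtual base layer frame (cf. upsampler 195 and subtractor 210)."""
    if virtual_base.shape != unsync_frame.shape:
        factors = (unsync_frame.shape[0] / virtual_base.shape[0],
                   unsync_frame.shape[1] / virtual_base.shape[1])
        virtual_base = zoom(virtual_base.astype(np.float64), factors, order=1)
    residual = unsync_frame.astype(np.float64) - virtual_base
    # the residual would then pass through the transform, quantization and
    # entropy encoding units to become the enhancement layer bit stream
    return residual
```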
  • The enhancement layer encoder 200 of FIG. 15 has been described with respect to the encoding of an unsynchronized frame among input frames.
  • If the input frame is a synchronized frame, the three conventional prediction methods can be selectively used to perform encoding, as described above with reference to FIG. 2.
  • FIG. 16 is a block diagram showing the construction of a video decoder 600 according to an exemplary embodiment of the present invention.
  • the video decoder 600 can be divided into an enhancement layer decoder 500 and a base layer decoder 400. First, the construction of the base layer decoder 400 is described.
  • An entropy decoding unit 410 performs lossless decoding on a base layer bit stream, thus extracting texture data of a base layer frame and motion data (a motion vector, partition information, a reference frame number, and others).
  • An inverse quantization unit 420 performs inverse quantization on the texture data.
  • This inverse quantization process corresponds to the inverse of the quantization process executed by the video encoder 300, and is a process of reconstructing values matching indices, which are generated during the quantization process, from the indices through the use of the quantization table used in the quantization process.
  • An inverse transform unit 430 performs an inverse spatial transform on the inverse quantization result, thus reconstructing a residual frame.
  • This inverse spatial transform is the inverse of the transform process executed by the transform unit 120 of the video encoder 300.
  • the inverse DCT, inverse wavelet transform, or others can be used as the inverse transform.
  • The entropy decoding unit 410 provides the motion data, including a motion vector mv, to both a motion compensation unit 460 and a virtual frame generation unit 470.
  • the motion compensation unit 460 performs motion compensation on a previously reconstructed video frame provided by a frame buffer 450, that is, a reference frame, using the motion data provided by the entropy decoding unit 410, thus generating a motion compensated frame.
  • a motion compensation procedure is applied only when a current frame is encoded through inter-prediction by the encoder.
  • An adder 415 adds a residual frame reconstructed by the inverse transform unit 430 to the motion compensated frame generated by the motion compensation unit 460, thus reconstructing a base layer video frame.
  • the reconstructed video frame can be temporarily stored in the frame buffer 450, and can be provided to the motion compensation unit 460 or the virtual frame generation unit 470 as a reference frame so as to reconstruct other subsequent frames.
  • the virtual frame generation unit 470 generates a virtual base layer frame to perform Intra-BL prediction on an unsynchronized frame of an enhancement layer. That is, the virtual frame generation unit 470 generates the virtual base layer frame using a motion vector generated between the two base layer frames temporally closest to the unsynchronized frame and the reference frame of the two frames. For this operation, the virtual frame generation unit 470 receives the motion vector mv from the entropy decoding unit 410 and the reference frame F r from the frame buffer 450.
  • the detailed procedure of generating the virtual base layer frame using the motion vector and the reference frame has been described with reference to FIGS. 4 to 14 , and therefore detailed descriptions thereof are omitted.
  • the virtual base layer frame generated by the virtual frame generation unit 470 is selectively provided to the enhancement layer decoder 500 through an upsampler 480. Therefore, the upsampler 480 upsamples the virtual base layer frame at the resolution of the enhancement layer when the resolutions of the enhancement layer and the base layer are different. Of course, when the resolutions of the base layer and the enhancement layer are the same, the upsampling process is omitted.
  • Next, the construction of the enhancement layer decoder 500 is described. If part of an enhancement layer bit stream related to an unsynchronized frame is input to an entropy decoding unit 510, the entropy decoding unit 510 performs lossless decoding on the input bit stream and extracts the texture data of the unsynchronized frame.
  • the extracted texture data is reconstructed as a residual frame through an inverse quantization unit 520 and an inverse transform unit 530.
  • the function and operation of the inverse quantization unit 520 and the inverse transform unit 530 are similar to those of the inverse quantization unit 420 and the inverse transform unit 430.
  • An adder 515 adds the reconstructed residual frame to the virtual base layer frame provided by the base layer decoder 400, thus reconstructing the unsynchronized frame.
  • the enhancement layer decoder 500 of FIG. 16 has been described based on the decoding of an unsynchronized frame among input frames.
  • If an enhancement layer bit stream is related to a synchronized frame, reconstruction methods according to the three conventional prediction methods can be selectively used, as described above with reference to FIG. 2.
  • FIG. 17 is a diagram showing the construction of a system environment, in which the video encoder 300 or the video decoder 600 operates, according to an exemplary embodiment of the present invention.
  • Such a system may be a TV, a set-top box, a desktop computer, a laptop computer, a handheld computer, a Personal Digital Assistant (PDA), or a video or image storage device, for example, a Video Cassette Recorder (VCR) or a Digital Video Recorder (DVR).
  • the system may be a combination of the devices, or a specific device including another device.
  • the system may include at least one video source 910, at least one input/output device 920, a processor 940, memory 950, and a display device 930.
  • the video source 910 may be a TV receiver, a VCR, or another video storage device. Further, the video source 910 may include a connection to one or more networks for receiving video from a server using the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), a terrestrial broadcast system, a cable network, a satellite communication network, a wireless network, or a telephone network. Moreover, the video source may be a combination of the networks, or a specific network including another network as a part of the specific network.
  • the input/output device 920, the processor 940, and the memory 950 communicate with each other through a communication medium 960.
  • the communication medium 960 may be a communication bus, a communication network, or one or more internal connection circuits.
  • the input video data received from the source 910 may be processed by the processor 940 using one or more software programs stored in the memory 950, or it may be executed by the processor 940 to generate video to be output to the display device 930.
  • the software program stored in the memory 950 may include a multi-layered video codec for performing the method of the present invention.
  • the codec may be stored in the memory 950, be read from a storage medium, such as Compact Disc-Read Only Memory (CD-ROM) or a floppy disc, or be downloaded from a server through various networks.
  • the codec may be implemented as a hardware circuit or as a combination of hardware circuits and software.
  • FIG. 18 is a flowchart showing a video encoding process according to an exemplary embodiment of the present invention.
  • the motion estimation unit 150 performs motion estimation by using one of two lower layer frames, temporally closest to the unsynchronized frame of the current layer, as a reference frame in operation S30.
  • the motion estimation can be performed using fixed size blocks or hierarchical variable size blocks.
  • the reference frame may be a temporally previous frame of the two lower layer frames, as shown in FIG. 4, or a temporally subsequent frame, as shown in FIG. 5.
  • the virtual frame generation unit 190 generates a virtual base layer frame at the same temporal location as the unsynchronized frame using the motion vector obtained as a result of motion estimation, and the reference frame in operation S40.
  • operation S40 includes the operation of reading texture data of an area spaced apart from the location of a partition, to which the motion vector is assigned, by the motion vector, from the reference frame, and the operation of copying the read texture data to a location away, in a direction opposite the motion vector, from the area by a value obtained by multiplying the motion vector by the distance ratio.
  • an unconnected pixel area may be replaced with texture data of an area of the reference frame corresponding to the unconnected pixel area.
  • a multi-connected pixel is replaced with a value obtained by averaging texture data copied from the corresponding location.
  • operation S40 includes the operation of reading texture data of an area spaced apart from the location of the partition, to which the motion vector is assigned, by a value obtained by multiplying the motion vector by the distance ratio, from the reference frame and the operation of copying the read texture data to the location of the partition.
  • the upsampler 195 upsamples the generated virtual base layer frame at the resolution of the current layer in operation S50.
  • the subtractor 210 of the enhancement layer encoder 200 subtracts the upsampled virtual base layer frame from the unsynchronized frame to generate a difference in operation S60. Further, the transform unit 220, the quantization unit 230 and the entropy encoding unit 240 encode the difference in operation S70.
  • If the current frame is a synchronized frame, the upsampler 195 upsamples a base layer frame at a location corresponding to the current synchronized frame at the resolution of the current layer in operation S80.
  • the subtractor 210 subtracts the upsampled base layer frame from the synchronized frame to generate a difference in operation S90.
  • the difference is also encoded through the transform unit 220, the quantization unit 230 and the entropy encoding unit 240 in operation S70.
  • FIG. 19 is a flowchart showing a video decoding process according to an exemplary embodiment of the present invention.
  • When a bit stream of a current layer is input in operation S110, whether the current layer bit stream is related to an unsynchronized frame is determined in operation S120.
  • the base layer decoder 400 reconstructs a reference frame of two lower layer frames temporally closest to the unsynchronized frame of the current layer from a lower layer bit stream in operation S130.
  • the virtual frame generation unit 470 generates a virtual base layer frame at the same temporal location as the unsynchronized frame using the motion vector included in the lower layer bit stream and the reconstructed reference frame in operation S140.
  • the first and second exemplary embodiments can be applied to operation S140, similar to the video encoding process.
  • the upsampler 480 upsamples the generated virtual base layer frame at the resolution of the current layer in operation S145.
  • the entropy decoding unit 510 of the enhancement layer decoder 500 extracts the texture data of the unsynchronized frame from a current layer bit stream in operation S150.
  • the inverse quantization unit 520 and the inverse transform unit 530 reconstruct a residual frame from the texture data in operation S160.
  • the adder 515 adds the residual frame to the virtual base layer frame in operation S170. As a result, the unsynchronized frame is reconstructed.
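  • Operations S150 to S170 amount to the small sketch below; clipping the result to the 8-bit range is an assumption, and the reconstructed residual and the already upsampled virtual base layer frame are taken as given.

```python
import numpy as np

def reconstruct_unsynchronized(residual, virtual_base_upsampled):
    """Add the reconstructed residual frame to the virtual base layer frame
    to recover the unsynchronized frame (adder 515, operation S170)."""
    frame = residual.astype(np.float64) + virtual_base_upsampled.astype(np.float64)
    return np.clip(np.round(frame), 0, 255).astype(np.uint8)
```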
  • If the current layer bit stream is related to a synchronized frame, the base layer decoder 400 reconstructs a base layer frame at a location corresponding to the synchronized frame in operation S180. Further, the upsampler 480 upsamples the reconstructed base layer frame in operation S190. Meanwhile, the entropy decoding unit 510 extracts the texture data of the synchronized frame from the current layer bit stream in operation S200. The inverse quantization unit 520 and the inverse transform unit 530 reconstruct a residual frame from the texture data in operation S210. Then, the adder 515 adds the residual frame to the upsampled base layer frame in operation S220. As a result, the synchronized frame is reconstructed.
  • Intra-BL prediction can be performed with respect to an unsynchronized frame using a virtual base layer frame.
  • video compression efficiency can be improved by a more efficient prediction method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This invention relates to a multi-layered video coding method in which motion estimation is performed by using, as a reference frame, one of the two frames of a lower layer that are temporally closest to an unsynchronized frame of a current layer. A virtual base layer frame located at the same temporal position as the unsynchronized frame is generated using a motion vector obtained as a result of the motion estimation and the reference frame. The generated virtual base layer frame is subtracted from the unsynchronized frame to generate a difference, which is then encoded.
PCT/KR2006/000192 2005-01-21 2006-01-18 Video coding method and apparatus for efficiently predicting unsynchronized frames WO2006078115A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US64500905P 2005-01-21 2005-01-21
US60/645,009 2005-01-21
KR10-2005-0020810 2005-03-12
KR1020050020810A KR100703745B1 (ko) 2005-01-21 2005-03-12 Video coding method and apparatus for efficiently predicting unsynchronized frames

Publications (1)

Publication Number Publication Date
WO2006078115A1 true WO2006078115A1 (fr) 2006-07-27

Family

ID=36692461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/000192 WO2006078115A1 (fr) 2005-01-21 2006-01-18 Video coding method and apparatus for efficiently predicting unsynchronized frames

Country Status (1)

Country Link
WO (1) WO2006078115A1 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2327212A2 (fr) * 2008-09-11 2011-06-01 Google Inc. System and method for video encoding using a constructed reference frame
WO2012138571A1 (fr) * 2011-04-07 2012-10-11 Google Inc. Encoding and decoding motion via image segmentation
US8665952B1 (en) 2010-09-15 2014-03-04 Google Inc. Apparatus and method for decoding video encoded using a temporal filter
US9014266B1 (en) 2012-06-05 2015-04-21 Google Inc. Decimated sliding windows for multi-reference prediction in video coding
CN104937932A (zh) * 2012-09-28 2015-09-23 Intel Corporation Enhanced reference region utilization for scalable video coding
US9262670B2 (en) 2012-02-10 2016-02-16 Google Inc. Adaptive region of interest
US9392272B1 (en) 2014-06-02 2016-07-12 Google Inc. Video coding using adaptive source variance based partitioning
US9392280B1 (en) 2011-04-07 2016-07-12 Google Inc. Apparatus and method for using an alternate reference frame to decode a video frame
US9426459B2 (en) 2012-04-23 2016-08-23 Google Inc. Managing multi-reference picture buffers and identifiers to facilitate video data coding
US9578324B1 (en) 2014-06-27 2017-02-21 Google Inc. Video coding using statistical-based spatially differentiated partitioning
US9609341B1 (en) 2012-04-23 2017-03-28 Google Inc. Video data encoding and decoding using reference picture lists
US9756331B1 (en) 2013-06-17 2017-09-05 Google Inc. Advance coded reference prediction
US9924161B2 (en) 2008-09-11 2018-03-20 Google Llc System and method for video coding using adaptive segmentation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04255169A (ja) * 1991-02-07 1992-09-10 Fujitsu Ltd Hierarchical restoration system
US6043846A (en) * 1996-11-15 2000-03-28 Matsushita Electric Industrial Co., Ltd. Prediction apparatus and method for improving coding efficiency in scalable video coding
US20020118742A1 (en) * 2001-02-26 2002-08-29 Philips Electronics North America Corporation. Prediction structures for enhancement layer in fine granular scalability video coding
US6795501B1 (en) * 1997-11-05 2004-09-21 Intel Corporation Multi-layer coder/decoder for producing quantization error signal samples

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04255169A (ja) * 1991-02-07 1992-09-10 Fujitsu Ltd Hierarchical restoration system
US6043846A (en) * 1996-11-15 2000-03-28 Matsushita Electric Industrial Co., Ltd. Prediction apparatus and method for improving coding efficiency in scalable video coding
US6795501B1 (en) * 1997-11-05 2004-09-21 Intel Corporation Multi-layer coder/decoder for producing quantization error signal samples
US20020118742A1 (en) * 2001-02-26 2002-08-29 Philips Electronics North America Corporation. Prediction structures for enhancement layer in fine granular scalability video coding

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9374596B2 (en) 2008-09-11 2016-06-21 Google Inc. System and method for video encoding using constructed reference frame
US9924161B2 (en) 2008-09-11 2018-03-20 Google Llc System and method for video coding using adaptive segmentation
EP2327212A4 (fr) * 2008-09-11 2012-11-28 Google Inc. System and method for video encoding using a constructed reference frame
US8385404B2 (en) 2008-09-11 2013-02-26 Google Inc. System and method for video encoding using constructed reference frame
EP2327212A2 (fr) * 2008-09-11 2011-06-01 Google Inc. System and method for video encoding using a constructed reference frame
US8665952B1 (en) 2010-09-15 2014-03-04 Google Inc. Apparatus and method for decoding video encoded using a temporal filter
US9392280B1 (en) 2011-04-07 2016-07-12 Google Inc. Apparatus and method for using an alternate reference frame to decode a video frame
US9154799B2 (en) 2011-04-07 2015-10-06 Google Inc. Encoding and decoding motion via image segmentation
WO2012138571A1 (fr) * 2011-04-07 2012-10-11 Google Inc. Encoding and decoding motion via image segmentation
US9262670B2 (en) 2012-02-10 2016-02-16 Google Inc. Adaptive region of interest
US9426459B2 (en) 2012-04-23 2016-08-23 Google Inc. Managing multi-reference picture buffers and identifiers to facilitate video data coding
US9609341B1 (en) 2012-04-23 2017-03-28 Google Inc. Video data encoding and decoding using reference picture lists
US9014266B1 (en) 2012-06-05 2015-04-21 Google Inc. Decimated sliding windows for multi-reference prediction in video coding
EP2901691A4 (fr) * 2012-09-28 2016-05-25 Intel Corp Enhanced reference region utilization for scalable video coding
CN104937932A (zh) * 2012-09-28 2015-09-23 Intel Corporation Enhanced reference region utilization for scalable video coding
US9769475B2 (en) 2012-09-28 2017-09-19 Intel Corporation Enhanced reference region utilization for scalable video coding
US9756331B1 (en) 2013-06-17 2017-09-05 Google Inc. Advance coded reference prediction
US9392272B1 (en) 2014-06-02 2016-07-12 Google Inc. Video coding using adaptive source variance based partitioning
US9578324B1 (en) 2014-06-27 2017-02-21 Google Inc. Video coding using statistical-based spatially differentiated partitioning

Similar Documents

Publication Publication Date Title
WO2006078115A1 (fr) Video coding method and apparatus for efficiently predicting unsynchronized frames
US20060165303A1 (en) Video coding method and apparatus for efficiently predicting unsynchronized frame
KR100714696B1 (ko) Video coding method and apparatus using multi-layer based weighted prediction
JP4891234B2 (ja) Scalable video coding using grid motion estimation/compensation
KR100703788B1 (ko) Multi-layer based video encoding method, decoding method, video encoder and video decoder using smoothing prediction
KR100703760B1 (ko) Video encoding/decoding method and apparatus using motion vector prediction between temporal levels
JP4729220B2 (ja) Hybrid temporal/SNR fine granular scalability video coding
KR100763179B1 (ko) Method for compressing/reconstructing motion vectors of an unsynchronized picture and apparatus using the same
US20060165301A1 (en) Video coding method and apparatus for efficiently predicting unsynchronized frame
US20060120448A1 (en) Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
KR100704626B1 (ko) Method and apparatus for compressing multi-layer based motion vectors
US20060165302A1 (en) Method of multi-layer based scalable video encoding and decoding and apparatus for the same
US20060104354A1 (en) Multi-layered intra-prediction method and video coding method and apparatus using the same
KR20060135992A (ko) Video coding method and apparatus using multi-layer based weighted prediction
KR20020090239A (ko) Improved prediction structures for the enhancement layer in fine granular scalability video coding
KR20060063532A (ko) Multi-layer based video encoding method, decoding method and apparatus using the same
US20060250520A1 (en) Video coding method and apparatus for reducing mismatch between encoder and decoder
WO2006059847A1 (fr) Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
WO2006132509A1 (fr) Multi-layer based video encoding method, decoding method, video encoder and video decoder using smoothing prediction
WO2006078125A1 (fr) Video coding method and apparatus for efficiently predicting an unsynchronized frame
WO2006078109A1 (fr) Method and apparatus for multi-layer based scalable video encoding and decoding
EP1847129A1 (fr) Method and apparatus for compressing a multi-layer motion vector
WO2006104357A1 (fr) Method for compressing/decompressing motion vectors of an unsynchronized picture and apparatus using the same
WO2006098586A1 (fr) Video encoding/decoding method and apparatus using motion prediction between temporal levels
WO2006109989A1 (fr) Video coding method and apparatus for reducing mismatch between an encoder and a decoder

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06702873

Country of ref document: EP

Kind code of ref document: A1