WO2006085726A1 - Method and apparatus for encoding/decoding and referencing a virtual area image - Google Patents

Method and apparatus for encoding/decoding and referencing a virtual area image

Info

Publication number
WO2006085726A1
Authority
WO
WIPO (PCT)
Prior art keywords
base layer
frame
layer frame
virtual area
image
Prior art date
Application number
PCT/KR2006/000471
Other languages
English (en)
Inventor
Sang-Chang Cha
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020050028248A external-priority patent/KR100703751B1/ko
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2006085726A1 publication Critical patent/WO2006085726A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors
    • H04N 19/517 Processing of motion vectors by encoding
    • H04N 19/52 Processing of motion vectors by encoding by predictive encoding

Definitions

  • Apparatuses and methods consistent with the present invention relate to encoding and decoding referencing a virtual area image.
  • the basic principle in compressing data is to eliminate data redundancy.
  • the redundancy of data comprises spatial redundancy, in which identical colors or objects are repeated within an image; temporal redundancy, in which neighboring frames of a moving picture differ little or identical sounds are repeated; and psychovisual redundancy, which exploits the insensitivity of human vision and perception.
  • the temporal redundancy is removed by temporal filtering based on motion compensation, and the spatial redundancy is removed by a spatial transform.
  • Transmission media have different performance characteristics.
  • Current transmission media have diverse transmission speeds, from high-speed communication networks that transmit data at tens of Mbit/s down to mobile communication networks with a transmission speed of 384 kbit/s.
  • a scalable video coding method may be more suitable for supporting the transmission media at various speeds.
  • Scalable video coding makes it possible to transmit multimedia at a transmission rate corresponding to the transmission environment.
  • the aspect ratio may be changed to 4:3 or 16:9 according to the size or features of an apparatus that generates multimedia.
  • the scalable video coding cuts out a part of a bit stream already compressed, according to the transmission bit rate, transmission error rate, and system resources in order to adjust the resolution, frame rate and bit rate.
  • the Moving Picture Experts Group (MPEG-21 Part 13, ISO/IEC 21000-13) is already working on standardizing scalable video coding.
  • the standardization is based on multi-layers in order to realize scalability.
  • the multi-layers comprise a base layer, an enhanced layer 1 and an enhanced layer 2.
  • the respective layers may comprise different resolutions (QCIF, CIF and 2CIF) and frame-rates.
  • multi-layer coding requires a motion vector to exclude temporal redundancy.
  • the motion vector may be acquired from each layer or it may be acquired from one layer and applied to other layers (i.e., up/down sampling).
  • the former method provides a more precise motion vector than the latter, but generates overhead; with the former method, it is therefore important to remove redundancy between the motion vectors of the layers more efficiently.
  • FIG. 1 is an example of a scalable video codec employing a multi-layer structure.
  • a base layer is in the quarter common intermediate format (QCIF) at 15Hz
  • an enhanced layer 1 is in the common intermediate format (CIF) at 30Hz
  • an enhanced layer 2 is in standard definition (SD) at 60Hz.
  • a CIF 0.5 Mbps stream may be provided by cutting the bit stream from CIF_30Hz_0.7M to a 0.5 M bit rate.
  • frames of respective layers having an identical temporal position may comprise similar images.
  • current layer texture may be predicted by base layer texture, and the difference between the predicted value and the current layer texture may be encoded.
  • Scalable Video Model 3.0 of ISO/IEC 21000-13 Scalable Video Coding (hereinafter referred to as 'SVM 3.0') defines the foregoing method as Intra_BL prediction.
  • SVM 3.0 additionally adopts a method of predicting a current block by using the correlation between base layer blocks corresponding to the current block, as well as adopting inter prediction and directional intra prediction to predict blocks or macro-blocks comprising the current frame in existing H.264.
  • the foregoing method may be referred to as intra BL prediction, and a coding mode which employs the foregoing prediction methods is referred to as intra BL mode.
  • FIG. 2 is a schematic view of three prediction methods.
  • the three prediction methods comprise intra prediction ① of a certain macro-block 14 of a current frame 11; inter prediction ② using a frame 12 disposed in a different temporal position from the current frame 11; and intra BL prediction ③ using texture data of an area 16 of a base layer frame 13 corresponding to the macro-block 14.
  • the scalable video coding standards employ one of the three prediction methods by macro-block.
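As an illustration of this per-macro-block choice, here is a minimal Python sketch; the SAD cost and all names are assumptions for exposition, not SVM 3.0's actual rate-distortion decision:

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences between two blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def choose_mode(block: np.ndarray,
                intra_pred: np.ndarray,
                inter_pred: np.ndarray,
                intra_bl_pred: np.ndarray) -> str:
    """Pick the prediction mode with the smallest residual energy.

    intra_pred comes from neighboring pixels of the current frame,
    inter_pred from a frame at a different temporal position, and
    intra_bl_pred from the co-located (upsampled) base layer area,
    as in FIG. 2. Real encoders minimize a rate-distortion cost,
    not raw SAD.
    """
    costs = {
        'intra': sad(block, intra_pred),
        'inter': sad(block, inter_pred),
        'intra_bl': sad(block, intra_bl_pred),
    }
    return min(costs, key=costs.get)
```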
  • however, a frame 40 may exist that has no corresponding base layer frame.
  • Intra-BL prediction is not applicable to the frame 40.
  • accordingly, the frame 40 is coded by using only information of its own layer (i.e., by inter prediction and intra prediction only) without using information of the base layer, which is inefficient in coding performance.
  • the upper layer may not refer to video information of the base layer.
  • FIG. 3 illustrates images of upper and base positions in different sizes while coding the video of the multi-layer structure.
  • the video image is divided into two layers.
  • Base layers 101, 102 and 103 provide images having a smaller width.
  • Upper layers 201, 202 and 203 provide images having a larger width than that of the base layers 101, 102 and 103.
  • the upper layers may comprise images which are not included in the video information of the base layers.
  • the upper layers refer to image or video information of the base layers when they are divided into frames to be transmitted.
  • Frame 201 refers to frame 101 when it is generated, frame 202 refers to frame 102, and frame 203 refers to frame 103.
  • in FIG. 3, an object shaped like a star moves in a leftward direction.
  • in frame 102, referred to by the frame 202, a part of the star is excluded.
  • meanwhile, the star is disposed on the left side 212 of the frame 202 of the video.
  • thus, the left video information may not refer to the base layer data when it is coded.
  • in frame 103, referred to by frame 203, the star has moved further in the leftward direction, and more of it is missing relative to frame 102.
  • when the star is disposed on the left side 213 of the frame 203 (the upper layer), it may not refer to the base layer data.
  • the upper layers may not refer to the video of the base layers for some areas.
  • the upper layers may refer to a frame of a previous upper layer through an inter-mode to compensate for the area that cannot be referenced.
  • the Intra-BL mode is not used, thereby lowering the accuracy of the data.
  • the amount of data to be compressed increases, thereby lowering compression efficiency. Thus, it is necessary to increase the compression rate of layers having images of different sizes.
  • the present invention provides a method and an apparatus for encoding and decoding a video of upper layers by using motion information in a multi-layer structure having images whose size varies by layer. The present invention also restores images that are not included in a base layer, thereby enhancing compression efficiency.
  • a method for encoding referencing a virtual area image comprising (a) generating a base layer frame from an input video signal; (b) restoring a virtual area image in an outside of the base layer frame through a corresponding image of a reference frame of the base layer frame; (c) adding the restored virtual area image to the base layer frame to generate a virtual area base layer frame; and (d) differentiating the virtual area base layer frame from the video signal to generate an enhanced layer frame.
  • (b) comprises determining the virtual area image in the outside of the base layer frame by a motion vector of a block existing in a boundary area of the base layer frame.
  • the reference frame of (b) is ahead of the base layer frame.
  • (b) comprises copying motion information which exists in the boundary area of the base layer frame.
  • (b) comprises generating motion information according to a proportion of motion information of the block in the boundary area of the base layer frame and motion information of a neighboring block.
  • the enhanced layer frame of (d) comprises an image having a larger area than the image supplied by the base layer frame.
  • the method further comprises storing the virtual area base layer frame or the base layer frame.
  • a method for decoding referencing a virtual area image comprising (a) restoring a base layer frame from a bit stream; (b) restoring a virtual area image in an outside of the restored base layer frame through a corresponding image of a reference frame of the base layer frame; (c) adding the restored virtual area image to the base layer frame to generate a virtual area base layer frame; (d) restoring an enhanced layer frame from the bit stream; and (e) combining the enhanced layer frame and the virtual area base layer frame to generate an image.
  • (b) comprises determining the virtual area image in the outside of the base layer frame by a motion vector of a block which exists in a boundary area of the base layer frame.
  • the reference frame of (b) is ahead of the base layer frame.
  • (b) comprises copying motion information which exists in the boundary area of the base layer frame.
  • (b) comprises generating motion information according to a proportion of motion information of the block in the boundary area of the base layer frame and motion information of a neighboring block.
  • the image generated in (e) has a larger area than the image supplied by the base layer frame.
  • the method further comprises storing the virtual area base layer frame or the base layer frame.
  • an encoder comprising a base layer encoder to generate a base layer frame from an input video signal; and an enhanced layer encoder to generate an enhanced layer frame from the video signal, wherein the base layer encoder restores a virtual area image in an outside of the base layer frame through a corresponding image of a reference frame of the base layer frame and adds the restored virtual area image to the base layer frame to generate a virtual area base layer frame, and the enhanced layer encoder differentiates the virtual area base layer frame from the video signal to generate an enhanced layer frame.
  • the encoder further comprises a motion estimator to acquire motion information of an image and to determine the virtual area image in the outside of the base layer frame by a motion vector of a block which exists in a boundary area of the base layer frame.
  • the reference frame is ahead of the base layer frame.
  • the virtual area frame generator copies motion information which exists in the boundary area of the base layer frame.
  • the virtual area frame generator generates the motion information according to a proportion of motion information of a block existing in the boundary area of the base layer frame and motion information of a neighboring block.
  • the enhanced layer frame comprises an image having a larger area than the image supplied by the base layer frame.
  • the encoder further comprises a frame buffer to store the virtual area base layer frame or the base layer frame therein.
  • a decoder comprising a base layer decoder to restore a base layer frame from a bit stream; and an enhanced layer decoder to restore an enhanced layer frame from the bit stream
  • the base layer decoder comprises a virtual area frame generator to generate a virtual area base layer frame by restoring a virtual area image in an outside of the restored base layer frame through a corresponding image of a reference frame of the base layer frame and by adding the restored image to the base layer frame
  • the enhanced layer decoder combines the enhanced layer frame and the virtual area base layer frame to generate an image.
  • the decoder further comprises a motion estimator to acquire motion information of an image and to determine the virtual area image in the outside of the base layer frame by a motion vector of a block which exists in a boundary area of the base layer frame.
  • the reference frame is ahead of the base layer frame.
  • the virtual area frame generator copies motion information which exists in the boundary area of the base layer frame.
  • the virtual area frame generator generates the motion information according to a proportion of motion information of a block existing in the boundary area of the base layer frame and motion information of a neighboring block.
  • the enhanced layer frame comprises an image having a larger area than the image supplied by the base layer frame.
  • the decoder further comprises a frame buffer to store the virtual area base layer frame or the base layer frame therein.
  • FIG. 1 is an example of scalable video coding/decoding which uses a multi-layer structure
  • FIG. 2 is a schematic view of a prediction method of a block or macro-block
  • FIG. 3 illustrates upper and base images of different sizes while coding a video in a multi-layer structure
  • FIG. 4 is an example of coding data which does not exist in video information of a base layer with reference to information of a previous frame while coding a video of an upper layer according to an embodiment of the present invention
  • FIG. 5 is an example of generating a virtual area by copying motion information according to an embodiment of the present invention.
  • FIG. 6 is an example of generating a virtual area by proportionally calculating motion information according to an embodiment of the present invention
  • FIG. 7 is an example of generating a virtual area frame while it is encoded according to an embodiment of the present invention.
  • FIG. 8 is an example of generating a virtual area frame by using motion information according to an embodiment of the present invention.
  • FIG. 9 is an example of decoding base and upper layers according to an embodiment of the present invention.
  • FIG. 10 is an example of a configuration of a video encoder according to an embodiment of the present invention.
  • FIG. 11 is an example of a configuration of a video decoder according to an embodiment of the present invention.
  • FIG. 12 is a flowchart of encoding a video according to an embodiment of the present invention.
  • FIG. 13 is a flowchart of decoding a video according to an embodiment of the present invention.
  • FIG. 4 is an example of coding data that does not exist in video information of a base layer with reference to information of a previous frame while coding a video of an upper layer.
  • Upper layer frames 201, 202 and 203 refer to base layer frames 111, 112 and 113, respectively.
  • a part 231 that is included in a video of the frame 201 exists in a video of the base layer frame 111.
  • the part 231 may be generated by referring to base information.
  • a part 232 that is included in a video of the frame 202 exists in the base layer frame 112, but with a part thereof excluded.
  • a user may recognize which area of the previous frame is referred to through motion information of the frame 112.
  • a virtual area is generated by using the motion information.
  • the virtual area may be generated by copying the motion information from neighboring areas or by extrapolation.
  • the motion information is used to generate corresponding areas from a restored image of the previous frame.
  • the area 121 of the frame 111 is disposed outside the current frame, and a frame to which its image information is added may be generated.
  • video information of the area 232 may be referred to by the base layer.
  • the video information of the area 233 is not included in the base frame 113.
  • the previous frame 112 comprises the corresponding image information.
  • however, the virtual area generated for the previous frame 112 comprises the corresponding image information, so a new virtual base frame to be referenced may be generated therefrom.
  • FIG. 5 is an example of generating the virtual area by copying the motion information according to an embodiment of the present invention.
  • the frame 132 is divided into 16 areas; each area may comprise a macro-block or a group of macro-blocks.
  • the motion vectors of e, f, g and h, disposed in a left boundary area of the frame 132, are the same as those of the frame 133.
  • the motion vectors mv_e, mv_f, mv_g and mv_h of e, f, g and h, respectively, are directed toward the center of the frame. That is, the image has moved toward the outside, compared to that in the previous frame.
  • the motion vectors are expressed in relation to reference frames, and they indicate where the macro-block is disposed in the reference frame.
  • the direction of the motion vectors is opposite to the direction that images or objects move according to the time axis when the previous frame is designated as the reference frame.
  • the direction (arrow) of the motion vectors in FIG. 5 indicates a position of the corresponding macro-block in the previous frame, as in the reference frame.
  • the virtual area is generated on the left side of e, f, g and h; the motion vectors of that area copy the motion vectors mv_e, mv_f, mv_g and mv_h, and the information of the virtual area is referenced from the previous frame.
  • when the previous frame is the frame 131, the information of the frame 131 and that of the frame 134 are combined to generate a restoration frame 135 of a new virtual area.
  • a new frame with a, b, c and d added on its left side is generated, and the upper layer frame referring to the frame 132 is coded with reference to the frame 135.
  • when the motion information of the frame 132 is directed to the right side, the motion information of the boundary area is copied and the previous frame is referenced to generate a new virtual area.
  • the new virtual area may be generated by extrapolation, without copying the motion information.
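To make the copy-based generation concrete, here is a minimal Python sketch under assumed conventions: fixed 16-pixel macro-blocks, one motion vector per boundary block, and invented names, none of which come from the patent itself:

```python
import numpy as np

BLOCK = 16  # assumed macro-block size

def extend_left_by_mv_copy(prev_frame: np.ndarray,
                           cur_frame: np.ndarray,
                           boundary_mvs: list[tuple[int, int]]) -> np.ndarray:
    """Append one block column on the left of cur_frame.

    Each new block reuses (copies) the motion vector of the adjacent
    left-boundary block and fetches the referenced pixels from
    prev_frame, as in the FIG. 5 example.
    """
    h, w = cur_frame.shape
    out = np.zeros((h, w + BLOCK), dtype=cur_frame.dtype)
    out[:, BLOCK:] = cur_frame
    for row, (dy, dx) in enumerate(boundary_mvs):  # one MV per block row
        y = row * BLOCK
        # the virtual block sits at x = -BLOCK in cur_frame coordinates;
        # its reference in prev_frame is shifted by the copied MV
        ry = int(np.clip(y + dy, 0, h - BLOCK))
        rx = int(np.clip(-BLOCK + dx, 0, w - BLOCK))
        out[y:y + BLOCK, 0:BLOCK] = prev_frame[ry:ry + BLOCK, rx:rx + BLOCK]
    return out
```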
  • FIG. 6 is an example of generating a virtual area by proportionally calculating the motion information according to an embodiment of the present invention. If the motion information of the boundary area is different from the motion information of a neighboring area, the motion information may be calculated by a proportion between them to generate a virtual area from the previous frame.
  • a frame 142 is provided as an example.
  • the motion vectors, i.e., the motion information, of e, f, g and h are defined as mv_e, mv_f, mv_g and mv_h, respectively.
  • the motion vectors of i, j, k and l, existing on the right side of e, f, g and h, are defined as mv_i, mv_j, mv_k and mv_l, respectively.
  • the motion information of the area to be generated on the left side may be calculated from the proportion between these motion vectors. If the motion vectors of the area to be generated on the left side are defined as mv_a, mv_b, mv_c and mv_d, respectively, the ratio between the motion vector of the boundary area block and that of the neighboring block may be calculated as follows:
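One plausible form of Equation 1, assuming each virtual-area vector scales the boundary-block vector by its component-wise ratio to the neighboring-block vector (the exact expression is a reconstruction, not taken from the patent text):

$$
mv_a = mv_e \cdot \frac{mv_e}{mv_i}, \quad
mv_b = mv_f \cdot \frac{mv_f}{mv_j}, \quad
mv_c = mv_g \cdot \frac{mv_g}{mv_k}, \quad
mv_d = mv_h \cdot \frac{mv_h}{mv_l}
\qquad \text{(Equation 1, reconstructed)}
$$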
  • the motion vector of the frame 145 is calculated as described above, and a virtual area frame is generated by referring to the corresponding block in the frame 141 to include the virtual area.
  • alternatively, the motion information may be calculated by using the difference between the motion vector of the block e in the boundary area and that of the block i in the neighboring area:
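A consistent form of Equation 2, assuming simple linear extrapolation of the boundary vector by its difference from the neighboring vector (again a reconstruction):

$$
mv_a = mv_e + (mv_e - mv_i) = 2\,mv_e - mv_i
\qquad \text{(Equation 2, reconstructed; likewise for } mv_b, mv_c, mv_d\text{)}
$$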
  • Equation 2 may be adopted when the difference between the motion vectors is uniform across the respective blocks.
  • FIG. 7 is an example of generating a virtual area frame while it is encoded according to an embodiment of the present invention.
  • Base layer frames 151, 152 and 153, upper layer frames 251, 252 and 253, and virtual area frames 155 and 156 are provided.
  • the frame 251 comprises 28 blocks from a block z1 to a block t. Sixteen blocks, from a block a to a block p, may refer to the base layers.
  • the frame 252 comprises blocks z5 through x.
  • the base frame of the frame 252 is the frame 152, comprising blocks e through t.
  • a virtual area frame 155 may be generated by using the motion information of blocks e, f, g and h of frame 152.
  • frame 252 may refer to 20 blocks of frame 155.
  • the base frame of the frame 253 is a frame 153 comprising blocks i through x.
  • a virtual area frame 156 may be generated by using the motion information of blocks i, j, k and l of frame 153.
  • the motion information may be supplied by the previous virtual area frame 155.
  • a virtual area frame comprising 24 blocks may be referred to, thereby providing higher compression efficiency than the method that references frame 153 comprising 16 blocks.
  • the virtual area frame may be predicted in the intra BL mode in order to enhance compression efficiency.
  • FIG. 8 is an example of generating the virtual area frame by using the motion information according to an embodiment of the present invention.
  • the boundary area of the frame 161 may comprise up, down, left and right motion information. If far-right blocks comprise motion information directed to the left side, the virtual area frame may be generated by referencing a right block of the previous frame. That is, a virtual area frame with blocks a, b, c and d added to the right side is generated, as in the frame 163.
  • the upper layer frame of the frame 162 may reference the frame 163 when it is coded.
  • if top blocks of the frame 164 comprise motion information in a downward direction, the virtual area frame may be generated by referencing upper blocks in the previous frame. That is, blocks a, b, c and d are added to an upper part of the virtual area frame, as in the frame 165.
  • the upper layer frame of the frame 164 may reference the frame 165 when it is coded.
  • likewise, an image moving in a diagonal direction may generate the virtual area frame through its motion information.
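A small Python sketch of this direction test, assuming boundary motion vectors are collected per side; the dict layout and the threshold are illustrative assumptions:

```python
import numpy as np

def sides_to_extend(boundary_mvs: dict[str, np.ndarray],
                    threshold: float = 0.5) -> list[str]:
    """Guess which sides of the frame need a virtual area.

    boundary_mvs maps 'left'/'right'/'top'/'bottom' to an (N, 2) array
    of (dy, dx) motion vectors of the blocks on that boundary. A
    boundary whose vectors point toward the frame center means the
    image is moving outward there (FIG. 8), so the virtual area is
    generated on that side.
    """
    inward = {'left': (0, 1), 'right': (0, -1),
              'top': (1, 0), 'bottom': (-1, 0)}
    sides = []
    for side, mvs in boundary_mvs.items():
        dy, dx = inward[side]
        # mean projection of the boundary MVs onto the inward direction
        if np.mean(mvs @ np.array([dy, dx])) > threshold:
            sides.append(side)
    return sides
```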
  • FIG. 9 is an example of decoding base and upper layers according to an embodiment of the present invention.
  • a bit stream supplied over a network or from data stored on a storage medium is divided into a base layer bit stream and an enhanced layer bit stream to generate a scalable video.
  • the base layer bit stream in FIG. 9 is in a 4:3 aspect ratio while the enhanced layer bit stream is in a 16:9 aspect ratio.
  • the respective bit streams provide scalability according to size of the screen.
  • a frame 291, to be output, is restored (decoded) from a frame 171 supplied through the base layer bit stream and from a frame 271 supplied from the enhanced layer bit stream.
  • since parts a, b, c and d of the frame 272 are coded through the virtual area frame, a virtual area frame 175 is generated from the frame 172 and the previous frame 171.
  • a frame 292 is restored (decoded) from the frame 175 and the frame 272 to be output.
  • likewise, a virtual area frame 176 is generated from the frame 173, received through the base layer bit stream, and from the frame 175.
  • a frame 293 is restored (decoded) by the frame 176 and the frame 273 to be output.
  • a module may advantageously be configured to reside on an addressable storage medium and to be executed on one or more processors.
  • a module may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
  • FIG. 10 is an example of a configuration of a video encoder according to an exemplary embodiment of the present invention.
  • One base layer and one enhanced layer are provided and usage thereof is described with reference to FIGS. 10 and 11 by way of example, but the present invention is not limited thereto. Alternatively, the present invention may be applied to more layers.
  • a video encoder 500 may be divided into an enhanced layer encoder 400 and a base layer encoder 300.
  • a configuration of the base layer encoder 300 will be described.
  • a down sampler 310 down-samples an input video using a resolution and frame rate suitable for the base layer, or according to the size of the video.
  • in terms of resolution, the down sampling may be performed by an MPEG down sampler or a wavelet down sampler.
  • in terms of frame rate, the down sampling may be performed through frame skip or frame interpolation.
  • the video image originally input at the 16:9 aspect ratio is displayed at the 4:3 aspect ratio by excluding corresponding boundary areas from the video information or reducing the video information according to the corresponding screen size.
  • a motion estimator 350 estimates motions of the base layer frames to calculate motion vectors mv by partition, which is included in the base layer frames.
  • the motion estimation searches for the area in a reference frame Fr' that is most similar to each partition of the current frame Fc, i.e., the area with the least error.
  • the motion estimation may use fixed-size block matching or hierarchical variable size block matching (HVSBM).
  • the reference frame Fr' may be provided by a frame buffer 380.
  • the base layer encoder 300 shown in FIG. 10 adopts a method of using the restored frame as the reference frame, i.e., closed-loop coding, but the present invention is not limited thereto.
  • the base layer encoder 300 may adopt open loop coding, which uses an original base layer frame supplied by the down sampler 310 as the reference frame.
  • the motion vector mv of the motion estimator 350 is transmitted to a virtual area frame generator 390, which generates a virtual area frame with a virtual area added if the motion vector of a boundary area block of the current frame is directed toward the center of the frame.
  • a motion compensator 360 uses the calculated motion vector to perform motion compensation on the reference frame.
  • a differentiator 315 differentiates the current frame of the base layer and the motion-compensated reference frame to generate a residual frame.
  • a transformer 320 performs a spatial transform on the generated residual frame to generate a transform coefficient.
  • the spatial transform may be a discrete cosine transform (DCT), a wavelet transform, etc. If the DCT is used, the transform coefficient refers to a DCT coefficient; if the wavelet transform is used, the transform coefficient refers to a wavelet coefficient.
  • a quantizer 330 quantizes the transform coefficient generated by the transformer 320.
  • quantization refers to an operation in which the DCT coefficient is divided into predetermined areas according to a quantization table to be provided as a discrete value, and matched to a corresponding index.
  • the quantized value is referred to as a quantized coefficient.
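As a toy illustration of the quantize/index/restore round trip described above, here a single uniform step size stands in for the quantization table used by a real codec:

```python
def quantize(coeff: float, qstep: float) -> int:
    """Map a DCT coefficient to a quantization index (uniform quantizer)."""
    return round(coeff / qstep)

def dequantize(index: int, qstep: float) -> float:
    """Restore the matching (approximate) value from the index."""
    return index * qstep
```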
  • An entropy coder 340 lossless-codes the quantized coefficient generated by the quantizer 330 and the motion vector generated by the motion estimator 350 to generate the base layer bit stream.
  • the lossless-coding may be Huffman coding, arithmetic coding, variable length coding, or another type of coding known in the art.
  • a reverse quantizer 371 reverse-quantizes the quantized coefficient output by the quantizer 330.
  • the reverse-quantization restores a matching value from the index generated by the quantization through the quantization table used in the quantization.
  • a reverse transformer 372 performs a reverse spatial transform on the reverse- quantized value.
  • the reverse spatial transform is performed in an opposite manner to the transforming process of the transformer 320.
  • the reverse spatial transform may be a reverse DCT transform, a reverse wavelet transform, or others.
  • a calculator 325 adds the output value of the motion compensator 360 and the output value of the reverse transformer 372 to restore the current frame Fc', and supplies it to the frame buffer 380.
  • the frame buffer 380 temporarily stores the restored frame therein and supplies it as the reference frame for the inter-prediction of other base layer frames.
  • a virtual area frame generator 390 generates the virtual area frame using the restored current frame Fc', the reference frame Fr' of the current frame, and the motion vector mv. If the motion vector mv of a boundary area block of the current frame is directed toward the center of the frame, as shown in FIG. 8, the screen moves outward.
  • a virtual area frame is generated by copying a part of the blocks from the reference frame Fr'. The virtual area may be generated by copying the motion vectors as used in FIG. 5, or by the extrapolation through the proportion of motion vector values, as used in FIG. 6. If a virtual area is not generated, the current frame Fc' may be selected to encode the enhanced layers, without adding the virtual areas.
  • the frame extracted from the virtual area frame generator 390 is supplied to the enhanced layer encoder 400 through an upsampler 395.
  • the upsampler 395 up-samples the resolution of the virtual base layer frame to that of the enhanced layer if the resolution of the enhanced layer is different from that of the base layer. If the resolution of the base layer is identical to that of the enhanced layer, the upsampling can be omitted. Also, if part of the video information of the base layer is excluded compared to the video information of the enhanced layer, the upsampling can be omitted.
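A minimal sketch of the upsampler's conditional behavior, with scipy's zoom standing in for the codec's actual interpolation filter (an illustrative substitution):

```python
import numpy as np
from scipy.ndimage import zoom  # simple interpolation stand-in

def maybe_upsample(virtual_base: np.ndarray,
                   enh_shape: tuple[int, int]) -> np.ndarray:
    """Up-sample the virtual base layer frame to the enhanced layer
    resolution only when the two resolutions differ; otherwise the
    upsampling is omitted, as described above."""
    if virtual_base.shape == enh_shape:
        return virtual_base  # identical resolution: pass through
    fy = enh_shape[0] / virtual_base.shape[0]
    fx = enh_shape[1] / virtual_base.shape[1]
    return zoom(virtual_base, (fy, fx), order=1)
```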
  • the frame supplied by the base layer encoder 300 and an input frame are supplied to the differentiator 410.
  • the differentiator 410 differentiates the base layer frame comprising the input virtual area from the input frame to generate the residual frame.
  • the residual frame is transformed into the enhanced layer bit stream through the transformer 420, quantizer 430 and the entropy coder 440, and is then output.
  • Functions and operations of the transformer 420, the quantizer 430 and the entropy coder 440 are the same as those of the transformer 320, the quantizer 330 and the entropy coder 340. Thus, the description thereof is omitted.
  • the enhanced layer encoder 400 in FIG. 10 encodes with reference to the base layer frame to which the virtual area is added, through the Intra-BL prediction. Alternatively, the enhanced layer encoder 400 may encode it through inter-prediction or intra-prediction.
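To summarize the Intra-BL path on both sides, here is a small sketch of what the differentiator 410 and the decoder-side calculator compute; equal array shapes and the 8-bit clip are illustrative assumptions:

```python
import numpy as np

def intra_bl_residual(input_frame: np.ndarray,
                      virtual_base: np.ndarray) -> np.ndarray:
    """Encoder side: keep only input minus the virtual-area prediction."""
    return input_frame.astype(np.int16) - virtual_base.astype(np.int16)

def intra_bl_reconstruct(residual: np.ndarray,
                         virtual_base: np.ndarray) -> np.ndarray:
    """Decoder side: add the decoded residual back onto the prediction."""
    out = residual.astype(np.int16) + virtual_base.astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)
```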
  • FIG. 11 is an example of a configuration of the video decoder according to an embodiment of the present invention.
  • the video decoder 550 may be divided into an enhanced layer decoder 700 and a base layer decoder 600.
  • a configuration of the base layer decoder 600 will be described.
  • An entropy decoder 610 losslessly-decodes the base layer bit stream to extract texture data and motion data (i.e., motion vectors, partition information, and reference frame numbers) of the base layer frame.
  • a reverse quantizer 620 reverse-quantizes the texture data.
  • the reverse quantization restores a matching value from the index generated by the quantization through the quantization table used in the quantization.
  • a reverse transformer 630 performs a reverse spatial transform on the reverse- quantized value to restore the residual frame.
  • the reverse spatial transform is performed in an opposite manner to the transform of the transformer 320 in the video encoder 500.
  • the reverse transform may comprise the reverse DCT transform, the reverse wavelet transform, and others.
  • the entropy decoder 610 supplies the motion data comprising the motion vector mv to the motion compensator 660 and the virtual area frame generator 670.
  • the motion compensator 660 uses the motion data supplied by the entropy decoder 610 to motion-compensate the restored video frame (i.e., the reference frame) supplied by the frame buffer 650 and to generate the motion compensation frame.
  • a calculator 615 adds the residual frame restored by the reverse transformer 630 and the motion compensation frame generated by the motion compensator 660 to restore the video frame.
  • the restored video frame may be temporarily stored in the frame buffer 650 or supplied to the motion compensator 660 or to the virtual frame generator 670 as the reference frame to restore other frames.
  • a virtual area frame generator 670 generates the virtual area frame using the restored current frame Fc', the reference frame Fr' of the current frame, and the motion vector mv. If the motion vector mv of a boundary area block of the current frame is directed toward the center of the frame, as shown in FIG. 8, the screen moves outward.
  • a virtual area frame is then generated by copying a part of the blocks of the reference frame Fr'. The virtual area may be generated by copying the motion vectors, as in FIG. 5, or by extrapolation through the proportion of the motion vector values, as in FIG. 6. If there is no virtual area to generate, the current frame Fc' may be selected to decode the enhanced layers, without adding the virtual areas.
  • the frame extracted from the virtual area frame generator 670 is supplied to the enhanced layer decoder 700 through an upsampler 680.
  • the upsampler 680 up-samples the resolution of the virtual base layer frame to that of the enhanced layer if the resolution of the enhanced layer is different from that of the base layer. If the resolution of the base layer is identical to that of the enhanced layer, the upsampling can be omitted. If part of the video information of the base layer is excluded compared to the video information of the enhanced layer, the upsampling can be omitted.
  • the entropy decoder 710 losslessly-decodes the input bit stream to extract texture data of an asynchronous frame.
  • the extracted texture data is restored as the residual frame through the reverse quantizer 720 and the reverse transformer 730.
  • Functions and operations of the reverse quantizer 720 and the reverse transformer 730 are the same as those of the reverse quantizer 620 and the reverse transformer 630, respectively. Thus, the descriptions thereof are omitted.
  • a calculator 715 adds the restored residual frame and the virtual area base layer frame supplied by the base layer decoder 600 to restore the frame.
  • the enhanced layer decoder 700 in FIG. 11 decodes with reference to the base layer frame to which the virtual area is added, through the Intra-BL prediction, but the present invention is not limited thereto. Alternatively, the enhanced layer decoder 700 may decode it through the inter-prediction or the intra-prediction.
  • FIG. 12 is a flowchart showing the encoding of video according to an exemplary embodiment of the present invention.
  • Video information is received to generate the base layer frame in operation S101.
  • the base layer frame of the multi-layer frame may be down-sampled according to resolution, frame rate and size of the video images. If the size of the video is different by layer, for example, if the base layer frame provides an image in the 4:3 aspect ratio, and if the enhanced layer frame provides an image in the 16:9 aspect ratio, the base layer frame is encoded to the image with a part thereof excluded. As described in FIG. 10, the motion estimation, the motion compensation, the transform and the quantization are performed to encode the base layer frame.
  • in operation S105, it is detected whether the image in the base layer frame generated in operation S101 is moving toward the outside; this may be determined from the motion information in the boundary area of the base layer frame. If the motion vectors of the motion information are directed toward the center of the frame, it is determined that the image moves toward the outside from the boundary area of the frame.
  • the frame buffer 380 may store the previous frame, or the frame with the virtual area of the previous frame added, therein to restore the virtual area image in operation S110.
  • the virtual area base layer frame, adding the restored virtual area image to the base layer frame, is generated in operation S115.
  • the methods shown in FIG. 5 or 6 may be used.
  • the virtual area base layer frames 155 and 156 in FIG. 7 are generated.
  • the enhanced layer frame is generated by the differentiation of the video information in operation S120.
  • the enhanced layer frame is transmitted in the enhanced layer bit stream to be decoded by the decoder.
  • the base layer frame is differentiated from the video information to generate the enhanced layer frame in operation S130.
  • FIG. 13 is a flowchart showing the decoding of video according to an exemplary embodiment of the present invention.
  • the base layer frame is extracted from the bit stream generated in FIG. 12.
  • entropy decoding, reverse quantization, and reverse transform are performed to extract the base layer frame. It is detected whether the extracted base layer frame comprises an image moving toward the outside in operation S205; this may be determined from the motion information of the blocks in the boundary area of the base layer frame. If the motion vectors of the boundary area blocks are directed toward the center or the inside of the frame, a part or all of the image is moving toward the outside of the frame compared to the previous frame. Accordingly, the virtual area image that does not exist in the base layer frame is restored through the previous frame or another previous frame in operation S210.
  • the virtual area base layer frame adding the virtual area image to the base layer frame is generated in operation S215.
  • Frames 175 and 176 in FIG. 9 are examples of the virtual area base layer frame.
  • the enhanced layer frame is extracted from the bit stream in operation S220.
  • the enhanced layer frame and the virtual area base layer frame are combined to generate a frame in operation S225.
  • the enhanced layer frame is extracted from the bit stream in operation S230.
  • the enhanced layer frame and the base layer frame are combined to generate the frame in operation S235.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and an apparatus for encoding/decoding and referencing a virtual area image are provided. The method for encoding referencing the virtual area image comprises generating a base layer frame from an input video signal, restoring a virtual area image in an outside of the base layer frame through a corresponding image of a reference frame of the base layer frame, adding the restored virtual area image to the base layer frame to generate a virtual area base layer frame, and differentiating the virtual area base layer frame from the video signal to generate an enhanced layer frame.
PCT/KR2006/000471 2005-02-14 2006-02-09 Method and apparatus for encoding/decoding and referencing a virtual area image WO2006085726A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US65200305P 2005-02-14 2005-02-14
US60/652,003 2005-02-14
KR1020050028248A KR100703751B1 (ko) 2005-02-14 2005-04-04 Method and apparatus for encoding and decoding referencing a virtual area image
KR10-2005-0028248 2005-04-04

Publications (1)

Publication Number Publication Date
WO2006085726A1 (fr)

Family

ID=36793277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/000471 WO2006085726A1 (fr) Method and apparatus for encoding/decoding and referencing a virtual area image

Country Status (1)

Country Link
WO (1) WO2006085726A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100186268B1 * 1995-09-07 1999-05-01 김주용 Method and structure for compensating edge portions of an image using a virtual image area
US6049362A (en) * 1996-02-14 2000-04-11 International Business Machines Corporation Dual prime motion estimation system and method
US6115070A (en) * 1997-06-12 2000-09-05 International Business Machines Corporation System and method for DCT domain inverse motion compensation using shared information
US20030138045A1 (en) * 2002-01-18 2003-07-24 International Business Machines Corporation Video decoder with scalable architecture
US20040013201A1 (en) * 2002-07-18 2004-01-22 Samsung Electronics Co., Ltd Method and apparatus for estimating a motion using a hierarchical search and an image encoding system adopting the method and apparatus
KR20050008046A (ko) * 2003-07-14 2005-01-21 Electronics and Telecommunications Research Institute Scalable video coding method suitable for a ubiquitous environment
KR20060006720A (ko) * 2004-07-15 2006-01-19 Samsung Electronics Co., Ltd. Video coding and decoding method, video encoder and decoder


Similar Documents

Publication Publication Date Title
US8520962B2 Method and apparatus for effectively compressing motion vectors in video coder based on multi-layer
KR100763181B1 Method and apparatus for improving coding rate by coding prediction information based on data of a base layer and an enhanced layer
US7944975B2 Inter-frame prediction method in video coding, video encoder, video decoding method, and video decoder
KR100703740B1 Method and apparatus for efficiently encoding multi-layer-based motion vectors
KR100781525B1 Method and apparatus for encoding and decoding an FGS layer using a weighted average sum
CN100593339C Method and apparatus for effectively compressing motion vectors in a multi-layer structure
KR100679025B1 Multi-layer-based intra prediction method, and video coding method and apparatus using the method
KR100714689B1 Scalable video coding and decoding method based on a multi-layer structure, and apparatus therefor
KR100703774B1 Method and apparatus for encoding and decoding a video signal in intra BL prediction mode by selectively applying intra coding
US20060120448A1 Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
KR100763194B1 Intra base prediction method satisfying a single loop decoding condition, and video coding method and apparatus using the method
JP2008541653A Multi-layer-based video encoding method, decoding method, video encoder, and video decoder using smoothing prediction
EP1779666A1 System and method for motion prediction in scalable video coding
CA2543947A1 Method and apparatus for adaptive selection of a context model for entropy coding
JP2006304307A5
KR100703746B1 Video coding method and apparatus for efficiently predicting asynchronous frames
EP1659797A2 Method and apparatus for effectively compressing motion vectors in a multi-layer structure
WO2007081162A1 Method and apparatus for motion prediction using inverse motion transform
US20060182315A1 Method and apparatus for encoding/decoding and referencing virtual area image
WO2006059847A1 Method and apparatus for encoding/decoding multi-layer video using DCT upsampling
JP2002058032A Image encoding apparatus and method, image decoding apparatus and method, and image processing apparatus
WO2006085726A1 Method and apparatus for encoding/decoding and referencing a virtual area image
WO2006104357A1 Method for compressing/decompressing motion vectors of an unsynchronized picture and apparatus using the same
WO2006080663A1 Method and apparatus for efficiently encoding multi-layer motion vectors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06715923

Country of ref document: EP

Kind code of ref document: A1