MX2008012360A - Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same. - Google Patents

Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same.

Info

Publication number
MX2008012360A
Authority
MX
Mexico
Prior art keywords
quality
layers
layer
image
current image
Prior art date
Application number
MX2008012360A
Other languages
Spanish (es)
Inventor
Manu Mathew
Kyo-Hyuk Lee
Woo-Jin Han
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Priority claimed from KR1020060048979A external-priority patent/KR100772878B1/en
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of MX2008012360A publication Critical patent/MX2008012360A/en

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method of assigning a priority for controlling a bit rate of a bitstream having a plurality of quality layers is provided. The method includes composing first quality layers for a reference picture, composing second quality layers for a current picture that is encoded with reference to the reference picture, and assigning a priority to each of the first and second quality layers, wherein a low priority is assigned to a quality layer having a small influence on a video quality reduction of the current picture when the quality layer is truncated.

Description

METHOD OF ASSIGNING PRIORITY FOR CONTROLLING THE BIT RATE OF A BITSTREAM, METHOD OF CONTROLLING THE BIT RATE OF A BITSTREAM, VIDEO DECODING METHOD, AND APPARATUS USING THE SAME

FIELD OF THE INVENTION
The present invention relates to video coding technology and, more particularly, to a method for controlling the bit rate of a bitstream composed of a plurality of quality layers.
BACKGROUND OF THE INVENTION
With the development of information and communication technologies, multimedia communications are increasing in addition to text and voice communications. Existing text-based communication systems are insufficient to satisfy the diverse needs of consumers, and thus multimedia services that can accommodate various forms of information, such as text, image and music, are increasing. Since multimedia data is large, mass storage media and wide bandwidths are required to store and transmit it. Accordingly, compression coding techniques are required to transmit multimedia data.
REF.: 196414
The basic principle of data compression is to eliminate data redundancy. Data can be compressed by eliminating spatial redundancy, such as the repetition of the same color or object in an image; temporal redundancy, such as similar adjacent frames in moving pictures or the continuous repetition of sounds; and visual/perceptual redundancy, which takes into account human insensitivity to high frequencies. In a general video coding method, temporal redundancy is eliminated by temporal filtering based on motion compensation, and spatial redundancy is eliminated by spatial transformation. Transmission media are required to transmit the multimedia data that remains after the data redundancy is eliminated, and the performance of these media differs. The transmission media currently in use have a variety of transmission speeds. For example, an ultra-high-speed communication network can transmit several tens of megabits of data per second, while a mobile communication network has a transmission speed of 384 kilobits per second. To support such transmission media in this transmission environment, and to transmit multimedia at a rate suited to the transmission environment, a scalable video coding method is highly suitable.
The scalable video coding method is a coding method that can adjust the video resolution, the frame rate and the signal-to-noise ratio (SNR); that is, a coding method that supports various scalabilities by truncating part of a compressed bitstream according to peripheral conditions such as the transmission bit rate, the transmission error rate and system resources. The current scalable video coding (SVC) standard, created by the Joint Video Team (JVT), a joint working group of the Moving Picture Experts Group (MPEG) and the International Telecommunication Union (ITU), is based on H.264. The SVC standard contains fine granularity scalability (FGS) technology to support SNR scalability.

Figure 1 shows an example of a scalable video coding method using a multi-layer structure. Referring to Figure 1, a first layer has a Quarter Common Intermediate Format (QCIF) resolution and a frame rate of 15 Hz, a second layer has a Common Intermediate Format (CIF) resolution and a frame rate of 30 Hz, and a third layer has a Standard Definition (SD) resolution and a frame rate of 60 Hz. Inter-layer correlation can be used to encode multi-layer video frames that have multiple resolutions and/or frame rates. For example, an area 12 of a first enhancement layer frame is efficiently encoded through a prediction from an area 13 of a base layer frame corresponding to area 12. An area 11 of a second enhancement layer frame can be efficiently encoded through a prediction using area 12.

Figure 2 is a schematic diagram explaining the inter prediction and intra-base prediction of a scalable video coding method. A block 24 in a current layer frame 21 can be predicted with reference to a block 25 in another current layer frame 22, which is called inter prediction. Inter prediction includes motion estimation to obtain a motion vector that indicates the corresponding block. The block 24 can also be predicted with reference to a block 26 in the lower layer (base layer) frame 23, which is located at the same temporal position and picture order count (POC) as the frame 21; this is called intra-base prediction. In intra-base prediction, motion estimation is not required.

Figure 3 illustrates an example of applying FGS to a residual image obtained through the prediction of Figure 2. The residual image 30 can be represented as a plurality of quality layers to support SNR scalability. These quality layers express different levels of video quality and are distinct from the layers used for different resolutions and/or frame rates. The plurality of quality layers may consist of a single layer 31 and at least one FGS layer 32, 33 and 34. The video quality measured at the video decoder increases in the following order: when only the single layer 31 is received, when the single layer 31 and the first FGS layer 32 are received, when the single layer 31 and the first and second FGS layers 32 and 33 are received, and when all the layers 31, 32, 33 and 34 are received.

Figure 4 illustrates a process for expressing a single image or segment as a single layer and two FGS layers. An original image (or segment) 41 is quantized using a first quantization parameter QP1 (S1). The quantized image 42 forms the single layer. The quantized image 42 is inversely quantized (S2) and provided to a subtracter 44. The subtracter 44 subtracts the provided image 43 from the original image 41 (S3). The result of the subtraction is quantized again using a second quantization parameter QP2 (S4). The quantized result 45 forms the first FGS layer.
The quantized result 45 is inversely quantized (S5) and provided to an adder 47. The provided image 46 and the provided image 43 are summed by the adder 47 (S6), and the sum is provided to a subtracter 48. The subtracter 48 subtracts the summed result from the original image 41 (S7). The subtracted result is quantized again using a third quantization parameter QP3 (S8). The quantized result 49 forms the second FGS layer. Through the above operations, the plurality of quality layers illustrated in Figure 3 can be formed (a simplified numerical sketch of this layering process is given below).

Figures 5 and 6 illustrate the quality layer truncation method used in the current SVC standard. As illustrated in Figure 5, a current image 30 is expressed as a residual image when it is predicted from a reference image 35 through inter prediction or intra-base prediction. The current image 30, expressed as the residual image, consists of a plurality of quality layers 31, 32, 33 and 34. The reference image 35 also consists of a plurality of quality layers 36, 37, 38 and 39.
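To make the layering of Figure 4 concrete, the following Python sketch decomposes a toy signal into a single layer and FGS refinement layers: each pass quantizes whatever the layers composed so far still fail to represent. The plain divide-and-round quantizer and the function names are illustrative assumptions rather than the codec's actual arithmetic.

```python
import numpy as np

def quantize(x, qp):
    # Scalar quantization (illustrative): divide by the quantization step and round.
    return np.round(x / qp)

def dequantize(q, qp):
    # Inverse quantization: multiply back by the quantization step.
    return q * qp

def compose_quality_layers(original, qps):
    """Decompose an image (or segment) into a single layer plus FGS layers (Figure 4 sketch).

    `original` plays the role of image 41 and `qps` the role of QP1, QP2, QP3, ...
    Each pass quantizes what the previously composed layers fail to represent,
    mirroring steps S1-S8 of Figure 4.
    """
    layers = []
    reconstruction = np.zeros_like(original, dtype=float)
    for qp in qps:
        residual = original - reconstruction      # subtracters 44 / 48
        q = quantize(residual, qp)                # S1, S4, S8
        layers.append(q)
        reconstruction += dequantize(q, qp)       # S2, S5 and adder 47
    return layers

# Toy example: one single layer (QP1 = 16) and two FGS layers (QP2 = 8, QP3 = 4).
values = np.array([37.0, -12.0, 5.0, 0.5])
print(compose_quality_layers(values, qps=[16, 8, 4]))
```

Receiving more of the returned layers lets a decoder rebuild the signal more accurately, which is the behavior Figure 3 describes for the single layer and the FGS layers.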
BRIEF DESCRIPTION OF THE INVENTION
According to the current SVC standard, a bitstream extractor truncates some of the quality layers to control the SNR of bitstreams, as illustrated in Figure 6. That is, the bitstream extractor truncates the quality layers of the current image 30, which is located in the layer of higher resolution and/or frame rate (hereinafter referred to simply as 'layer' to distinguish it from the 'quality layer'), from the highest quality layer downward. After all the quality layers of the current image 30 have been truncated, the quality layers of the reference image 35 are truncated from the highest downward. This truncation order is favorable for reconstructing an image of a lower layer (the reference image, for example QCIF), but it is not favorable for reconstructing an image of a higher layer (the current image, for example CIF). The quality layers of some low-layer images may be less important than those of high-layer images. Accordingly, it is desirable to implement efficient SNR scalability by truncating quality layers according to whether a video encoder is primarily focused on a high-layer image or a low-layer image.

The present invention provides a method and apparatus for controlling the SNR of a bitstream with a focus on high layers. The present invention also provides a method and apparatus for controlling the SNR according to whether a video encoder is primarily focused on a high-layer image or a low-layer image.

In accordance with one aspect of the present invention, a method for assigning a priority to control a bit rate of a bitstream is provided, the method including composing first quality layers for a reference image, composing second quality layers for a current image that is encoded with reference to the reference image, and assigning a priority to each of the first and second quality layers, wherein a low priority is assigned to a quality layer that has a small influence on the video quality reduction of the current image when the quality layer is truncated.

According to another aspect of the present invention, a method for controlling a bit rate of a bitstream is provided, the method including receiving a video bitstream, setting a target bit rate for the video bitstream, reading first quality layers for a reference image and second quality layers for a current image, and truncating quality layers from the lowest priority upward, among the first and second quality layers, based on the target bit rate.

According to another aspect of the present invention, there is provided a video decoding method that includes receiving a video bitstream; reading first quality layers for a reference image, second quality layers for a current image and dependency IDs of the first and second quality layers; setting the dependency ID to indicate the highest quality layer of the first quality layers if the quality layer indicated by the dependency ID does not exist among the first quality layers; and reconstructing the current image according to the relationship indicated by the dependency ID.
According to another aspect of the present invention, an apparatus for assigning a priority to control a bit rate of a bitstream is provided, the apparatus including a reference image encoder that composes first quality layers for a reference image, a current image encoder that composes second quality layers for a current image that is encoded with reference to the reference image, and a quality level allocator that assigns a priority to each of the first and second quality layers, wherein a low priority is assigned to a quality layer that has a small influence on the video quality reduction of the current image when the quality layer is truncated.

According to another aspect of the present invention, an apparatus for controlling a bit rate of a bitstream is provided, the apparatus including a bitstream input unit that receives a video bitstream, a target bit rate adjustment unit that sets a target bit rate for the video bitstream, a bitstream analyzer that reads first quality layers for a reference image and second quality layers for a current image, and a bitstream truncator that truncates quality layers from the lowest priority upward, among the first and second quality layers, based on the target bit rate.

According to another aspect of the present invention, a video decoding apparatus is provided that includes an entropy decoding unit that receives a video bitstream; a bitstream analyzer that reads first quality layers for a reference image, second quality layers for a current image and dependency IDs of the first and second quality layers; a dependency ID adjustment unit that sets the dependency ID to indicate the highest quality layer of the first quality layers if the quality layer indicated by the dependency ID does not exist among the first quality layers; and a current image decoder that reconstructs the current image according to the relationship indicated by the dependency ID.
BRIEF DESCRIPTION OF THE FIGURES
The foregoing and other aspects of the present invention will become apparent from the following detailed description of exemplary embodiments thereof with reference to the accompanying figures, in which:
Figure 1 illustrates an example of a scalable video coding method using a multi-layer structure.
Figure 2 is a schematic diagram explaining the inter prediction and intra-base prediction of a scalable video coding method.
Figure 3 illustrates an example of applying FGS to a residual image obtained through the prediction of Figure 2.
Figure 4 illustrates a process for expressing a single image or segment as a single layer and two FGS layers.
Figures 5 and 6 illustrate a method of truncating quality layers used in the current SVC standard.
Figure 7 illustrates a configuration of a current SVC system.
Figure 8 illustrates a configuration of an SVC system in accordance with an exemplary embodiment of the present invention.
Figure 9 illustrates an example of truncating a quality layer according to an exemplary embodiment of the present invention.
Figure 10 illustrates a bitstream to which a priority ID is assigned in accordance with an exemplary embodiment of the present invention.
Figure 11 illustrates a case in which a quality layer indicated by a dependency ID does not exist in a reference image, in accordance with an exemplary embodiment of the present invention.
Figure 12 is a block diagram showing a configuration of a priority assignment apparatus according to an exemplary embodiment of the present invention.
Figure 13 is a block diagram showing a configuration of a bitstream extractor according to an exemplary embodiment of the present invention.
Figure 14 is a block diagram showing a configuration of a video decoder according to an exemplary embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The advantages and features of aspects of the present invention, and methods of achieving them, may be understood more readily by reference to the following detailed description of exemplary embodiments and the accompanying figures. The aspects of the present invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this description will be thorough and complete and will fully convey the concept of the invention to those skilled in the art, and the present invention will only be defined by the appended claims.

Figure 7 illustrates a configuration of a current SVC system, and Figure 8 illustrates a configuration of an SVC system according to an exemplary embodiment of the present invention. Referring to Figure 7, a video encoder 61 generates a scalable multi-layer bitstream, for example a CIF bitstream. A bitstream extractor 62 can transmit the generated CIF bitstream to a first video decoder 63 as it is, or it can extract a QCIF bitstream having a lower resolution by truncating part of the higher layers and transmit the QCIF bitstream to a second video decoder 64. In both cases, the bitstream extractor 62 can keep the resolution of the bitstreams the same and change only their SNR.

Referring to Figure 8, a quality level (priority) is assigned to a CIF bitstream generated by a video encoder 50 by a quality level allocator 100. That is, a priority ID is assigned to each Network Abstraction Layer (NAL) unit composing the CIF bitstream, following the priority ID allocation method of the exemplary embodiment of the present invention, which considers multiple layers. When a bitstream is transmitted to a second video decoder 300b, a bitstream extractor 200 truncates the upper layer and transmits a bitstream optimized for QCIF (the lower layer). When a quality layer is truncated to control the SNR, the traditional method is used. When a bitstream is transmitted to a first video decoder 300a, however, the bitstream extractor 200 transmits a CIF bitstream that includes all the layers. When a quality layer is truncated to control the SNR, the bitstream extractor 200 truncates quality layers from the lowest priority ID upward, based on the priority ID assigned by the quality level allocator 100.

Figure 9 illustrates an example of truncating a quality layer according to an exemplary embodiment of the present invention. Referring to Figure 9, the quality level allocator 100 assigns priority IDs in the following order, and the bitstream extractor 200 implements SNR scalability by truncating quality layers from the lowest priority ID upward. The quality level allocator 100 checks the reference relationship in the input bitstream. The reference relationship is used for prediction, which includes inter prediction and intra-base prediction. In the prediction, the image that refers to another image is the current image 30, and the image that is referred to is the reference image 35. Although Figure 9 illustrates a case in which the number of quality layers of the current image 30 is identical to that of the reference image 35, the number of quality layers of the current image 30 may also differ from that of the reference image 35.
To assign the priority IDs, a first case, in which the highest quality layer of the current image 30 is truncated, and a second case, in which the highest quality layer of the reference image 35 is truncated, are compared, and the case that is better in terms of image quality is selected. The first case means reconstructing an image of the layer to which the current image belongs from three quality layers 31, 32 and 33 of the current image 30 and four quality layers 36, 37, 38 and 39 of the reference image 35. The second case means reconstructing an image of the layer to which the current image belongs from four quality layers 31, 32, 33 and 34 of the current image 30 and three quality layers 36, 37 and 38 of the reference image 35. A detailed process for reconstructing an image includes reconstructing the reference image 35 from the quality layers that compose the reference image 35, reconstructing a residual signal of the current image 30 from the quality layers that compose the current image 30, and adding the reconstructed reference image 35 and the reconstructed residual signal.

When the first and second cases have been obtained, the cost of each case is compared. A rate-distortion function is generally used to calculate the cost. The following Equation 1 shows a process for calculating the cost:

C = E + λB ... (Equation 1)

where C is the cost, E is the difference from the original signal (which can be calculated, for example, as the mean square error (MSE)), B is the bit rate consumed when the data is compressed, and λ is a Lagrangian multiplier. The Lagrangian multiplier λ is used to control the relative weights of E and B. C becomes smaller as E and B are reduced, which means that efficient coding has been carried out.

The case whose cost is smaller is selected from the first and second cases, and a priority ID is assigned according to the selected case. If the first case is selected, the lowest priority ID, i.e., '0', is assigned to the quality layer 34 of the current image 30, because the quality layer 34 has a smaller influence on the overall video quality when it is truncated. Priority IDs are assigned to the remaining quality layers 31, 32 and 33 of the current image 30 and the remaining quality layers 36, 37, 38 and 39 of the reference image 35 in the same manner as the comparison of the first and second cases. That is, a first case, in which the highest of the remaining quality layers 31, 32 and 33 is truncated, and a second case, in which the highest of the remaining quality layers 36, 37, 38 and 39 is truncated, are compared, and the case whose cost is smaller is selected. By repeating this process of selecting between a case in which the highest of the remaining quality layers of the current image 30 not yet assigned a priority ID is truncated and a case in which the highest of the remaining quality layers of the reference image 35 not yet assigned a priority ID is truncated, a priority ID is assigned to each quality layer of the current image 30 and the reference image 35. The quality level allocator 100 records the priority ID in the NAL unit header (NAL header) that corresponds to each quality layer.

Figure 10 illustrates a bitstream 80 to which priority IDs are assigned in accordance with an exemplary embodiment of the present invention. The quality layers for the current image 30 are recorded as a plurality of NAL units 81, 82, 83 and 84.
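The iterative comparison described above can be pictured with the following sketch, which assumes a caller-supplied rate-distortion function implementing Equation 1; the function name, the layer indexing and the dictionary layout are hypothetical conveniences, not part of the SVC syntax.

```python
def assign_priority_ids(num_current_layers, num_reference_layers, cost):
    """Greedy priority assignment sketch for Figure 9.

    `cost(n_cur, n_ref)` is assumed to return the rate-distortion cost C = E + lambda*B
    of reconstructing the current image from its n_cur lowest quality layers and the
    reference image's n_ref lowest quality layers.
    """
    priorities = {}      # ('current' | 'reference', layer_index) -> priority ID
    next_priority = 0
    n_cur, n_ref = num_current_layers, num_reference_layers
    while n_cur > 0 or n_ref > 0:
        # Cost of truncating the highest remaining layer of each image.
        drop_current = cost(n_cur - 1, n_ref) if n_cur > 0 else float('inf')
        drop_reference = cost(n_cur, n_ref - 1) if n_ref > 0 else float('inf')
        if drop_current <= drop_reference:
            # Truncating the current image's top remaining layer hurts least:
            # that layer receives the lowest remaining priority ID.
            priorities[('current', n_cur - 1)] = next_priority
            n_cur -= 1
        else:
            priorities[('reference', n_ref - 1)] = next_priority
            n_ref -= 1
        next_priority += 1
    return priorities
```

For the four-plus-four layers of Figure 9 the call would be assign_priority_ids(4, 4, cost); if the first comparison favors truncating the current image's top layer, that layer (layer 34) receives priority ID 0, matching the text above.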
The quality layers for the reference image 35 are recorded as a plurality of NAL units 86, 87, 88 and 89. A NAL unit consists of a NAL header and a NAL data field. The NAL header includes the priority ID as part of the additional information describing the NAL data. The coded data corresponding to each quality layer is recorded in the NAL data field. In Figure 10, the priority ID assigned by the quality level allocator 100 is shown in the NAL header. The bitstream extractor 200 controls the SNR of the bitstream with reference to the priority ID. The bitstream extractor 200 truncates NAL units from the lowest priority ID upward (i.e., in the order 81, 82, 86, 83, 87, 84, 88 and 89), thus minimizing the video quality reduction caused by truncating NAL units (a simplified sketch of this priority-ordered truncation is given below). The above process is optimized for the video quality of the upper layer (when a bitstream is transmitted to the first video decoder 300a in Figure 8). To optimize the video quality of the lower layer (when a bitstream is transmitted to the second video decoder 300b in Figure 8), the traditional method of truncating from the highest quality layer downward can be used, regardless of the priority ID.

As suggested in the exemplary embodiment of the present invention, the quality layers of the base layer (to which the reference image belongs) can be truncated before those of the current layer (to which the current image belongs). In this case, a quality layer of the base layer indicated by a dependency ID of a quality layer of the current layer may not exist. The dependency ID indicates a dependency relationship between data, i.e., which data must be decoded first and referenced in order to decode other data. Accordingly, if the quality layer of the base layer referred to by the dependency ID does not exist during the video decoding process, a method may be used in which the dependency ID is made to refer to the highest of the remaining quality layers. Referring to Figure 11, the highest quality layer 34 of the current image 30 and the highest quality layer 39 of the reference image 35 have been truncated by the bitstream extractor 200. According to an exemplary embodiment of the present invention, since a quality layer of the lower layer can be truncated before all the quality layers of the upper layer are truncated, the dependency ID of the quality layer 33 in the current image 30 may indicate the quality layer 39 that has already been truncated. In this case, the video decoder must adjust the dependency ID of the quality layer 33 so that it indicates the highest quality layer 38 among the remaining quality layers 36, 37 and 38 of the reference image 35.

Figures 12 to 14 are block diagrams showing configurations of apparatuses in accordance with exemplary embodiments of the present invention. Figure 12 is a block diagram showing a configuration of a priority assignment apparatus according to an exemplary embodiment of the present invention. The priority assignment apparatus 100 assigns a priority for a quality level in order to control the bit rate of the bitstream.
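Before turning to the apparatus of Figure 12, the priority-ordered truncation referenced above can be sketched as follows; the NalUnit container and the byte-based size accounting are simplifying assumptions, not the SVC NAL syntax.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NalUnit:
    priority_id: int   # priority ID written in the NAL header by the quality level allocator
    payload: bytes     # coded data of one quality layer (NAL data field)

def truncate_to_target(nal_units: List[NalUnit], target_bits: int) -> List[NalUnit]:
    """Drop quality layers from the lowest priority ID upward until the target bit budget fits.

    Mirrors the extractor operating on the stream of Figure 10, where units would be
    removed in the order 81, 82, 86, 83, 87, 84, 88, 89.
    """
    size = sum(8 * len(n.payload) for n in nal_units)
    # Candidate victims, lowest priority ID first.
    order = sorted(range(len(nal_units)), key=lambda i: nal_units[i].priority_id)
    dropped = set()
    for i in order:
        if size <= target_bits:
            break
        dropped.add(i)
        size -= 8 * len(nal_units[i].payload)
    # Surviving units keep their original bitstream order.
    return [n for i, n in enumerate(nal_units) if i not in dropped]
```

Because the priority IDs already encode how little each layer contributes to the current image, dropping them in ascending-ID order approximates the smallest possible quality loss for a given bit budget.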
The priority assignment apparatus 100 may include a current image encoder 110, a reference image encoder 120, a quality level allocator 140 and an entropy coding unit 150. The reference image encoder 120 composes quality layers (referred to as first quality layers) for a reference image. The reference image encoder 120 includes a predictor 121, a transformer 122, a quantizer 123 and a quality layer generator 124. The predictor 121 obtains a residual signal by subtracting, from the input image, an image predicted according to a predetermined prediction method. As the predetermined prediction method, there are inter prediction and intra-base prediction, as illustrated in Figure 2. Inter prediction includes motion estimation to obtain a motion vector expressing the relative motion between the current image and an image that has the same resolution as the current image but a different temporal position. The current image can also be predicted with reference to a lower layer (base layer) image that is located at the same temporal position as the current image but has a different resolution, which is called intra-base prediction. In intra-base prediction, motion estimation is not required.
The transformer 122 performs a spatial transform on the residual frame to create transform coefficients. The spatial transform method may be the Discrete Cosine Transform (DCT) or a wavelet transform. Specifically, DCT coefficients are created when the DCT is used, and wavelet coefficients are created when the wavelet transform is employed. The quantizer 123 quantizes the transform coefficients received from the transformer 122. Quantization is the process of expressing the transform coefficients, which take arbitrary real values, as discrete values. Quantization methods include scalar quantization and vector quantization. Scalar quantization is the process of dividing a transform coefficient by a quantization parameter and rounding the result to an integer. The quality layer generator 124 generates a plurality of quality layers through the process described with reference to Figure 4. The plurality of quality layers may consist of a single layer and at least two FGS layers.

The current image encoder 110, like the reference image encoder 120, includes a predictor 111, a transformer 112, a quantizer 113 and a quality layer generator 114. The operations of each element of the current image encoder 110 are the same as those of the reference image encoder 120, except that the reference image input to the reference image encoder 120 is used by the predictor 111 as the image for predicting the current image. The predictor 111 performs inter prediction or intra-base prediction using the input reference image and generates a residual signal. The current image encoder 110 composes quality layers (referred to as second quality layers) for the current image, or more precisely, for the residual signal of the current image. The input reference image may differ from the current image in resolution, in the case of intra-base prediction, or in temporal level, in the case of inter prediction.

The quality level allocator 140 assigns a priority ID to each of the first and second quality layers. The priority assignment is carried out by assigning a lower priority to a quality layer that has a small influence on the video quality reduction of the current image, and a higher priority to a quality layer that has a large influence (see Figure 9).
As a criterion for determining the video quality reduction, a cost function such as that of Equation 1 can be used. The cost can be expressed as the sum of a difference from the original image and the bit rate consumed in coding, weighted by the Lagrangian multiplier. The entropy coding unit 150 encodes the priority IDs determined by the quality level allocator 140, the first quality layers for the reference image and the second quality layers for the current image, to generate a bitstream. Entropy coding is a lossless coding method that exploits the statistical characteristics of the data; it includes arithmetic coding, variable-length coding, and so on.

Figure 13 is a block diagram showing a configuration of a bitstream extractor according to an exemplary embodiment of the present invention. The bitstream extractor 200 includes a bitstream input unit 210, a bitstream analyzer 220, a bitstream truncator 230, a target bit rate adjustment unit 240 and a bitstream transmission unit 250. The bitstream input unit 210 receives a video bitstream from the priority assignment apparatus 100. The bitstream transmission unit 250 transmits the bitstream whose bit rate has been changed to a video decoder. The bitstream input unit 210 corresponds to a receiving unit of a network interface, and the bitstream transmission unit 250 corresponds to a transmission unit of the network interface. The target bit rate adjustment unit 240 establishes a target bit rate for the video bitstream. The target bit rate can be determined by collectively considering the bit rate of the transmitted bitstream, the network state and the capability of the receiving device (video decoder). The bitstream analyzer 220 reads the priority IDs of the first quality layers for the reference image and the second quality layers for the current image. The priority IDs are those assigned by the quality level allocator 140 of the priority assignment apparatus 100. The bitstream truncator 230 truncates quality layers from the lowest priority upward, among the first and second quality layers, according to the target bit rate. The truncation is repeated until the target bit rate is achieved.

Figure 14 is a block diagram showing a configuration of a video decoder according to an exemplary embodiment of the present invention. The video decoder 300 includes an entropy decoding unit 310, a bitstream analyzer 320, a current image decoding unit 330, a reference image decoding unit 340 and a dependency ID adjustment unit 350. The entropy decoding unit 310 receives a video bitstream from the bitstream extractor 200 and decodes it losslessly. The lossless decoding is carried out as the inverse operation of the lossless coding in the entropy coding unit 150 of the priority assignment apparatus 100. The bitstream analyzer 320 reads the encoded data for the reference image (first quality layers), the encoded data for the current image (second quality layers), the dependency IDs of the first quality layers for the reference image and the dependency IDs of the second quality layers for the current image. The dependency ID indicates which quality layer of the reference image is required to reconstruct a quality layer of the current layer, that is, a dependency relationship. As described with reference to Figure 11, since quality layers of a lower layer can be truncated before those of a higher layer, the dependency ID of a quality layer of the current layer may indicate a quality layer that has already been truncated. In that case, the dependency ID adjustment unit 350 sets the dependency ID to indicate the highest quality layer among the remaining quality layers.
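A minimal sketch of the fallback performed by the dependency ID adjustment unit 350 follows, assuming each received reference-image quality layer is identified by an integer quality level; the function name and container are hypothetical.

```python
def adjust_dependency_id(requested_level: int, received_levels: set) -> int:
    """Choose the reference-image quality layer that a current-image layer should depend on.

    If the layer named by the dependency ID was truncated by the extractor (layer 39
    in Figure 11), fall back to the highest quality layer that actually arrived
    (layer 38 in Figure 11).
    """
    if requested_level in received_levels:
        return requested_level
    usable = [level for level in received_levels if level < requested_level]
    if not usable:
        raise ValueError("no usable reference quality layer was received")
    return max(usable)

# Example: a current-image layer refers to reference level 3, but only levels 0-2 arrived.
print(adjust_dependency_id(3, {0, 1, 2}))   # -> 2
```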
The reference image decoding unit 340 includes an inverse quantizer 341, an inverse transformer 342 and an inverse predictor 343, which together decode the encoded data of the reference image. The inverse quantizer 341 inversely quantizes the encoded data of the reference image. The inverse transformer 342 inversely transforms the inversely quantized result. The inverse transform is carried out as the inverse operation of the transform in the transformer 122 of Figure 12. The inverse predictor 343 adds the reconstructed residual signal provided by the inverse transformer 342 and a prediction signal to reconstruct the reference image. At this time, the prediction signal is obtained by inter prediction or intra-base prediction, as in the video encoder.

The current image decoding unit 330 decodes the encoded data of the current image according to the dependency ID. The current image decoding unit 330 includes an inverse quantizer 331, an inverse transformer 332 and an inverse predictor 333. The operations of each element of the current image decoding unit 330 are the same as those of the reference image decoding unit 340, except that the inverse predictor 333 reconstructs the current image from the reconstructed residual signal of the current image provided by the inverse transformer 332, using the reconstructed reference image as the prediction signal, i.e., by adding the residual signal and the prediction signal. At this time, the dependency ID read by the bitstream analyzer 320, or a modified dependency ID, is used. The dependency ID indicates which of the first quality layers for the reference image is required to reconstruct the second quality layers for the current image.

The term 'image', as used herein, means a single picture. However, the image may also be a 'segment', as will be understood by those skilled in the art. The components shown in Figures 12 to 14 can be implemented in software, such as a task, class, subroutine, process, object, thread of execution or program, which is executed in a certain memory area, and/or in hardware such as a Field Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). The components can also be implemented as a combination of software and hardware. In addition, the components can be configured to reside on a computer-readable storage medium or to be executed on one or more processors.
Industrial Applicability
As described above, an exemplary embodiment of the present invention can control the bit rate of a bitstream while focusing on the video quality of a high-layer image. Although the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the following claims. It is noted that, as of this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention.

Claims (19)

CLAIMS Having described the invention as above, the content of the following claims is claimed as property:
  1. A method for assigning a priority to control a bit rate of a bitstream, characterized in that it comprises: composing first quality layers for a reference image; composing second quality layers for a current image that is encoded with reference to the reference image; and assigning a priority to each of the first and second quality layers, wherein a low priority is assigned to a quality layer that has a small influence on a video quality reduction of the current image when the quality layer is truncated.
  2. The method according to claim 1, characterized in that the reference image and the current image are frames or segments.
  3. The method according to claim 1, characterized in that the reference image and the current image have a different resolution or a different temporal level.
  4. The method according to claim 1, characterized in that the first and second quality layers comprise a single layer and at least two fine granularity scalability (FGS) layers, respectively.
  5. The method according to claim 4, characterized in that composing the first quality layers and composing the second quality layers comprise: obtaining a residual signal by predicting the reference image or the current image; generating a transform coefficient by transforming the residual signal; composing the single layer by quantizing the transform coefficient with a first quantization parameter; subtracting the quantized result from the residual signal; and composing the at least two FGS layers by quantizing the subtracted result with a second quantization parameter.
  6. The method according to claim 1, characterized in that the quality layer that has a small influence on the video quality reduction is a quality layer whose coding cost is smaller than that of the other quality layers.
  7. The method according to claim 6, characterized in that the cost is the sum of a difference from an original image and a bit rate consumed in coding.
  8. A method for controlling a bit rate of a bitstream, characterized in that it comprises: receiving a video bitstream; setting a target bit rate for the video bitstream; reading first quality layers for a reference image and second quality layers for a current image; and truncating quality layers from the lowest priority upward, among the first and second quality layers, based on the target bit rate.
  9. The method according to claim 8, characterized in that the reference image and the current image are frames or segments.
  10. The method according to claim 8, characterized in that the reference image and the current image have a different resolution or a different temporal level.
  11. The method according to claim 8, characterized in that the first and second quality layers comprise a single layer and at least two fine granularity scalability (FGS) layers, respectively.
  12. A video decoding method, characterized in that it comprises: receiving a video bitstream; reading first quality layers for a reference image, second quality layers for a current image and dependency IDs of the first and second quality layers; setting the dependency ID to indicate the highest quality layer of the first quality layers if the quality layer indicated by the dependency ID does not exist among the first quality layers; and reconstructing the current image according to a relationship indicated by the dependency ID.
  13. The method according to claim 12, characterized in that the reference image and the current image are frames or segments.
  14. The method according to claim 12, characterized in that the reference image and the current image have different resolutions or different temporal levels.
  15. The method according to claim 12, characterized in that the first quality layers and the second quality layers comprise a single layer and at least two fine granularity scalability (FGS) layers, respectively.
  16. The method according to claim 12, characterized in that reconstructing the current image comprises: reconstructing the reference image according to the dependency ID; reconstructing a residual signal of the current image; and adding the reconstructed reference image and the reconstructed residual signal.
  17. An apparatus for assigning a priority for controlling a bit rate of a bitstream, characterized in that it comprises: a reference image encoder that composes first quality layers for a reference image; a current image encoder that composes second quality layers for a current image that is encoded with reference to the reference image; and a quality level allocator that assigns a priority to each of the first and second quality layers, wherein a low priority is assigned to a quality layer that has a small influence on a video quality reduction of the current image when the quality layer is truncated.
  18. An apparatus for controlling a bit rate of a bitstream, characterized in that it comprises: a bitstream input unit that receives a video bitstream; a target bit rate adjustment unit that sets a target bit rate for the video bitstream; a bitstream analyzer that reads first quality layers for a reference image and second quality layers for a current image; and a bitstream truncator that truncates quality layers from the lowest priority upward, among the first and second quality layers, based on the target bit rate.
  19. A video decoding apparatus, characterized in that it comprises: an entropy decoding unit that receives a video bitstream; a bitstream analyzer that reads first quality layers for a reference image, second quality layers for a current image and dependency IDs of the first and second quality layers; a dependency ID adjustment unit that sets the dependency ID to indicate the highest quality layer of the first quality layers if the quality layer indicated by the dependency ID does not exist among the first quality layers; and a current image decoder that reconstructs the current image according to a relationship indicated by the dependency ID.
MX2008012360A 2006-03-27 2007-03-27 Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same. MX2008012360A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US78602306P 2006-03-27 2006-03-27
KR1020060048979A KR100772878B1 (en) 2006-03-27 2006-05-30 Method for assigning Priority for controlling bit-rate of bitstream, method for controlling bit-rate of bitstream, video decoding method, and apparatus thereof
PCT/KR2007/001473 WO2007111460A1 (en) 2006-03-27 2007-03-27 Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same

Publications (1)

Publication Number Publication Date
MX2008012360A true MX2008012360A (en) 2008-10-09

Family

ID=40941336

Family Applications (1)

Application Number Title Priority Date Filing Date
MX2008012360A MX2008012360A (en) 2006-03-27 2007-03-27 Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same.

Country Status (3)

Country Link
JP (1) JP5063678B2 (en)
CN (1) CN101411194B (en)
MX (1) MX2008012360A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101432777B1 (en) 2009-09-03 2014-08-22 에스케이텔레콤 주식회사 Video coding Method and Apparatus using second prediction based on reference image, and Recording Medium therefor
CN104902275B (en) * 2015-05-29 2018-04-20 宁波菊风系统软件有限公司 A kind of method for controlling video communication quality dessert

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MXPA06006107A (en) * 2003-12-01 2006-08-11 Samsung Electronics Co Ltd Method and apparatus for scalable video encoding and decoding.

Also Published As

Publication number Publication date
CN101411194B (en) 2011-06-29
JP2009531941A (en) 2009-09-03
CN101411194A (en) 2009-04-15
JP5063678B2 (en) 2012-10-31

Similar Documents

Publication Publication Date Title
US8406294B2 (en) Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same
KR100596705B1 (en) Method and system for video coding for video streaming service, and method and system for video decoding
KR100781525B1 (en) Method and apparatus for encoding and decoding FGS layers using weighting factor
CN101336549B (en) Scalable video coding method and apparatus based on multiple layers
KR100954816B1 (en) Method of coding video and video signal, apparatus and computer readable recording medium for coding video, and method, apparatus and computer readable recording medium for decoding base layer data-stream and enhancement layer data-stream
CN108337522B (en) Scalable decoding method/apparatus, scalable encoding method/apparatus, and medium
KR100718133B1 (en) Motion information encoding/decoding apparatus and method and scalable video encoding apparatus and method employing the same
EP2428042B1 (en) Scalable video coding method, encoder and computer program
KR100704626B1 (en) Method and apparatus for compressing multi-layered motion vectors
CN108650513B (en) Method, apparatus, and computer-readable medium for encoding/decoding image
US20080025399A1 (en) Method and device for image compression, telecommunications system comprising such a device and program implementing such a method
MX2008012863A (en) Video coding method and apparatus supporting independent parsing.
KR20060122671A (en) Method for scalably encoding and decoding video signal
KR20130107861A (en) Method and apparatus for inter layer intra prediction
CN112243128A (en) Inter-layer prediction method and method for transmitting bit stream
KR20050061483A (en) Scalable video encoding
KR100678907B1 (en) Method and apparatus for encoding and decoding FGS layer using reconstructed data of lower layer
MX2008012360A (en) Method of assigning priority for controlling bit rate of bitstream, method of controlling bit rate of bitstream, video decoding method, and apparatus using the same.
WO2007024106A1 (en) Method for enhancing performance of residual prediction and video encoder and decoder using the same
KR20140076508A (en) Method and Apparatus for Video Encoding and Video Decoding
KR102271878B1 (en) Video encoding and decoding method and apparatus using the same
WO2014092434A2 (en) Video encoding method, video decoding method, and device using same
EP1359762A1 (en) Quantizer for scalable video coding
KR20100138735A (en) Video encoding and decoding apparatus and method using context information-based adaptive post filter

Legal Events

Date Code Title Description
FG Grant or registration