KR20140048804A - Method and apparatus of controlling bit-rate for coding/decoding 3d video - Google Patents

Method and apparatus of controlling bit-rate for coding/decoding 3d video

Info

Publication number
KR20140048804A
KR20140048804A KR1020130119601A KR20130119601A
Authority
KR
South Korea
Prior art keywords
current block
view
block
complexity
image
Prior art date
Application number
KR1020130119601A
Other languages
Korean (ko)
Inventor
남정학
유은경
조현호
심동규
Original Assignee
광운대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 광운대학교 산학협력단
Publication of KR20140048804A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Abstract

Disclosed are an apparatus and a method for controlling the bit rate when encoding/decoding a three-dimensional image. The apparatus for encoding a multi-view image includes an extended view bit rate control unit for determining a quantization parameter for a current block of the extended view by calculating the complexity of the current block with reference to the complexity of a reference block of the base view corresponding to the current block, and a quantization unit for performing quantization using the determined quantization parameter. The complexity can thus be predicted accurately, so that the bit rate can be controlled more precisely. [Reference numerals] (100) Base view bit rate control unit; (100-1) Extended view bit rate control unit; (120, 120-1) Transform unit; (121, 121-1) Inverse transform unit; (130, 130-1) Quantization unit; (131, 131-1) Inverse quantization unit; (140, 140-1) Entropy encoding unit; (160, 160-1) In-loop filter; (170, 170-1) Frame memory; (180, 180-1) Intra prediction unit; (190, 190-1) Motion compensation unit; (300) Multiplexer; (AA) Extended view image; (BB) Bit stream; (CC) Base view image

Description

Apparatus and method for bit rate control for encoding / decoding multi-view image {METHOD AND APPARATUS OF CONTROLLING BIT-RATE FOR CODING / DECODING 3D VIDEO}

The present invention relates to encoding / decoding of three-dimensional images, and more particularly, to an apparatus and method for controlling a bit rate for encoding / decoding three-dimensional images.

MPEG, the video experts group of ISO/IEC, has recently begun standardizing 3DV (3D Video). The 3DV standardization builds on the coding technology for 2D single-view video (H.264/AVC), the coding technology for 2D multi-view video (MVC), and the HEVC coding technology whose standardization was recently started by JCT-VC, the joint video coding standardization body of MPEG and ITU-T.

In particular, HEVC defines a coding unit (CU), a prediction unit (PU), and a transform unit (TU) with a quadtree structure, and applies additional in-loop filters such as the sample adaptive offset (SAO) and the deblocking filter. In addition, conventional intra prediction and inter prediction have been refined to improve compression efficiency.

In addition, MPEG and ITU-T jointly decided to standardize 3DV and formed a new joint standardization group called JCT-3V. JCT-3V is standardizing an advanced syntax for depth image encoding/decoding on top of the existing MVC, 3D-AVC as a new H.264/AVC-based encoding/decoding method for color and depth images, an HEVC-based encoding/decoding method for multi-view color images, and 3D-HEVC as an HEVC-based encoding/decoding method for multi-view color and depth images.

Unlike 2D video, a 3D video service must transmit image data from multiple viewpoints simultaneously and, depending on the application, depth image data as well. Because a 3D video service requires a large amount of data, the field needs a bit-rate control algorithm that takes into account network conditions that change in real time.

An object of the present invention for solving the above problems is to provide a bit rate control device for encoding a 3D video.

Another object of the present invention for solving the above problems is to provide a bit rate control apparatus for decoding a 3D image.

An apparatus for encoding a multi-view image according to an embodiment of the present invention for achieving the above object includes an extended view bit rate controller for determining a quantization parameter for a current block of the extended view by calculating the complexity of the current block with reference to the complexity of a reference block of the base view corresponding to the current block, and a quantizer for performing quantization using the determined quantization parameter.

Here, the extended view bit rate controller may calculate the remaining bit rate for the current block of the extended view based on the target bit amount of the current picture that includes the current block.

Here, the extended view bit rate controller may determine the quantization parameter for the current block of the extended view based on the remaining bit rate and the complexity of the current block.

Here, the extended view bit rate controller may calculate the complexity of the current block of the extended view by referring to the complexity of the reference block of the base view corresponding to a neighboring block of the current block.

Here, the neighboring block of the current block of the extended view may be located at any one of the lower left, left, upper left, upper, and upper right positions with respect to the current block.

Here, the extended view bit rate controller may determine the reference block of the base view corresponding to the neighboring block of the current block of the extended view based on the disparity information of that neighboring block.

In this case, the complexity of the reference block at the base view may be represented by a mean absolute difference (MAD) or a mean square error (MSE).

In accordance with another aspect of the present invention, there is provided an apparatus for decoding a multi-view image, comprising an entropy decoder configured to receive a bitstream and entropy decode it, and an inverse quantizer configured to dequantize the quantized bitstream with a quantization parameter for a current block of the extended view, the quantization parameter being determined by calculating the complexity of the current block with reference to the complexity of a reference block of the base view corresponding to the current block.

The apparatus for encoding/decoding a multi-view image according to the embodiment of the present invention described above can predict the complexity accurately, thereby enabling more accurate bit-rate control.

FIG. 1 is a block diagram illustrating a multi-view image encoding apparatus using bit rate control according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a quantization parameter determination method according to an embodiment of the present invention.
FIG. 3 is a conceptual diagram for explaining quantization parameter determination according to an embodiment of the present invention.
FIG. 4 is a conceptual diagram illustrating a method of calculating the complexity of a current block of the extended view using a neighboring block of the current block, according to an embodiment of the present invention.
FIG. 5 is a block diagram illustrating a multi-view image decoding apparatus using bit rate control according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, A, B, and the like may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items or any one of a plurality of related listed items.

When a component is referred to as being "connected" or "coupled" to another component, it may be directly connected or coupled to that other component, but it should be understood that other components may be present in between. On the other hand, when a component is referred to as being "directly connected" or "directly coupled" to another component, it should be understood that no other components are present in between.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, the terms "comprise" and "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood not to preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in this application.

The video encoding apparatus and the video decoding apparatus described below may be a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), a wireless communication terminal, a smart phone, a TV application server, a service server, or another user terminal, and may refer to various devices each including a communication device such as a communication modem for communicating over a wired or wireless network, a memory for storing programs and data for inter-picture or intra-picture prediction used to encode or decode an image, and a microprocessor for executing the programs to perform computation and control.

In addition, the image encoded by the video encoding apparatus can be transmitted, in real time or non-real time, through a wired or wireless communication network such as the Internet, a short-range wireless communication network, a wireless LAN, a WiBro network, or a mobile communication network, or through various communication interfaces such as a serial bus, and can then be decoded by the video decoding apparatus, reconstructed into an image, and reproduced.

A moving picture is generally composed of a series of pictures, and each picture may be divided into predetermined regions such as frames or blocks. When a region of an image is divided into blocks, the divided blocks may be classified into intra blocks and inter blocks according to the encoding method. An intra block is a block encoded using intra prediction coding, which generates a prediction block by predicting the pixels of the current block from the pixels of blocks that have already been encoded, decoded, and reconstructed in the current picture, and then encodes the difference from the pixels of the current block. An inter block is a block encoded using inter prediction coding, which generates a prediction block by predicting the current block from one or more past or future pictures, and then encodes the difference from the current block. Here, a frame referred to when encoding or decoding the current picture is called a reference frame. Those skilled in the art will understand that the term "picture" used below may be used interchangeably with other terms having an equivalent meaning, such as "image" or "frame", and that the reference picture in the present invention means a reconstructed picture.

In addition, in the present invention, a block may be a concept corresponding to a coding unit, a prediction unit, or a transform unit in HEVC.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram illustrating a multi-view image encoding apparatus using bit rate control according to an embodiment of the present invention, and FIG. 5 is a block diagram illustrating a multi-view image decoding apparatus using bit rate control according to an embodiment of the present invention.

Referring to FIG. 1, a multi-view image encoding apparatus according to an embodiment of the present invention may include a base view image encoding apparatus 11 for encoding a base view image and an extended view image encoding apparatus 12 for encoding an extended view image.

Here, the base view image may mean an image for providing a 2D single-view image, and the extended view image may mean an image for providing an image of an extended view, such as 3D.

For example, the multi-view image encoding apparatus may include one base view image encoding apparatus 11 and one extended view image encoding apparatus 12, and the number of extended view image encoding apparatuses may increase with the number of views.

In addition, the base view image encoding apparatus 11 and the extended view image encoding apparatus 12 may encode the color image and the depth map separately.

The multiview image encoding apparatus may transmit a bitstream obtained by encoding the multiview image to the multiview image decoding apparatus.

Referring to FIG. 5, a multiview image decoding apparatus that receives a bitstream may include a bitstream extractor 29, a basic view image decoder 21, and an extended view image decoder 22.

For example, the multi-view image decoding apparatus 20 may include a base view image decoding apparatus 21 and an extended view image decoding apparatus 22, and the number of extended view image decoding apparatuses may of course increase depending on the number of views.

In detail, the bitstream extractor 29 may separate the bitstream for each view and transfer the separated bitstreams to the base view image decoding apparatus 21 and the extended view image decoding apparatus 22, respectively.

According to an embodiment of the present invention, the decoded base view image is backward compatible and can be displayed on a conventional 2D display apparatus. Also, the decoded base view image and the at least one decoded extended view image may be displayed on a stereo display apparatus or a multi-view display apparatus.

Meanwhile, the input camera position information and the like may be transmitted as auxiliary information to the stereo display apparatus or the multi-view display apparatus through the bitstream.

In detail, referring to FIG. 1, each of the base view image encoding apparatus 11 and the extended view image encoding apparatus 12 may include a subtraction unit 110, 110-1, a transform unit 120, 120-1, a quantization unit 130, 130-1, an inverse quantization unit 131, 131-1, an inverse transform unit 121, 121-1, an entropy encoding unit 140, 140-1, an addition unit 150, 150-1, an in-loop filter unit 160, 160-1, a frame memory 170, 170-1, an intra prediction unit 180, 180-1, and a motion compensation unit 190, 190-1.

In addition, the base view image encoding apparatus 11 and the extended view image encoding apparatus 12 may include a base view bit rate controller 100 and an extended view bit rate controller 100-1, respectively.

The subtraction units 110 and 110-1 subtract a prediction image generated by intra prediction or inter prediction from the target image to be encoded (the current image), which is the received input image, to generate a residual image between the current image and the prediction image.

The transform units 120 and 120-1 transform the residual image generated by the subtraction units 110 and 110-1 from the spatial domain to the frequency domain. Here, the transform units 120 and 120-1 may transform the residual image into the frequency domain using a technique that maps an image signal from the spatial axis to the frequency axis, such as the Hadamard transform or a discrete cosine transform (DCT)-based transform.
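
For illustration only, the following is a minimal Python sketch of this spatial-to-frequency transform, assuming an 8x8 residual block and an orthonormal type-II DCT; the block size and transform type are assumptions, not values specified in this document.

    import numpy as np
    from scipy.fft import dctn

    # Transform an 8x8 residual block from the spatial domain to the frequency domain.
    residual_block = np.random.randint(-255, 256, size=(8, 8)).astype(np.float64)
    frequency_coefficients = dctn(residual_block, norm="ortho")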

The bit rate controllers 100 and 100-1 may determine the quantization parameter of the video codec in consideration of the target bit amount available on the network and the image complexity. In addition, when determining the quantization parameter for each unit, not only the available bit amount but also the complexity or error amount of that unit may be used as an input value.

Since the complexity or error amount is a value that can only be obtained by actually encoding, the bit rate controllers 100 and 100-1 may determine the quantization parameter for the current block using the mean absolute difference (MAD) of the co-located block in the temporally previous frame. Here, the bit-quantization model used in the bit rate controllers 100 and 100-1 may be a first-order (linear) model or a second-order model.
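
The bit-quantization model can be illustrated with a short, hedged Python sketch. The functional form T = c1*MAD/Q + c2*MAD/Q^2 and the coefficients c1 and c2 are assumptions in the spirit of commonly used rate-quantization models, not values taken from this document.

    import math

    def qstep_from_bit_quantization_model(target_bits, mad, c1=1.0, c2=0.0):
        # Solve T = c1*MAD/Q + c2*MAD/Q^2 for the quantization step Q.
        # c1 and c2 are placeholder coefficients; in practice they would be
        # updated (e.g., by regression) after each unit is encoded.
        if target_bits <= 0 or mad <= 0:
            return float("inf")          # no budget: quantize as coarsely as possible
        if c2 == 0.0:                    # first-order (linear) model
            return c1 * mad / target_bits
        # second-order model: T*Q^2 - c1*MAD*Q - c2*MAD = 0, take the positive root
        discriminant = (c1 * mad) ** 2 + 4.0 * target_bits * c2 * mad
        return (c1 * mad + math.sqrt(discriminant)) / (2.0 * target_bits)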

In particular, the extended view bit rate controller 100-1 may determine the quantization parameter for the current block of the extended view by calculating the complexity of the current block with reference to the complexity of the reference block of the base view corresponding to the current block.

For example, the extended view bit rate controller 100-1 may calculate the remaining bit rate for the current block of the extended view based on the target bit amount of the current picture that includes the current block.

In addition, the extended view bit rate controller 100-1 may determine the quantization parameter for the current block of the extended view based on the remaining bit rate and the complexity of the current block.
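
A hedged sketch of how such a controller might combine the remaining bit budget with the base-view complexity is shown below. The even split of the remaining bits over the unencoded blocks and the simple first-order model are illustrative assumptions; only the inputs (picture target bits, bits already used, and the MAD of the corresponding base-view reference block) follow the description above.

    def extended_view_block_qstep(picture_target_bits, bits_used_so_far,
                                  blocks_remaining, base_view_reference_mad,
                                  model_coefficient=1.0):
        # Remaining bits of the current extended-view picture (never below 1).
        remaining_bits = max(picture_target_bits - bits_used_so_far, 1)
        # Even split of the remaining budget over the blocks still to be coded.
        block_target_bits = remaining_bits / max(blocks_remaining, 1)
        # First-order model: the step size grows with the complexity (MAD of the
        # corresponding base-view reference block) and shrinks with the budget.
        return model_coefficient * base_view_reference_mad / block_target_bits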

The quantization units 130 and 130-1 quantize the transformed data (frequency coefficients) provided by the transform units 120 and 120-1 using the quantization parameters calculated by the bit rate controllers 100 and 100-1. That is, the quantization units 130 and 130-1 divide the frequency coefficients output by the transform units 120 and 120-1 by a quantization step size and approximate the results to obtain quantized values.
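
A small sketch of this scalar quantization step follows. The mapping from quantization parameter to step size is the approximate H.264/HEVC-style relation in which the step size doubles every 6 QP; this mapping is an assumption for illustration and is not specified in this document.

    import numpy as np

    def qstep_from_qp(qp):
        # Approximate H.264/HEVC-style mapping: Qstep is about 0.625 at QP 0
        # and doubles every 6 QP.
        return 0.625 * (2.0 ** (qp / 6.0))

    def quantize(frequency_coefficients, qp):
        # Divide each coefficient by the step size and round to the nearest level.
        return np.round(frequency_coefficients / qstep_from_qp(qp)).astype(np.int32)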

The entropy encoding units 140 and 140-1 generate a bitstream by entropy encoding the quantized values calculated by the quantization units 130 and 130-1. The entropy encoding units 140 and 140-1 may use Context-Adaptive Variable Length Coding (CAVLC) or Context-Adaptive Binary Arithmetic Coding (CABAC), and may also entropy encode information needed to decode the image in addition to the quantized values.

The inverse quantization units 131 and 131-1 dequantize the quantization result values calculated by the quantization units 130 and 130-1. In other words, the inverse quantization units 131 and 131-1 restore the values (frequency coefficients) in the frequency domain from the quantization result.

The inverse transform units 121 and 121-1 reconstruct the residual image by converting the frequency-domain values (frequency coefficients) provided by the inverse quantization units 131 and 131-1 back into the spatial domain, and the addition units 150 and 150-1 generate a reconstructed image of the input image by adding the residual image reconstructed by the inverse transform units 121 and 121-1 to the prediction image generated by intra prediction or inter prediction, and store the reconstructed image in the frame memories 170 and 170-1.

The base view image and the extended view image are generated from different viewpoints, but dependencies exist between them. Therefore, the encoding efficiency of the multi-view image can be improved by exploiting the dependency between the base view image and the extended view image.

The intra prediction units 180 and 180-1 perform intra prediction, and the motion compensation units 190 and 190-1 perform motion compensation for inter prediction. Here, the intra prediction units 180 and 180-1 and the motion compensation units 190 and 190-1 may be collectively called prediction units.

The in-loop filter units 160 and 160-1 perform filtering on the reconstructed image and may include a deblocking filter (DF) and a sample adaptive offset (SAO).

The multiplexer 300 may receive the bitstream of the encoded base view image and the bitstream of the encoded extended view image and output an extended bitstream.

FIG. 2 is a flowchart illustrating a quantization parameter determination method according to an embodiment of the present invention.

Referring to FIG. 2, a process of deriving a quantization parameter by the bit rate controllers 100 and 100-1 will be described.

In order to encode the base view, a quantization parameter must first be determined for each GOP (group of pictures).

First, it is determined whether the current frame of the input image is the start of a GOP (S210). If the current image to be encoded is the start frame of a GOP, the GOP-level quantization parameter value is determined by applying the GOP-level bit-quantization model (S220), and the determined quantization parameter is used for the start frame of each GOP.

If it is not the start frame of a GOP, it is determined whether it is the start of a frame (S230). At the start of a frame, the quantization parameter for that frame is determined before encoding of the frame begins. That is, the frame-level quantization parameter value is determined by applying the frame-level bit-quantization model (S240), and the determined quantization parameter is used for the first coding unit of each frame.

If it is not the start of a frame, the unit-level quantization parameter value is determined by applying the unit-level bit-quantization model (S250), and the determined quantization parameter is applied to the unit currently being encoded.

Here, the determination of the quantization parameter value at the unit level may be repeated until all units of one frame are finished (S260), and the entire series of processes may be repeated until all frames of the sequence are finished (S270).

Accordingly, quantization is performed using the quantization parameter values calculated by applying the GOP-level, frame-level, and unit-level bit-quantization models (S280).
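
The level selection of FIG. 2 can be summarized with the small sketch below. The callables gop_model, frame_model, and unit_model stand in for the three bit-quantization models and are hypothetical names introduced only for this illustration.

    def determine_quantization_parameter(frame_index, unit_index, gop_size,
                                         gop_model, frame_model, unit_model):
        # Each *_model callable is assumed to return a quantization parameter.
        if unit_index == 0 and frame_index % gop_size == 0:
            return gop_model()    # start frame of a GOP (S220)
        if unit_index == 0:
            return frame_model()  # first unit of any other frame (S240)
        return unit_model()       # remaining units of the frame (S250)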

FIG. 3 is a conceptual diagram for explaining quantization parameter determination according to an embodiment of the present invention.

Referring to FIG. 3, the application of the unit-level bit-quantization model 30 will be described.

The unit-level bit-quantization model 30 may calculate a quantization parameter from a first-order (linear) or second-order model that takes the target bit amount and the image complexity of each unit as inputs.

Here, the target bit amount may be determined for each unit in consideration of information such as the target bit amount of the target frame, the bit amount already used in the target frame, and the remaining bit amount in the target frame.

The image complexity is an input value used to allocate the bit amount adaptively according to how complex the image of the target unit is. For a 2D image, the image complexity may be calculated using the mean absolute difference (MAD) of the co-located unit in the previous frame, relative to the target unit.
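
For reference, a minimal sketch of the two complexity measures mentioned here and later in the text (MAD and MSE); the function names are introduced only for this illustration.

    import numpy as np

    def mean_absolute_difference(block_a, block_b):
        # MAD between two equally sized blocks.
        return float(np.mean(np.abs(block_a.astype(np.float64) - block_b.astype(np.float64))))

    def mean_square_error(block_a, block_b):
        # MSE between two equally sized blocks.
        diff = block_a.astype(np.float64) - block_b.astype(np.float64)
        return float(np.mean(diff * diff))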

FIG. 4 is a conceptual diagram illustrating a method of calculating the complexity of a current block of the extended view using a neighboring block of the current block, according to an embodiment of the present invention.

For example, for a 2D image, the complexity of the unit to be encoded at time t may be predicted from the MAD value of the co-located unit at the previously encoded time t-1.

Referring to FIG. 4, for a 3D image, the complexity of the encoding target unit 400 at time t may use the MAD value of the co-located unit 410 at the previously encoded time t-1, or it may be predicted from the MAD value of the unit 420 at the corresponding position at time t in the previously encoded base view.

The unit corresponding to the encoding target unit 400 may be found in the base view using information from the neighboring units 401, 402, 403, 404, and 405 of the current encoding target unit 400.

The corresponding unit may be determined from the disparity information of those neighboring units 401, 402, 403, 404, and 405 that have a disparity vector. For example, when several of the neighboring units 401, 402, 403, 404, and 405 have disparity vectors, the corresponding unit may be determined by selectively applying one of the maximum, minimum, average, or median of those disparity vectors.

Accordingly, the extended view bit rate controller 100-1 can calculate the complexity of the current block 400 of the extended view by referring to the complexity of the reference block 420 of the base view corresponding to the neighboring blocks 401, 402, 403, 404, and 405 of the current block 400. In particular, the neighboring blocks 401, 402, 403, 404, and 405 may be located at the lower left 401, left 402, upper left 403, upper 404, and upper right 405 positions with respect to the current block 400 of the extended view. In this case, the complexity of the reference block 420 of the base view may be expressed as a mean absolute difference (MAD) or a mean square error (MSE).

In addition, the extended view bit rate controller 100-1 may determine the reference block 420 of the base view corresponding to the neighboring blocks 401, 402, 403, 404, and 405 of the current block 400 of the extended view based on the disparity information of those neighboring blocks.
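
The neighbor-based lookup described above can be sketched as follows. The per-block MAD map for the base view and the way the reduced disparity vector is mapped to a block index are assumptions made for this illustration; the neighbor set and the max/min/average/median options come from the description.

    import numpy as np

    def base_view_reference_complexity(block_x, block_y, block_size,
                                       neighbor_disparity_vectors,
                                       base_view_mad_map, reduce="median"):
        # neighbor_disparity_vectors: (dx, dy) tuples for the lower-left, left,
        # upper-left, upper, and upper-right neighbors; None where a neighbor
        # has no disparity vector.
        vectors = [v for v in neighbor_disparity_vectors if v is not None]
        if not vectors:
            return None  # caller may fall back to the temporal co-located MAD
        reducer = {"max": np.max, "min": np.min,
                   "average": np.mean, "median": np.median}[reduce]
        dx = int(round(float(reducer([v[0] for v in vectors]))))
        dy = int(round(float(reducer([v[1] for v in vectors]))))
        # Locate the corresponding block in the base view and return its complexity.
        ref_col = (block_x + dx) // block_size
        ref_row = (block_y + dy) // block_size
        return base_view_mad_map[ref_row][ref_col]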

FIG. 5 is a block diagram illustrating a multi-view image decoding apparatus using bit rate control according to an embodiment of the present invention.

Referring to FIG. 5, a multi-view image decoding apparatus according to an embodiment of the present invention may include a bitstream extractor 29, a base view image decoder 21, and an extended view image decoder 22.

The bitstream extractor 29 may separate the bitstream for each view, and the separated bitstreams may be transferred to the base view image decoding apparatus 21 and the extended view image decoding apparatus 22, respectively.

Each of the base view image decoding apparatus 21 and the extended view image decoding apparatus 22 may include an entropy decoding unit 210, 210-1, an inverse quantization unit 220, 220-1, an inverse transform unit 230, 230-1, an addition unit 240, 240-1, an in-loop filter unit 250, 250-1, a frame memory 260, 260-1, an intra prediction unit 270, 270-1, and a motion compensation unit 280, 280-1. Here, the intra prediction units 270 and 270-1 and the motion compensation units 280 and 280-1 may be collectively referred to as prediction units.

The entropy decoding units 210 and 210-1 may receive the bitstream and entropy decode the bitstream.

The inverse quantization unit 220-1 may inversely quantize the quantized bitstream with the quantization parameter for the current block 400 of the extended view, the quantization parameter having been determined by calculating the complexity of the current block 400 with reference to the complexity of the reference block 420 of the base view corresponding to the current block 400.
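
On the decoder side, the inverse quantization can be sketched with the same assumed QP-to-step-size convention used in the encoder example above; the mapping is illustrative and is not specified in this document.

    import numpy as np

    def dequantize(quantized_levels, qp):
        # Same approximate H.264/HEVC-style mapping as the encoder sketch.
        qstep = 0.625 * (2.0 ** (qp / 6.0))
        return quantized_levels.astype(np.float64) * qstep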

The quantization parameter for the current block of the extended view may be determined based on the remaining bit rate for the current block 400, calculated from the target bit amount of the current picture that includes the current block 400 of the extended view, and the complexity of the current block.

In addition, the quantization parameter for the current block of the extended view may be determined by referring to the complexity of the reference block 420 of the base view corresponding to the neighboring blocks 401, 402, 403, 404, and 405 of the current block 400. Here, the complexity of the reference block of the base view may be expressed as a mean absolute difference (MAD) or a mean square error (MSE).

For example, the neighboring blocks 401, 402, 403, 404, and 405 of the current block of the extended view may be located at the lower left 401, left 402, upper left 403, upper 404, and upper right 405 positions with respect to the current block.

In addition, the reference block 420 of the base view corresponding to the neighboring blocks 401, 402, 403, 404, and 405 of the current block of the extended view may be determined based on the disparity information of those neighboring blocks.

Meanwhile, since each component of the multiview image decoding apparatus 20 may be understood to correspond to each of the components of the multiview image encoding apparatus of FIG. 1, detailed description thereof will be omitted.

The apparatus for encoding / decoding a multi-view image according to the above-described embodiment of the present invention can accurately predict the complexity, and thus, there is an advantage of enabling more accurate bit-rate control.

It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the present invention as defined by the following claims.

10: encoding apparatus 20: decoding apparatus
11: base view image encoding apparatus 12: extended view image encoding apparatus
21: base view image decoding apparatus 22: extended view image decoding apparatus
29: bitstream extraction unit 30: unit-level bit-quantization model
100: base view bit rate controller 100-1: extended view bit rate controller
110, 110-1: subtraction unit 120, 120-1: transform unit
121, 121-1, 230, 230-1: inverse transform unit 130, 130-1: quantization unit
131, 131-1, 220, 220-1: inverse quantization unit 140, 140-1: entropy encoding unit
150, 150-1, 240, 240-1: addition unit 160, 160-1, 250, 250-1: in-loop filter unit
170, 170-1, 260, 260-1: frame memory 180, 180-1, 270, 270-1: intra prediction unit
190, 190-1, 280, 280-1: motion compensation unit 210, 210-1: entropy decoding unit
300: multiplexer

Claims (13)

An apparatus for encoding a multi-view image, the apparatus comprising:
an extended view bit rate controller for determining a quantization parameter for a current block of an extended view by calculating a complexity of the current block with reference to a complexity of a reference block of a base view corresponding to the current block of the extended view; and
a quantization unit configured to perform quantization using the determined quantization parameter.
The apparatus of claim 1,
wherein the extended view bit rate controller
calculates a remaining bit rate for the current block of the extended view based on a target bit amount of a current picture including the current block of the extended view.
The apparatus of claim 2,
wherein the extended view bit rate controller
determines the quantization parameter for the current block of the extended view based on the remaining bit rate and the complexity of the current block.
The apparatus of claim 1,
wherein the extended view bit rate controller
calculates the complexity of the current block of the extended view by referring to a complexity of a reference block of the base view corresponding to a neighboring block of the current block of the extended view.
The apparatus of claim 4,
wherein the neighboring block of the current block of the extended view
is located at any one of a lower left, a left, an upper left, an upper, and an upper right position with respect to the current block of the extended view.
The apparatus of claim 4,
wherein the extended view bit rate controller
determines the reference block of the base view corresponding to the neighboring block of the current block of the extended view based on disparity information of the neighboring block of the current block of the extended view.
The apparatus of claim 1,
wherein the complexity of the reference block of the base view
is expressed as a mean absolute difference (MAD) or a mean square error (MSE).
An apparatus for decoding a multi-view image, the apparatus comprising:
an entropy decoding unit configured to receive a bitstream and entropy decode the bitstream; and
an inverse quantization unit configured to inversely quantize the quantized bitstream with a quantization parameter for a current block of an extended view, the quantization parameter being determined by calculating a complexity of the current block with reference to a complexity of a reference block of a base view corresponding to the current block of the extended view.
The apparatus of claim 8,
wherein the quantization parameter for the current block of the extended view
is determined based on a remaining bit rate for the current block, calculated from a target bit amount of a current picture including the current block of the extended view, and the complexity of the current block.
The apparatus of claim 8,
wherein the quantization parameter for the current block of the extended view
is determined by referring to a complexity of a reference block of the base view corresponding to a neighboring block of the current block of the extended view.
The apparatus of claim 10,
wherein the neighboring block of the current block of the extended view
is located at any one of a lower left, a left, an upper left, an upper, and an upper right position with respect to the current block of the extended view.
The apparatus of claim 10,
wherein the reference block of the base view corresponding to the neighboring block of the current block of the extended view
is determined based on disparity information of the neighboring block of the current block of the extended view.
The apparatus of claim 8,
wherein the complexity of the reference block of the base view
is expressed as a mean absolute difference (MAD) or a mean square error (MSE).
KR1020130119601A 2012-10-09 2013-10-08 Method and apparatus of controlling bit-rate for coding/decoding 3d video KR20140048804A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020120111952 2012-10-09
KR20120111952 2012-10-09

Publications (1)

Publication Number Publication Date
KR20140048804A true KR20140048804A (en) 2014-04-24

Family

ID=50654735

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130119601A KR20140048804A (en) 2012-10-09 2013-10-08 Method and apparatus of controlling bit-rate for coding/decoding 3d video

Country Status (1)

Country Link
KR (1) KR20140048804A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110572653A (en) * 2019-09-27 2019-12-13 腾讯科技(深圳)有限公司 Video encoding method, video decoding method, video encoding apparatus, video decoding apparatus, storage medium, and electronic apparatus

Similar Documents

Publication Publication Date Title
US11722669B2 (en) Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters
KR102550448B1 (en) Method of encoding/decoding motion vector for multi-view video and apparatus thereof
US8948243B2 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
KR20220024817A (en) Encoders, decoders and their methods
CN113170143B (en) Encoder, decoder and corresponding deduction method of boundary strength of deblocking filter
EP2685717A1 (en) Video image encoding method and video image decoding method
CN103069800A (en) Method and apparatus for encoding video, and decoding method and apparatus
KR20130004173A (en) Method and apparatus for video encoding and decoding
EP2908529A1 (en) Video data decoding method and video data decoding apparatus
KR20130067280A (en) Decoding method of inter coded moving picture
JP7231759B2 (en) Optical flow-based video interframe prediction
KR20160072101A (en) Method and apparatus for decoding multi-view video
KR101680674B1 (en) Method and apparatus for processing multiview video signal
KR20140124919A (en) A method for adaptive illuminance compensation based on object and an apparatus using it
KR20140048804A (en) Method and apparatus of controlling bit-rate for coding/decoding 3d video
WO2013111977A1 (en) Deblocking method and deblocking apparatus for block on which intra prediction is performed
WO2013005966A2 (en) Video encoding and decoding methods and device using same
KR20160064845A (en) Method and apparatus for sub-predition unit level inter-view motion predition for depth coding
CN118042136A (en) Encoding and decoding method and device
KR20140038315A (en) Apparatus and method for coding/decoding multi-view image
KR20130086980A (en) Methods and apparatuses of deblocking on intra prediction block
CN116647683A (en) Quantization processing method and device
KR20170126817A (en) Fast video encoding method and apparatus for the same
CN116708787A (en) Encoding and decoding method and device
KR20140124045A (en) A method for adaptive illuminance compensation based on object and an apparatus using it

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination