KR20140124045A - A method for adaptive illuminance compensation based on object and an apparatus using it - Google Patents

A method for adaptive illuminance compensation based on object and an apparatus using it Download PDF

Info

Publication number
KR20140124045A
Authority
KR
South Korea
Prior art keywords
depth information
compensating
sample
brightness
block
Prior art date
Application number
KR20130040909A
Other languages
Korean (ko)
Inventor
김경용
박광훈
배동인
이윤진
허영수
Original Assignee
인텔렉추얼디스커버리 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인텔렉추얼디스커버리 주식회사 filed Critical 인텔렉추얼디스커버리 주식회사
Priority to KR20130040909A priority Critical patent/KR20140124045A/en
Publication of KR20140124045A publication Critical patent/KR20140124045A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/133Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Abstract

A method for brightness compensation according to an embodiment of the present invention comprises the steps of: receiving a bit stream including an encoded image; performing prediction decoding of the bit stream according to an intra mode or an inter mode; and compensating for brightness of the current picture to be decoded according to brightness of a previously decoded prediction picture, wherein the step of compensating for the brightness includes the step of adaptively compensating for the brightness per pixel unit based on depth information included in the bit stream.

Description

[0001] The present invention relates to an object-based adaptive brightness compensation method and an apparatus using the same.

The present invention relates to a method of efficiently encoding and decoding an image using depth information.

3D video provides users with a stereoscopic effect, as if they were seeing and feeling the real world, through a 3D stereoscopic display device. To this end, the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V), a joint standardization group of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), is developing a 3D video standard. The 3D video standard covers advanced data formats, and the technologies related to them, that can support playback of autostereoscopic as well as stereoscopic images using real images and their depth information maps.

The present invention proposes a method for efficiently performing brightness compensation applied to image encoding / decoding using depth information.

According to an aspect of the present invention, there is provided a brightness compensation method including: receiving a bitstream including an encoded image; performing predictive decoding on the bitstream according to an intra mode or an inter mode; and compensating the brightness of the current picture to be decoded according to the brightness of a previously decoded predictive picture, wherein the step of compensating for the brightness comprises adaptively compensating each pixel unit based on depth information included in the bitstream.

The present invention can enhance the coding efficiency of an image by deriving a compensation value for each object using a depth information map as a sample in performing brightness compensation.

FIG. 1 is a diagram showing an example of the basic structure and data format of a 3D video system.
FIG. 2 is a view showing an example of an actual image and its depth information map.
FIG. 3 is a block diagram showing an example of the configuration of an image encoding apparatus.
FIG. 4 is a block diagram showing an example of the configuration of an image decoding apparatus.
FIG. 5 is a block diagram for explaining an example of a brightness compensation method.
FIG. 6 is a diagram for explaining the relationship between texture luminance and a depth information map.
FIG. 7 is a diagram showing an example of a method of constructing a sample for brightness compensation in inter-view prediction.
FIG. 8 is a diagram for explaining an object-based adaptive brightness compensation method according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating an exemplary method of constructing a sample for brightness compensation using depth information values.
FIG. 10 is a view for explaining a brightness compensation method according to a first embodiment of the present invention.
FIG. 10A is a flowchart illustrating the brightness compensation method according to the first embodiment of the present invention.
FIG. 11 is a view for explaining a brightness compensation method according to a second embodiment of the present invention.
FIG. 11A is a flowchart illustrating the brightness compensation method according to the second embodiment of the present invention.
FIG. 12 is a diagram illustrating an exemplary method of setting samples of the current picture and the predictive picture of a texture when object-based brightness compensation is performed.
FIG. 13 is a diagram showing examples of a depth information map.
FIG. 14 is a diagram illustrating embodiments of a method of setting a depth value interval.

The following merely illustrates the principles of the invention. Those skilled in the art will therefore be able to devise various apparatuses which, although not explicitly described or shown herein, embody the principles of the invention and fall within its concept and scope. Furthermore, all conditional terms and embodiments listed herein are, in principle, intended only to aid understanding of the concepts of the invention and are not limited to the embodiments and conditions specifically listed.

The detailed description, together with the principles, aspects, and embodiments of the invention and the specific embodiments thereof, is intended to cover their structural and functional equivalents. Such equivalents include not only the equivalents currently known but also those to be developed in the future, that is, all elements devised to perform the same function regardless of structure.

Thus, for example, it should be understood that the block diagrams herein represent conceptual views of exemplary circuits embodying the principles of the invention. Similarly, all flowcharts, state transition diagrams, pseudo code, and the like represent various processes that may be substantially stored on a computer-readable medium and executed by a computer or processor, whether or not the computer or processor is explicitly shown.

The functions of the various elements shown in the figures, including functional blocks depicted as processors or similar concepts, may be provided by dedicated hardware or by hardware capable of executing software in association with the appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, a single shared processor, or a plurality of individual processors, some of which may be shared.

Also, the explicit use of terms such as processor, controller, or similar concepts should not be interpreted as referring exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile memory. Other well-known hardware may also be included.

In the claims hereof, elements expressed as means for performing the functions described in the detailed description encompass, for example, any combination of circuit elements performing the function, and software in any form, including firmware or microcode, coupled with the circuitry appropriate for executing that software to perform the function. Since the invention defined by such claims resides in the fact that the functions provided by the various listed means are combined in the manner the claims require, any means capable of providing those functions should be regarded as equivalent to the means defined in the claims.

The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the following description, well-known functions or constructions are not described in detail, since they would obscure the invention in unnecessary detail.

Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 shows an example of a basic structure and a data format of a 3D video system.

The basic three-dimensional video system considered in the three-dimensional video standard is as shown in FIG. 1. The depth information image used in the three-dimensional video standard is encoded together with the general image and transmitted to the terminal as a bitstream. On the transmitting side, image content of N (N ≥ 2) viewpoints is acquired using a stereo camera, a depth information camera, a multi-view camera, or conversion of a two-dimensional image into a three-dimensional image. The acquired image content may include N-view video information, its depth information maps, camera-related additional information, and the like. The N-view video content is compressed using a multi-view video encoding method, and the compressed bitstream is transmitted to the terminal through a network. The receiving side decodes the transmitted bitstream using a multi-view video decoding method and restores the N-view images. The reconstructed N-view images generate virtual-view images of N or more viewpoints through a depth-image-based rendering (DIBR) process. The generated virtual-view images are reproduced on various stereoscopic display devices to provide stereoscopic images to the user.

The depth information map used to generate the virtual-view image represents the distance between the camera and the actual object in the real world (depth information corresponding to each pixel, at the same resolution as the real image) with a fixed number of bits. As an example of a depth information map, FIG. 2 shows the "balloons" image (FIG. 2(a)) and its depth information map (FIG. 2(b)) used in the MPEG 3D video coding standard. The depth information map shown in FIG. 2 represents the depth information on the screen with 8 bits per pixel.

As an example of a method of encoding an actual image and its depth information map, encoding can be performed using High Efficiency Video Coding (HEVC), which has the highest coding efficiency among the video coding standards developed so far and has been jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG).

FIG. 3 is a block diagram of an example of the configuration of an image encoding apparatus, and shows the coding structure of H.264.

Referring to FIG. 3, the unit for processing data in the H.264 coding scheme is a macroblock of 16 x 16 pixels; the apparatus receives an image, encodes it in intra mode or inter mode, and outputs a bitstream.

In intra mode the switch is set to intra, and in inter mode the switch is set to inter. The main flow of the encoding process is to generate a prediction block for the input block image, then obtain the difference between the input block and the prediction block and encode that difference.

First, the prediction block is generated according to the intra mode or the inter mode. In intra mode, a prediction block is generated by spatial prediction using the already encoded neighboring pixel values of the current block in the intra prediction process. In inter mode, a motion vector is obtained in the motion prediction process by searching a reference image for the region that best matches the current input block, and motion compensation is then performed using the obtained motion vector to generate the prediction block.

As described above, the difference between the current input block and the prediction block is calculated to generate a residual block, which is then encoded. Block encoding is roughly divided into intra mode and inter mode: the intra mode is subdivided into 16x16, 8x8, and 4x4 intra modes, the inter mode into 16x16, 16x8, 8x16, and 8x8 inter modes, and the 8x8 inter mode again into 8x4, 4x8, and 4x4 sub-inter modes.

The encoding of the residual block is performed in the order of transform, quantization, and entropy encoding. First, for a block encoded in the 16x16 intra mode, the residual block is transformed to output transform coefficients; only the DC coefficients are collected from the output transform coefficients, and a Hadamard transform is applied to them to output Hadamard-transformed DC coefficients.

For a block encoded in a coding mode other than the 16x16 intra mode, the transform process receives the input residual block, transforms it, and outputs transform coefficients.

In the quantization process, the input transform coefficients are quantized according to a quantization parameter, and quantized coefficients are output. In the entropy encoding process, the input quantized coefficients are entropy-encoded according to a probability distribution and output as a bitstream. Since H.264 performs inter-frame predictive coding, the currently encoded image must be decoded and stored so that it can be used as a reference image for subsequent input images.

Therefore, the quantized coefficients are dequantized and inverse-transformed, and a reconstructed block is generated through the prediction image and the adder. The blocking artifacts generated during encoding are then removed by the deblocking filter, and the reconstructed block is stored in the reference image buffer.
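As an illustrative aid (not part of the original disclosure), the residual-coding loop described above can be sketched as follows; the float DCT and uniform quantizer are stand-ins for H.264's integer transform and quantizer, and all names are assumptions of this sketch.

```python
import numpy as np
from scipy.fftpack import dct, idct  # float DCT as a stand-in transform


def encode_residual(block: np.ndarray, pred: np.ndarray, q_step: float):
    """Toy sketch of the loop above: residual -> transform -> quantization,
    then dequantization -> inverse transform -> adder, so the encoder keeps
    the same reconstruction the decoder will later produce."""
    residual = block.astype(np.float64) - pred
    coef = dct(dct(residual, axis=0, norm="ortho"), axis=1, norm="ortho")
    quant = np.round(coef / q_step)        # quantized coefficients (entropy-coded)
    dequant = quant * q_step               # inverse quantization
    rec_residual = idct(idct(dequant, axis=0, norm="ortho"),
                        axis=1, norm="ortho")
    recon = np.clip(pred + rec_residual, 0, 255)  # to deblocking / reference buffer
    return quant, recon
```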

FIG. 4 is a block diagram of an example of a configuration of a video decoding apparatus, and shows a decoding structure of H.264.

Referring to FIG. 4, the unit for processing data in the H.264 decoding structure is a macroblock of 16 x 16 pixels; the apparatus receives a bitstream, decodes it in intra mode or inter mode, and outputs a reconstructed image.

In intra mode the switch is set to intra, and in inter mode the switch is set to inter. The main flow of the decoding process is to generate a prediction block and then add to it the residual block obtained by decoding the bitstream, thereby generating a reconstructed block.

First, the prediction block is generated according to the intra mode or the inter mode. In intra mode, spatial prediction is performed in the intra prediction process using the already decoded neighboring pixel values of the current block to generate the prediction block. In inter mode, a motion vector is used to find the corresponding region in a reference image stored in the reference image buffer, and motion compensation is performed on it to generate the prediction block.

In the entropy decoding process, the input bitstream is entropy-decoded according to a probability distribution to output quantized coefficients. The quantized coefficients are dequantized and inverse-transformed, and a reconstructed block is generated through the prediction image and the adder. Blocking artifacts are removed by the deblocking filter, and the reconstructed block is stored in the reference image buffer.

As another example of a method of encoding an actual image and its depth information map, HEVC (High Efficiency Video Coding), jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG), can be used. HEVC can provide high-quality images at lower bandwidths, not only for HD and UHD content but also for 3D video and for delivery over mobile communication networks.

HEVC includes various new algorithms such as coding units and structures, inter prediction, intra prediction, interpolation, filtering, and transforms.

When predictive encoding is used in 3D video coding, the luminance of the current picture to be encoded differs wholly or partially from that of the previously coded predictive picture, because the position and state of the camera or of the lighting change from moment to moment. Brightness compensation methods have been proposed to compensate for this.

5 is a block diagram for explaining an example of a brightness compensation method.

Referring to FIG. 5, brightness compensation methods take as samples the pixels around the current block and the pixels around the prediction block in the reference image, calculate the brightness difference between the samples, and derive the brightness compensation weight and offset value from that difference.

These conventional brightness compensation methods operate in units of blocks, applying the same brightness weight and offset value to all pixel values in a block, as in Equation (1) below.

Pred[x, y] = α × Rec[x, y] + β … (1)

In Equation (1), Pred[x, y] denotes the brightness-compensated prediction block, and Rec[x, y] denotes the prediction block of the reference image. In the equation, α and β denote the weight and the offset, respectively.
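As a minimal sketch of Equation (1) (the function and array names are assumptions, not from the patent), block-level compensation applies one weight and one offset to every pixel:

```python
import numpy as np


def compensate_block(rec: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Equation (1): Pred[x, y] = alpha * Rec[x, y] + beta, applied with the
    same alpha and beta for every pixel of the block -- the conventional
    block-level scheme discussed below."""
    return alpha * rec.astype(np.float64) + beta
```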

Pixels within a block to be brightness-compensated are not flat; the block often consists of multiple distinct areas, such as a background and an object. Since the degree of brightness change varies with the position of the object, using the same compensation value for all pixels in the block, as in the conventional method, is not optimal.

Therefore, it is necessary to use a compensation value for each object by classifying the objects in the block.

According to an embodiment of the present invention, since the objects can be distinguished using the depth information map that serves as additional information in three-dimensional video coding, object-unit brightness compensation can be used effectively through the proposed method.

Accordingly, the present invention proposes object-based adaptive brightness compensation using a depth information map.

When brightness compensation is performed on the texture luminance in 3D video coding, the degree of luminance change caused by camera movement may differ depending on the position of each object. Efficiency can therefore be improved by performing brightness compensation on a per-object basis.

FIG. 6 is a diagram for explaining the relationship between texture luminance and a depth information map.

As shown in FIG. 6, the texture luminance and the depth information map almost coincide at object boundaries, and depth values belonging to different objects are clearly separated on the depth information map by a specific critical point. Brightness compensation can therefore be performed on an object-by-object basis using the depth information map.
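As a hedged illustration of this observation (the threshold value and function name are assumptions, not from the patent), a single critical point suffices to separate two objects on the depth information map:

```python
import numpy as np


def split_by_depth(depth_map: np.ndarray, critical_point: int) -> np.ndarray:
    """Label each pixel by which side of the critical depth value it falls
    on: 1 for the near object, 0 for the far object / background."""
    return (depth_map >= critical_point).astype(np.uint8)
```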

On the other hand, including the weight and offset values for brightness compensation in the bitstream increases the bit amount. To avoid this increase, the weight and offset values for brightness compensation are derived from the neighboring blocks of the current block and the neighboring blocks of the corresponding block in the reference image. That is, the conventional adaptive brightness compensation method uses the pixels around the current block and around the prediction block on the texture so that the compensation value need not be transmitted explicitly.

FIG. 7 shows an example of a method of constructing a sample for brightness compensation in inter-view prediction.

Referring to FIG. 7, since the pixels of the current block are not known at the decoding time, the neighboring pixel values of the current block and the prediction block are sampled, and the compensation value is derived based on the difference between the samples.

Here, the current sample means the pixels around the current block, and the predictive sample means the pixels around the prediction block.

Current sample = set of pixels around the current block

Predictive sample = set of pixels around the prediction block in the prediction picture (reference picture)

Compensation value = f(current sample, predictive sample), where f is an arbitrary function that calculates the compensation value from the two samples
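The patent leaves f arbitrary; as one plausible instance (the least-squares fit and all names are assumptions of this sketch), the weight and offset of Equation (1) can be derived from the two samples so that nothing needs to be signaled:

```python
import numpy as np


def derive_alpha_beta(cur_sample: np.ndarray, pred_sample: np.ndarray):
    """One possible f: fit cur ≈ alpha * pred + beta over the paired
    neighboring pixels, so the decoder can repeat the same derivation
    without any compensation value being transmitted."""
    x = pred_sample.astype(np.float64).ravel()
    y = cur_sample.astype(np.float64).ravel()
    alpha, beta = np.polyfit(x, y, 1)  # least-squares line through the pairs
    return alpha, beta
```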

The object-based adaptive brightness compensation method according to an embodiment of the present invention additionally uses a depth information map as a sample to derive a compensation value for each object.

In one embodiment of the present invention, the key to distinguishing objects is the assumption that the depth information values within one object will be the same.

FIG. 8 is a diagram for explaining an object-based adaptive brightness compensation method according to an embodiment of the present invention.

The definitions of the terms used in Fig. 8 are as follows.

Current sample = set of pixels around the current block

Predictive sample = set of pixels around the prediction block in the prediction picture (reference picture)

Current depth sample = set of depth values around the current depth information block

Predictive depth sample = set of depth values around the prediction depth information block in the prediction depth information map (reference depth information image)

Object-based compensation value = g(current sample, predictive sample, current depth sample, predictive depth sample), where g is an arbitrary function that calculates the compensation value from the texture and depth samples

According to an embodiment of the present invention, depth information as well as texture is used when classifying the objects. Here, the method of deriving the brightness compensation value of the texture using the depth information map as additional information can be applied in various ways.

Pixel-based brightness compensation using depth information

According to an embodiment of the present invention, the depth information values of the neighboring blocks of the depth information map block corresponding to the texture block are formed into a sample, and an independent compensation value can be derived for each pixel of the current texture block, or for each set of pixels whose depth values fall within a predetermined interval.

FIG. 9 shows an embodiment of a method of constructing a sample for brightness compensation using depth information values.

Referring to FIG. 9, X, A, and B denote the current block, the left block of the current block, and the upper block of the current block, respectively.

Since the pixel information of the current block cannot be known at decoding time, pixels located around the current block X and around the prediction block XR are used as the sample for the texture. For example, all or some of the pixels in the neighboring blocks A and B of X, and AR and BR of XR, can be used as the sample for the texture.

In addition, pixels located around the current depth information block DX and around the prediction depth information block DXR are used as the sample of depth information. For example, all or some of the pixels in the neighboring blocks DA and DB of DX, and DAR and DBR of DXR, can be used as the sample of depth information.

First, Ek, the brightness compensation value of the texture pixels for each depth information value, is obtained from the depth information sample. Here, k denotes any value, or any range, within the entire range of depth information values. For example, when the entire range of depth information values is the closed interval [0, 255], k may be a single value such as 0, 1, 2, 3, ... or a range such as [0, 15], [16, 31], [32, 47], and so on.
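A small sketch of this choice of k (the interval width and function name are illustrative assumptions): with an interval of 1, every depth value gets its own compensation value, while an interval of 16 reproduces the ranges [0, 15], [16, 31], ... from the example above.

```python
def depth_to_k(depth: int, interval: int = 16) -> int:
    """Map an 8-bit depth value (0..255) to its compensation index k."""
    return depth // interval
```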

The arbitrary range is described below with reference to FIG. 10.

FIG. 10 is a view for explaining a brightness compensation method according to the first embodiment of the present invention.

Referring to FIG. 10, in order to obtain Ek as in Equation (2), the depth information value k corresponding to each pixel in the sample ST for the current picture and in the sample ST' for the predictive picture shown in FIG. 10 can be used.

Ek = avg(STk) − avg(ST'k) … (2)

In this case, STk and ST'k denote the sets of pixels having depth information value k in ST and ST', respectively, and avg(·) denotes the average over a set.

T[x, y] = T[x, y] + Ek, where k = D[x, y] … (3)

Then, the brightness compensation is performed by applying Equation (3) to each pixel of the current texture block X whose depth information value is k.

10A is a flowchart illustrating a brightness compensation method according to the first embodiment of the present invention.

The pixel-based brightness compensation method is processed in the following order.

(1) Let N be the number of samples. Define the current sample and the predictive sample as ST[i] and ST'[i], i = 0..N-1, respectively. Define the current depth sample and the predictive depth sample as SD[i] and SD'[i], i = 0..N-1, respectively.

(2) Define the current block as T[x, y] and the current depth information block as D[x', y'], where x = 0..X, y = 0..Y, x' = 0..X', y' = 0..Y'.

In this case, the values X, Y, X', and Y' for determining the size of the block may be arbitrary values.

(3) For the current sample and the predictive sample, define arrays STk and ST'k (k = 0..K) with initial value 0, which accumulate the values of the pixels having depth information value k and later hold their average. Also define arrays Nk and N'k (k = 0..K) with initial value 0, which count the pixels having depth information value k in the current sample and the predictive sample, respectively.

At this time, the value K for determining the range of the depth information value may be an arbitrary value.

(4) Define Ek as an array that stores, for each depth information value k, the difference between the averages of the current sample and the predictive sample.

(5) Repeat steps (6) to (7) for s = 0..N-1.

(6) k = SD[s], Nk = Nk + 1, STk = STk + ST[s]

(7) k = SD'[s], N'k = N'k + 1, ST'k = ST'k + ST'[s]

(8) Repeat step (9) for k = 0..K.

(9) STk = STk / Nk, ST'k = ST'k / N'k, Ek = STk - ST'k

(10) Repeat step (11) for x = 0..X, y = 0..Y.

(11) k = D[x, y], T[x, y] = T[x, y] + Ek
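The steps above can be consolidated into a short sketch (a minimal reading of steps (1)-(11); the function name, and the guard for depth values that never occur in the samples, which the steps do not spell out, are assumptions of this sketch):

```python
import numpy as np


def pixel_based_compensation(T, D, st, st_p, sd, sd_p, K=255):
    """Steps (1)-(11): derive Ek per depth value k from the texture and
    depth samples, then add Ek to each pixel of the current block T whose
    co-located depth value D[x, y] equals k.

    st, st_p - current / predictive texture samples ST[i], ST'[i]
    sd, sd_p - current / predictive depth samples   SD[i], SD'[i]
    K        - maximum depth information value (255 for 8-bit maps)
    """
    sums = np.zeros(K + 1); counts = np.zeros(K + 1)      # STk, Nk
    sums_p = np.zeros(K + 1); counts_p = np.zeros(K + 1)  # ST'k, N'k

    for s in range(len(st)):                    # steps (5)-(7)
        counts[sd[s]] += 1;   sums[sd[s]] += st[s]
        counts_p[sd_p[s]] += 1; sums_p[sd_p[s]] += st_p[s]

    # Steps (8)-(9): Ek = avg(STk) - avg(ST'k). Depth values that never
    # occur in the samples get Ek = 0 (an assumption of this sketch).
    with np.errstate(invalid="ignore", divide="ignore"):
        e_k = np.nan_to_num(sums / counts - sums_p / counts_p)

    out = T.astype(np.float64).copy()
    for (x, y), k in np.ndenumerate(D):         # steps (10)-(11)
        out[x, y] += e_k[k]
    return out
```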

All of the above methods can be applied differently depending on the block size or the CU depth. The variable (i.e., size or depth information) that determines the application range may be set so that the encoder and the decoder use a predetermined value, use a value determined according to the profile or level, or, if the encoder writes the value into the bitstream, the decoder reads it from the bitstream. When the application range varies according to the CU depth, as shown in Table 1 below, there may be A) a method applied only at the given depth and deeper, B) a method applied only at the given depth and shallower, and C) a method applied only at the given depth.

Table 1 shows an example of determining the application range when the given CU depth is 2 (O: applied at that depth, X: not applied at that depth).

CU depth indicating coverage | Method A | Method B | Method C
0 | X | O | X
1 | X | O | X
2 | O | O | O
3 | O | X | X
4 | O | X | X

When the methods of the present invention are not to be applied at any depth, this may be indicated with a separate flag, or it may be signaled by using a CU depth value one greater than the maximum CU depth as the value indicating the application range.
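A brief sketch of the three application-range rules of Table 1 (the function and parameter names are assumptions of this sketch):

```python
def applies_at_depth(cu_depth: int, given_depth: int, method: str) -> bool:
    """Method A: given depth and deeper; B: given depth and shallower;
    C: exactly the given depth (matches Table 1 with given_depth = 2)."""
    if method == "A":
        return cu_depth >= given_depth
    if method == "B":
        return cu_depth <= given_depth
    return cu_depth == given_depth      # method C


# e.g. applies_at_depth(3, 2, "A") -> True, applies_at_depth(3, 2, "B") -> False
```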

In addition, the above-described methods can be applied to the chrominance blocks differently depending on the size of the luminance block, and can be applied differently to the luminance image and the chrominance image.

Luminance block size | Chrominance block size | Apply to luminance | Apply to chrominance | Method
4 (4x4, 4x2, 2x4) | 2 (2x2) | O or X | O or X | 가 1, 2, …
4 (4x4, 4x2, 2x4) | 4 (4x4, 4x2, 2x4) | O or X | O or X | 나 1, 2, …
4 (4x4, 4x2, 2x4) | 8 (8x8, 8x4, 4x8, 2x8, etc.) | O or X | O or X | 다 1, 2, …
4 (4x4, 4x2, 2x4) | 16 (16x16, 16x8, 4x16, 2x16, etc.) | O or X | O or X | 라 1, 2, …
4 (4x4, 4x2, 2x4) | 32 (32x32) | O or X | O or X | 마 1, 2, …
8 (8x8, 8x4, 2x8, etc.) | 2 (2x2) | O or X | O or X | 바 1, 2, …
8 (8x8, 8x4, 2x8, etc.) | 4 (4x4, 4x2, 2x4) | O or X | O or X | 사 1, 2, …
8 (8x8, 8x4, 2x8, etc.) | 8 (8x8, 8x4, 4x8, 2x8, etc.) | O or X | O or X | 아 1, 2, …
8 (8x8, 8x4, 2x8, etc.) | 16 (16x16, 16x8, 4x16, 2x16, etc.) | O or X | O or X | 자 1, 2, …
8 (8x8, 8x4, 2x8, etc.) | 32 (32x32) | O or X | O or X | 차 1, 2, …
16 (16x16, 8x16, 4x16, etc.) | 2 (2x2) | O or X | O or X | 카 1, 2, …
16 (16x16, 8x16, 4x16, etc.) | 4 (4x4, 4x2, 2x4) | O or X | O or X | 파 1, 2, …
16 (16x16, 8x16, 4x16, etc.) | 8 (8x8, 8x4, 4x8, 2x8, etc.) | O or X | O or X | 타 1, 2, …
16 (16x16, 8x16, 4x16, etc.) | 16 (16x16, 16x8, 4x16, 2x16, etc.) | O or X | O or X | 개 1, 2, …
16 (16x16, 8x16, 4x16, etc.) | 32 (32x32) | O or X | O or X | 내 1, 2, …

Table 2 shows an example of a combination of methods.

Among the methods in Table 2, method "사 1" is the case where the size of the luminance block is 8 (8x8, 8x4, 2x8, etc.) and the size of the chrominance block is 4 (4x4, 4x2, 2x4); in this case the method of the present invention may be applied to both the luminance signal and the chrominance signal.

Among the methods in Table 2, method "파 2" is the case where the size of the luminance block is 16 (16x16, 8x16, 4x16, etc.) and the size of the chrominance block is 4 (4x4, 4x2, 2x4); in this case the method of the specification may be applied to the luminance signal and not to the chrominance signal.

In other variants, the method of the specification may be applied only to the luminance signal and not to the chrominance signal; conversely, it may be applied only to the chrominance signal and not to the luminance signal.
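As a hedged sketch of consulting such a per-size configuration (only the two variants spelled out above are encoded; the table and function names are assumptions, and the remaining combinations of Table 2 are left open):

```python
# (luminance size, chrominance size) -> (apply to luma, apply to chroma)
APPLY_TABLE = {
    (8, 4): (True, True),    # method "사 1": apply to both signals
    (16, 4): (True, False),  # method "파 2": luminance only
}


def should_apply(luma_size: int, chroma_size: int):
    """Look up whether brightness compensation runs on each signal;
    unlisted combinations default to no compensation in this sketch."""
    return APPLY_TABLE.get((luma_size, chroma_size), (False, False))
```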

Although the method and apparatus according to the embodiments of the present invention have been described with reference to the encoding method and encoding apparatus, the present invention is also applicable to a decoding method and apparatus. In this case, the decoding method according to an embodiment of the present invention can be carried out by performing the encoding method in reverse order.

The method according to the present invention may be implemented as a program to be executed on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a floppy disk, and an optical data storage device; the medium may also be implemented in the form of a carrier wave (for example, transmission over the Internet).

The computer-readable recording medium may be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the above method can easily be inferred by programmers in the technical field to which the present invention belongs.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments, and that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.

Claims (10)

Receiving a bitstream including an encoded image;
Performing predictive decoding on the bitstream according to an Intra mode or an Inter mode; And
Compensating the brightness of the current picture to be decoded according to the previously decoded predicted picture brightness,
Wherein the step of compensating for the brightness comprises adaptively compensating for each object unit based on pixel information included in the bitstream.
The method according to claim 1,
Wherein the step of compensating comprises constructing a depth information value corresponding to the texture block as a sample.
The method according to claim 1,
Wherein the step of compensating comprises determining a range of depth information values.
The method according to claim 1,
Wherein the step of compensating includes storing the difference between the average value of the current sample and the average value of the predictive sample with respect to the depth information as an array.
The method according to claim 1,
Wherein the step of compensating comprises constructing all or some of the surrounding pixels corresponding to the current block as a sample of depth information.
A receiver for receiving a bitstream including an encoded image;
A decoding unit which performs predictive decoding on the bitstream according to an intra mode or an inter mode; And
And a compensator for compensating the brightness of the current picture to be decoded according to the previously decoded predictive picture brightness,
Wherein the compensation unit uses depth information to adaptively compensate for each object unit based on pixel information included in the bitstream.
The method according to claim 6,
Wherein the compensation unit constructs the depth information values corresponding to the texture block as a sample.
The method according to claim 6,
Wherein the compensator uses depth information to determine a range of depth information values.
The method according to claim 6,
Wherein the compensation unit stores, as an array, the difference between the average value of the current sample and the average value of the predictive sample with respect to the depth information.
The method according to claim 6,
Wherein the compensation unit constructs all or some of the surrounding pixels corresponding to the current block as a sample of depth information.
KR20130040909A 2013-04-15 2013-04-15 A method for adaptive illuminance compensation based on object and an apparatus using it KR20140124045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130040909A KR20140124045A (en) 2013-04-15 2013-04-15 A method for adaptive illuminance compensation based on object and an apparatus using it

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20130040909A KR20140124045A (en) 2013-04-15 2013-04-15 A method for adaptive illuminance compensation based on object and an apparatus using it

Publications (1)

Publication Number Publication Date
KR20140124045A true KR20140124045A (en) 2014-10-24

Family

ID=51994412

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130040909A KR20140124045A (en) 2013-04-15 2013-04-15 A method for adaptive illuminance compensation based on object and an apparatus using it

Country Status (1)

Country Link
KR (1) KR20140124045A (en)

Similar Documents

Publication Publication Date Title
JP6026534B2 (en) Coding a motion depth map with varying depth range
JP6178017B2 (en) Improved depth recognition for stereo video
KR101773693B1 (en) Disparity vector derivation in 3d video coding for skip and direct modes
KR102254599B1 (en) Method predicting view synthesis in multi-view video coding and method for constituting merge candidate list by using same
JP2021168479A (en) Efficient multi-view coding using depth-map estimation and update
JP5575908B2 (en) Depth map generation technique for converting 2D video data to 3D video data
JP6446488B2 (en) Video data decoding method and video data decoding apparatus
US20130271565A1 (en) View synthesis based on asymmetric texture and depth resolutions
WO2013067435A1 (en) Differential pulse code modulation intra prediction for high efficiency video coding
JP2022179505A (en) Video decoding method and video decoder
US20170070751A1 (en) Image encoding apparatus and method, image decoding apparatus and method, and programs therefor
KR102105323B1 (en) A method for adaptive illuminance compensation based on object and an apparatus using it
CN113196783B (en) Deblocking filtering adaptive encoder, decoder and corresponding methods
KR20160072101A (en) Method and apparatus for decoding multi-view video
RU2571511C2 (en) Encoding of motion depth maps with depth range variation
JP2024028598A (en) Content adaptive segmentation prediction
KR20140124434A (en) A method of encoding and decoding depth information map and an apparatus using it
KR20220065880A (en) Use of DCT-based interpolation filters and enhanced bilinear interpolation filters in affine motion compensation
RU2801326C2 (en) Coder, decoder and corresponding methods using allocated ibc buffer and default value updated brightness and colour component
KR20140124045A (en) A method for adaptive illuminance compensation based on object and an apparatus using it
RU2809192C2 (en) Encoder, decoder and related methods of interframe prediction
KR101672008B1 (en) Method And Apparatus For Estimating Disparity Vector
KR20140124040A (en) A method for encoding/decoding and an apparatus using it
KR102234851B1 (en) Method and apparatus for decoding of intra prediction mode
CN116647683A (en) Quantization processing method and device

Legal Events

Date Code Title Description
N231 Notification of change of applicant
WITN Withdrawal due to no request for examination