KR20150136018A - A method and an apparatus for processing a multi-view video signal - Google Patents


Info

Publication number
KR20150136018A
Authority
KR
South Korea
Prior art keywords
depth
value
offset
mode
block
Prior art date
Application number
KR1020150071664A
Other languages
Korean (ko)
Inventor
이배근
김주영
Original Assignee
주식회사 케이티
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 케이티 (KT Corporation)
Publication of KR20150136018A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H04N13/0048
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

According to the present invention, a method for processing a multi-view video signal determines an SDC mode of a depth block from among a plurality of candidate modes based on mode identifier information for the depth block, derives a representative difference depth value of the depth block based on the determined SDC mode, and reconstructs the depth block using the derived representative difference depth value.

Description

TECHNICAL FIELD [0001] The present invention relates to a multi-view video signal processing method and apparatus.

The present invention relates to a method and apparatus for coding a video signal.

Recently, the demand for high-resolution and high-quality images, such as high definition (HD) and ultra high definition (UHD) images, has been increasing in various applications. As image data becomes high-resolution and high-quality, the amount of data increases relative to existing image data. Therefore, when such image data is transmitted over a medium such as a wired/wireless broadband line or stored on an existing storage medium, the transmission and storage costs increase. High-efficiency image compression techniques can be used to solve these problems as image data becomes high-resolution and high-quality.

Image compression techniques include inter-picture prediction, which predicts pixel values in the current picture from a previous or subsequent picture; intra-picture prediction, which predicts pixel values in the current picture using pixel information within the current picture; and entropy coding, which assigns short codes to values with a high frequency of occurrence and long codes to values with a low frequency of occurrence. Image data can be effectively compressed, transmitted, and stored using such image compression techniques.

On the other hand, demand for high-resolution images is increasing, and demand for stereoscopic image content as a new image service is also increasing. Video compression techniques are being discussed to effectively provide high resolution and ultra-high resolution stereoscopic content.

An object of the present invention is to provide a method and apparatus for performing inter-view prediction using a disparity vector in encoding/decoding a multi-view video signal.

Another object of the present invention is to provide a method and apparatus for deriving a disparity vector of a texture block using depth data of a depth block in encoding/decoding a multi-view video signal.

The present invention also aims to provide a method and apparatus for deriving a disparity vector from a neighboring block of a current texture block in encoding/decoding a multi-view video signal.

An object of the present invention is to provide a method and apparatus for coding a depth image according to a segment-based depth coding scheme in encoding / decoding a multi-view video signal.

SUMMARY OF THE INVENTION An object of the present invention is to provide a method and apparatus for encoding/decoding a multi-view video signal that obtains the absolute value of an offset through entropy decoding based on context-based adaptive binary arithmetic coding (CABAC).

A method and apparatus for decoding a multi-view video signal according to the present invention obtains mode identifier information for a current depth block from a bitstream, determines an SDC mode of the current depth block from among a plurality of candidate modes based on the obtained mode identifier information, derives a representative difference depth value of the current depth block based on the determined SDC mode, and reconstructs the current depth block using the derived representative difference depth value.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the mode identifier information specifies one of a plurality of candidate modes used to derive the representative difference depth value of the current depth block.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the plurality of candidate modes include at least one of a first segment-wise DC candidate mode, a second segment-wise DC candidate mode, or a third segment-wise DC candidate mode.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the first segment-wise DC candidate mode is a mode for coding the representative difference depth value as an offset absolute value and offset sign information, the second segment-wise DC candidate mode is a mode for encoding an index mapped to the representative difference depth value using a depth lookup table, and the third segment-wise DC candidate mode is a mode for encoding the representative difference depth value using a depth quantization parameter.

In the method and apparatus for decoding a multi-view video signal according to the present invention, when the SDC mode of the current depth block is determined to be the first segment-wise DC candidate mode, deriving the representative difference depth value comprises: obtaining the offset absolute value and the offset sign information from the bitstream, deriving an offset value using the obtained offset absolute value and offset sign information, and setting the offset value as the representative difference depth value.

In the method and apparatus for decoding a multi-view video signal according to the present invention, when the SDC mode of the current depth block is determined to be the second segment-wise DC candidate mode, deriving the representative difference depth value comprises: obtaining the offset absolute value and the offset sign information from the bitstream, deriving an offset value using the obtained offset absolute value and offset sign information, and deriving the representative difference depth value using the derived offset value and a depth lookup table.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the step of deriving the representative difference depth value using the depth lookup table may include: mapping a prediction depth value of the current depth block to a first index using the depth lookup table, obtaining a second index by adding the offset value to the first index, mapping the second index to a reconstruction depth value using the depth lookup table, and setting a value obtained by subtracting the prediction depth value from the reconstruction depth value as the representative difference depth value.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the first index is an index assigned to a depth value defined in the depth lookup table that is equal to the prediction depth value, or to the depth value that minimizes the difference from the prediction depth value.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the prediction depth value is derived as an average value of samples located at corners of the current depth block.

In the method and apparatus for decoding a multi-view video signal according to the present invention, the samples located at the corners include at least two of an upper-left corner sample, an upper-right corner sample, a lower-left corner sample, or a lower-right corner sample of the current depth block.
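The depth-lookup-table (DLT) mapping described in the claims above can be sketched as follows. This is a minimal illustration, not the standard's normative process; the function name and the clipping of the second index are assumptions for the sketch.

```python
def derive_representative_diff(pred_depth, offset, dlt):
    """Sketch of the DLT-based derivation: map the prediction depth value
    to a first index, add the decoded offset to get a second index, map
    that back to a reconstruction depth value, and return recon - pred.

    `dlt` is a sorted list of depth values actually occurring in the
    sequence; indices into it stand in for full-range depth values.
    """
    # 1. First index: entry equal to (or closest to) the prediction depth.
    first_idx = min(range(len(dlt)), key=lambda i: abs(dlt[i] - pred_depth))
    # 2. Second index: first index plus the offset (clipped to the table).
    second_idx = max(0, min(len(dlt) - 1, first_idx + offset))
    # 3. Map the second index back to a reconstruction depth value.
    recon_depth = dlt[second_idx]
    # 4. Representative difference depth value.
    return recon_depth - pred_depth
```

For example, with a table [10, 20, 35, 50], a prediction depth value of 22 maps to index 1 (depth 20); an offset of 1 selects index 2 (depth 35), giving a representative difference of 13.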

According to the present invention, it is possible to efficiently perform inter-view prediction using a disparity vector.

According to the present invention, the disparity vector of the current texture block can be effectively derived from the depth data of the current depth block or from the disparity vector of a neighboring texture block.

According to the present invention, the difference depth value of the depth image can be efficiently coded according to the segment-based depth coding technique.

According to the present invention, the absolute value of the offset can be effectively decoded through entropy decoding based on context-based adaptive binary arithmetic coding.

FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
FIG. 2 illustrates a method of performing an inter-view prediction based on a disparity vector according to an embodiment to which the present invention is applied.
FIG. 3 illustrates a method of deriving a disparity vector of a current texture block using depth data of a depth image according to an embodiment of the present invention.
FIG. 4 illustrates candidate spatial/temporal neighboring blocks of a current texture block according to an embodiment of the present invention.
FIG. 5 illustrates a method of reconstructing a depth image based on a segment-based depth coding technique according to an embodiment of the present invention.
FIG. 6 illustrates a method of deriving a representative difference depth value according to a first SDC candidate mode according to an embodiment of the present invention.
FIG. 7 illustrates a method of deriving a representative difference depth value according to a second SDC candidate mode according to an embodiment of the present invention.
FIG. 8 illustrates a method of deriving a representative difference depth value according to a third SDC candidate mode according to an embodiment of the present invention.
FIG. 9 illustrates a method of acquiring a depth quantization parameter based on default quantization identification information (isDefault_depth_resQp) according to an embodiment of the present invention.
FIG. 10 illustrates a method of adaptively obtaining a depth quantization parameter based on a depth quantization flag (depth_resQp_flag) according to an embodiment of the present invention.
FIG. 11 illustrates a method of acquiring depth quantization parameters based on a depth quantization flag (depth_resQp_flag) and default quantization identification information (isDefault_depth_resQp) according to an embodiment of the present invention.
FIG. 12 illustrates a method of obtaining an absolute value of an offset through entropy decoding based on context-based adaptive binary arithmetic coding according to an embodiment of the present invention.
FIGS. 13 to 15 illustrate a method of binarizing the absolute value of an offset according to the maximum number of bins, cMax, according to an embodiment of the present invention.

The technique of compression-encoding or decoding multi-view video signal data takes into account spatial redundancy, temporal redundancy, and redundancy existing between viewpoints. In the case of a multi-view image, multi-view texture images captured at two or more viewpoints can be coded to realize a three-dimensional image. Further, depth data corresponding to the multi-view texture images may additionally be encoded as needed. In coding the depth data, compression coding can naturally be performed in consideration of spatial redundancy, temporal redundancy, or inter-view redundancy. Depth data expresses distance information between the camera and the corresponding pixel. In the present specification, depth data can be flexibly interpreted as depth-related information such as a depth value, depth information, a depth image, a depth picture, a depth sequence, or a depth bitstream. Likewise, coding may include both the concepts of encoding and decoding and may be flexibly interpreted according to the technical idea and technical scope of the present invention.

FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.

Referring to FIG. 1, the video decoder includes a NAL parsing unit 100, an entropy decoding unit 200, an inverse quantization/inverse transform unit 300, an intra prediction unit 400, an in-loop filter unit 500, a decoded picture buffer unit 600, and an inter prediction unit 700.

The NAL parsing unit 100 may receive a bitstream including encoded texture data. In addition, when depth data is required for coding the texture data, a bitstream including encoded depth data may be further received. The texture data and depth data may be transmitted in one bitstream or in separate bitstreams. The NAL parsing unit 100 may perform parsing in NAL units to decode the input bitstream. If the input bitstream is multi-view related data (e.g., 3-dimensional video), it may further include camera parameters. Camera parameters may be divided into intrinsic camera parameters and extrinsic camera parameters: the intrinsic camera parameters may include focal length, aspect ratio, principal point, and the like, and the extrinsic camera parameters may include position information of the camera in the world coordinate system, and the like.

The entropy decoding unit 200 may extract, through entropy decoding, quantized transform coefficients, coding information for predicting a texture picture, and the like.

The inverse quantization/inverse transform unit 300 may obtain transform coefficients by applying a quantization parameter to the quantized transform coefficients, and may inverse-transform the transform coefficients to decode the texture data or depth data. Here, the decoded texture data or depth data may mean residual data resulting from the prediction process. In addition, the quantization parameter for a depth block can be set in consideration of the complexity of the texture data. For example, when the texture block corresponding to the depth block is a region of high complexity, a low quantization parameter may be set, and in the case of a low-complexity region, a high quantization parameter may be set. The complexity of a texture block can be determined based on the difference values between adjacent pixels in the reconstructed texture picture, as shown in Equation (1).

E = (1/N) · Σ ( |C(x, y) − C(x−1, y)| + |C(x, y) − C(x+1, y)| )   (1)

In Equation (1), E represents the complexity of the texture data, C represents the reconstructed texture data, and N represents the number of pixels in the texture data region for which the complexity is to be calculated. Referring to Equation (1), the complexity of the texture data can be calculated using the difference between the texture data at position (x, y) and the texture data at position (x−1, y), and the difference between the texture data at position (x, y) and the texture data at position (x+1, y). In addition, the complexity can be calculated for the texture picture and the texture block, respectively, and the quantization parameter can be derived using Equation (2) below.

QP_depth = α · (E_picture / E_block) + β   (2)

Referring to Equation (2), the quantization parameter for the depth block can be determined based on the ratio of the complexity of the texture picture to the complexity of the texture block. α and β may be variable integers derived by the decoder, or integers predetermined in the decoder.
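The complexity measure of Equation (1) and the QP derivation of Equation (2) can be sketched together. This is an illustrative reading of the equations only; the exact form of Equation (2) and the example values of α and β are assumptions, since the patent leaves them to the decoder.

```python
def texture_complexity(C, region):
    """Equation (1): average absolute difference between each reconstructed
    texture sample and its horizontal neighbours. `C` is a 2-D list of
    samples; `region` is an iterable of interior (x, y) positions."""
    total, n = 0, 0
    for (x, y) in region:
        total += abs(C[y][x] - C[y][x - 1]) + abs(C[y][x] - C[y][x + 1])
        n += 1
    return total / n

def depth_block_qp(e_picture, e_block, alpha=1, beta=0):
    """Equation (2) sketch: depth-block QP from the ratio of picture
    complexity to block complexity. A complex block yields a small ratio
    and hence a low QP, matching the text above."""
    return alpha * (e_picture / e_block) + beta
```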

The intra prediction unit 400 may perform intra prediction using the reconstructed texture data in the current texture picture. Intra prediction can also be performed on the depth picture in the same manner as for the texture picture. For example, the coding information used for intra-picture prediction of the texture picture can be used equally for the depth picture. Here, the coding information used for intra-picture prediction may include the intra prediction mode and partition information of the intra prediction.

The in-loop filter unit 500 may apply an in-loop filter to each coded block to reduce block distortion. The filter can smooth the edges of the block to improve the picture quality of the decoded picture. The filtered texture or depth pictures may be output or stored in the decoded picture buffer unit 600 for use as a reference picture. On the other hand, when the texture data and the depth data are coded using the same in-loop filter, the coding efficiency may deteriorate because the characteristics of the texture data and the characteristics of the depth data are different from each other. Thus, a separate in-loop filter for depth data may be defined. Hereinafter, an area-based adaptive loop filter and a trilateral loop filter will be described as an in-loop filtering method capable of efficiently coding depth data.

In the case of the area-based adaptive loop filter, whether to apply it may be determined based on the variation of the depth block. Here, the variation of the depth block can be defined as the difference between the maximum pixel value and the minimum pixel value in the depth block. Whether to apply the filter can be decided by comparing the variation of the depth block with a predetermined threshold value. For example, if the variation of the depth block is greater than or equal to the predetermined threshold value, the difference between the maximum and minimum pixel values in the depth block is large, so it can be determined to apply the area-based adaptive loop filter. Conversely, when the depth variation is smaller than the predetermined threshold value, it can be determined not to apply the filter. When the filter is applied according to the comparison result, the pixel value of the filtered depth block may be derived by applying predetermined weights to neighboring pixel values. Here, the predetermined weights may be determined based on the positional difference between the currently filtered pixel and a neighboring pixel and/or the difference between the currently filtered pixel value and the neighboring pixel value. The neighboring pixel values may mean any of the pixel values included in the depth block other than the pixel value currently being filtered.
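The decision rule above can be sketched as follows. The threshold test (max minus min against a threshold) follows the text; the particular weight kernel used when the filter is applied is an assumption for illustration, since the patent only says the weights depend on positional and value differences.

```python
def apply_region_adaptive_filter(block, threshold):
    """Area-based adaptive loop filter sketch: filter only when the
    variation (max pixel - min pixel) of the depth block reaches the
    threshold; otherwise return the block unchanged."""
    flat = [v for row in block for v in row]
    if max(flat) - min(flat) < threshold:
        return block  # variation too small: leave the block unfiltered
    h, w = len(block), len(block[0])
    out = [row[:] for row in block]
    for y in range(h):
        for x in range(w):
            # Weighted average of the 4-neighbourhood (weights assumed:
            # 2 for the centre pixel, 1 for each in-bounds neighbour).
            acc, wsum = 2 * block[y][x], 2
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    acc += block[ny][nx]
                    wsum += 1
            out[y][x] = acc / wsum
    return out
```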

The trilateral loop filter according to the present invention is similar to the area-based adaptive loop filter, but differs in that it additionally considers texture data. Specifically, the trilateral loop filter extracts the depth data of neighboring pixels satisfying the following three conditions.

Condition 1: |p − q| ≤ σ1

Condition 2: |D(p) − D(q)| ≤ σ2

Condition 3: |T(p) − T(q)| ≤ σ3

Condition 1 compares the positional difference between the current pixel p and a neighboring pixel q in the depth block with a predetermined parameter σ1, Condition 2 compares the difference between the depth data D(p) of the current pixel and the depth data D(q) of the neighboring pixel with a predetermined parameter σ2, and Condition 3 compares the difference between the texture data T(p) of the current pixel and the texture data T(q) of the neighboring pixel with a predetermined parameter σ3.

Neighboring pixels satisfying all three conditions can be extracted, and the current pixel p can be filtered with the median or average value of their depth data.
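A minimal sketch of the trilateral filtering of one pixel, assuming a Manhattan positional distance and a median over the qualifying neighbours (the patent allows median or average; the parameter names sigma1..sigma3 are placeholders for its unnamed parameters):

```python
def trilateral_filter_pixel(p, depth, texture, sigma1, sigma2, sigma3):
    """Gather neighbouring pixels q meeting Conditions 1-3 and replace
    the depth value at p with the median of their depth data."""
    py, px = p
    h, w = len(depth), len(depth[0])
    candidates = []
    for qy in range(h):
        for qx in range(w):
            if (qy, qx) == p:
                continue
            cond1 = abs(py - qy) + abs(px - qx) <= sigma1             # position
            cond2 = abs(depth[py][px] - depth[qy][qx]) <= sigma2      # depth
            cond3 = abs(texture[py][px] - texture[qy][qx]) <= sigma3  # texture
            if cond1 and cond2 and cond3:
                candidates.append(depth[qy][qx])
    if not candidates:
        return depth[py][px]  # no qualifying neighbour: leave unchanged
    candidates.sort()
    return candidates[len(candidates) // 2]  # median (mean also allowed)
```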

The decoded picture buffer unit 600 stores or releases previously coded texture pictures or depth pictures in order to perform inter-picture prediction. At this time, the frame_num and picture order count (POC) of each picture can be used for storing pictures in, or releasing them from, the decoded picture buffer unit 600. Further, since among the previously coded pictures there are depth pictures located at viewpoints different from that of the current depth picture, view identification information identifying the viewpoint of a depth picture may be used in order to utilize such pictures as reference pictures. The decoded picture buffer unit 600 can manage reference pictures using a memory management control operation method and a sliding window method in order to perform inter-picture prediction more flexibly. This allows the memory for reference pictures and non-reference pictures to be managed uniformly as one memory and handled efficiently with a small memory. In depth coding, depth pictures may be marked separately to distinguish them from texture pictures in the decoded picture buffer unit, and information identifying each depth picture may be used in the marking process.

The inter prediction unit 700 may perform motion compensation of the current block using the reference pictures and motion information stored in the decoded picture buffer unit 600. In the present specification, motion information can be understood as a broad concept including motion vectors and reference index information. The inter prediction unit 700 may perform temporal inter prediction for motion compensation. Temporal inter prediction refers to inter prediction using motion information of the current texture block and a reference picture located at the same viewpoint as, but in a different time zone from, the current texture block. In the case of a multi-view image captured by a plurality of cameras, inter-view prediction as well as temporal inter prediction may be performed. The motion information used for inter-view prediction may include a disparity vector or an inter-view motion vector. A method of performing inter-view prediction using the disparity vector will be described with reference to FIG. 2.

FIG. 2 illustrates a method of performing an inter-view prediction based on a disparity vector according to an embodiment to which the present invention is applied.

Referring to FIG. 2, a disparity vector of a current texture block may be derived (S200).

For example, the disparity vector may be derived from the depth image corresponding to the current texture block, which will be described in detail with reference to FIG. 3.

The disparity vector may also be derived from a neighboring block that is spatially adjacent to the current texture block, or from a temporal neighboring block located in a different time zone from the current texture block. A method of deriving the disparity vector from the spatial/temporal neighboring blocks of the current texture block will be described with reference to FIG. 4.

Referring to FIG. 2, inter-view prediction of the current texture block may be performed using the disparity vector derived in operation S200 (S210).

For example, the texture data of the current texture block can be predicted or reconstructed using the texture data of the reference block specified by the disparity vector. Here, the reference block may belong to the view used for inter-view prediction of the current texture block, that is, the reference view. The reference block may belong to a reference picture located in the same time zone as the current texture block.

In addition, a reference block belonging to the reference view may be specified using the disparity vector, and the temporal motion vector of the current texture block may be derived using the temporal motion vector of the specified reference block. Here, the temporal motion vector means a motion vector used for temporal inter prediction and can be distinguished from the disparity vector used for inter-view prediction.

FIG. 3 illustrates a method of deriving a disparity vector of a current texture block using depth data of a depth image according to an embodiment of the present invention.

Referring to FIG. 3, position information of a depth block (hereinafter referred to as a current depth block) in a depth picture corresponding to a current texture block may be obtained based on position information of a current texture block (S300).

The position of the current depth block can be determined in consideration of the difference in spatial resolution between the depth picture and the current picture.

For example, if the depth picture and the current picture are coded with the same spatial resolution, the current depth block may be determined to be the block at the same position as the current texture block of the current picture. On the other hand, the current picture and the depth picture may be coded with different spatial resolutions, because depth information, which represents the distance between the camera and the object, has the characteristic that coding efficiency does not decrease significantly even when the spatial resolution is lowered. Thus, if the depth picture is coded at a lower spatial resolution than the current picture, the decoder may perform an upsampling process on the depth picture before acquiring the position information of the current depth block. In addition, when the aspect ratio of the upsampled depth picture does not exactly match that of the current picture, offset information may additionally be considered when acquiring the position information of the current depth block in the upsampled depth picture. Here, the offset information may include at least one of top offset information, left offset information, right offset information, and bottom offset information. The top offset information may indicate the positional difference between at least one pixel located at the top of the upsampled depth picture and at least one pixel located at the top of the current picture. The left, right, and bottom offset information may be defined in the same manner.
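The position mapping above can be sketched as a simple scale-then-offset computation. This is an assumption-laden illustration: the patent does not specify the exact scaling or how the offsets enter the formula, so both the rounding and the offset signs here are illustrative.

```python
def depth_block_position(tex_x, tex_y, tex_w, tex_h, depth_w, depth_h,
                         left_offset=0, top_offset=0):
    """Sketch of locating the current depth block from the texture-block
    position: scale by the resolution ratio between the (possibly
    upsampled) depth picture and the current picture, then apply the
    left/top offsets when the two pictures do not align exactly."""
    scale_x = depth_w / tex_w
    scale_y = depth_h / tex_h
    return (int(tex_x * scale_x) + left_offset,
            int(tex_y * scale_y) + top_offset)
```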

Referring to FIG. 3, the depth data corresponding to the position information of the current depth block may be obtained (S310).

If there are a plurality of pixels in the current depth block, the depth data corresponding to the corner pixels of the current depth block may be used. Alternatively, depth data corresponding to the center pixel of the current depth block may be used. Alternatively, one of a maximum value, a minimum value, and a mode value may be selectively used among a plurality of depth data corresponding to a plurality of pixels, or an average value of a plurality of depth data may be used.
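The alternatives above (corner samples, centre sample, or a max/min/average over the block) can be sketched in one helper. The mode names and the choice of the maximum over the four corners are illustrative, not mandated by the text.

```python
def representative_depth(depth_block, mode="corners"):
    """Pick the depth data used for disparity derivation, per the
    alternatives in the text. `depth_block` is a 2-D list of samples."""
    h, w = len(depth_block), len(depth_block[0])
    if mode == "corners":
        corners = [depth_block[0][0], depth_block[0][w - 1],
                   depth_block[h - 1][0], depth_block[h - 1][w - 1]]
        return max(corners)  # e.g. the maximum of the four corner samples
    if mode == "center":
        return depth_block[h // 2][w // 2]
    flat = [v for row in depth_block for v in row]
    return {"max": max(flat), "min": min(flat),
            "mean": sum(flat) / len(flat)}[mode]
```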

Referring to FIG. 3, the disparity vector of the current texture block may be derived using the depth data obtained in operation S310 (S320).

For example, the disparity vector of the current texture block may be derived as shown in Equation (3).

DV = (v · a + f) >> n   (3)

Referring to Equation (3), v represents the depth data, a represents a scaling factor, and f represents an offset used to derive the disparity vector. The scaling factor a and the offset f may be signaled in a video parameter set or a slice header, or may be values preset in the decoder. n is a variable indicating the value of the bit shift and can be variably determined according to the accuracy of the disparity vector.
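Equation (3) is plain integer arithmetic and can be written directly; the example parameter values below are illustrative, not values from the patent.

```python
def disparity_from_depth(v, a, f, n):
    """Equation (3): depth value v scaled by a, offset by f, then
    right-shifted by n bits; n controls the disparity-vector precision."""
    return (v * a + f) >> n
```

For example, with v = 100, a = 4, f = 8, and n = 2, the result is (400 + 8) >> 2 = 102.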

FIG. 4 illustrates candidate spatial/temporal neighboring blocks of a current texture block according to an embodiment of the present invention.

Referring to FIG. 4A, the spatial neighboring blocks may include a left neighboring block A1, an upper neighboring block B1, a lower-left neighboring block A0, an upper-right neighboring block B0, and an upper-left neighboring block B2.

Referring to FIG. 4B, the temporal neighboring block may denote a block at the same position as the current texture block. Specifically, the temporal neighboring block is a block belonging to a picture located in a different time zone from the current texture block, and may include a block (BR) corresponding to the lower-right pixel of the current texture block, a block (CT) corresponding to the center pixel of the current texture block, or a block (TL) corresponding to the upper-left pixel of the current texture block.

The disparity vector of the current texture block may be derived from a disparity-compensated prediction block (hereinafter referred to as a DCP block) among the spatial/temporal neighboring blocks. Here, the DCP block may be a block encoded through inter-view texture prediction using a disparity vector. In other words, the DCP block performs inter-view prediction using the texture data of the reference block specified by its disparity vector. In this case, the disparity vector of the current texture block can be predicted or reconstructed using the disparity vector used for the inter-view texture prediction of the DCP block.

Alternatively, the disparity vector of the current texture block may be derived from a disparity-vector-based motion compensation prediction block (hereinafter referred to as a DV-MCP block) among the spatial neighboring blocks. Here, the DV-MCP block means a block coded through inter-view motion prediction using a disparity vector. In other words, the DV-MCP block performs temporal inter prediction using the temporal motion vector of the reference block specified by its disparity vector. In this case, the disparity vector of the current texture block may be predicted or reconstructed using the disparity vector used to obtain the temporal motion vector of the reference block of the DV-MCP block.

Whether the spatial/temporal neighboring blocks of the current texture block correspond to a DCP block may be searched according to a predefined priority, and the disparity vector may be derived from the first DCP block found. As an example of the predefined priority, the search may be performed in the order of spatial neighboring blocks -> temporal neighboring blocks, and among the spatial neighboring blocks, whether each corresponds to a DCP block may be searched in the priority order of A1 -> B1 -> B0 -> A0 -> B2. However, this is merely one embodiment of the priority order, and the order may be determined differently within a range obvious to a person skilled in the art.

If none of the spatial/temporal neighboring blocks corresponds to a DCP block, the spatial neighboring blocks may be additionally searched for whether they correspond to a DV-MCP block, and the disparity vector may be derived from the first DV-MCP block found.

In the present invention, the segment-based depth coding technique is a technique in which the difference depth values of a depth image (e.g., a coding block or a prediction block) are not encoded pixel by pixel; instead, one difference depth value representing the whole or a part (i.e., a segment) of the depth image (hereinafter referred to as a representative difference depth value) is encoded. One depth image may be composed of at least one segment. If the depth image is composed of two or more segments, the representative difference depth value may be encoded for each segment.

When the above-described segment-based depth coding technique is used, the bit rate related to the difference depth value can be reduced as compared with the case of coding the difference depth value for each pixel.

In the encoder, the representative difference depth value may be derived by averaging the differences between the original depth values and the prediction depth values. For example, the difference value between the original depth value and the prediction depth value can be obtained for each pixel of the depth image, and the average of the obtained difference values can be defined as the representative difference depth value. Alternatively, the difference between the average of the original depth values of the depth image and the average of the prediction depth values may be defined as the representative difference depth value. The representative difference depth value determined in the encoder may be encoded in the form of an offset absolute value (depth_dc_abs) and offset sign information (depth_dc_sign_flag) and transmitted to the decoder. Hereinafter, a method of restoring a depth image based on the segment-based depth coding technique in the decoder will be described with reference to FIGS. 5 to 8.
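As an informal illustration of the two encoder-side derivations described above (the function names and flat sample lists are assumptions for illustration, not part of this disclosure), both reduce to the same value because averaging is linear:

```python
def representative_diff_depth(orig, pred):
    """Average of the per-pixel differences between original and prediction depth values."""
    diffs = [o - p for o, p in zip(orig, pred)]
    return sum(diffs) / len(diffs)

def representative_diff_depth_v2(orig, pred):
    """Difference of the per-block averages; equal to the above by linearity."""
    return sum(orig) / len(orig) - sum(pred) / len(pred)

# Example: both variants yield the same representative difference depth value
orig = [120, 130, 140, 150]
pred = [118, 126, 141, 147]
print(representative_diff_depth(orig, pred))     # 2.0
print(representative_diff_depth_v2(orig, pred))  # 2.0
```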

FIG. 5 illustrates a method of restoring a depth image based on a segment-based depth coding technique according to an embodiment of the present invention.

Referring to FIG. 5, the mode identifier information on the current depth block may be obtained from the bitstream (S500).

Here, the mode identifier information may specify any one of a plurality of candidate modes available for deriving the representative difference depth value of the current depth block.

The plurality of candidate modes may include a first segment-wise DC candidate mode (hereinafter referred to as a first SDC candidate mode), a second segment-wise DC candidate mode (hereinafter referred to as a second SDC candidate mode), and a third segment-wise DC candidate mode (hereinafter referred to as a third SDC candidate mode).

Specifically, the first SDC candidate mode is a method of directly encoding the representative difference depth value as an offset absolute value (depth_dc_abs) and offset sign information (depth_dc_sign_flag) without using a depth lookup table. The second SDC candidate mode does not encode the representative difference depth value directly, but encodes an index mapped to the representative difference depth value using a depth lookup table. The third SDC candidate mode is a method of quantizing and encoding the representative difference depth value using a quantization parameter.

Referring to FIG. 5, a segment-wise DC mode (hereinafter referred to as SDC mode) of the current depth block may be determined from among the plurality of candidate modes based on the mode identifier information obtained in step S500 (S510).

The representative difference depth value of the current depth block may be derived based on the SDC mode determined in operation S510 (S520).

As described above, the SDC mode of the current depth block may be determined to be one of the first to third SDC candidate modes, and a method of deriving the representative difference depth value based on each SDC candidate mode will be described with reference to FIGS. 6 to 8.

The current depth block may be restored using the representative difference depth value derived in step S520 (S530).

For example, the restoration depth value of the current depth block may be derived by adding the representative difference depth value to the prediction depth value of the depth image. Alternatively, the prediction depth value derived through intra/inter prediction of the depth image may be corrected by adding the representative difference depth value to it. Alternatively, the representative difference depth value may be added to the restored depth image to compensate the restored depth image so as to be similar to the original depth image.
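A minimal sketch of the first restoration option above, adding the representative difference depth value to every prediction depth sample (the function name and list-of-lists block layout are assumptions for illustration):

```python
def restore_depth_block(pred_block, dc_val):
    """Add the representative difference depth value to each prediction depth sample."""
    return [[p + dc_val for p in row] for row in pred_block]

# Example: a 2x2 prediction block restored with dcVal = 5
print(restore_depth_block([[100, 101], [102, 103]], 5))  # [[105, 106], [107, 108]]
```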

In FIG. 5, a method of selectively using the SDC mode of the current depth block on the basis of the mode identifier information and the plurality of candidate modes has been described. However, it is also possible to selectively use any one of the first to third SDC candidate modes based on separate syntax elements, without using the mode identifier information.

For example, a syntax element indicating whether the current depth block derives the representative difference depth value using a depth quantization parameter is checked. If, as a result of the check, the depth quantization parameter is used, the representative difference depth value is derived according to the third SDC candidate mode. When the depth quantization parameter is not used, a syntax element specifying whether a depth lookup table is used is checked. If the depth lookup table is not used, the representative difference depth value is decoded according to the first SDC candidate mode; if the depth lookup table is used, the representative difference depth value is decoded according to the second SDC candidate mode.
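The fall-through check described above can be sketched as follows; the two boolean flags are illustrative stand-ins for the syntax elements, which the text does not name:

```python
def select_sdc_candidate_mode(uses_depth_qp: bool, uses_dlt: bool) -> int:
    """Return 1, 2, or 3 for the first, second, or third SDC candidate mode."""
    if uses_depth_qp:   # depth quantization parameter in use -> third SDC candidate mode
        return 3
    if uses_dlt:        # depth lookup table in use -> second SDC candidate mode
        return 2
    return 1            # neither -> first SDC candidate mode (direct coding)

print(select_sdc_candidate_mode(True, False))   # 3
print(select_sdc_candidate_mode(False, True))   # 2
print(select_sdc_candidate_mode(False, False))  # 1
```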

FIG. 6 illustrates a method of deriving a representative difference depth value according to a first SDC candidate mode according to an embodiment of the present invention.

Referring to FIG. 6, an offset absolute value (depth_dc_abs) and offset sign information (depth_dc_sign_flag) can be obtained from a bit stream (S600).

Here, the offset absolute value and the offset sign information are syntax elements used to derive the offset value DcOffset. Specifically, the offset absolute value means the absolute value of the offset value DcOffset, and the offset sign information can indicate the sign of the offset value DcOffset.

On the other hand, in order to improve the coding efficiency, the offset absolute value and the offset sign information may be adaptively signaled based on a depth DC flag (depth_dc_flag). Here, the depth DC flag may indicate whether the offset absolute value and the offset sign information are signaled. For example, when the value of the depth DC flag is 1, the offset absolute value and the offset sign information are obtained from the bitstream; if the value is 0, they may not be obtained from the bitstream. If the offset absolute value and the offset sign information are not signaled according to the value of the depth DC flag, default values set in the decoder may be assigned to them.

Referring to FIG. 6, an offset value DcOffset may be derived using the offset absolute value and offset sign information obtained in operation S600 (S610).

For example, the offset value (DcOffset) can be derived as shown in Equation (4).

DcOffset = ( 1 - 2 * depth_dc_sign_flag ) * ( depth_dc_abs - dcNumSeg + 2 )    (4)

In Equation (4), the variable dcNumSeg denotes the number of segments constituting the current depth block and is variably determined according to that number. However, since the number of segments constituting the current depth block may be determined differently according to the intra prediction mode, the value of the variable dcNumSeg may in this case be derived in consideration of the intra prediction mode. Alternatively, the variable dcNumSeg may be limited to a value within a certain range (for example, 1 or 2) in order to improve the coding efficiency of the segment-wise coding scheme.
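A sketch of the derivation of the offset value from the three quantities named above, assuming the 3D-HEVC-style formula (1 - 2 * sign) * (abs - dcNumSeg + 2); the exact constant term is taken from that standard and is an assumption here, since the publication renders the equation as an image:

```python
def dc_offset(depth_dc_abs: int, depth_dc_sign_flag: int, dc_num_seg: int) -> int:
    """Derive DcOffset; depth_dc_sign_flag == 1 selects a negative offset,
    dc_num_seg is the number of segments in the current depth block."""
    return (1 - 2 * depth_dc_sign_flag) * (depth_dc_abs - dc_num_seg + 2)

print(dc_offset(3, 0, 2))  # 3
print(dc_offset(3, 1, 2))  # -3
```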

The representative difference depth value may be derived using the offset value (DcOffset) derived in step S610 (S620). For example, the derived offset value DcOffset can be set as the representative difference depth value.

FIG. 7 illustrates a method of deriving a representative difference depth value according to a second SDC candidate mode according to an embodiment of the present invention.

Referring to FIG. 7, an offset absolute value (depth_dc_abs) and offset sign information (depth_dc_sign_flag) can be obtained from the bit stream (S700).

As described with reference to FIG. 6, the offset absolute value and the offset sign information are syntax elements used to derive the offset value DcOffset. That is, the offset absolute value means the absolute value of the offset value DcOffset, and the offset sign information can indicate the sign of the offset value DcOffset. In addition, the offset absolute value and the offset sign information are adaptively signaled according to the depth DC flag, and may be set to default values set in the decoder if they are not signaled.

Referring to FIG. 7, an offset value DcOffset may be derived using the offset absolute value and offset sign information obtained in step S700 (S710).

When the representative difference depth value is coded using a depth lookup table according to the second SDC candidate mode, the offset value DcOffset represents not the representative difference depth value itself, but an index mapped to the representative difference depth value. The depth lookup table defines a mapping relationship between a depth value of a video image and an index assigned to the depth value. When the depth lookup table is used in this manner, the encoding efficiency can be improved by encoding only the index assigned to a depth value instead of the depth value itself.

Referring to FIG. 7, the representative difference depth value may be derived using the offset value DcOffset derived in step S710 and the depth lookup table (step S720).

Specifically, the representative difference depth value corresponding to the offset value (DcOffset) can be derived using the depth lookup table. For example, the representative difference depth value may be derived by the following equation (5).

dcVal = DltIdxToVal[ DltValToIdx[ dcPred ] + DcOffset ] - dcPred    (5)

In Equation (5), DltIdxToVal [] denotes a function of converting an index into a depth value using a depth lookup table, and DltValToIdx [] denotes a function of converting a depth value into an index using a depth lookup table.

First, the prediction depth value dcPred of the current depth block can be converted into a corresponding first index DltValToIdx[dcPred] using the depth lookup table. For example, among the depth values defined in the depth lookup table, a depth value that is equal to the prediction depth value dcPred, or that minimizes the difference from the prediction depth value dcPred, is selected, and the index assigned to the selected depth value can be determined as the first index.

Here, the prediction depth value dcPred may be derived as an average value of the samples located at the corners of the current depth block. In this case, the samples located at the corner may include at least two of the upper left corner sample, the upper right corner sample, the lower left corner sample, or the lower right corner sample in the depth block.

A second index is obtained by adding the converted first index DltValToIdx[dcPred] and the offset value DcOffset, and a depth value corresponding to the second index (hereinafter referred to as a restoration depth value) can be derived using the depth lookup table. A value obtained by subtracting the prediction depth value dcPred from the restoration depth value may be determined as the representative difference depth value dcVal.
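The three steps above (value to index, index offset, index back to value) can be sketched as follows; the closest-value search implements the first-index rule, and the clipping of the second index to the table bounds is an added assumption to keep the lookup in range:

```python
def derive_dc_val(dc_pred: int, dc_offset: int, dlt: list) -> int:
    """dlt: sorted list of depth values occurring in the video (the depth lookup table)."""
    # DltValToIdx: index of the DLT entry equal to, or closest to, dcPred
    idx1 = min(range(len(dlt)), key=lambda i: abs(dlt[i] - dc_pred))
    # second index = first index + DcOffset (clipped to the table, an assumption)
    idx2 = max(0, min(idx1 + dc_offset, len(dlt) - 1))
    # DltIdxToVal: restoration depth value; subtracting dcPred gives dcVal
    return dlt[idx2] - dc_pred

dlt = [0, 50, 100, 150, 200]
print(derive_dc_val(100, 2, dlt))   # 100  (200 - 100)
print(derive_dc_val(100, -1, dlt))  # -50  (50 - 100)
```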

FIG. 8 illustrates a method of deriving a representative difference depth value according to a third SDC candidate mode according to an embodiment of the present invention.

The third SDC candidate mode of the present invention quantizes the representative difference depth value of each segment of the current depth block with a quantization parameter and encodes the quantized representative difference depth value as the offset absolute value (depth_dc_abs) and the offset sign information (depth_dc_sign_flag). Therefore, the decoder is required to derive the representative difference depth value by inverse-quantizing the offset value DcOffset derived using the offset absolute value (depth_dc_abs) and the offset sign information (depth_dc_sign_flag).

Referring to FIG. 8, an offset absolute value (depth_dc_abs) and offset sign information (depth_dc_sign_flag) can be obtained from the bit stream (S800).

As described above, the offset absolute value and the offset sign information are used to derive the offset value DcOffset; the offset absolute value means the absolute value of the offset value DcOffset, and the offset sign information indicates the sign of the offset value DcOffset. In addition, the offset absolute value and the offset sign information are adaptively signaled according to the depth DC flag, and may be set to default values set in the decoder if they are not signaled.

Referring to FIG. 8, an offset value DcOffset may be derived using the offset absolute value and offset sign information obtained in operation S800 (S810).

The offset value DcOffset derived in step S810 may be scaled by the depth quantization parameter depth_resQp to derive the dequantized offset value in operation S820.

Here, the depth quantization parameter is a syntax element that is encoded to dequantize the offset value DcOffset. It may be signaled at a high level of syntax (e.g., a video parameter set) so as to be common to all layers or viewpoints, or may be signaled layer by layer so as to be applied separately for each layer. A method of acquiring the depth quantization parameter will be described with reference to FIGS. 9 to 11. However, the present invention is not limited thereto, and the depth quantization parameter may be set to a predetermined value in the decoder.

The representative difference depth value may be derived using the dequantized offset value derived in step S820 (S830). For example, the dequantized offset value may be set as a representative difference depth value.
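The text states only that the offset is "scaled by" the depth quantization parameter without giving the scaling rule, so the sketch below assumes a simple power-of-two scaling by depth_resQp; treat the shift as an illustrative placeholder, not the standardized operation:

```python
def dequantize_dc_offset(dc_offset: int, depth_res_qp: int) -> int:
    """Hypothetical inverse quantization: scale the offset by 2 ** depth_resQp."""
    return dc_offset * (1 << depth_res_qp)

print(dequantize_dc_offset(3, 2))   # 12
print(dequantize_dc_offset(-1, 3))  # -8
```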

FIG. 9 illustrates a method of acquiring a depth quantization parameter based on default quantization identification information (isDefault_depth_resQp) according to an embodiment of the present invention.

One depth quantization parameter may be applied to all depth blocks coded by the segment-based DC coding technique belonging to all layers or viewpoints, or a different depth quantization parameter may be applied to each layer or picture.

Referring to FIG. 9, default quantization identification information (isDefault_depth_resQp) may be obtained from the bitstream (S900).

Here, the default quantization identification information may indicate whether or not the same depth quantization parameter is applied to all layers or viewpoints. For example, if the value of the default quantization identification information is 1, the same depth quantization parameter is applied to all layers or viewpoints, and if the value is 0, different depth quantization parameters may be applied to each layer or viewpoint.

Referring to FIG. 9, if the value of the default quantization identification information acquired in step S900 is 1, the first depth quantization parameter (default_depth_resQp) may be obtained (S910). Here, the first depth quantization parameter means a depth quantization parameter applied equally to all layers or viewpoints. If the value of the default quantization identification information is 0, the first depth quantization parameter is not obtained, and in this case the value of the first depth quantization parameter may be derived to be zero.

Referring to FIG. 9, the second depth quantization parameter may be obtained for each layer or viewpoint only when the value of the first depth quantization parameter obtained in step S910 is zero (S920).

Here, the second depth quantization parameter may mean a depth quantization parameter applied to the corresponding layer or viewpoint. In FIG. 9, the second depth quantization parameter is obtained only when the value of the first depth quantization parameter is 0; however, it may instead be acquired only when the value of the default quantization identification information is zero.
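The FIG. 9 flow can be sketched as follows, modeling the already-entropy-decoded syntax element values as a plain list consumed in bitstream order (this reader interface is a simplifying assumption):

```python
def parse_depth_qp_fig9(elements, num_layers):
    """elements: decoded syntax element values in bitstream order (hypothetical)."""
    it = iter(elements)
    is_default = next(it)                      # isDefault_depth_resQp (S900)
    first = next(it) if is_default else 0      # default_depth_resQp; inferred 0 if absent (S910)
    second = None
    if first == 0:                             # per-layer/viewpoint parameters (S920)
        second = [next(it) for _ in range(num_layers)]
    return first, second

print(parse_depth_qp_fig9([1, 4], 2))      # (4, None)
print(parse_depth_qp_fig9([0, 5, 7], 2))   # (0, [5, 7])
```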

FIG. 10 illustrates a method of adaptively obtaining a depth quantization parameter based on a depth quantization flag (depth_resQp_flag) according to an embodiment of the present invention.

Referring to FIG. 10, a depth quantization flag (depth_resQp_flag) can be obtained from the bit stream (S1000). Here, the depth quantization flag may indicate whether or not the depth quantization parameter is signaled. For example, when the value of the depth quantization flag is 1, the depth quantization parameter is signaled. If the value is 0, the depth quantization parameter may not be signaled.

Referring to FIG. 10, when the value of the depth quantization flag obtained in step S1000 is 1, a depth quantization parameter (depth_resQp) may be obtained for each layer or each viewpoint (S1010). As described above, the depth quantization parameter is applied in common to all depth blocks encoded by the segment-based DC coding technique belonging to the corresponding layer.

FIG. 11 illustrates a method of acquiring depth quantization parameters based on a depth quantization flag (depth_resQp_flag) and default quantization identification information (isDefault_depth_resQp) according to an embodiment of the present invention.

Referring to FIG. 11, a depth quantization flag (depth_resQp_flag) can be obtained from the bitstream (S1100). Here, the depth quantization flag may indicate whether one of the first depth quantization parameter or the second depth quantization parameter is signaled. For example, if the value of the depth quantization flag is 1, the first or second depth quantization parameter is signaled, and if the value is 0, the first and second depth quantization parameters may not be signaled.

Referring to FIG. 11, if the value of the depth quantization flag obtained in step S1100 is 1, default quantization identification information (isDefault_depth_resQp) can be obtained from the bitstream (S1110).

Here, as described with reference to FIG. 9, the default quantization identification information may indicate whether or not the same depth quantization parameter is applied to all layers or viewpoints.

Referring to FIG. 11, if the value of the default quantization identification information acquired in step S1110 is 1, the first depth quantization parameter (default_depth_resQp) may be obtained (S1120). Here, the first depth quantization parameter means a depth quantization parameter applied equally to all layers or viewpoints. If the value of the default quantization identification information is 0, the first depth quantization parameter is not obtained, and in this case the value of the first depth quantization parameter may be derived to be zero.

Referring to FIG. 11, if the value of the depth quantization flag is 1 and the value of the first depth quantization parameter is 0, the second depth quantization parameter may be obtained for each layer or viewpoint (S1130).

Here, the second depth quantization parameter may mean a depth quantization parameter applied to the corresponding layer or viewpoint. In FIG. 11, the second depth quantization parameter is obtained when the value of the first depth quantization parameter is 0, but this may be replaced with obtaining it when the value of the default quantization identification information is 0.
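Combining the depth quantization flag with the default quantization identification information, the FIG. 11 flow can be sketched as follows; the decoded syntax element values are again modeled as a plain list, which is a simplifying assumption:

```python
def parse_depth_qp_fig11(elements, num_layers):
    """elements: decoded syntax element values in bitstream order (hypothetical)."""
    it = iter(elements)
    if next(it) == 0:                          # depth_resQp_flag (S1100)
        return 0, None                         # no depth quantization parameter signaled
    is_default = next(it)                      # isDefault_depth_resQp (S1110)
    first = next(it) if is_default else 0      # default_depth_resQp (S1120)
    second = None
    if first == 0:                             # per-layer/viewpoint parameters (S1130)
        second = [next(it) for _ in range(num_layers)]
    return first, second

print(parse_depth_qp_fig11([0], 2))           # (0, None)
print(parse_depth_qp_fig11([1, 1, 9], 2))     # (9, None)
print(parse_depth_qp_fig11([1, 0, 3, 4], 2))  # (0, [3, 4])
```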

The above-mentioned offset absolute value can be obtained through entropy decoding based on context-based adaptive binary arithmetic coding, which will be described with reference to FIGS. 12 to 15.

FIG. 12 illustrates a method of obtaining an offset absolute value through entropy decoding based on context-based adaptive binary arithmetic coding according to an embodiment of the present invention.

Referring to FIG. 12, a bin string may be generated through a regular coding or a bypass coding process on a bitstream encoded by context-based adaptive binary arithmetic coding (S1200).

Here, regular coding is adaptive binary arithmetic coding that predicts the probability of a bin using context modeling, and bypass coding means coding that outputs the binarized bin string as a bitstream as it is. Context modeling means probability modeling for each bin, and the probability can be updated according to the value of the currently encoded bin. When encoding is performed through regular coding, a bin string can be generated based on context modeling of the offset absolute value, that is, the occurrence probability of each bit.

The absolute value of the offset may be obtained through inverse-binarization of the bin string generated in operation S1200 (S1210).

Here, the de-binarization may refer to an inverse process of the binarization process performed on the offset absolute value in the encoder. As the binarization method, unary binarization, truncated unary binarization, or combined truncated unary / zero-th order exponential Golomb binarization may be used.

The binarization of the offset absolute value may be performed by a combination of a prefix bin string and a suffix bin string. Here, the prefix bin string and the suffix bin string can be represented by different binarization methods. For example, the prefix bin string may use truncated unary binarization, and the suffix bin string may use zero-th order exponential Golomb binarization. Hereinafter, the process of binarizing the offset absolute value according to the maximum number cMax of bins constituting the prefix bin string will be described with reference to FIGS. 13 to 15.

FIGS. 13 to 15 illustrate a method of binarizing an offset absolute value according to the maximum number cMax of bins according to an embodiment of the present invention.

FIG. 13 shows a binarization method when the maximum number of bins cMax is set to 3. Referring to FIG. 13, the offset absolute value is represented by a combination of a prefix bin string and a suffix bin string, and the prefix bin string and the suffix bin string are binarized by truncated unary binarization and zero-th order exponential Golomb binarization, respectively.

If the maximum number of bins cMax is set to 3 and the offset absolute value is 3, the prefix bin string can be represented as 111 and the suffix bin string as 0. If the offset absolute value is greater than 3, the prefix bin string is fixed at 111, and the suffix bin string can be represented by binarizing the difference between the offset absolute value and the maximum number of bins according to zero-th order exponential Golomb binarization.

For example, assume that a bin string of 111101 is generated through context modeling of the offset absolute value. The generated bin string 111101 can be divided into a prefix bin string and a suffix bin string based on the maximum number cMax of bins. Here, since the maximum number cMax of bins is set to 3, the prefix bin string will be 111 and the suffix bin string will be 101.

Meanwhile, performing inverse binarization on the prefix bin string 111, which was binarized according to truncated unary binarization, yields 3, and performing inverse binarization on the suffix bin string 101, which was binarized according to zero-th order exponential Golomb binarization, yields 2. The obtained 3 and 2 can be added to obtain 5 as the offset absolute value.
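The prefix/suffix scheme and the worked example can be sketched as follows; the exponential Golomb convention with a leading-ones prefix is assumed because it matches the 101 -> 2 and 100 -> 1 mappings used in the examples:

```python
def eg0_encode(v: int) -> str:
    """Zero-th order exponential Golomb with a leading-ones prefix."""
    bits, k = "", 0
    while v >= (1 << k):          # emit one 1-bit per covered range
        bits += "1"
        v -= 1 << k
        k += 1
    bits += "0"                   # terminating zero
    for i in range(k - 1, -1, -1):
        bits += "1" if (v >> i) & 1 else "0"   # k-bit binary remainder
    return bits

def eg0_decode(bits: str, pos: int = 0):
    value, k = 0, 0
    while bits[pos] == "1":       # count leading ones
        value += 1 << k
        k += 1
        pos += 1
    pos += 1                      # skip terminating zero
    for i in range(k - 1, -1, -1):
        value += (1 if bits[pos] == "1" else 0) << i
        pos += 1
    return value, pos

def binarize_abs(v: int, c_max: int) -> str:
    """Truncated unary prefix (length <= cMax) plus EG0 suffix for the excess."""
    if v < c_max:
        return "1" * v + "0"
    return "1" * c_max + eg0_encode(v - c_max)

def debinarize_abs(bits: str, c_max: int) -> int:
    n = 0
    while n < c_max and bits[n] == "1":
        n += 1
    if n < c_max:                 # prefix terminated by its own zero
        return n
    return n + eg0_decode(bits, n)[0]

print(binarize_abs(5, 3))             # 111101
print(debinarize_abs("111101", 3))    # 5
```

The same functions reproduce the cMax = 5 and cMax = 7 examples: 11111100 de-binarizes to 6, and 1111111100 de-binarizes to 8.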

FIG. 14 shows a binarization method when the maximum number of bins cMax is set to 5. Referring to FIG. 14, the offset absolute value is represented by a combination of a prefix bin string and a suffix bin string. The prefix bin string and the suffix bin string are binarized by truncated unary binarization and zero-th order exponential Golomb binarization, respectively.

If the maximum number of bins cMax is set to 5 and the offset absolute value is 5, the prefix bin string may be represented as 11111 and the suffix bin string as 0. If the offset absolute value is greater than 5, the prefix bin string is fixed at 11111, and the suffix bin string can be represented by binarizing the difference between the offset absolute value and the maximum number of bins according to zero-th order exponential Golomb binarization.

For example, assume that a bin string of 11111100 is generated through context modeling of the offset absolute value. The generated bin string 11111100 can be divided into a prefix bin string and a suffix bin string based on the maximum number cMax of bins. Here, since the maximum number cMax of bins is set to 5, the prefix bin string will be 11111 and the suffix bin string will be 100.

On the other hand, performing inverse binarization on the prefix bin string 11111, which was binarized according to truncated unary binarization, yields 5, and performing inverse binarization on the suffix bin string 100, which was binarized according to zero-th order exponential Golomb binarization, yields 1. The obtained 5 and 1 can be added to obtain 6 as the offset absolute value.

FIG. 15 shows a binarization method when the maximum number of bins cMax is set to 7. Referring to FIG. 15, the offset absolute value is represented by a combination of a prefix bin string and a suffix bin string. The prefix bin string and the suffix bin string are binarized by truncated unary binarization and zero-th order exponential Golomb binarization, respectively.

For example, if the maximum number of bins cMax is set to 7 and the offset absolute value is 7, the prefix bin string may be represented as 1111111 and the suffix bin string as 0. If the offset absolute value is greater than 7, the prefix bin string is fixed at 1111111, and the suffix bin string can be represented by binarizing the difference between the offset absolute value and the maximum number of bins according to zero-th order exponential Golomb binarization.

For example, assume that a bin string of 1111111100 has been generated through context modeling of the offset absolute value. The generated bin string 1111111100 can be divided into a prefix bin string and a suffix bin string based on the maximum number cMax of bins. Here, since the maximum number cMax of bins is set to 7, the prefix bin string will be 1111111 and the suffix bin string will be 100.

On the other hand, performing inverse binarization on the prefix bin string 1111111, which was binarized according to truncated unary binarization, yields 7, and performing inverse binarization on the suffix bin string 100, which was binarized according to zero-th order exponential Golomb binarization, yields 1. The obtained 7 and 1 can be added to obtain 8 as the offset absolute value.

Claims (14)

A method of processing a multi-view video signal, the method comprising:
Obtaining mode identifier information on a current depth block from a bitstream, wherein the mode identifier information specifies one of a plurality of candidate modes used to derive a representative difference depth value of the current depth block;
Determining a segment-wise DC mode (SDC mode) of the current depth block from among the plurality of candidate modes based on the obtained mode identifier information;
Deriving a representative difference depth value of the current depth block based on the determined SDC mode; and
Reconstructing the current depth block using the derived representative difference depth value.
The method of claim 1, wherein the plurality of candidate modes includes at least one of a first segment-wise DC candidate mode, a second segment-wise DC candidate mode, or a third segment-wise DC candidate mode,
wherein the first segment-wise DC candidate mode is a mode for encoding the representative difference depth value as an offset absolute value and offset sign information, the second segment-wise DC candidate mode is a mode for encoding an index mapped to the representative difference depth value using a depth lookup table, and the third segment-wise DC candidate mode is a mode for encoding the representative difference depth value using a depth quantization parameter.
3. The method of claim 2, wherein, if the SDC mode of the current depth block is determined to be the first segment-wise DC candidate mode, deriving the representative difference depth value comprises:
Obtaining the offset absolute value and the offset sign information from a bitstream;
Deriving an offset value using the obtained offset absolute value and offset sign information; and
Setting the offset value to the representative difference depth value.
4. The method of claim 2, wherein, if the SDC mode of the current depth block is determined to be the second segment-wise DC candidate mode, deriving the representative difference depth value comprises:
Obtaining the offset absolute value and the offset sign information from a bitstream;
Deriving an offset value using the obtained offset absolute value and offset sign information; and
Deriving the representative difference depth value using the derived offset value and a depth lookup table.
5. The method of claim 4, wherein deriving the representative difference depth value using the depth lookup table comprises:
Converting the prediction depth value of the current depth block into a first index corresponding to the prediction depth value using the depth lookup table;
Acquiring a second index by adding the converted first index and the offset value;
Converting the second index into a restoration depth value corresponding to the second index using the depth lookup table; and
Setting a value obtained by subtracting the prediction depth value from the restoration depth value to the representative difference depth value.
6. The method of claim 5,
wherein the first index is an index assigned to a depth value that is equal to the prediction depth value, or that minimizes a difference from the prediction depth value, among the depth values defined in the depth lookup table.
The method according to claim 6,
wherein the prediction depth value is derived as an average value of samples located at the corners of the current depth block, and the samples located at the corners include at least two of an upper left corner sample, an upper right corner sample, a lower left corner sample, or a lower right corner sample of the current depth block.
An apparatus for processing a multi-view video signal, wherein the apparatus obtains mode identifier information on a current depth block from a bitstream, the mode identifier information specifying one of a plurality of candidate modes used to derive a representative difference depth value of the current depth block,
determines a segment-wise DC mode (SDC mode) of the current depth block from among the plurality of candidate modes based on the obtained mode identifier information, derives a representative difference depth value of the current depth block based on the determined SDC mode, and reconstructs the current depth block using the derived representative difference depth value.
9. The apparatus of claim 8, wherein the plurality of candidate modes comprises at least one of a first segment-wise DC candidate mode, a second segment-wise DC candidate mode, or a third segment-wise DC candidate mode,
wherein the first segment-wise DC candidate mode is a mode for encoding the representative difference depth value into an offset absolute value and offset sign information, the second segment-wise DC candidate mode is a mode for encoding the representative difference depth value using a depth lookup table, and the third segment-wise DC candidate mode is a mode for encoding the representative difference depth value using a depth quantization parameter.
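The three candidate modes of claim 9 can be modeled as a simple enumeration keyed by the parsed mode identifier. The numeric identifiers below are assumptions for illustration; the patent does not fix a binarization for the mode identifier information here.

```python
from enum import Enum

class SdcMode(Enum):
    """Candidate segment-wise DC modes of claim 9 (identifiers assumed)."""
    OFFSET = 1    # first candidate: offset absolute value + sign information
    DLT = 2       # second candidate: depth lookup table mapping
    DEPTH_QP = 3  # third candidate: depth quantization parameter

def select_sdc_mode(mode_id: int) -> SdcMode:
    """Map parsed mode identifier information to a candidate SDC mode."""
    return SdcMode(mode_id)
```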
10. The apparatus of claim 9, wherein, if the SDC mode of the current depth block is determined to be the first segment-wise DC candidate mode, the apparatus derives an offset value using the offset absolute value and the offset sign information obtained from the bitstream, and sets the derived offset value as the representative difference depth value.
11. The apparatus of claim 9, wherein, if the SDC mode of the current depth block is determined to be the second segment-wise DC candidate mode, the apparatus derives an offset value using the offset absolute value and the offset sign information obtained from the bitstream, and derives the representative difference depth value using the derived offset value and a depth lookup table.
12. The apparatus of claim 11, wherein the apparatus transforms a predicted depth value of the current depth block into a first index corresponding to the predicted depth value using the depth lookup table, obtains a second index by adding the offset value to the first index, transforms the second index into a reconstructed depth value corresponding to the second index using the depth lookup table, and sets a value obtained by subtracting the predicted depth value from the reconstructed depth value as the representative difference depth value.
13. The apparatus of claim 12, wherein the first index is an index assigned to a depth value that, among the depth values defined in the depth lookup table, is equal to the predicted depth value or minimizes the difference from the predicted depth value.
14. The apparatus of claim 13, wherein the predicted depth value is derived as an average value of samples located at corners of the current depth block, the samples located at the corners being at least one of a top-left corner sample, a top-right corner sample, a bottom-left corner sample, and a bottom-right corner sample of the current depth block.
KR1020150071664A 2014-05-26 2015-05-22 A method and an apparatus for processing a multi-view video signal KR20150136018A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20140063040 2014-05-26
KR1020140063040 2014-05-26

Publications (1)

Publication Number Publication Date
KR20150136018A true KR20150136018A (en) 2015-12-04

Family

ID=54867607

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150071664A KR20150136018A (en) 2014-05-26 2015-05-22 A method and an apparatus for processing a multi-view video signal

Country Status (1)

Country Link
KR (1) KR20150136018A (en)
