KR20150110357A - A method and an apparatus for processing a multi-view video signal - Google Patents

A method and an apparatus for processing a multi-view video signal

Info

Publication number
KR20150110357A
Authority
KR
South Korea
Prior art keywords
current block
block
merge
candidate
flag
Prior art date
Application number
KR1020150037360A
Other languages
Korean (ko)
Inventor
이배근
김주영
Original Assignee
주식회사 케이티
Priority date
Filing date
Publication date
Application filed by 주식회사 케이티
Publication of KR20150110357A publication Critical patent/KR20150110357A/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/161 - Encoding, multiplexing or demultiplexing different image signal components
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H04N19/137 - Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation
    • H04N19/513 - Processing of motion vectors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A multi-view video signal processing method according to the present invention generates a merge candidate list for a current block, derives a motion vector of the current block based on a merge index of the current block obtained from a bitstream, obtains a predicted value of the current block using the derived motion vector, and reconstructs the current block by adding the predicted value and a residual value of the current block.

Description

TECHNICAL FIELD [0001] The present invention relates to a method and an apparatus for processing a multi-view video signal.

The present invention relates to a method and apparatus for coding a video signal.

Recently, demand for high-resolution, high-quality images such as high definition (HD) and ultra high definition (UHD) images has been increasing in various application fields. As image data becomes higher in resolution and quality, the amount of data increases relative to existing image data. Therefore, when image data is transmitted over a medium such as a wired or wireless broadband line, or stored using an existing storage medium, transmission and storage costs increase. High-efficiency image compression techniques can be used to solve these problems that arise as image data becomes high-resolution and high-quality.

Image compression techniques include inter-picture prediction, which predicts pixel values included in the current picture from a picture preceding or following the current picture; intra-picture prediction, which predicts pixel values included in the current picture using pixel information within the current picture; and entropy coding, which assigns short codes to values with a high frequency of occurrence and long codes to values with a low frequency of occurrence. Image data can be effectively compressed, transmitted, and stored using such image compression techniques.

On the other hand, demand for high-resolution images is increasing, and demand for stereoscopic image content as a new image service is also increasing. Video compression techniques are being discussed to effectively provide high resolution and ultra-high resolution stereoscopic content.

It is an object of the present invention to provide a method and apparatus for performing inter-view prediction using a disparity vector in encoding / decoding a multi-view video signal.

An object of the present invention is to provide a method and apparatus for deriving a disparity vector of a texture block using depth data of a depth block in encoding / decoding a multi-view video signal.

It is an object of the present invention to provide a method and apparatus for deriving a disparity vector from a neighboring block of a current block in encoding / decoding a multi-view video signal.

An object of the present invention is to provide a method and apparatus for deriving an interview merge candidate using a disparity vector in encoding / decoding a multi-view video signal.

It is an object of the present invention to provide a method and apparatus for constructing a merge candidate list for merge mode in encoding / decoding a multi-view video signal.

It is an object of the present invention to provide a method and apparatus for efficiently encoding an illumination compensation flag in encoding / decoding a multi-view video signal.

The present invention provides a method and apparatus for selectively using an interview motion candidate in consideration of illumination compensation in encoding / decoding a multi-view video signal.

The present invention aims to provide a method and apparatus for determining the arrangement order of merge candidates in a merge candidate list in consideration of illumination compensation in encoding / decoding a multi-view video signal.

A method and an apparatus for decoding a multi-view video signal according to the present invention generate a merge candidate list for a current block, derive a motion vector of the current block based on a merge index for the current block obtained from a bitstream, obtain a predicted value of the current block using the derived motion vector, and reconstruct the current block by adding the obtained predicted value and a residual value of the current block.

In the multi-view video signal decoding method and apparatus according to the present invention, the merge candidate list is composed of at least one merge candidate, and the merge candidate includes at least one of a spatial neighboring block, a temporal neighboring block, or an interview motion candidate (IvMC).

In the multi-view video signal decoding method and apparatus according to the present invention, the interview motion candidate (IvMC) is limitedly included in the merge candidate list based on an illumination compensation flag (ic_flag) indicating whether illumination compensation is performed on the current block.

In the multi-view video signal decoding method and apparatus according to the present invention, the merge index specifies a merge candidate used to decode the current block into merge mode.

In the multi-view video signal decoding method and apparatus according to the present invention, the interview motion candidate (IvMC) has a temporal motion vector of a reference block specified by a disparity vector of the current block, and the reference block belongs to a reference view of the current block.

A method and apparatus for decoding a multi-view video signal according to the present invention obtain an illumination compensation unavailability flag from the bitstream and obtain the value of the illumination compensation flag based on the illumination compensation unavailability flag and the merge index.

In the multi-view video signal decoding method and apparatus according to the present invention, the illumination compensation unavailability flag specifies whether the illumination compensation flag is encoded for the current block whose merge index value is 0.

In the multi-view video signal decoding method and apparatus according to the present invention, if illumination compensation is not performed on the current block according to the value of the illumination compensation flag, the interview motion candidate (IvMC) is included in the merge candidate list and is arranged in the merge candidate list in the priority order of the interview motion candidate (IvMC), the spatial neighboring block, and the temporal neighboring block.

A method and apparatus for encoding a multi-view video signal according to the present invention generate a merge candidate list for a current block, derive a motion vector of the current block based on a merge index for the current block, obtain a predicted value of the current block using the derived motion vector, and reconstruct the current block by adding the obtained predicted value and a residual value of the current block.

In the multi-view video signal encoding method and apparatus according to the present invention, the merge candidate list is composed of at least one merge candidate, and the merge candidate includes at least one of a spatial neighboring block, a temporal neighboring block, or an interview motion candidate (IvMC).

In the multi-view video signal encoding method and apparatus according to the present invention, the interview motion candidate (IvMC) is limitedly included in the merge candidate list based on an illumination compensation flag (ic_flag) indicating whether illumination compensation is performed on the current block.

In the multi-view video signal encoding method and apparatus according to the present invention, the merge index specifies a merge candidate used to encode the current block into merge mode.

In the multi-view video signal encoding method and apparatus according to the present invention, the interview motion candidate (IvMC) has a temporal motion vector of a reference block specified by a disparity vector of the current block, and the reference block belongs to a reference view of the current block.

A method and apparatus for encoding a multi-view video signal according to the present invention determine the value of an illumination compensation unavailability flag that specifies whether the illumination compensation flag is encoded for the current block whose merge index value is 0, and determine the value of the illumination compensation flag based on the value of the illumination compensation unavailability flag and the value of the merge index.

In the multi-view video signal encoding method and apparatus according to the present invention, if illumination compensation is not performed on the current block according to the value of the illumination compensation flag, the interview motion candidate (IvMC) is included in the merge candidate list and is arranged in the merge candidate list in the priority order of the interview motion candidate (IvMC), the spatial neighboring block, and the temporal neighboring block.

According to the present invention, it is possible to efficiently perform inter-view prediction using a disparity vector.

According to the present invention, the disparity vector of the current block can be effectively derived from the depth data of the current depth block or the disparity vector of a neighboring texture block.

According to the present invention, merge candidates of the merge candidate list can be efficiently configured.

According to the present invention, the value of the illumination compensation flag for the current block can be effectively signaled.

According to the present invention, encoding / decoding performance can be improved by selectively using an interview motion candidate in consideration of illumination compensation.

According to the present invention, the encoding / decoding performance can be improved by setting the priority order of a plurality of merge candidates constituting the merge candidate list.

FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
FIG. 2 illustrates a method of decoding a current block in merge mode according to an embodiment of the present invention.
FIG. 3 illustrates a method of deriving a motion vector of an interview motion candidate based on inter-view motion prediction according to an embodiment of the present invention.
FIG. 4 illustrates a method of deriving a motion vector of a view synthesis prediction candidate (VSP candidate) based on a disparity vector of a current block according to an embodiment of the present invention.
FIG. 5 illustrates a method of deriving a disparity vector of a current block using depth data of a depth image according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating candidates of a spatial / temporal neighboring block of a current block according to an embodiment of the present invention.
FIG. 7 illustrates a method of adaptively using the interview motion candidate IvMC based on the illumination compensation flag ic_flag according to an embodiment of the present invention.
FIG. 8 illustrates the priority of merge candidates based on the illumination compensation flag ic_flag according to an embodiment to which the present invention is applied.

A technique for compression-encoding or decoding multi-view video signal data takes into account spatial redundancy, temporal redundancy, and redundancy existing between views. In the case of a multi-view image, multi-view texture images captured at two or more viewpoints can be coded to realize a three-dimensional image. Depth data corresponding to the multi-view texture images may be further coded as needed. In coding the depth data, compression coding can likewise be performed in consideration of spatial redundancy, temporal redundancy, or inter-view redundancy. Depth data expresses distance information between a camera and the corresponding pixel. In this specification, depth data may be flexibly interpreted as depth-related information such as a depth value, depth information, a depth image, a depth picture, a depth sequence, or a depth bitstream. In this specification, coding may include both encoding and decoding, and may be flexibly interpreted within the technical idea and scope of the present invention.

FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.

Referring to FIG. 1, the video decoder may include a NAL parsing unit 100, an entropy decoding unit 200, an inverse quantization / inverse transform unit 300, an intra prediction unit 400, an in-loop filter unit 500, a decoded picture buffer unit 600, and an inter prediction unit 700.

The NAL parsing unit 100 may receive a bitstream including multi-view texture data. In addition, when depth data is required for coding the texture data, a bitstream including encoded depth data may be further received. The texture data and the depth data may be transmitted in one bitstream or in separate bitstreams. The NAL parsing unit 100 may perform parsing on a NAL-unit basis to decode the input bitstream. If the input bitstream is multi-view related data (e.g., 3-dimensional video), the input bitstream may further include camera parameters. The camera parameters may include intrinsic camera parameters and extrinsic camera parameters; the intrinsic camera parameters may include a focal length, an aspect ratio, a principal point, and the like, and the extrinsic camera parameters may include position information of the camera in the world coordinate system, and the like.

The entropy decoding unit 200 can extract quantized transform coefficients, coding information for predicting a texture picture, and the like through entropy decoding.

The inverse quantization / inverse transform unit 300 can obtain transform coefficients by applying a quantization parameter to the quantized transform coefficients, and can decode texture data or depth data by inverse-transforming the transform coefficients. Here, the decoded texture data or depth data may mean residual data resulting from prediction. In addition, the quantization parameter for a depth block can be set in consideration of the complexity of the texture data. For example, when the texture block corresponding to the depth block is a region of high complexity, a low quantization parameter may be set, and when it is a region of low complexity, a high quantization parameter may be set. The complexity of the texture block can be determined based on difference values between adjacent pixels in the reconstructed texture picture, as shown in Equation 1.

E = (1/N) * Σ(x,y) [ |C(x,y) - C(x-1,y)| + |C(x,y) - C(x+1,y)| ]        (Equation 1)

In Equation 1, E denotes the complexity of the texture data, C denotes the reconstructed texture data, and N denotes the number of pixels in the texture data region for which the complexity is to be calculated. Referring to Equation 1, the complexity of the texture data can be calculated using the difference value between the texture data at position (x, y) and the texture data at position (x-1, y), and the difference value between the texture data at position (x, y) and the texture data at position (x+1, y). The complexity can be calculated for the texture picture and for the texture block, respectively, and the quantization parameter can be derived using Equation 2 below.

[Equation 2]

Referring to Equation 2, the quantization parameter for the depth block can be determined based on the ratio of the complexity of the texture picture to the complexity of the texture block. α and β may be variable integers derived by the decoder, or may be integers predetermined in the decoder.
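
As an illustration of the complexity-based adjustment described above, the following Python sketch computes the horizontal-difference complexity of Equation 1 for a picture and for a block, and maps their ratio to a quantization parameter. The log2-based mapping, the default α and β values, and the clipping range are assumptions made for illustration only, not the exact form of Equation 2.

```python
import numpy as np

def texture_complexity(c):
    """Equation 1 style complexity: mean absolute difference between
    each pixel and its left/right horizontal neighbours."""
    c = c.astype(np.int32)
    left = np.abs(c[:, 1:-1] - c[:, :-2])
    right = np.abs(c[:, 1:-1] - c[:, 2:])
    return (left + right).mean()

def depth_block_qp(texture_picture, texture_block, base_qp, alpha=2, beta=0):
    """Hypothetical mapping from the picture/block complexity ratio to a QP:
    high-complexity blocks get a lower QP, low-complexity blocks a higher one."""
    e_pic = texture_complexity(texture_picture)
    e_blk = texture_complexity(texture_block) + 1e-6   # avoid divide-by-zero
    qp = base_qp + alpha * np.log2(e_pic / e_blk) + beta
    return int(np.clip(round(qp), 0, 51))              # HEVC QP range

# usage sketch
pic = np.random.randint(0, 256, (64, 64))
blk = pic[8:16, 8:16]
print(depth_block_qp(pic, blk, base_qp=32))
```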

The intra prediction unit 400 may perform intra-picture prediction using reconstructed texture data in the current texture picture. Intra-picture prediction can also be performed on the depth picture in the same manner as on the texture picture. For example, the coding information used for intra-picture prediction of the texture picture can be used equally for the depth picture. Here, the coding information used for intra-picture prediction may include the intra prediction mode and partition information of the intra prediction.

The in-loop filter unit 500 may apply an in-loop filter to each coded block to reduce block distortion. The filter can smooth the edges of a block to improve the quality of the decoded picture. The filtered texture or depth pictures may be output or stored in the decoded picture buffer unit 600 for use as reference pictures. On the other hand, when texture data and depth data are coded using the same in-loop filter, coding efficiency may deteriorate because the characteristics of texture data and depth data differ from each other. Thus, a separate in-loop filter for depth data may be defined. Hereinafter, a region-based adaptive loop filter and a trilateral loop filter will be described as in-loop filtering methods capable of efficiently coding depth data.

In the case of the region-based adaptive loop filter, it may be determined whether to apply the filter based on the variation of the depth block. Here, the variation of the depth block can be defined as the difference between the maximum pixel value and the minimum pixel value within the depth block. Whether to apply the filter can be decided by comparing the variation of the depth block with a predetermined threshold value. For example, if the variation of the depth block is greater than or equal to the predetermined threshold value, meaning that the difference between the maximum and minimum pixel values in the depth block is large, it can be determined to apply the region-based adaptive loop filter. Conversely, when the depth variation is smaller than the predetermined threshold value, it can be determined not to apply the filter. When the filter is applied according to the comparison result, the filtered pixel value of the depth block may be derived by applying a predetermined weight to neighboring pixel values. Here, the predetermined weight may be determined based on the positional difference between the currently filtered pixel and a neighboring pixel and/or the difference between the currently filtered pixel value and the neighboring pixel value. The neighboring pixel values may mean any of the pixel values included in the depth block other than the currently filtered pixel value.
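
A minimal Python sketch of the region-based decision and weighting described above; the specific weight function (inverse of the positional and pixel-value differences) and the threshold value are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def region_based_adaptive_loop_filter(depth_block, threshold=16):
    """Apply the filter only when the depth-block variation (max - min)
    is at least the threshold; otherwise return the block unchanged."""
    d = depth_block.astype(np.float64)
    if d.max() - d.min() < threshold:
        return depth_block                      # variation too small: no filtering
    h, w = d.shape
    out = d.copy()
    ys, xs = np.mgrid[0:h, 0:w]
    for y in range(h):
        for x in range(w):
            pos_dist = np.abs(ys - y) + np.abs(xs - x)      # positional difference
            val_dist = np.abs(d - d[y, x])                  # pixel-value difference
            weight = 1.0 / (1.0 + pos_dist + val_dist)      # assumed weight form
            weight[y, x] = 0.0                              # neighbours only
            out[y, x] = (weight * d).sum() / weight.sum()
    return out.round().astype(depth_block.dtype)

depth = np.random.randint(0, 255, (8, 8), dtype=np.uint8)
print(region_based_adaptive_loop_filter(depth))
```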

The trilateral loop filter according to the present invention is similar to the region-based adaptive loop filter, but differs in that it additionally considers texture data. Specifically, the trilateral loop filter extracts the depth data of neighboring pixels that satisfy the following three conditions.

Condition 1. |p - q| ≤ σ1

Condition 2. |D(p) - D(q)| ≤ σ2

Condition 3. |V(p) - V(q)| ≤ σ3

Condition 1 compares the positional difference between the current pixel p and a neighboring pixel q in the depth block with a predetermined parameter σ1, Condition 2 compares the difference between the depth data D(p) of the current pixel p and the depth data D(q) of the neighboring pixel q with a predetermined parameter σ2, and Condition 3 compares the difference between the texture data V(p) of the current pixel p and the texture data V(q) of the neighboring pixel q with a predetermined parameter σ3.

Neighboring pixels satisfying the above three conditions can be extracted, and the current pixel p can be filtered with the median value or the average value of their depth data.
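
The three conditions translate directly into a neighbour-selection rule. The sketch below filters each depth pixel with the median of the depth values of neighbours that pass all three checks; the window size, the Manhattan distance for condition 1, and the σ parameter values are illustrative assumptions.

```python
import numpy as np

def trilateral_loop_filter(depth, texture, sigma1=2, sigma2=10, sigma3=10):
    """For each pixel p, collect neighbours q with
    |p - q| <= sigma1, |D(p) - D(q)| <= sigma2, |V(p) - V(q)| <= sigma3,
    then replace D(p) by the median of the selected depth values."""
    d = depth.astype(np.int32)
    v = texture.astype(np.int32)
    h, w = d.shape
    out = d.copy()
    for y in range(h):
        for x in range(w):
            samples = []
            for dy in range(-sigma1, sigma1 + 1):
                for dx in range(-sigma1, sigma1 + 1):
                    qy, qx = y + dy, x + dx
                    if not (0 <= qy < h and 0 <= qx < w) or (dy == 0 and dx == 0):
                        continue
                    if abs(dy) + abs(dx) > sigma1:            # condition 1
                        continue
                    if abs(d[qy, qx] - d[y, x]) > sigma2:     # condition 2
                        continue
                    if abs(v[qy, qx] - v[y, x]) > sigma3:     # condition 3
                        continue
                    samples.append(d[qy, qx])
            if samples:
                out[y, x] = int(np.median(samples))
    return out.astype(depth.dtype)

depth = np.random.randint(0, 256, (8, 8))
texture = np.random.randint(0, 256, (8, 8))
print(trilateral_loop_filter(depth, texture))
```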

The decoded picture buffer unit 600 stores or releases previously coded texture pictures or depth pictures in order to perform inter-picture prediction. At this time, the frame_num and the picture order count (POC) of each picture can be used to store or release the picture in the decoded picture buffer unit 600. Further, since some of the previously coded pictures in depth coding are depth pictures at views different from that of the current depth picture, view identification information for identifying the view of a depth picture may be used in order to utilize such pictures as reference pictures. The decoded picture buffer unit 600 can manage reference pictures using a memory management control operation method and a sliding window method in order to perform inter-picture prediction more flexibly. This allows the memory for reference pictures and the memory for non-reference pictures to be managed as one memory and managed efficiently with a small amount of memory. In depth coding, depth pictures may be marked with a separate indication to distinguish them from texture pictures in the decoded picture buffer unit, and information for identifying each depth picture may be used in the marking process.

The inter prediction unit 700 may perform motion compensation of the current block using the reference pictures and the motion information stored in the decoded picture buffer unit 600. In this specification, motion information is understood in a broad sense to include a motion vector and reference index information. In addition, the inter prediction unit 700 may perform temporal inter prediction to carry out motion compensation. Temporal inter prediction may refer to inter prediction using motion information and a reference picture that belongs to the same view as the current block but is located in a different time period. Also, in the case of a multi-view image captured by a plurality of cameras, inter-view prediction may be performed in addition to temporal inter prediction. The motion information used for the inter-view prediction may include a disparity vector or an inter-view motion vector.

FIG. 2 illustrates a method of decoding a current block in merge mode according to an embodiment of the present invention.

Referring to FIG. 2, a merge candidate list related to a current block can be generated (S200).

The merge candidate list of the present invention may include at least one merge candidate available for decoding the current block in merge mode. Here, a spatial/temporal neighboring block of the current block can be used as a merge candidate. The spatial neighboring blocks may include at least one of a left neighboring block, an upper neighboring block, an upper-right neighboring block, a lower-left neighboring block, and an upper-left neighboring block of the current block. A temporal neighboring block is a block included in a collocated picture having a temporal order different from that of the current block, and may be defined as a block at the same position as the current block.

In addition, merge candidates based on the correlation between views or between texture and depth (hereinafter referred to as interview merge candidates) may be included in the merge candidate list. The interview merge candidates include a texture merge candidate, an interview motion candidate, an interview disparity candidate, a view synthesis prediction candidate (VSP candidate), and the like. The method of deriving the motion vector of each interview merge candidate and the method of constructing the merge candidate list from the merge candidates will be described later with reference to FIGS. 3 to 6.

Referring to FIG. 2, the motion vector of the current block may be derived based on the merge candidate list generated in step S200 and the merge index (merge_idx) (S210).

Specifically, the merge candidate corresponding to the merge index of the current block can be selected from the merge candidate list. Here, the merge index can be extracted from the bitstream as a syntax specifying one of a plurality of merge candidates included in the merge candidate list. That is, the merge index is information for specifying the merge candidate used to derive the motion vector of the current block.

The motion vector assigned to the selected merge candidate can be set as the motion vector of the current block.

The predicted value of the current block may be obtained using the motion vector derived in step S210 (S220).

Specifically, if the reference picture of the current block belongs to the same view as the current block, the current block can perform temporal inter prediction using the motion vector. On the other hand, if the reference picture of the current block belongs to a view different from that of the current block, the current block can perform inter-view inter prediction using the motion vector.

Whether the reference picture of the current block belongs to the same view as the current block can be determined by specifying a reference picture in the reference picture list using the reference index of the current block and checking whether the view index of the specified reference picture is equal to the view index of the current block.

Referring to FIG. 2, the current block may be restored by adding the predicted value of the current block obtained in step S220 and the residual value of the current block (S230).

Here, the residual value means the difference between the reconstructed value and the predicted value of the current block, and can be obtained by performing inverse quantization and/or inverse transform on the transform coefficients extracted from the bitstream.
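
The decoding flow of FIG. 2 (S200 to S230) can be summarised in a few lines of Python; the candidate representation and the prediction routine below are simplified stand-ins, not the actual decoder interfaces.

```python
from dataclasses import dataclass

@dataclass
class MergeCandidate:
    name: str
    motion_vector: tuple      # (x, y)
    ref_view_idx: int         # view index of the reference picture

def decode_block_merge_mode(merge_candidate_list, merge_idx, residual, predict):
    """S200: the list is already built; S210: pick the candidate addressed by
    merge_idx; S220: motion-compensated prediction; S230: add the residual."""
    cand = merge_candidate_list[merge_idx]                  # S210
    prediction = predict(cand.motion_vector, cand.ref_view_idx)   # S220
    return [p + r for p, r in zip(prediction, residual)]    # S230

# usage sketch with a dummy prediction returning a flat block
cands = [MergeCandidate("IvMC", (3, 0), 1), MergeCandidate("A1", (-1, 2), 0)]
recon = decode_block_merge_mode(cands, 0, residual=[1, -1, 0, 2],
                                predict=lambda mv, v: [128, 128, 128, 128])
print(recon)
```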

Hereinafter, a method of deriving the motion vectors of the interview merge candidates mentioned in FIG. 2 and a method of constructing the merge candidate list will be described.

Ⅰ. Method of deriving motion vectors of interview merge candidates

1. Texture Merge Candidate (T)

The texture data and the depth data of a video image represent images of the same time and the same view, and are therefore highly correlated. Thus, when the depth data is encoded/decoded using the same motion vector used for encoding/decoding the texture data, the encoding/decoding performance of the video image can be improved.

Specifically, if the current block is a depth block (DepthFlag = 1), the motion vector of the texture block corresponding to the depth block can be allocated to the texture merge candidate. Here, the texture block may be determined as a block at the same position as the depth block.

2. Interview Motion Candidate (IvMC)

The motion vector of the interview motion candidate may be derived based on inter-view motion prediction, which will be described with reference to FIG. 3.

Referring to FIG. 3, a current block belonging to the current view (view 0) can specify a reference block belonging to the reference view (view 1) using a disparity vector. For example, the reference block may be specified as the block at the position shifted by the disparity vector from the position in the reference view corresponding to the position of the current block. If the reference block has a temporal motion vector (i.e., if it is coded with temporal inter prediction), the temporal motion vector of the reference block may be assigned to the interview motion candidate.

In addition, the current block can perform inter-view motion prediction on a sub-block basis. In this case, the current block may be divided into sub-blocks (for example, 8x8 sub-blocks), the temporal motion vector of a reference block may be obtained for each sub-block, and these temporal motion vectors may be assigned to the interview motion candidate.
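
A sketch of the sub-block derivation described above: the current prediction block is split into 8x8 sub-blocks, the disparity vector locates the corresponding reference-view block for each sub-block, and its temporal motion vector (when it has one) is inherited. The reference-view motion field is modelled here as a simple dictionary; this is an illustrative data structure, not the decoder's actual one.

```python
def derive_ivmc_subblock_mvs(block_x, block_y, width, height, disparity,
                             ref_view_motion_field, sub=8):
    """Return a {(sub_x, sub_y): temporal_mv} map for the interview motion
    candidate, derived per 8x8 sub-block from the reference view."""
    mvs = {}
    for y in range(block_y, block_y + height, sub):
        for x in range(block_x, block_x + width, sub):
            # position in the reference view, shifted by the disparity vector
            ref_pos = (x + disparity[0], y + disparity[1])
            tmv = ref_view_motion_field.get(ref_pos)   # None if not temporally coded
            if tmv is not None:
                mvs[(x, y)] = tmv
    return mvs

# usage sketch: reference-view blocks at two positions carry temporal MVs
field = {(20, 4): (1, -2), (28, 4): (0, -1)}
print(derive_ivmc_subblock_mvs(16, 4, 16, 8, disparity=(4, 0),
                               ref_view_motion_field=field))
```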

Meanwhile, the disparity vector of the current block can be derived from the depth image corresponding to the current block, which will be described in detail with reference to FIG. 5. Further, the disparity vector may be derived from a neighboring block spatially adjacent to the current block, or from a temporal neighboring block located in a different time period from the current block. A method of deriving the disparity vector from the spatial/temporal neighboring blocks of the current block will be described with reference to FIG. 6.

3. Interview Motion Shift Candidate (IvMCShift)

The disparity vector of the above-described interview motion candidate (IvMC) can be shifted by a specific value, and a reference block belonging to the reference view can be specified using the shifted disparity vector. Specifically, the shifted disparity vector may be derived by shifting the disparity vector of the interview motion candidate (IvMC) in consideration of the width (nPbW) and height (nPbH) of the current block. For example, the shifted disparity vector can be derived by shifting the disparity vector of the interview motion candidate (IvMC) by (nPbW * 2, nPbH * 2).

If the reference block has a temporal motion vector, the temporal motion vector of the reference block may be assigned to the interview motion shift candidate.
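
A short sketch of the shift described above, using the (nPbW * 2, nPbH * 2) offset named in the text; the function name is illustrative.

```python
def ivmc_shift_disparity(disparity, n_pb_w, n_pb_h):
    """Shift the IvMC disparity vector by (nPbW * 2, nPbH * 2) to locate the
    reference block used for the interview motion shift candidate (IvMCShift)."""
    return (disparity[0] + n_pb_w * 2, disparity[1] + n_pb_h * 2)

print(ivmc_shift_disparity((4, 0), n_pb_w=16, n_pb_h=8))   # -> (36, 16)
```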

4. Interview Disparity Candidate (IvDC)

As described above, a disparity vector can be derived from the depth image corresponding to the current block or from a spatial/temporal neighboring block. A vector whose vertical component (y component) is set to 0 in the derived disparity vector can be assigned to the interview disparity candidate. For example, if the derived disparity vector of the current block is (mvDisp[0], mvDisp[1]), the vector (mvDisp[0], 0) can be assigned to the interview disparity candidate.

5. Interview Disparity Shift Candidate (IvDCShift)

Similarly, a disparity vector may be derived from the depth image corresponding to the current block or from a spatial/temporal neighboring block. A vector in which the horizontal component (x component) of the derived disparity vector is shifted by a pre-determined value can be assigned to the interview disparity shift candidate. For example, if the motion vector of the interview disparity candidate is (mvDisp[0], mvDisp[1]), a vector whose horizontal component mvDisp[0] is shifted by 4, that is, (mvDisp[0] + 4, mvDisp[1]), can be assigned to the interview disparity shift candidate.

Alternatively, a vector obtained by shifting the horizontal component (x component) of the derived disparity vector by a pre-determined value and setting the vertical component (y component) to 0 may be assigned to the interview disparity shift candidate. For example, if the motion vector of the interview disparity candidate is (mvDisp[0], mvDisp[1]), a vector whose horizontal component mvDisp[0] is shifted by 4 and whose vertical component mvDisp[1] is set to 0, that is, (mvDisp[0] + 4, 0), may be assigned to the interview disparity shift candidate.
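
The two disparity-based candidates reduce to simple component manipulations of the derived disparity vector (mvDisp); both variants of the shift candidate mentioned above are shown in this sketch.

```python
def interview_disparity_candidate(mv_disp):
    """IvDC: keep the horizontal component, force the vertical component to 0."""
    return (mv_disp[0], 0)

def interview_disparity_shift_candidate(mv_disp, shift=4, zero_vertical=True):
    """IvDCShift: shift the horizontal component by a pre-determined value
    (4 in the text); optionally also force the vertical component to 0."""
    return (mv_disp[0] + shift, 0 if zero_vertical else mv_disp[1])

mv_disp = (7, 3)
print(interview_disparity_candidate(mv_disp))                              # (7, 0)
print(interview_disparity_shift_candidate(mv_disp))                        # (11, 0)
print(interview_disparity_shift_candidate(mv_disp, zero_vertical=False))   # (11, 3)
```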

6. VSP Candidates

The motion vector of the VSP candidate may also be derived based on the disparity vector of the current block, which will be described with reference to FIG. 4.

Referring to FIG. 4, a disparity vector (first disparity vector) can be derived from the depth image or a spatial/temporal neighboring block of the current block (S400). The method of deriving the disparity vector will be described later with reference to FIGS. 5 and 6.

A depth block of the reference view can be specified using the disparity vector derived in step S400 (S410). Here, the depth block may be included in a reference depth picture. The reference depth picture and the reference texture picture belong to the same access unit, and the reference texture picture may correspond to an inter-view reference picture of the current block.

A modified disparity vector (second disparity vector) may be derived using at least one depth sample at a pre-defined position in the depth block specified in step S410 (S420). For example, depth samples located at the four corners of the depth block may be used. The second disparity vector may be derived from the maximum value of the depth samples located at the four corners, from the average value of the depth samples located at the four corners, or from the depth sample located at any one of the four corners.
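
A sketch of steps S400 to S420: the first disparity vector locates a depth block in the reference depth picture, and a second disparity vector is re-derived from its four corner depth samples. The maximum-of-corners rule is one of the alternatives listed above, and the depth-to-disparity conversion parameters follow the form of Equation 3 with assumed values.

```python
import numpy as np

def depth_to_disparity(depth_value, scale=2, offset=0, shift=6):
    """Convert a depth sample to a horizontal disparity (Equation 3 form)."""
    return (depth_value * scale + offset) >> shift

def derive_vsp_disparity(first_dv, ref_depth_picture, block_x, block_y, w, h):
    """S410: locate the depth block shifted by the first disparity vector;
    S420: refine using the maximum of its four corner depth samples."""
    x0 = block_x + first_dv[0]
    y0 = block_y + first_dv[1]
    depth_block = ref_depth_picture[y0:y0 + h, x0:x0 + w]
    corners = [depth_block[0, 0], depth_block[0, -1],
               depth_block[-1, 0], depth_block[-1, -1]]
    return (depth_to_disparity(int(max(corners))), 0)   # second disparity vector

ref_depth = np.random.randint(0, 256, (64, 64))
print(derive_vsp_disparity((4, 0), ref_depth, block_x=8, block_y=8, w=16, h=16))
```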

FIG. 5 illustrates a method of deriving a disparity vector of a current block using depth data of a depth image according to an embodiment of the present invention.

Referring to FIG. 5, position information of a depth block in a depth picture corresponding to the current block can be obtained based on position information of the current block (S500).

The position of the depth block may be determined in consideration of the spatial resolution between the depth picture and the current picture.

For example, if the depth picture and the current picture are coded with the same spatial resolution, the position of the depth block may be determined to be the same as that of the current block of the current picture. On the other hand, the current picture and the depth picture may be coded with different spatial resolutions, since depth information, which represents the distance between the camera and the object, has the characteristic that coding efficiency may not decrease significantly even if the spatial resolution is lowered. Thus, if the depth picture is coded at a lower spatial resolution than the current picture, the decoder may perform an upsampling process on the depth picture before acquiring the position information of the depth block. In addition, when the aspect ratio of the upsampled depth picture does not exactly match that of the current picture, offset information may additionally be considered in acquiring the position information of the current depth block in the upsampled depth picture. Here, the offset information may include at least one of top offset information, left offset information, right offset information, and bottom offset information. The top offset information may indicate the positional difference between at least one pixel located at the top of the upsampled depth picture and at least one pixel located at the top of the current picture. The left, right, and bottom offset information may be defined in the same manner, respectively.

Referring to FIG. 5, the depth data corresponding to the position information of the depth block may be obtained (S510).

When a plurality of pixels exist in the depth block, depth data corresponding to a corner pixel of the depth block may be used. Alternatively, depth data corresponding to the center pixel of the depth block may be used. Alternatively, one of the maximum value, the minimum value, and the mode among the plurality of depth data corresponding to the plurality of pixels may be selectively used, or the average of the plurality of depth data may be used.

Referring to FIG. 5, the disparity vector of the current block may be derived using the depth data obtained in step S510 (S520).

For example, the disparity vector of the current block can be derived as shown in Equation 3 below.

DV = (v * a + f) >> n        (Equation 3)

Referring to Equation 3, v denotes the depth data, a denotes a scaling factor, and f denotes an offset used to derive the disparity vector. The scaling factor a and the offset f may be signaled in a video parameter set or a slice header, or may be values pre-set in the decoder. n is a variable indicating the value of the bit shift, and may be variably determined according to the accuracy of the disparity vector.
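
Putting S500 to S520 together: locate the corresponding depth block (scaling the position when the depth picture has a lower resolution), pick a representative depth sample, and convert it with Equation 3. The choice of the maximum corner sample, the resolution-scaling handling, and the parameter values a, f, n below are illustrative assumptions.

```python
import numpy as np

def derive_disparity_from_depth(depth_picture, cur_x, cur_y, w, h,
                                depth_scale=1, a=2, f=0, n=6, mode="max"):
    """S500: map the current-block position into the depth picture;
    S510: select a representative depth sample; S520: apply Equation 3."""
    dx, dy = cur_x // depth_scale, cur_y // depth_scale          # S500
    block = depth_picture[dy:dy + h // depth_scale, dx:dx + w // depth_scale]
    corners = np.array([block[0, 0], block[0, -1], block[-1, 0], block[-1, -1]])
    if mode == "max":                                            # S510 alternatives
        v = int(corners.max())
    elif mode == "center":
        v = int(block[block.shape[0] // 2, block.shape[1] // 2])
    else:
        v = int(corners.mean())
    return ((v * a + f) >> n, 0)                                 # S520, Equation 3

depth_pic = np.random.randint(0, 256, (32, 32))
print(derive_disparity_from_depth(depth_pic, cur_x=16, cur_y=8, w=8, h=8))
```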

FIG. 6 is a diagram illustrating candidates of a spatial / temporal neighboring block of a current block according to an embodiment of the present invention.

Referring to FIG. 6(a), the spatial neighboring blocks may include at least one of the left neighboring block (A1), the upper neighboring block (B1), the lower-left neighboring block (A0), the upper-right neighboring block (B0), and the upper-left neighboring block (B2) of the current block.

Referring to FIG. 6(b), the temporal neighboring block may mean a block at the same position as the current block. Specifically, the temporal neighboring block is a block belonging to a picture located in a different time period from the current block, and may correspond to the block (BR) corresponding to the lower-right pixel of the current block, the block (CT) corresponding to the center pixel of the current block, or the block (TL) corresponding to the upper-left pixel of the current block.

The disparity vector of the current block may be derived from a disparity-compensated prediction block (hereinafter referred to as a DCP block) among the spatial/temporal neighboring blocks. Here, a DCP block may be a block encoded through inter-view texture prediction using a disparity vector. In other words, a DCP block performs inter-view prediction using the texture data of the reference block specified by its disparity vector. In this case, the disparity vector of the current block can be predicted or reconstructed using the disparity vector used for the inter-view texture prediction of the DCP block.

Alternatively, the disparity vector of the current block may be derived from a disparity-vector-based motion compensation prediction block (hereinafter referred to as a DV-MCP block) among the spatial neighboring blocks. Here, a DV-MCP block may mean a block coded through inter-view motion prediction using a disparity vector. In other words, a DV-MCP block performs temporal inter prediction using the temporal motion vector of the reference block specified by its disparity vector. In this case, the disparity vector of the current block may be predicted or reconstructed using the disparity vector that the DV-MCP block used to obtain the temporal motion vector of its reference block.

The spatial/temporal neighboring blocks of the current block may be searched, according to a predefined priority, for whether they correspond to a DCP block, and the disparity vector may be derived from the first DCP block found. As an example of the predefined priority, the search can be performed in the order of spatial neighboring blocks -> temporal neighboring blocks, and among the spatial neighboring blocks, whether each corresponds to a DCP block can be searched in the priority order of A1 -> B1 -> B0 -> A0 -> B2. However, this is merely one embodiment of the priority order, and the order can be determined differently within a range that is obvious to a person skilled in the art.

If none of the spatial/temporal neighboring blocks corresponds to a DCP block, the spatial neighboring blocks may additionally be searched for whether they correspond to a DV-MCP block, and the disparity vector may be derived from the first DV-MCP block found.
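
A sketch of the two-pass neighbour search described above, using the A1 -> B1 -> B0 -> A0 -> B2 spatial order followed by the temporal neighbours; the NeighborBlock class is a simplified stand-in for the decoder's block data.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class NeighborBlock:
    name: str
    dcp_disparity: Optional[Tuple[int, int]] = None      # set if DCP coded
    dvmcp_disparity: Optional[Tuple[int, int]] = None     # set if DV-MCP coded

def derive_disparity_from_neighbors(spatial, temporal):
    """First pass: return the disparity of the first DCP block found
    (spatial blocks before temporal ones). Second pass: fall back to the
    first DV-MCP block among the spatial neighbours."""
    for blk in list(spatial) + list(temporal):
        if blk.dcp_disparity is not None:
            return blk.dcp_disparity
    for blk in spatial:
        if blk.dvmcp_disparity is not None:
            return blk.dvmcp_disparity
    return None   # no disparity vector could be derived

spatial = [NeighborBlock("A1"), NeighborBlock("B1", dvmcp_disparity=(5, 0)),
           NeighborBlock("B0"), NeighborBlock("A0"), NeighborBlock("B2")]
temporal = [NeighborBlock("BR", dcp_disparity=(3, 0)), NeighborBlock("CT")]
print(derive_disparity_from_neighbors(spatial, temporal))   # (3, 0) from the DCP pass
```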

Ⅱ. Method of constructing the merge candidate list

The maximum number of merge candidates (MaxNumMergeCand) constituting the merge candidate list can be variably determined. However, the maximum number of merge candidates (MaxNumMergeCand) may be limited to a pre-set range (e.g., 1 to 6). Coding performance may be improved by adaptively adjusting the maximum number of merge candidates (MaxNumMergeCand) for each slice.

For example, the maximum number of merge candidates (MaxNumMergeCand) can be derived as shown in Equation 4 below.

MaxNumMergeCand = 5 - five_minus_max_num_merge_cand + NumExtraMergeCand        (Equation 4)

In Equation 4, five_minus_max_num_merge_cand is a slice segment level syntax element, and may mean the difference between the maximum number of merge candidates excluding interview merge candidates (for example, 5) and the maximum number of merge candidates per slice excluding the number of interview merge candidates. The variable NumExtraMergeCand can be derived as shown in Equation 5 below.

NumExtraMergeCand = iv_mv_pred_flag[nuh_layer_id] || mpi_flag[nuh_layer_id] || ViewSynthesisPredFlag        (Equation 5)

In Equation (5), the variable NumExtraMergeCand may be derived based on iv_mv_pred_flag [nuh_layer_id], mpi_flag [nuh_layer_id], or ViewSynthesisPredFlag.

Here, iv_mv_pred_flag is a syntax element indicating whether inter-view motion prediction is performed for the current view. For example, when iv_mv_pred_flag = 1, inter-view motion prediction is performed; otherwise, it is not. Therefore, when iv_mv_pred_flag = 1, the interview motion candidate (IvMC) based on inter-view motion prediction can be used, and the variable NumExtraMergeCand can be set to 1.

mpi_flag is a syntax element indicating whether motion parameter inheritance is performed. For example, in the process of decoding a depth block, using the motion vector of the texture block corresponding to the depth block, or deriving a motion vector from a reference block at a neighboring view, is referred to as motion parameter inheritance. Therefore, when motion parameter inheritance is performed according to mpi_flag, the above-described texture merge candidate or interview motion candidate (IvMC) can be used as a merge candidate of the current block, and the variable NumExtraMergeCand can be set to 1.

ViewSynthesisPredFlag is a flag indicating whether the VSP candidate is used. Thus, if the value of ViewSynthesisPredFlag is 1, the VSP candidate may be added to the merge candidate list of the current block, and the variable NumExtraMergeCand may be set to 1.
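
Equations 4 and 5 can be checked with a few lines of Python; the OR-combination of the three flags follows the per-flag description above, where each enabled tool sets NumExtraMergeCand to 1.

```python
def num_extra_merge_cand(iv_mv_pred_flag, mpi_flag, view_synthesis_pred_flag):
    """Equation 5: one extra candidate slot when any of the three tools is enabled."""
    return 1 if (iv_mv_pred_flag or mpi_flag or view_synthesis_pred_flag) else 0

def max_num_merge_cand(five_minus_max_num_merge_cand, extra):
    """Equation 4: MaxNumMergeCand = 5 - five_minus_max_num_merge_cand + NumExtraMergeCand."""
    return 5 - five_minus_max_num_merge_cand + extra

extra = num_extra_merge_cand(iv_mv_pred_flag=1, mpi_flag=0, view_synthesis_pred_flag=0)
print(max_num_merge_cand(five_minus_max_num_merge_cand=0, extra=extra))   # 6
```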

The above merge candidates, that is, the spatial/temporal neighboring blocks of the current block and the interview merge candidates, are included in the merge candidate list of the current block, but only up to the derived maximum number of merge candidates (MaxNumMergeCand).

To this end, priority (or arrangement order) needs to be defined among the merge candidates added to the merge candidate list.

For example, the merge candidates may be added to the merge candidate list in the priority order of the interview motion candidate (IvMC), the left neighboring block (A1), the upper neighboring block (B1), the upper-right neighboring block (B0), the interview disparity candidate (IvDC), the VSP candidate, the lower-left neighboring block (A0), the upper-left neighboring block (B2), the interview motion shift candidate (IvMCShift), and the interview disparity shift candidate (IvDCShift).

Alternatively, the merge candidates may be added to the merge candidate list in the priority order of the interview motion candidate (IvMC), the left neighboring block (A1), the upper neighboring block (B1), the VSP candidate, the upper-right neighboring block (B0), the interview disparity candidate (IvDC), the lower-left neighboring block (A0), the upper-left neighboring block (B2), the interview motion shift candidate (IvMCShift), and the interview disparity shift candidate (IvDCShift). It goes without saying that the above-described priorities may be changed within a range that is obvious to a person skilled in the art.
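
The list construction then reduces to walking the chosen priority order and keeping the available candidates up to MaxNumMergeCand, as in the sketch below; modelling availability as a dictionary is an illustrative simplification.

```python
def build_merge_candidate_list(priority_order, available, max_num_merge_cand):
    """Walk the priority order and append each available candidate until the
    list reaches MaxNumMergeCand entries."""
    merge_list = []
    for name in priority_order:
        if len(merge_list) >= max_num_merge_cand:
            break
        if available.get(name, False):
            merge_list.append(name)
    return merge_list

priority = ["IvMC", "A1", "B1", "B0", "IvDC", "VSP",
            "A0", "B2", "IvMCShift", "IvDCShift"]
available = {"IvMC": True, "A1": True, "B1": True, "B0": False,
             "IvDC": True, "VSP": True, "A0": True, "B2": True,
             "IvMCShift": True, "IvDCShift": True}
print(build_merge_candidate_list(priority, available, max_num_merge_cand=6))
```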

FIG. 7 illustrates a method of adaptively using the interview motion candidate IvMC based on the illumination compensation flag ic_flag according to an embodiment of the present invention.

As shown in FIG. 6, when the interview motion candidate (IvMC) is included in the merge candidate list, the interview motion candidate (IvMC) can be arranged in the merge candidate list with the highest priority among the merge candidates. For example, when an index within an integer range greater than or equal to 0 is assigned to each of the merge candidates constituting the merge candidate list, the interview motion candidate (IvMC) may be assigned the index 0 in the merge candidate list.

If the merge index (merge_idx) value of the current block is 0, the current block is likely to be decoded in merge mode using the interview motion candidate (IvMC). In addition, the fact that the current block uses the interview motion candidate (IvMC) means that the illumination difference between the current view containing the current block and the reference view is likely to be small. Therefore, it may be efficient to encode the illumination compensation flag (ic_flag) in consideration of the merge index (merge_idx) value of the current block.

Specifically, referring to FIG. 7, an illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) and a merge index (merge_idx) may be obtained from the bitstream (S700).

The illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) may specify whether the illumination compensation flag (ic_flag) exists (is coded) for the current block whose merge index (merge_idx) is 0. Here, the current block may be a coding block coded in merge mode (merge_flag = 1) and having a partition mode of 2Nx2N (i.e., a 2Nx2N prediction block).

Specifically, when the value of the illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) is 1, the illumination compensation flag (ic_flag) does not exist for the current block whose merge index (merge_idx) is 0, and in this case the value of the illumination compensation flag (ic_flag) may be set to 0.

On the other hand, when the value of the illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) is 0, the illumination compensation flag (ic_flag) can be encoded for the current block whose merge index (merge_idx) is 0.

The merge index (merge_idx) is information for specifying the merge candidate used to derive the motion vector of the current block, as illustrated in FIG.

Referring to FIG. 7, it can be checked whether the value of the illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) obtained in step S700 is 1 and the value of the merge index (merge_idx) is 0 (S710).

If the value of the illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) is 1 and the value of the merge index (merge_idx) is 0, the value of the illumination compensation flag (ic_flag) for the current block may be set to 0 (S720).

On the other hand, if the value of the illumination compensation unavailability flag (slice_ic_disable_merge_zero_idx_flag) is 0 or the value of the merge index (merge_idx) is not 0, the illumination compensation flag (ic_flag) for the current block can be obtained from the bitstream (S730).

Here, the illumination compensation flag may mean information indicating whether illumination compensation is performed on the current block (for example, a coding unit or a prediction unit). Illumination compensation in the present invention means compensating for the illumination difference between views. If there is an illumination difference between views, it may be more efficient to use the interview disparity candidate (IvDC) than to use the interview motion candidate (IvMC), which uses the temporal motion vector of a neighboring view. Thus, the interview motion candidate (IvMC) may be selectively used based on the illumination compensation flag.

Referring to FIG. 7, it can be checked whether the value of the illumination compensation flag for the current block is 1 (S740).

If the value of the illumination compensation flag is not 1 (i.e., ic_flag = 0), the interview motion candidate (IvMC) can be derived (S750). The meaning of the interview motion candidate (IvMC) and the method of deriving its motion vector are as described above with reference to FIG. 3. A merge candidate list including the interview motion candidate (IvMC) derived in step S750 can be generated (S770). That is, the interview motion candidate (IvMC) can be added to the merge candidate list according to the above-described priority order.

On the other hand, if the value of the illumination compensation flag is 1, the process of deriving the interview motion candidate IvMC may be skipped (S760).

In this case, by setting the value of the flag (availableFlagIvMC) indicating whether the interview motion candidate (IvMC) is available as a merge candidate of the current block to 0, the interview motion candidate (IvMC) can be restricted from being added to the merge candidate list. In addition, the interview motion shift candidate (IvMCShift) associated with the interview motion candidate (IvMC) may also be excluded from the merge candidate list.

The merge candidate list may be generated using the merge candidates other than the interview motion candidate (IvMC) and/or the interview motion shift candidate (IvMCShift), according to the priority order described above (S770).

For example, within the maximum number of merge candidates (MaxNumMergeCand), the merge candidates may be added to the merge candidate list in the priority order of the left neighboring block (A1), the upper neighboring block (B1), the interview disparity candidate (IvDC), the upper-right neighboring block (B0), the VSP candidate, the lower-left neighboring block (A0), the upper-left neighboring block (B2), and the interview disparity shift candidate (IvDCShift).

Alternatively, the merge candidates may be added to the merge candidate list in the priority order of the left neighboring block (A1), the upper neighboring block (B1), the interview disparity candidate (IvDC), the upper-right neighboring block (B0), the VSP candidate, the interview disparity shift candidate (IvDCShift), the lower-left neighboring block (A0), and the upper-left neighboring block (B2).

Alternatively, the merge candidates may be added to the merge candidate list in the priority order of the left neighboring block (A1), the upper neighboring block (B1), the upper-right neighboring block (B0), the interview disparity candidate (IvDC), the VSP candidate, the lower-left neighboring block (A0), the upper-left neighboring block (B2), and the interview disparity shift candidate (IvDCShift).
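
The flow of FIG. 7 (S700 to S770) can be sketched as follows; the bitstream reader is a stand-in callable, and the availability handling mirrors the availableFlagIvMC behaviour described above. The priority order used here is the first example given earlier.

```python
def decode_ic_flag_and_build_list(read_flag, slice_ic_disable_merge_zero_idx_flag,
                                  merge_idx, base_priority, available,
                                  max_num_merge_cand):
    """S700-S730: derive ic_flag; S740-S770: include or skip IvMC/IvMCShift."""
    if slice_ic_disable_merge_zero_idx_flag == 1 and merge_idx == 0:
        ic_flag = 0                      # S720: not signalled, inferred as 0
    else:
        ic_flag = read_flag("ic_flag")   # S730: parsed from the bitstream
    avail = dict(available)
    if ic_flag == 1:                     # S760: skip IvMC (and IvMCShift)
        avail["IvMC"] = False
        avail["IvMCShift"] = False
    merge_list = []                      # S770: list built in priority order
    for name in base_priority:
        if len(merge_list) >= max_num_merge_cand:
            break
        if avail.get(name, False):
            merge_list.append(name)
    return ic_flag, merge_list

priority = ["IvMC", "A1", "B1", "B0", "IvDC", "VSP",
            "A0", "B2", "IvMCShift", "IvDCShift"]
available = {name: True for name in priority}
print(decode_ic_flag_and_build_list(lambda name: 1, 0, 2,
                                    priority, available, max_num_merge_cand=6))
```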

FIG. 8 illustrates the priority of merge candidates based on the illumination compensation flag (ic_flag) according to an embodiment to which the present invention is applied.

As described above with reference to FIG. 7, it is effective to use the interview disparity candidate (IvDC) rather than the interview motion candidate (IvMC) when an illumination difference occurs between views. Therefore, coding performance can be improved by changing the priority so that the interview disparity candidate (IvDC) has a higher priority than the interview motion candidate (IvMC). It is also possible to change the priority order so that the interview disparity shift candidate (IvDCShift) has priority over the interview motion shift candidate (IvMCShift).

Referring to FIG. 8, when the value of the illumination compensation flag is 1, the merge candidates may be added to the merge candidate list in the priority order of the interview disparity candidate (IvDC), the left neighboring block (A1), the upper neighboring block (B1), the interview motion candidate (IvMC), the upper-right neighboring block (B0), the VSP candidate, the interview disparity shift candidate (IvDCShift), the interview motion shift candidate (IvMCShift), the lower-left neighboring block (A0), and the upper-left neighboring block (B2). In the priority of the merge candidates shown in FIG. 8, a smaller value indicates a higher priority and a larger value indicates a lower priority.

On the other hand, if the value of the illumination compensation flag is 0, the merge candidates may be added to the merge candidate list in the priority order of the interview motion candidate (IvMC), the left neighboring block (A1), the upper neighboring block (B1), the interview disparity candidate (IvDC), the upper-right neighboring block (B0), the VSP candidate, the interview motion shift candidate (IvMCShift), the interview disparity shift candidate (IvDCShift), the lower-left neighboring block (A0), and the upper-left neighboring block (B2).
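
The two orderings of FIG. 8 can be expressed as a simple selection on ic_flag, as in this sketch.

```python
def merge_candidate_priority(ic_flag):
    """Return the merge-candidate priority order of FIG. 8 for the given ic_flag."""
    if ic_flag == 1:   # illumination compensation on: disparity candidates first
        return ["IvDC", "A1", "B1", "IvMC", "B0", "VSP",
                "IvDCShift", "IvMCShift", "A0", "B2"]
    return ["IvMC", "A1", "B1", "IvDC", "B0", "VSP",
            "IvMCShift", "IvDCShift", "A0", "B2"]

print(merge_candidate_priority(1)[0])   # 'IvDC' has the highest priority
```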

Claims (15)

Generating a merge candidate list for the current block; Wherein the merge candidate list comprises at least one merge candidate and the merge candidate includes at least one of a spatial neighbor block, a temporal neighbor block, or an interview motion candidate (IvMC)
Deriving a motion vector of the current block based on a merge index for the current block obtained from the bitstream; Herein, the merge index specifies a merge candidate used to decode the current block into merge mode,
Obtaining a predicted value of the current block using the derived motion vector; And
And restoring the current block by adding the obtained predicted value and the residual value of the current block,
Wherein the interview motion candidate (IvMC) is limitedly included in the merge candidate list based on an illumination compensation flag (ic_flag) indicating whether or not illumination compensation is performed on the current block.
The method according to claim 1,
The interview motion candidate (IvMC) has a temporal motion vector of a reference block specified by a disparity vector of the current block,
Wherein the reference block belongs to a reference view of the current block.
3. The method of claim 2,
Obtaining an illumination compensation unavailable flag from the bitstream; Wherein the illumination compensation unavailability flag specifies whether or not the illumination compensation flag is encoded for the current block whose value of the merge index is 0,
And obtaining the value of the illumination compensation flag based on the illumination compensation non-availability flag and the merge index.
The method of claim 3,
If the illumination compensation is not performed on the current block according to the value of the illumination compensation flag, the interview motion candidate IvMC is included in the merge candidate list,
Wherein the interview motion candidate (IvMC), the spatial neighbor block, and the temporal neighbor block are arranged in the merge candidate list in that order of priority.
An inter prediction unit configured to generate a merge candidate list for a current block, derive a motion vector of the current block based on a merge index for the current block obtained from a bitstream, and obtain a predicted value of the current block using the derived motion vector; and
A restoration unit configured to restore the current block by adding the obtained predicted value and a residual value of the current block,
wherein the merge candidate list comprises at least one merge candidate, the merge candidate includes at least one of a spatial neighbor block, a temporal neighbor block, or an inter-view motion candidate (IvMC), the inter-view motion candidate (IvMC) is selectively included in the merge candidate list based on an illumination compensation flag (ic_flag) indicating whether illumination compensation is performed on the current block, and the merge index specifies a merge candidate used to decode the current block in merge mode.
6. The apparatus of claim 5,
wherein the inter-view motion candidate (IvMC) has a temporal motion vector of a reference block specified by a disparity vector of the current block, and
wherein the reference block belongs to a reference view of the current block.
The apparatus according to claim 6, further comprising:
An entropy decoding unit configured to obtain an illumination compensation unavailability flag from the bitstream, wherein the illumination compensation unavailability flag specifies whether the illumination compensation flag is encoded for a current block whose merge index value is 0,
wherein the inter prediction unit obtains the value of the illumination compensation flag based on the illumination compensation unavailability flag and the merge index.
8. The apparatus of claim 7, wherein the inter prediction unit
includes the inter-view motion candidate (IvMC) in the merge candidate list when illumination compensation is not performed on the current block according to the value of the illumination compensation flag, and
arranges the inter-view motion candidate (IvMC), the spatial neighbor block, and the temporal neighbor block in the merge candidate list in order of priority.
Generating a merge candidate list for a current block, wherein the merge candidate list comprises at least one merge candidate, and the merge candidate includes at least one of a spatial neighbor block, a temporal neighbor block, or an inter-view motion candidate (IvMC);
Deriving a motion vector of the current block based on a merge index for the current block, wherein the merge index specifies a merge candidate used to encode the current block in merge mode;
Obtaining a predicted value of the current block using the derived motion vector; and
Restoring the current block by adding the obtained predicted value and a residual value of the current block,
wherein the inter-view motion candidate (IvMC) is selectively included in the merge candidate list based on an illumination compensation flag (ic_flag) indicating whether illumination compensation is performed on the current block.
10. The method of claim 9,
wherein the inter-view motion candidate (IvMC) has a temporal motion vector of a reference block specified by a disparity vector of the current block, and
wherein the reference block belongs to a reference view of the current block.
11. The method of claim 10, further comprising:
Determining a value of an illumination compensation unavailability flag that specifies whether an illumination compensation flag for a current block whose merge index value is 0 is encoded; and
Determining the value of the illumination compensation flag based on the value of the illumination compensation unavailability flag and the value of the merge index.
12. The method of claim 11,
wherein, when illumination compensation is not performed on the current block according to the value of the illumination compensation flag, the inter-view motion candidate (IvMC) is included in the merge candidate list, and
wherein the inter-view motion candidate (IvMC), the spatial neighbor block, and the temporal neighbor block are arranged in the merge candidate list in order of priority.
An inter prediction unit configured to generate a merge candidate list for a current block, derive a motion vector of the current block based on a merge index for the current block, and obtain a predicted value of the current block using the derived motion vector; and
A restoration unit configured to restore the current block by adding the obtained predicted value and a residual value of the current block,
wherein the merge candidate list comprises at least one merge candidate, the merge candidate includes at least one of a spatial neighbor block, a temporal neighbor block, or an inter-view motion candidate (IvMC), and the inter-view motion candidate (IvMC) is selectively included in the merge candidate list based on an illumination compensation flag (ic_flag) indicating whether illumination compensation is performed on the current block,
wherein the merge index specifies a merge candidate used to encode the current block in merge mode.
14. The apparatus of claim 13,
wherein the inter-view motion candidate (IvMC) has a temporal motion vector of a reference block specified by a disparity vector of the current block, and
wherein the reference block belongs to a reference view of the current block.
15. The apparatus of claim 14, wherein the inter prediction unit
determines a value of an illumination compensation unavailability flag that specifies whether an illumination compensation flag for a current block whose merge index value is 0 is encoded, and determines the value of the illumination compensation flag based on the value of the illumination compensation unavailability flag and the value of the merge index.
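Read together, claims 1 to 4 (and their encoder-side counterparts in claims 9 to 12) make the inter-view motion candidate conditional on the illumination compensation flag, which itself is only signalled for a block whose merge index is 0 when the unavailability flag permits. The sketch below is a hypothetical Python illustration of that conditioning, not the claimed decoder; the function names, the inferred default value of 0, and the tuple-valued motion vectors are assumptions made for the example.

```python
from typing import List, Optional, Tuple

MotionVector = Tuple[int, int]


def derive_ic_flag(merge_idx: int,
                   ic_unavailability_flag: int,
                   signalled_ic_flag: Optional[int]) -> int:
    """Claims 3/11: the illumination compensation flag is only coded for a block
    whose merge index is 0 and whose unavailability flag is 0; otherwise it is
    inferred (assumed to be 0 in this sketch)."""
    if merge_idx == 0 and ic_unavailability_flag == 0:
        return signalled_ic_flag if signalled_ic_flag is not None else 0
    return 0


def assemble_merge_candidates(ic_flag: int,
                              ivmc: Optional[MotionVector],
                              spatial: List[MotionVector],
                              temporal: List[MotionVector]) -> List[MotionVector]:
    """Claims 1/4: the inter-view motion candidate (IvMC) is included only when
    illumination compensation is not performed; candidates are kept in the
    priority order IvMC, spatial neighbors, temporal neighbors."""
    candidates: List[MotionVector] = []
    if ic_flag == 0 and ivmc is not None:
        candidates.append(ivmc)
    candidates.extend(spatial)
    candidates.extend(temporal)
    return candidates


# Example: merge index 0 with illumination compensation signalled as "on"
# excludes the IvMC, so a spatial candidate occupies index 0 of the list.
ic = derive_ic_flag(merge_idx=0, ic_unavailability_flag=0, signalled_ic_flag=1)
merge_list = assemble_merge_candidates(ic, ivmc=(4, -2),
                                       spatial=[(1, 0), (0, 1)], temporal=[(2, 2)])
```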
KR1020150037360A 2014-03-21 2015-03-18 A method and an apparatus for processing a multi-view video signal KR20150110357A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20140033102 2014-03-21
KR1020140033102 2014-03-21

Publications (1)

Publication Number Publication Date
KR20150110357A true KR20150110357A (en) 2015-10-02

Family

ID=54144946

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150037360A KR20150110357A (en) 2014-03-21 2015-03-18 A method and an apparatus for processing a multi-view video signal

Country Status (2)

Country Link
KR (1) KR20150110357A (en)
WO (1) WO2015142057A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363636B (en) * 2016-07-05 2024-06-04 株式会社Kt Method and apparatus for processing video signal
CA3100980A1 (en) * 2018-06-08 2019-12-12 Kt Corporation Method and apparatus for processing a video signal
CN112204982B (en) * 2018-06-29 2024-09-17 株式会社Kt Method and apparatus for processing video signal
CN117241040A (en) * 2018-11-08 2023-12-15 Oppo广东移动通信有限公司 Image signal encoding/decoding method and apparatus therefor
EP3900354A4 (en) * 2018-12-21 2022-09-14 Sharp Kabushiki Kaisha Systems and methods for performing inter prediction in video coding
WO2020139040A1 (en) * 2018-12-27 2020-07-02 인텔렉추얼디스커버리 주식회사 Image encoding/decoding method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100232507A1 (en) * 2006-03-22 2010-09-16 Suk-Hee Cho Method and apparatus for encoding and decoding the compensated illumination change
WO2009089032A2 (en) * 2008-01-10 2009-07-16 Thomson Licensing Methods and apparatus for illumination compensation of intra-predicted video
US20130182768A1 (en) * 2010-09-30 2013-07-18 Korea Advanced Institute Of Science And Technology Method and apparatus for encoding / decoding video using error compensation
KR101423648B1 (en) * 2011-09-09 2014-07-28 주식회사 케이티 Methods of decision of candidate block on inter prediction and appratuses using the same
WO2013069933A1 (en) * 2011-11-07 2013-05-16 엘지전자 주식회사 Image encoding/decoding method and device therefor

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111919447A (en) * 2018-03-14 2020-11-10 韩国电子通信研究院 Method and apparatus for encoding/decoding image and recording medium storing bitstream
US11632546B2 (en) 2018-07-18 2023-04-18 Electronics And Telecommunications Research Institute Method and device for effective video encoding/decoding via local lighting compensation
WO2020256455A1 (en) * 2019-06-19 2020-12-24 엘지전자 주식회사 Image decoding method for deriving prediction sample on basis of default merge mode, and device therefor
WO2020256453A1 (en) * 2019-06-19 2020-12-24 엘지전자 주식회사 Image decoding method comprising generating prediction samples by applying determined prediction mode, and device therefor
WO2020256454A1 (en) * 2019-06-19 2020-12-24 엘지전자 주식회사 Image decoding method for performing inter-prediction when prediction mode for current block ultimately cannot be selected, and device for same
US11632568B2 (en) 2019-06-19 2023-04-18 Lg Electronics Inc. Image decoding method for performing inter-prediction when prediction mode for current block ultimately cannot be selected, and device for same
US11800112B2 (en) 2019-06-19 2023-10-24 Lg Electronics Inc. Image decoding method comprising generating prediction samples by applying determined prediction mode, and device therefor
US12096022B2 (en) 2019-06-19 2024-09-17 Lg Electronics Inc. Image decoding method for performing inter-prediction when prediction mode for current block ultimately cannot be selected, and device for same
WO2023043226A1 (en) * 2021-09-15 2023-03-23 주식회사 케이티 Video signal encoding/decoding method, and recording medium having bitstream stored thereon

Also Published As

Publication number Publication date
WO2015142057A1 (en) 2015-09-24

Similar Documents

Publication Publication Date Title
KR20150109282A (en) A method and an apparatus for processing a multi-view video signal
JP7248741B2 (en) Efficient Multiview Coding with Depth Map Estimation and Update
KR20150110357A (en) A method and an apparatus for processing a multi-view video signal
CN105379282B (en) The method and apparatus of advanced residual prediction (ARP) for texture decoding
KR101370919B1 (en) A method and apparatus for processing a signal
JP5646994B2 (en) Method and apparatus for motion skip mode using regional disparity vectors in multi-view coded video
TW201743619A (en) Confusion of multiple filters in adaptive loop filtering in video coding
CN113491124A (en) Inter-frame prediction method and device based on DMVR (digital video VR)
CN105122812A (en) Advanced merge mode for three-dimensional (3d) video coding
EP3737091A1 (en) Method for processing image on basis of inter-prediction mode and device therefor
TW201340724A (en) Disparity vector prediction in video coding
CN112204964B (en) Image processing method and device based on inter-frame prediction mode
KR20150020175A (en) Method and apparatus for processing video signal
KR20150037847A (en) Method and device for processing video signal
KR20160004947A (en) A method and an apparatus for processing a multi-view video signal
KR20160001647A (en) A method and an apparatus for processing a multi-view video signal
KR20160004946A (en) A method and an apparatus for processing a multi-view video signal
KR20150136017A (en) A method and an apparatus for processing a multi-view video signal
KR20150105231A (en) A method and an apparatus for processing a multi-view video signal
KR20150139450A (en) A method and an apparatus for processing a multi-view video signal
KR20150146417A (en) A method and an apparatus for processing a multi-view video signal
KR20150136018A (en) A method and an apparatus for processing a multi-view video signal
KR20160003573A (en) A method and an apparatus for processing a multi-view video signal