WO2014168411A1 - Method and apparatus for processing a video signal - Google Patents
- Publication number
- WO2014168411A1 PCT/KR2014/003078
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- residual
- value
- current block
- prediction
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
Definitions
- the present invention relates to a method and apparatus for coding a video signal.
- Compression coding refers to a series of signal processing techniques that transmit digitized information through a communication line or store the data in a form suitable for a storage medium.
- the object of compression encoding includes objects such as voice, video, text, and the like.
- a technique of performing compression encoding on an image is called video image compression.
- a general feature of a multiview video image is that it has spatial redundancy, temporal redundancy and inter-view redundancy.
- An object of the present invention is to improve the coding efficiency of video signals, especially depth data.
- the present invention obtains a depth prediction value of a current block, reconstructs a depth residual for each sample of the current block according to an SDC mode indicator, and restores the depth value of the current block using the depth prediction value and the reconstructed depth residual.
- the SDC mode indicator means a flag indicating whether the current block is coded in the SDC mode, and the SDC mode means a method of coding the depth residuals of a plurality of samples in the current block as one depth residual.
- the depth residual of the current block may be reconstructed using the residual coding information.
- the residual coding information according to the present invention includes the absolute value of the depth residual and the sign information of the depth residual.
- Depth residual means a difference between the average value of the depth value of the current block and the average value of the depth prediction value of the current block.
- alternatively, the depth residual means the average value of the per-sample depth residuals, where the depth residual of the i-th sample is derived from the difference between the depth value of the i-th sample of the current block and the depth prediction value of the i-th sample.
- when the SDC mode indicator according to the present invention indicates that the current block is coded in the SDC mode, the depth residual is restored using a depth lookup table.
- specifically, a difference index is derived using the absolute value of the depth residual and the sign information of the depth residual, a depth prediction average value of the current block is obtained, and a prediction index is obtained using the depth prediction average value and the depth lookup table.
- a table depth value corresponding to an index derived from the sum of the prediction index and the difference index is then obtained from the depth lookup table, and the depth residual is restored as the difference between the obtained table depth value and the depth prediction average value.
- the prediction index according to the present invention is set to a table index assigned to a table depth value that minimizes the difference between the depth prediction average value and the table depth value in the depth lookup table.
- the coding efficiency of the depth data can be improved by adaptively using the SDC mode using the SDC mode indicator.
- the depth residual for all samples in the current block may be coded as one depth residual, and the inverse quantization and inverse transform processes are skipped.
- the coding efficiency of the depth residual can be improved.
- an error according to the round operation can be reduced by performing an average operation after the difference operation between the depth value of the current block and the depth prediction value of the current block.
- the number of bits required for coding depth data can be reduced.
- FIG. 1 is a schematic block diagram of a video decoder 100 according to an embodiment to which the present invention is applied.
- FIG. 2 is a block diagram illustrating a broadcast receiver to which a video decoder is applied according to an embodiment to which the present invention is applied.
- FIG. 3 is a flowchart illustrating a process of restoring a depth value of a current block according to an embodiment to which the present invention is applied.
- FIG. 4 illustrates a method of encoding residual coding information when a depth lookup table is not used, as an embodiment to which the present invention is applied.
- FIG. 5 illustrates a method of obtaining a depth residual of a current block by using residual coding information when the depth lookup table is not used as an embodiment to which the present invention is applied.
- FIG. 6 illustrates a method of encoding residual coding information when using a depth lookup table according to an embodiment to which the present invention is applied.
- FIG. 7 illustrates a method of restoring a depth residual using residual coding information when using a depth lookup table according to an embodiment to which the present invention is applied.
- FIG. 1 is a schematic block diagram of a video decoder 100 according to an embodiment to which the present invention is applied.
- the video decoder 100 may include a parser 110, a residual reconstructor 120, an intra predictor 130, an in-loop filter 140, a decoded picture buffer 150, and an inter prediction unit 160.
- the parser 110 may receive a bitstream including multi-view texture data.
- the bitstream including the depth data may be further received.
- the input texture data and the depth data may be transmitted in one bitstream or may be transmitted in separate bitstreams.
- the bitstream may further include a camera parameter.
- camera parameters may include intrinsic camera parameters and extrinsic camera parameters; the intrinsic camera parameters may include the focal length, aspect ratio, principal point, and the like, and the extrinsic camera parameters may include position information of the camera in the world coordinate system.
- the parser 110 performs parsing on a NAL basis to decode the received bitstream, and may thereby extract coding information for predicting a video image (e.g., block partition information, intra prediction mode, motion information, reference index, etc.) and coding information corresponding to the residual data of the video image (e.g., quantized transform coefficients, absolute values of the depth residuals, sign information of the depth residuals, etc.).
- the residual reconstruction unit 120 may obtain the scaled transform coefficients by scaling the quantized transform coefficients using the quantization parameter, and reconstruct the residual data by inverse transforming the scaled transform coefficients.
- the residual restoring unit 120 may restore the residual data using the absolute value of the depth residual and the sign information of the depth residual, which will be described later with reference to FIGS. 3 to 7.
- the quantization parameter for the depth block may be set in consideration of the complexity of the texture data. For example, when the texture block corresponding to the depth block is a region of high complexity, a low quantization parameter may be set, and in the case of a region of low complexity, a high quantization parameter may be set.
- the complexity of the texture block may be determined based on a difference value between pixels adjacent to each other in the reconstructed texture picture as shown in Equation 1 below.
- in Equation 1, E denotes the complexity of the texture data, C denotes the reconstructed texture data, and N denotes the number of pixels in the texture data area for which the complexity is to be calculated.
- the complexity of the texture data may be calculated using the difference value between the texture data corresponding to the (x, y) position and the texture data corresponding to the (x−1, y) position, and the difference value between the texture data corresponding to the (x, y) position and the texture data corresponding to the (x+1, y) position.
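The body of Equation 1 does not survive extraction. From the surrounding description (E the complexity, C the reconstructed texture data, N the pixel count), a plausible reconstruction — an editorial sketch, not the patent's verbatim formula — is:

```latex
E = \frac{1}{N} \sum_{(x,y)} \Big( \big| C_{x,y} - C_{x-1,y} \big| + \big| C_{x,y} - C_{x+1,y} \big| \Big) \qquad \text{(Equation 1)}
```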
- the complexity may be calculated for the texture picture and the texture block, respectively, and the quantization parameter may be derived using Equation 2 below.
- the quantization parameter for the depth block may be determined based on a ratio of the complexity of the texture picture and the complexity of the texture block.
- α and β may be variable integers derived at the decoder, or may be integers predetermined in the decoder.
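Equation 2 is likewise missing from the extraction. Given that the quantization parameter is based on the ratio of the picture complexity to the block complexity with integer weights α and β, one illustrative form (an assumption; the patent's exact formula is in the unreproduced equation) is:

```latex
QP_{\text{depth}} = \alpha \cdot \frac{E_{\text{picture}}}{E_{\text{block}}} + \beta \qquad \text{(Equation 2)}
```

This form is at least consistent with the text above: a high-complexity texture block (large E_block) drives the ratio down and hence yields a low quantization parameter.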
- the intra predictor 130 may perform intra prediction using a neighboring sample adjacent to the current block and an intra prediction mode.
- the neighboring samples are samples located at the left, bottom left, top, and top right of the current block, and may refer to samples in which restoration is completed before the current block.
- the intra prediction mode may be extracted from the bitstream or may be derived based on the intra prediction mode of at least one of the left neighboring block and the upper neighboring block of the current block.
- the intra prediction mode of the depth block may be derived from the intra prediction mode of the texture block corresponding to the depth block.
- the inter prediction unit 160 may perform motion compensation on the current block by using the reference picture and the motion information stored in the decoded picture buffer unit 150.
- the motion information may be understood as a broad concept including a motion vector and a reference index.
- the inter prediction unit 160 may perform motion compensation through temporal inter prediction.
- temporal inter prediction may mean inter prediction using a reference picture located at the same viewpoint as the current block but in a different time period.
- inter-view inter prediction as well as temporal inter prediction may be used.
- inter-view inter prediction may mean inter prediction using a reference picture located at a viewpoint different from that of the current block.
- the in-loop filter unit 140 may apply an in-loop filter to each coded block to reduce block distortion.
- the filter can smooth the edges of the block to improve the quality of the decoded picture.
- Filtered texture pictures or depth pictures may be output or stored in the decoded picture buffer unit 150 for use as a reference picture.
- however, when an in-loop filter designed for texture data is applied to depth data as it is, the coding efficiency may be reduced.
- accordingly, a separate in-loop filter for depth data may be defined.
- as in-loop filtering methods capable of efficiently coding depth data, a region-based adaptive loop filter and a trilateral loop filter will be described.
- in the case of the region-based adaptive loop filter, whether to apply the region-based adaptive loop filter may be determined based on the variation of the depth block.
- the variation amount of the depth block may be defined as the difference between the maximum pixel value and the minimum pixel value in the depth block.
- whether to apply the filter may be determined by comparing the variation of the depth block with a predetermined threshold. For example, when the variation of the depth block is greater than or equal to the predetermined threshold, the difference between the maximum pixel value and the minimum pixel value in the depth block is large, so it may be determined to apply the region-based adaptive loop filter. In contrast, when the depth variation is smaller than the predetermined threshold, it may be determined not to apply the region-based adaptive loop filter.
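The decision rule above reduces to one comparison. A minimal Python sketch — the function name, the flat-list representation of the depth block, and the caller-supplied threshold are editorial assumptions, not specified by the patent:

```python
def apply_region_based_alf(depth_block, threshold):
    """Return True when the region-based adaptive loop filter should be
    applied: the variation (max pixel value minus min pixel value) of
    the depth block meets or exceeds the predetermined threshold."""
    variation = max(depth_block) - min(depth_block)
    return variation >= threshold
```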
- the pixel value of the filtered depth block may be derived by applying a predetermined weight to the neighboring pixel value.
- the predetermined weight may be determined based on a position difference between the pixel currently being filtered and the neighboring pixel and / or a difference value between the pixel value currently being filtered and the neighboring pixel value.
- the neighbor pixel value may mean any one of the pixel values included in the depth block except for the pixel value currently being filtered.
- Trilateral loop filters are similar to region-based adaptive loop filters, except that they consider texture data additionally.
- specifically, the trilateral loop filter extracts the depth data of neighboring pixels that satisfy the following three conditions.
- condition 1 compares the positional difference between the current pixel p and a neighboring pixel q in the depth block with a predetermined parameter;
- condition 2 compares the difference between the depth data of the current pixel p and the depth data of the neighboring pixel q with a predetermined parameter;
- condition 3 compares the difference between the texture data of the current pixel p and the texture data of the neighboring pixel q with a predetermined parameter. The neighboring pixels satisfying all three conditions are extracted, and the current pixel p may be filtered with the median or average value of their depth data.
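The three conditions can be sketched in Python. This is a hedged illustration: the coordinate-keyed dictionaries for depth/texture data, the city-block distance used for condition 1, and the choice of the median are all editorial assumptions.

```python
import statistics

def filter_pixel_trilateral(depth, texture, p, block, sigma_pos, sigma_depth, sigma_tex):
    """Filter the current pixel p with the median depth of the
    neighboring pixels q (including p itself) that satisfy all three
    trilateral conditions."""
    px, py = p
    candidates = []
    for q in block:
        qx, qy = q
        if abs(px - qx) + abs(py - qy) > sigma_pos:      # condition 1: positional difference
            continue
        if abs(depth[p] - depth[q]) > sigma_depth:       # condition 2: depth difference
            continue
        if abs(texture[p] - texture[q]) > sigma_tex:     # condition 3: texture difference
            continue
        candidates.append(depth[q])
    return statistics.median(candidates)
```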
- the decoded picture buffer unit 150 stores or releases previously coded texture pictures or depth pictures in order to perform inter prediction.
- to this end, the frame_num and POC (Picture Order Count) of each picture may be used.
- since some of the previously coded pictures may be depth pictures at a viewpoint different from that of the current depth picture, view identification information identifying the viewpoint of a depth picture may be used in order to use these pictures as reference pictures.
- the decoded picture buffer unit 150 may manage the reference picture using an adaptive memory management method, a sliding window method, or the like in order to more flexibly realize inter prediction.
- the depth pictures may be marked by a separate mark in order to distinguish them from the texture pictures in the decoded picture buffer unit, and an identifier may be used to identify each depth picture in the marking process.
- FIG. 2 is a block diagram illustrating a broadcast receiver to which a video decoder is applied according to an embodiment to which the present invention is applied.
- the broadcast receiver is for receiving an over-the-air broadcast signal and playing back an image.
- the broadcast receiver may generate 3D content using the received depth related information.
- the broadcast receiver includes a tuner 200, a demodulation/channel decoder 202, a transport demultiplexer 204, a packet release unit 206, an audio decoder 208, a video decoder 210, a PSI/PSIP processor 214, a 3D renderer 216, a formatter 220, and a display 222.
- the tuner 200 selects and outputs a broadcast signal of one channel selected by a user from among a plurality of broadcast signals input through an antenna (not shown).
- the demodulation / channel decoder 202 demodulates the broadcast signal from the tuner 200 and performs error correction decoding on the demodulated signal to output the transport stream TS.
- the transport demultiplexer 204 demultiplexes the transport stream, separates the video PES and the audio PES, and extracts PSI / PSIP information.
- the packet release unit 206 restores the video ES and the audio ES by releasing packets for the video PES and the audio PES.
- the audio decoder 208 decodes the audio ES and outputs an audio bitstream.
- the audio bitstream is converted into an analog audio signal by a digital-to-analog converter (not shown), amplified by an amplifier (not shown), and then output through a speaker (not shown).
- the video decoder 210 decodes the video ES to reconstruct the original image; the decoding process is as described above with reference to FIG. 1 and is omitted here.
- the decoding process of the audio decoder 208 and the video decoder 210 may be performed based on the packet ID (PID) confirmed by the PSI / PSIP processing unit 214.
- the video decoder 210 may extract depth information.
- additional information required to generate an image of a virtual camera viewpoint, for example, camera information, or information for estimating occlusion covered by a relatively forward object (e.g., geometric information such as an object outline, object transparency information, and color information), may be extracted and provided to the 3D renderer 216.
- the depth information and / or additional information may be separated by the transport demultiplexer 204.
- the PSI / PSIP processing unit 214 receives the PSI / PSIP information from the transport demultiplexing unit 204, parses it, and stores it in a memory (not shown) or a register so that the broadcast is reproduced based on the stored information.
- the 3D renderer 216 may generate color information, depth information, and the like at the virtual camera position using the reconstructed image, depth information, additional information, and camera parameters. In addition, the 3D renderer 216 generates a virtual image at the virtual camera position by performing 3D warping using the reconstructed image and the depth information of the reconstructed image.
- in the present embodiment, the 3D renderer 216 is described as a block separate from the video decoder 210, but this is only an example, and the 3D renderer 216 may be included in the video decoder 210.
- the formatter 220 formats the image reconstructed in the decoding process, that is, the image captured by the actual camera, together with the virtual image generated by the 3D renderer 216, according to the display method of the corresponding receiver, and the display 222 displays the 3D image.
- the synthesis of the depth information and the virtual image at the virtual camera position by the 3D rendering unit 216 and the image formatting by the formatter 220 may be selectively performed in response to a user's command. That is, the viewer may operate the remote controller (not shown) so that the synthesized image is not displayed or may specify a time point at which the image synthesis is to be performed.
- the depth information is used by the 3D rendering unit 216 to generate a 3D image, but may be used by the video decoder 210 as another embodiment.
- FIG. 3 is a flowchart illustrating a process of restoring a depth value of a current block according to an embodiment to which the present invention is applied.
- a depth prediction value of the current block may be obtained (S300).
- a depth prediction value of the current block may be obtained using a neighboring sample adjacent to the current block and an intra prediction mode of the current block.
- the intra prediction mode may include a planar mode, a DC mode, and an angular mode.
- the depth prediction value of the current block may be obtained using the motion information and the reference picture of the current block.
- the depth residual may be restored for each sample of the current block (S310).
- the SDC mode indicator may mean a flag indicating whether the current block is coded in the SDC mode.
- the SDC mode may mean a method of coding depth residuals of a plurality of samples in a current block into one residual.
- the depth residual may be restored only when the current block is not coded in the skip mode. This is because the skip mode does not involve residual data.
- the quantized transform coefficient may be obtained from the bitstream.
- the obtained quantized transform coefficients may be scaled using a quantization parameter, and the scaled transform coefficients may be inverse-transformed to restore the depth residual.
- the depth residual of the current block may be restored using the residual coding information.
- the residual coding information may include the absolute value of the depth residual and the sign information of the depth residual.
- the residual coding information may be divided into a case coded without using a depth look-up table (DLT) and a case coded using a depth look-up table.
- the depth lookup table improves coding efficiency by assigning an index to each depth value and coding the index instead of coding the depth value as it is.
- the depth lookup table may be a table in which a table depth value and a table index corresponding to each table depth value are defined.
- the table depth value may include at least one depth value covering a range of a depth residual minimum value and a depth residual maximum value of the current block.
- the table depth values may be coded by the encoder and transmitted through the bitstream, or values predetermined in the decoder may be used.
- the depth value of the current block may be reconstructed using the depth prediction value acquired in step S300 and the depth residual reconstructed in step S310 (S320).
- the depth value of the current block may be derived from the sum of the depth prediction value and the depth residual.
- the depth value of the current block may be derived for each sample.
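Steps S300–S320 can be summarized in a minimal Python sketch. The function name and the flat per-sample lists are editorial assumptions; only the arithmetic (depth value = prediction + residual, with one shared residual in SDC mode) comes from the text above.

```python
def restore_depth_block(depth_pred, depth_res, sdc_mode):
    """S320: restore the depth value of each sample as the sum of its
    depth prediction value and the depth residual. In SDC mode a single
    residual (restored in S310) is shared by every sample; otherwise a
    per-sample residual list is expected."""
    if sdc_mode:
        return [pred + depth_res for pred in depth_pred]
    return [pred + res for pred, res in zip(depth_pred, depth_res)]
```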
- FIG. 4 illustrates a method of encoding residual coding information when a depth lookup table is not used, as an embodiment to which the present invention is applied.
- the first method according to the present invention is a method of acquiring a depth residual of a current block through a difference operation after averaging an original depth value of a current block and a depth prediction value of the current block.
- an average value DCorig of the depth value of the current block is obtained.
- the average value DCpred of the depth prediction value of the current block is obtained.
- Depth residual DCres may be obtained by a difference between the average value of the obtained depth values and the average value of the depth prediction values.
- the depth residual may be coded with an absolute value DCabs of the depth residual and sign information DCsign of the depth residual, and then transmitted to the decoder.
- the second method according to the present invention is a method of acquiring a depth residual of the current block through an average operation after a difference operation between the depth value of the current block and the depth prediction value.
- the depth residual Res_i of the i-th sample may be obtained by a difference operation between the depth value Orig_i of the i-th sample of the current block and the corresponding depth prediction value Pred_i of the i-th sample.
- here, i is greater than or equal to 0 and less than or equal to N²−1, and specifies the position of the sample within the N×N current block.
- the depth residual DCres of the current block may be obtained by averaging the N² depth residuals.
- the depth residual may be coded with the absolute value DCabs of the depth residual and the sign information DCsign of the depth residual and transmitted to the decoder.
- an average operation may be used to code the depth residual of the current block into one depth residual according to the SDC mode.
- one depth residual may be obtained from a maximum value, a minimum value, or a mode value among a plurality of depth residuals of the current block.
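The two encoder-side methods differ only in the order of averaging and differencing. The sketch below makes the rounding behavior concrete; the rounded-average formula `(sum + n // 2) // n` is an editorial assumption (it assumes non-negative sums), since the patent does not spell out its rounding operator.

```python
def dc_res_method1(orig, pred):
    """Method 1: average the depth values and the depth prediction
    values first, then take the difference (two rounded averages)."""
    n = len(orig)
    dc_orig = (sum(orig) + n // 2) // n   # rounded average (assumption)
    dc_pred = (sum(pred) + n // 2) // n
    return dc_orig - dc_pred

def dc_res_method2(orig, pred):
    """Method 2: take per-sample differences first, then average the
    residuals (a single rounding at the very end)."""
    n = len(orig)
    res = [o - p for o, p in zip(orig, pred)]
    return (sum(res) + n // 2) // n
```

For `orig = [10, 11, 13, 14]` and `pred = [9, 9, 10, 10]` the exact mean difference is 2.5; method 1 yields 2 (two intermediate roundings) while method 2 yields 3 (one rounding), illustrating the round-error point made above.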
- FIG. 5 illustrates a method of obtaining a depth residual of a current block by using residual coding information when the depth lookup table is not used as an embodiment to which the present invention is applied.
- the absolute value of the depth residual and the sign information of the depth residual may be extracted from the bitstream (S500).
- the depth residual of the current block may be derived by using the absolute value of the depth residual extracted in operation S500 and the sign information of the depth residual (S510).
- when the absolute value of the depth residual and the sign information are coded according to the first method described in FIG. 4, the depth residual may be defined as the difference between the average value of the depth values of the current block and the average value of the depth prediction values of the current block.
- when the absolute value of the depth residual and the sign information are coded according to the second method described in FIG. 4, the depth residual may be defined as the average value of the per-sample depth residuals, the depth residual of the i-th sample being derived from the difference between the depth value and the depth prediction value of the i-th sample of the current block.
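Deriving the residual from the coded absolute value and sign information (S510) reduces to applying the sign — a trivial sketch, in which a sign flag of 1 meaning negative is an assumption:

```python
def decode_depth_residual(dc_abs, dc_sign):
    """Reconstruct DCres from its coded absolute value DCabs and sign
    flag DCsign (1 is taken to mean negative, an assumption)."""
    return -dc_abs if dc_sign else dc_abs
```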
- FIG. 6 illustrates a method of encoding residual coding information when using a depth lookup table according to an embodiment to which the present invention is applied.
- a depth average value DCorig of the current block may be obtained.
- the depth average value may mean an average value of depth values of a plurality of samples included in the current block.
- the depth index Iorig may be obtained using the obtained depth average value DCorig and the depth lookup table of the current block.
- the table depth value in the depth lookup table corresponding to the depth average value DCorig may be determined.
- the determined table depth value may mean a table depth value that minimizes the difference between the depth average value DCorig and the table depth value in the depth lookup table.
- the table index assigned to the determined table depth value may be set as the depth index Iorig.
- the depth prediction value of the current block may be obtained.
- the depth prediction value may be obtained in one of an intra mode and an inter mode.
- An average value (hereinafter, referred to as a depth prediction average value DCpred) between depth prediction values of a plurality of samples included in the current block may be obtained.
- the prediction index Ipred may be obtained using the depth prediction average value DCpred and the depth lookup table of the current block. Specifically, the table depth value in the depth lookup table corresponding to the depth prediction average value DCpred may be determined. The determined table depth value may mean a table depth value that minimizes the difference between the depth prediction average value DCpred and the table depth value in the depth lookup table. The table index assigned to the determined table depth value may be set as the prediction index (Ipred).
- the difference index Ires between the obtained depth index Iorig and the prediction index Ipred may be obtained.
- the difference index Ires may be encoded into residual coding information including an absolute value DCabs of the depth residual and sign information DCsign of the depth residual, as in the case of not using the depth lookup table.
- the absolute value of the depth residual may mean an absolute value of the difference index Ires
- the sign information of the depth residual may mean a sign of the difference index Ires.
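The encoder-side index derivation of FIG. 6 can be sketched as follows. The depth lookup table is modeled as a plain Python list indexed by table index, and the function names are editorial; the index-minimizing search and the difference coding follow the text above.

```python
def nearest_table_index(dlt, value):
    """Table index whose table depth value minimizes the absolute
    difference to the given value."""
    return min(range(len(dlt)), key=lambda i: abs(dlt[i] - value))

def encode_sdc_dlt(dlt, dc_orig, dc_pred):
    """FIG. 6: map the depth average DCorig and the depth prediction
    average DCpred to table indices Iorig and Ipred, then code their
    difference Ires as (absolute value DCabs, sign DCsign)."""
    i_orig = nearest_table_index(dlt, dc_orig)
    i_pred = nearest_table_index(dlt, dc_pred)
    i_res = i_orig - i_pred
    return abs(i_res), 1 if i_res < 0 else 0
```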
- in other words, when the depth lookup table is not used, the depth residual is coded as a value in the sample domain, whereas when the depth lookup table is used, the depth residual may be coded as a value in the index domain.
- FIG. 7 illustrates a method of restoring a depth residual using residual coding information when using a depth lookup table according to an embodiment to which the present invention is applied.
- Residual coding information can be obtained from the bitstream.
- the residual coding information may include an absolute value DCabs of the depth residual and sign information DCsign of the depth residual.
- the difference index Ires may be derived using the absolute value DCabs of the depth residual and the sign information DCsign of the depth residual.
- coding information eg, intra prediction mode, motion information, etc.
- a depth prediction value may be obtained for each sample of the current block by using the coding information, and an average value of the obtained depth prediction values, that is, a depth prediction average value DCpred may be obtained.
- a prediction index Ipred may be obtained by using the depth prediction average value DCpred and the depth lookup table of the current block. As illustrated in FIG. 6, the prediction index Ipred may be set to a table index assigned to a table depth value that minimizes the difference between the depth prediction average value DCpred and the table depth value in the depth lookup table.
- the depth residual may be restored using the prediction index Ipred, the difference index Ires, and the depth lookup table.
- a table depth value Idx2DepthValue (Ipred + Ires) corresponding to an index derived from the sum of the prediction index Ipred and the difference index Ires may be obtained from the depth lookup table. Then, the depth residual of the current block may be restored from the difference between the obtained table depth value and the depth prediction average value DCpred.
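The restoration steps above can be sketched as follows. This is a minimal illustration under stated assumptions: the list-based table and the function name `restore_depth_residual` are hypothetical, not the patent's normative syntax.

```python
# Minimal sketch of restoring the depth residual from residual coding
# information (DCabs, DCsign) when a depth lookup table is used, following
# the steps of FIG. 7. Names are illustrative assumptions.

def restore_depth_residual(dc_abs, dc_sign, dc_pred, idx2depth):
    """Restore the depth residual of the current block.

    dc_abs:    absolute value DCabs of the depth residual (from the bitstream)
    dc_sign:   sign information DCsign (1 means negative)
    dc_pred:   depth prediction average value DCpred
    idx2depth: idx2depth[i] is the table depth value Idx2DepthValue(i)
    """
    # Difference index Ires from absolute value and sign information
    i_res = -dc_abs if dc_sign else dc_abs
    # Prediction index Ipred: index of the table depth value that minimizes
    # the difference to DCpred
    i_pred = min(range(len(idx2depth)),
                 key=lambda i: abs(idx2depth[i] - dc_pred))
    # Depth residual = Idx2DepthValue(Ipred + Ires) - DCpred
    return idx2depth[i_pred + i_res] - dc_pred

dlt = [0, 50, 100, 200]
print(restore_depth_residual(1, 0, 50, dlt))  # 50: Idx2DepthValue(2) - 50
```

The restored residual is then added to the depth prediction value to reconstruct the depth value of the current block, as described for the SDC mode.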
- each component or feature is to be considered optional unless stated otherwise.
- Each component or feature may be practiced without being combined with other components or features. It is also possible to combine some of the components and/or features to construct an embodiment of the invention.
- the order of the operations described in the embodiments of the present invention may be changed. Some components or features of one embodiment may be included in another embodiment or may be replaced with corresponding components or features of another embodiment.
- the invention can be used to encode or decode a video signal.
Claims (8)
- A video signal processing method comprising: obtaining a depth prediction value of a current block; restoring a depth residual for each sample of the current block according to an SDC mode indicator; and restoring a depth value of the current block using the depth prediction value and the restored depth residual, wherein the SDC mode indicator means a flag indicating whether the current block is coded in an SDC mode, and the SDC mode means a scheme of coding depth residuals for a plurality of samples in the current block into one depth residual.
- The method of claim 1, wherein, when the SDC mode indicator indicates that the current block is coded in the SDC mode, restoring the depth residual comprises: extracting residual coding information from a bitstream; and deriving the depth residual of the current block using the extracted residual coding information, wherein the residual coding information includes an absolute value of the depth residual and sign information of the depth residual.
- The method of claim 2, wherein the derived depth residual means a difference between an average value of depth values of the current block and an average value of depth prediction values of the current block.
- The method of claim 2, wherein the derived depth residual means an average value of depth residuals of i-th samples, each derived from a difference between a depth value of the i-th sample of the current block and a depth prediction value of the i-th sample.
- The method of claim 1, wherein, when the SDC mode indicator indicates that the current block is coded in the SDC mode, the depth residual is restored using a depth lookup table.
- The method of claim 5, wherein restoring the depth residual comprises: obtaining residual coding information from a bitstream, the residual coding information including an absolute value of the depth residual and sign information of the depth residual; deriving a difference index using the absolute value of the depth residual and the sign information of the depth residual; obtaining a depth prediction average value of the current block, the depth prediction average value meaning an average value of the obtained depth prediction values; obtaining a prediction index using the depth prediction average value and the depth lookup table; obtaining, from the depth lookup table, a table depth value corresponding to an index derived from a sum of the prediction index and the difference index; and restoring the depth residual of the current block from a difference between the obtained table depth value and the depth prediction average value.
- The method of claim 6, wherein the prediction index is set to a table index assigned to a table depth value that minimizes a difference between the depth prediction average value and a table depth value in the depth lookup table.
- A video signal processing apparatus comprising: an inter prediction unit configured to obtain a depth prediction value of a current block; a residual restoration unit configured to restore a depth residual for each sample of the current block according to an SDC mode indicator; and a depth restoration unit configured to restore a depth value of the current block using the depth prediction value and the restored depth residual, wherein the SDC mode indicator means a flag indicating whether the current block is coded in an SDC mode, and the SDC mode means a scheme of coding depth residuals for a plurality of samples in the current block into one depth residual.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/780,781 US20160050437A1 (en) | 2013-04-11 | 2014-04-09 | Method and apparatus for processing video signal |
JP2016507884A JP2016519519A (ja) | 2013-04-11 | 2014-04-09 | ビデオ信号処理方法及び装置 |
EP14783381.8A EP2985999A4 (en) | 2013-04-11 | 2014-04-09 | METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL |
CN201480020415.3A CN105103555A (zh) | 2013-04-11 | 2014-04-09 | 处理视频信号的方法及装置 |
KR1020157024954A KR20160002712A (ko) | 2013-04-11 | 2014-04-09 | 비디오 신호 처리 방법 및 장치 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361810715P | 2013-04-11 | 2013-04-11 | |
US61/810,715 | 2013-04-11 | ||
US201361856033P | 2013-07-18 | 2013-07-18 | |
US61/856,033 | 2013-07-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014168411A1 (ko) | 2014-10-16 |
Family
ID=51689761
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2014/003078 WO2014168411A1 (ko) | 2013-04-11 | 2014-04-09 | 비디오 신호 처리 방법 및 장치 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20160050437A1 (ko) |
EP (1) | EP2985999A4 (ko) |
JP (1) | JP2016519519A (ko) |
KR (1) | KR20160002712A (ko) |
CN (1) | CN105103555A (ko) |
WO (1) | WO2014168411A1 (ko) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10708626B2 (en) * | 2018-11-19 | 2020-07-07 | Google Llc | Iterative IDCT with adaptive non-linear filtering |
US12047567B2 (en) * | 2021-01-26 | 2024-07-23 | Beijing Dajia Internet Information Technology Co., Ltd. | System and method for applying adaptive loop filter in video coding |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110017300A (ko) * | 2009-08-13 | 2011-02-21 | 삼성전자주식회사 | 큰 크기의 변환 단위를 이용한 영상 부호화, 복호화 방법 및 장치 |
KR20120008323A (ko) * | 2010-07-16 | 2012-01-30 | 광운대학교 산학협력단 | 쿼드트리 기반의 매크로블록을 위한 멀티 레벨의 양자화 파라미터 기록 방법 및 장치 |
WO2012059841A1 (en) * | 2010-11-04 | 2012-05-10 | Koninklijke Philips Electronics N.V. | Generation of depth indication maps |
US20120230421A1 (en) * | 2011-03-10 | 2012-09-13 | Qualcomm Incorporated | Transforms in video coding |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009131703A2 (en) * | 2008-04-25 | 2009-10-29 | Thomson Licensing | Coding of depth signal |
US10271034B2 (en) * | 2013-03-05 | 2019-04-23 | Qualcomm Incorporated | Simplified depth coding |
2014
- 2014-04-09 EP EP14783381.8A patent/EP2985999A4/en not_active Withdrawn
- 2014-04-09 JP JP2016507884A patent/JP2016519519A/ja active Pending
- 2014-04-09 CN CN201480020415.3A patent/CN105103555A/zh active Pending
- 2014-04-09 US US14/780,781 patent/US20160050437A1/en not_active Abandoned
- 2014-04-09 KR KR1020157024954A patent/KR20160002712A/ko not_active Application Discontinuation
- 2014-04-09 WO PCT/KR2014/003078 patent/WO2014168411A1/ko active Application Filing
Non-Patent Citations (1)
Title |
---|
JAGER, F.: "Simplified depth map intra coding with an optional depth lookup table", 3D Imaging (IC3D), 2012 International Conference, 3 December 2012 (2012-12-03), pages 1-4, XP032491674 *
Also Published As
Publication number | Publication date |
---|---|
KR20160002712A (ko) | 2016-01-08 |
JP2016519519A (ja) | 2016-06-30 |
EP2985999A4 (en) | 2016-11-09 |
US20160050437A1 (en) | 2016-02-18 |
EP2985999A1 (en) | 2016-02-17 |
CN105103555A (zh) | 2015-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010087589A2 (ko) | 경계 인트라 코딩을 이용한 비디오 신호 처리 방법 및 장치 | |
WO2014107083A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
KR101619448B1 (ko) | 영상 신호 처리 방법 및 장치 | |
WO2014107029A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015009098A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014168443A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013133627A1 (ko) | 비디오 신호 처리 방법 | |
WO2011090352A2 (ko) | 영역 기반의 부호화/복호화 장치 및 방법 | |
WO2014058207A1 (ko) | 다시점 비디오 신호의 인코딩 방법, 디코딩 방법 및 이에 대한 장치 | |
WO2014054897A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014054896A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015009091A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014073877A1 (ko) | 다시점 비디오 신호의 처리 방법 및 이에 대한 장치 | |
WO2014109547A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013133587A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014168411A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014109563A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014077573A2 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014073873A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014042459A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014168445A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015009092A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014168444A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015133830A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480020415.3 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14783381 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 20157024954 Country of ref document: KR Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 2014783381 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 14780781 Country of ref document: US |
ENP | Entry into the national phase |
Ref document number: 2016507884 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |