WO2016056754A1 - 3D video encoding/decoding method and apparatus - Google Patents
3D video encoding/decoding method and apparatus
- Publication number: WO2016056754A1 (PCT/KR2015/009422)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- prediction
- view
- current block
- disparity vector
- Prior art date
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including:
- H04N19/597—predictive coding specially adapted for multi-view video sequence encoding
- H04N19/105—selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/139—analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/159—prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—the coding unit being an image region, the region being a block, e.g. a macroblock
- H04N19/52—processing of motion vectors by predictive encoding
- H04N19/521—processing of motion vectors for estimating the reliability of the determined motion vectors or motion vector field, e.g. for smoothing the motion vector field or for correcting motion vectors
- H04N19/70—characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to video coding, and more particularly, to coding of 3D video images.
- High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
- 3D video can provide realism and immersion using a plurality of view channels.
- 3D video can be used in a variety of areas such as free viewpoint video (FVV), free viewpoint TV (FTV), 3DTV, surveillance, and home entertainment.
- 3D video using multiple views has a high correlation between views with the same picture order count (POC). Since a multi-view image captures the same scene at the same time with multiple adjacent cameras, that is, from multiple views, the different views contain almost the same information except for parallax and slight lighting differences, so the correlation between them is high.
- the decoding target block of the current view may be predicted or decoded with reference to the block of another view.
- the present invention provides a method and apparatus for predicting a current block in 3D video coding.
- the present invention provides a residual prediction method and apparatus in 3D video coding.
- the present invention provides a residual prediction method and apparatus based on a disparity vector in 3D video coding.
- the present invention provides a method and apparatus for performing residual prediction on a current block when a disparity vector is not derived from a neighboring block of the current block.
- a residual prediction method includes deriving a disparity vector for a current block, performing a residual prediction on the current block based on the disparity vector, and generating a residual prediction sample.
- When the disparity vector is not derived based on a neighboring block of the current block, it is derived from a preset default vector.
- a residual prediction apparatus includes a predictor that derives a disparity vector for a current block and generates a residual prediction sample by performing residual prediction on the current block based on the disparity vector.
- The disparity vector may be derived from a preset default vector instead of from a neighboring block of the current block.
- According to the present invention, by predicting a current block in a current picture in 3D video coding, coding efficiency can be increased and the amount of data to be transmitted can be reduced.
- According to the present invention, the prediction effect can be enhanced by performing residual prediction on the current block under a specific condition even when the disparity vector cannot be derived from a neighboring block of the current block.
- FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video to which the present invention can be applied.
- FIG. 2 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
- FIG. 3 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
- FIG. 4 is a diagram schematically illustrating an example of multi-view video coding to which the present invention may be applied.
- FIG. 5 exemplarily shows spatial and temporal neighboring blocks to which the present invention can be applied.
- FIG. 6 shows an example of disparity derivation from an MCP coded neighboring block.
- FIG. 7 exemplarily shows DV-MCP blocks that can be used for disparity vector derivation.
- FIG. 10 is a flowchart schematically showing an encoding method using residual prediction according to an embodiment of the present invention.
- FIG. 11 is a flowchart schematically illustrating a decoding method using residual prediction according to an embodiment of the present invention.
- Each configuration in the drawings described herein is shown independently for convenience of describing different characteristic functions; this does not mean that each configuration is implemented as separate hardware or separate software.
- Two or more configurations may be combined into one configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which the configurations are integrated and/or separated are also included in the scope of the present invention without departing from its spirit.
- a pixel or a pel may refer to a minimum unit constituting one picture (or image).
- 'sample' may be used as a term indicating a value of a specific pixel.
- the sample may generally represent the value of the pixel, may represent only the pixel value of the luma component, or may represent only the pixel value of the chroma component.
- a unit represents the basic unit of image processing.
- the unit may include at least one of a specific region of the picture and information related to the region.
- the unit may be used interchangeably with terms such as block or area in some cases.
- an M ⁇ N block may represent a set of samples or transform coefficients composed of M columns and N rows.
- FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video to which the present invention can be applied.
- the 3D video encoder may encode a video picture, a depth map, and a camera parameter to output a bitstream.
- the depth map may be composed of distance information (depth information) between a camera and a subject with respect to pixels of a corresponding video picture (texture picture).
- the depth map may be an image in which depth information is normalized according to bit depth.
- the depth map may be composed of recorded depth information without color difference representation.
- the depth map may be called a depth map picture or a depth picture.
- the disparity information indicating the correlation between views may be derived from the depth information of the depth map using camera parameters.
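- As a minimal sketch of this depth-to-disparity conversion (assuming a rectified pinhole stereo model with disparity = focal length × baseline / depth; the function names, the normalization convention, and the z_near/z_far parameters are illustrative assumptions, not the patent's definitions):

```python
def depth_to_disparity(depth_z, focal_length, baseline):
    # Physical model for rectified cameras: disparity = f * B / Z,
    # where f is the focal length in pixels, B the camera baseline,
    # and Z the physical depth of the point.
    return focal_length * baseline / depth_z

def depth_sample_to_disparity(d, bit_depth, focal_length, baseline,
                              z_near, z_far):
    # A depth-map sample d in [0, 2**bit_depth - 1] is first mapped back
    # to inverse physical depth (a common normalization; the exact mapping
    # is given by the signaled camera parameters).
    d_max = (1 << bit_depth) - 1
    inv_z = (d / d_max) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return focal_length * baseline * inv_z
```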
- In general, a bitstream including the depth map and the camera parameters together with the video picture (texture picture) may be transmitted to the decoder through a network or a storage medium.
- the decoder side can receive the bitstream and reconstruct the video.
- the 3D video decoder may decode the video picture and the depth map and the camera parameters from the bitstream.
- a view required for a multi view display may be synthesized based on the decoded video picture, the depth map, and the camera parameter.
- When the display used is a stereo display, a 3D image may be displayed using pictures of two views among the reconstructed multi-view pictures.
- The stereo video decoder can reconstruct from the bitstream the two pictures to be incident on the two eyes.
- A stereoscopic image may be displayed by using the view difference, or disparity, between the left image incident on the left eye and the right image incident on the right eye.
- When a multi-view display is used together with the stereo video decoder, different views may be generated based on the two reconstructed pictures to display the multi-view output.
- the 2D image may be restored and the image may be output to the 2D display.
- the decoder may output one of the reconstructed images to the 2D display when using a 3D video decoder or a stereo video decoder.
- view synthesis may be performed at the decoder side and may be performed at the display side.
- the decoder and the display may be one device or may be separate devices.
- the 3D video decoder, the stereo video decoder, and the 2D video decoder are described as separate decoders.
- one decoding apparatus may perform 3D video decoding, stereo video decoding, and 2D video decoding.
- the 3D video decoding apparatus may perform 3D video decoding
- the stereo video decoding apparatus may perform stereo video decoding
- the 2D video decoding apparatus may perform 2D video decoding.
- the multi view display may output 2D video or output stereo video.
- FIG. 2 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
- The video encoding apparatus 200 includes a picture splitter 205, a predictor 210, a subtractor 215, a transformer 220, a quantizer 225, a reorderer 230, an entropy encoding unit 235, an inverse quantization unit 240, an inverse transform unit 245, an adder 250, a filter unit 255, and a memory 260.
- the picture dividing unit 205 may divide the input picture into at least one processing unit block.
- the processing unit block may be a coding unit block, a prediction unit block, or a transform unit block.
- the coding unit block may be split along a quad-tree structure from a largest coding unit block as a unit block of coding.
- the prediction unit block is a block partitioning from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
- The transform unit block may be split along the quad-tree structure from the coding unit block, and may be a unit block from which transform coefficients are derived or a unit block that derives a residual signal from the transform coefficients.
- Hereinafter, a coding unit block is referred to as a coding block or coding unit (CU), a prediction unit block as a prediction block or prediction unit (PU), and a transform unit block as a transform block or transform unit (TU).
- a prediction block or prediction unit may mean a specific area in the form of a block within a picture, and may include an array of prediction samples.
- a transform block or a transform unit may mean a specific area in a block form within a picture, and may include an array of transform coefficients or residual samples.
- the prediction unit 210 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a prediction block including prediction samples of the current block.
- the unit of prediction performed by the prediction unit 210 may be a coding block, a transform block, or a prediction block.
- the prediction unit 210 may determine whether intra prediction or inter prediction is applied to the current block. As an example, the prediction unit 210 may determine whether intra prediction or inter prediction is applied on a CU basis.
- In intra prediction, the prediction unit 210 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter, the current picture). In this case, the prediction unit 210 may (i) derive the prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample present in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. Case (i) may be called a non-directional mode, and case (ii) a directional mode. The prediction unit 210 may also determine the prediction mode applied to the current block by using the prediction mode applied to a neighboring block.
- the prediction unit 210 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
- the prediction unit 210 may derive the prediction sample for the current block by applying any one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode.
- In the skip mode and the merge mode, the prediction unit 210 may use the motion information of a neighboring block as the motion information of the current block.
- In the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- In the motion vector prediction (MVP) mode, the motion vector of the current block can be derived by using the motion vector of a neighboring block as a motion vector predictor.
- the neighboring block includes a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
- a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
- The motion information may include a motion vector and a reference picture. When the motion information of a temporal neighboring block is used in the skip mode or the merge mode, the topmost (first) picture on the reference picture list may be used as the reference picture.
- In multi-view video coding, views may be divided into an independent view and a dependent view.
- In the case of coding for a dependent view, the prediction unit 210 may perform not only inter prediction but also inter-view prediction.
- the predictor 210 may construct a reference picture list by including pictures of other views.
- the prediction unit 210 may derive a disparity vector.
- The disparity vector may specify a block corresponding to the current block in another view of the same access unit (AU) as the current picture.
- the AU may include video pictures and depth maps that correspond to the same time instance, for example in multiple views.
- AU may mean a set of pictures having the same Picture Order Count (POC).
- POC corresponds to the display order of the pictures and may be distinguished from the coding order.
- The prediction unit 210 may specify a depth block in a depth view based on the disparity vector, construct the merge list, and perform inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
- the disparity vector for the current block can be derived from the depth value using camera parameters or from the motion vector or disparity vector of the neighboring block in the current or other view.
- The prediction unit 210 may add, to the merge candidate list, an inter-view merge candidate (IvMC) corresponding to the temporal motion information of the reference view, an inter-view disparity vector candidate (IvDC) corresponding to the disparity vector, a shifted IvMC derived by shifting the disparity vector, a texture merge candidate (T) derived from the texture picture corresponding to the case where the current block is a block on the depth map, a disparity derived merge candidate (D) derived using the disparity from the texture merge candidate, and a view synthesis prediction (VSP) merge candidate derived based on view synthesis.
- the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
- the prediction unit 210 may apply the inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
- the prediction unit 210 may derive a disparity vector based on a conversion of the maximum depth value in the corresponding depth block.
- a block including the reference sample may be used as the reference block.
- The prediction unit 210 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and may use the disparity vector as a candidate disparity vector for disparity compensated prediction (DCP).
- the subtraction unit 215 generates a residual sample which is a difference between the original sample and the prediction sample.
- When the skip mode is applied, residual samples may not be generated as described above.
- the transform unit 220 generates a transform coefficient by transforming the residual sample in units of transform blocks.
- the quantization unit 225 may quantize the transform coefficients to generate quantized transform coefficients.
- the reordering unit 230 rearranges the quantized transform coefficients.
- the reordering unit 230 may reorder the quantized transform coefficients in the form of a block into a one-dimensional vector form through a coefficient scanning method.
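- For illustration, a simplified coefficient scan that flattens a 2-D block into a 1-D vector along anti-diagonals (a stand-in for the scan the reorderer performs; HEVC-style codecs actually scan 4×4 sub-blocks with several scan orders, so this is an assumption, not the normative order):

```python
def diagonal_scan(block):
    # Flatten an N x N block of quantized coefficients by walking
    # anti-diagonals (row + col constant), top-left to bottom-right.
    n = len(block)
    out = []
    for s in range(2 * n - 1):
        for row in range(n):
            col = s - row
            if 0 <= col < n:
                out.append(block[row][col])
    return out

# Example: a 4x4 block is reordered into a 16-element vector.
coeffs = [[9, 0, 0, 0], [5, 1, 0, 0], [3, 0, 0, 0], [0, 0, 0, 0]]
print(diagonal_scan(coeffs))
```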
- the entropy encoding unit 235 may perform entropy encoding on the quantized transform coefficients.
- Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
- the entropy encoding unit 235 may encode information necessary for video reconstruction other than the quantized transform coefficients (for example, a value of a syntax element) together or separately. Entropy encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of bitstreams.
- the adder 250 reconstructs the picture by combining the residual sample and the predictive sample.
- the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
- Although the adder 250 has been described as a separate component, the adder 250 may be part of the predictor 210.
- The filter unit 255 may apply a deblocking filter and/or a sample adaptive offset to the reconstructed picture. Through deblocking filtering and/or the sample adaptive offset, artifacts at block boundaries in the reconstructed picture or distortion from the quantization process can be corrected.
- the sample adaptive offset may be applied on a sample basis and may be applied after the process of deblocking filtering is completed.
- the memory 260 may store information necessary for reconstructed pictures or encoding / decoding.
- the memory 260 may store (reference) pictures used for inter prediction / inter view prediction.
- pictures used for inter prediction / inter view prediction may be designated by a reference picture set or a reference picture list.
- Although one encoding apparatus has been described as encoding both independent views and dependent views, this is for convenience of description; a separate encoding apparatus may be configured for each view, or separate internal modules (for example, a prediction module for each view) may be configured.
- FIG. 3 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
- The video decoding apparatus 300 includes an entropy decoding unit 310, a reordering unit 320, an inverse quantization unit 330, an inverse transform unit 340, a predictor 350, an adder 360, a filter unit 370, and a memory 380.
- the video decoding apparatus 300 may reconstruct the video in response to a process in which the video information is processed in the video encoding apparatus.
- the video decoding apparatus 300 may perform video decoding using a processing unit applied in the video encoding apparatus.
- the processing unit block of video decoding may be a coding unit block, a prediction unit block, or a transform unit block.
- the coding unit block may be divided along the quad tree structure from the largest coding unit block as a unit block of decoding.
- the prediction unit block is a block partitioned from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
- the transform unit block may be divided along the quad tree structure from the coding unit block, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
- the entropy decoding unit 310 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction.
- The entropy decoding unit 310 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and outputs the values of syntax elements required for video reconstruction and the quantized values of transform coefficients for the residual.
- the bitstream may be input for each view.
- information about each view may be multiplexed in the bitstream.
- The entropy decoding unit 310 may demultiplex the bitstream and parse the information for each view.
- the reordering unit 320 may rearrange the quantized transform coefficients in the form of a two-dimensional block.
- the reordering unit 320 may perform reordering in response to coefficient scanning performed by the encoding apparatus.
- the inverse quantization unit 330 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
- information for deriving a quantization parameter may be signaled from the encoding apparatus.
- The inverse transform unit 340 may inverse-transform the transform coefficients to derive residual samples.
- the prediction unit 350 may perform prediction on the current block and generate a prediction block including prediction samples for the current block.
- the unit of prediction performed by the prediction unit 350 may be a coding block, a transform block, or a prediction block.
- the prediction unit 350 may determine whether to apply intra prediction or inter prediction.
- a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
- The unit for generating a prediction sample in inter prediction and in intra prediction may also be different.
- whether to apply inter prediction or intra prediction may be determined in units of CUs.
- In inter prediction, a prediction mode may be determined and a prediction sample generated in PU units.
- In intra prediction, a prediction mode may be determined in PU units and a prediction sample generated in TU units.
- the prediction unit 350 may derive the prediction sample for the current block based on the neighbor reference samples in the current picture.
- the prediction unit 350 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighbor reference samples of the current block.
- the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
- the prediction unit 350 may derive the prediction sample for the current block based on the sample specified on the reference picture by the motion vector on the reference picture.
- In inter prediction, the prediction unit 350 may derive a prediction sample for the current block by applying any one of the skip mode, the merge mode, and the MVP mode.
- the motion information of the neighboring block may be used as the motion information of the current block.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the prediction unit 350 may construct a merge candidate list using motion information of available neighboring blocks, and use information indicated by the merge index on the merge candidate list as a motion vector of the current block.
- the merge index may be signaled from the encoding device.
- The motion information may include a motion vector and a reference picture. When the motion information of a temporal neighboring block is used in the skip mode or the merge mode, the topmost (first) picture on the reference picture list may be used as the reference picture.
- In the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- In the MVP mode, the motion vector of the current block may be derived using the motion vector of a neighboring block as a motion vector predictor.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the prediction unit 350 may perform inter view prediction.
- the prediction unit 350 may configure a reference picture list including pictures of other views.
- the prediction unit 350 may derive a disparity vector.
- The prediction unit 350 may specify a depth block in a depth view based on the disparity vector, construct the merge list, and perform inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
- the disparity vector for the current block can be derived from the depth value using the camera parameter or from the motion vector or disparity vector of the neighboring block in the current or other view.
- Camera parameters may be signaled from the encoding device.
- The prediction unit 350 may add, to the merge candidate list, an IvMC corresponding to the temporal motion information of the reference view, an IvDC corresponding to the disparity vector, a shifted IvMC derived by shifting the disparity vector, a texture merge candidate (T) derived from the texture picture corresponding to the case where the current block is a block on the depth map, a disparity derived merge candidate (D) derived using the disparity from the texture merge candidate, and a view synthesis prediction (VSP) merge candidate derived based on view synthesis.
- the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
- the prediction unit 350 may apply inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
- the prediction unit 350 may use a block in the reference view specified by the disparity vector as the reference block.
- The prediction unit 350 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and may use the disparity vector as a candidate disparity vector for disparity compensated prediction (DCP).
- the adder 360 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
- the adder 360 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
- Although the adder 360 has been described as a separate component, the adder 360 may be part of the predictor 350.
- The filter unit 370 may apply deblocking filtering and/or a sample adaptive offset to the reconstructed picture.
- the sample adaptive offset may be applied in units of samples and may be applied after deblocking filtering.
- The memory 380 may store information necessary for picture reconstruction or decoding.
- the memory 380 may store pictures used for inter prediction / inter view prediction.
- pictures used for inter prediction / inter view prediction may be designated by a reference picture set or a reference picture list.
- the reconstructed picture can be used as a reference picture for another picture.
- the memory 380 may output the reconstructed picture in the output order.
- the output unit may display a plurality of different views.
- each decoding apparatus may operate for each view, and an internal module (eg, a prediction module) corresponding to each view in one decoding apparatus may be provided.
- Multi-view video coding can enhance video coding efficiency for the current view by coding the current picture using decoding data of another view belonging to the same access unit (AU) as the current picture.
- views may be coded in units of AUs, and pictures may be coded in units of views. Coding proceeds between views according to a predetermined order.
- a view that can be coded without reference to another view can be referred to as a base view or an independent view.
- a view that can be coded by referring to an independent view or another view after the independent view is coded may be referred to as a dependent view or extended view.
- the view referred to in coding of the current view may be referred to as a reference view.
- coding the view includes coding a video picture, a depth map, or the like belonging to the corresponding view.
- FIG. 4 is a diagram schematically illustrating an example of multi-view video coding to which the present invention may be applied.
- pictures with different view IDs in the same AU and the same POC are coded according to a predefined view coding order.
- V0, the first coded view in the AU, may be coded without reference to another view and is thus a base view or an independent view.
- V1, the next coded view, is a dependent view.
- the base view is coded by referring to pictures belonging to the base view without referring to other views.
- A dependent view is coded after the base view, with reference to another view that has already been coded.
- a CU belonging to a dependent view may perform inter prediction by referring to a picture that is already coded.
- a method of performing prediction by referring to a picture having the same view ID is referred to as motion compensated prediction (MCP)
- A method of performing prediction by referring to an already coded picture of a different view in the same AU is referred to as disparity compensated prediction (DCP).
- For example, block A may derive prediction samples by performing MCP based on a motion vector, referring to a picture belonging to the same view V1.
- the B block may derive prediction samples by performing DCP based on the disparity vector with reference to a picture of a view V0 different from itself in the same AU.
- pictures of different views may be used, and depth maps of the same view may be used.
- The disparity vector is added to the position (x, y) of the prediction sample in the current block, whereby the location of the reference sample in the corresponding picture of the reference view may be determined.
- the prediction sample may be derived based on the reference sample in the reference view.
- the disparity vector may have only an x-axis component. In this case, the disparity vector may be (disp, 0), and the position (xr, y) of the reference sample may be determined as (x + disp, y). Where disp represents the value of the disparity vector.
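- A minimal sketch of this reference-sample addressing for DCP (the clipping to picture bounds is an added assumption; names are illustrative):

```python
def dcp_reference_position(x, y, disp, pic_width, pic_height):
    # For a horizontal-only disparity vector (disp, 0), the reference
    # sample for position (x, y) is at (x + disp, y), clipped so the
    # address stays inside the inter-view reference picture.
    xr = min(max(x + disp, 0), pic_width - 1)
    yr = min(max(y, 0), pic_height - 1)
    return xr, yr
```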
- the disparity vector may be derived from neighboring blocks.
- the disparity vector can be derived by searching for spatial or temporal neighboring blocks.
- the spatial or temporal neighboring blocks may be neighboring blocks coded based on DCP. If the first disparity vector is found according to a predetermined search order, the disparity vector derivation process is terminated.
- the disparity vector derived from the neighboring blocks may be called a disparity vector from neighboring blocks (NBDV).
- 5 exemplarily shows spatial and temporal neighboring blocks to which the present invention can be applied.
- A1 510 and B1 520 represent spatial neighboring blocks of the current CU 500, and T0 530 represents a temporal neighboring block of the current CU 500.
- A1 510 and B1 520 may be located in the current picture, and T0 530 may be located in the reference picture.
- A1 510 is the left neighboring block of the current CU 500, and B1 520 is the upper neighboring block of the current CU 500.
- A1 510 may be the lowest neighboring block among the neighboring blocks adjacent to the left boundary of the current CU 500, and B1 520 may be the rightmost neighboring block among the neighboring blocks adjacent to the upper boundary of the current CU 500.
- The disparity vector of the current CU 500 can be derived based on A1 510, B1 520, and T0 530, which may be searched according to a predetermined search order.
- First, the temporal neighboring block T0 530 may be searched; if a disparity vector is not found based on the temporal neighboring block, the spatial neighboring blocks A1 510 and B1 520 may be searched next. Finally, neighboring blocks coded based on MCP may be searched; in this case, the disparity vector that was used when predicting a temporal inter-view motion vector may be found. If no disparity vector is found from the neighboring blocks through the above process, a zero disparity vector may be designated as the disparity vector of the current CU. In this case, the zero disparity vector may point to the inter-view reference picture (a block or sample in that reference picture) having the smallest index in the reference picture list.
- The spatial neighboring blocks may be searched in the order of A1 510 and B1 520, for example.
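- The NBDV search just described can be summarized in a short sketch (the per-block accessors such as is_dcp_coded are hypothetical interfaces, not the patent's syntax):

```python
def derive_nbdv(temporal_blocks, spatial_blocks, mcp_blocks):
    # Search order: temporal neighbors (e.g. T0), then spatial
    # neighbors (A1, then B1), then MCP-coded neighbors that carry an
    # inter-view predicted motion vector (DV-MCP blocks).
    for blk in temporal_blocks + spatial_blocks:
        if blk.is_dcp_coded():
            return blk.disparity_vector, blk.ref_view_idx
    for blk in mcp_blocks:
        if blk.has_inter_view_predicted_mv():
            return blk.ivp_mv_disparity, blk.ivp_ref_view_idx
    # Nothing found: fall back to the zero disparity vector, pointing to
    # the inter-view reference picture with the smallest list index.
    return (0, 0), None
```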
- the first candidate picture may be a collocated picture used for temporal motion vector prediction.
- The collocated picture may be indicated in a slice header.
- the second candidate picture is derived based on the ascending order of the reference picture index in the reference picture list and may be added to the candidate list.
- the second candidate picture derivation may be performed by, for example, the following method.
- Step 1) A random access point (RAP) picture is searched for. If a RAP picture is in the candidate list for the second candidate picture, the derivation process is completed. If a RAP picture is not available for the current picture, the process proceeds to step 2).
- Step 2) A picture having the smallest temporal ID (TID) among the temporal reference pictures is derived as the second candidate picture, and the derivation process is completed when the picture having the smallest TID is unique. If there are pictures having the same TID in the candidate list, the process proceeds to step 3).
- the second reference picture derivation process may be regarded as a process of selecting a reference picture estimated to have many disparity vectors.
- the second reference picture derivation process may be performed for each slice and may be performed only once for each slice.
- one temporal candidate block is searched, for example, as T 0 530 of FIG. 5.
- the neighboring blocks coded based on the MCP may be used in the disparity derivation process.
- FIG 6 shows an example of disparity derivation from an MCP coded neighboring block.
- The disparity vector 630 used for the inter-view motion prediction may represent a motion vector between the current picture 600 and the inter-view reference picture 650. A motion vector of this type may be called an inter-view predicted motion vector (IvpMv), and the neighboring block 610 may be called a DV-MCP block.
- To mark DV-MCP blocks and their disparity information, three variables (ivpMvFlag, ivpMvDisparityX, and refViewIdx) may be used.
- For a block that is coded in the merge/skip mode and whose merge index is 0:
- ivpMvFlag is set to 1, and
- ivpMvDisparityX and refViewIdx are set to the disparity vector and the associated view order index used for the inter-view motion prediction, respectively.
- In an embodiment, the disparity vector may be derived from DV-MCP blocks coded in the skip mode.
- In the skip mode, no motion vector difference (MVD) or residual data is signaled, so a DV-MCP block coded in the skip mode carries a more accurate (or better corresponding) disparity vector than a DV-MCP block not coded in the skip mode.
- Accordingly, DV-MCP blocks coded in the skip mode may be searched for among the neighboring blocks of the current CU.
- FIG. 7 exemplarily shows DV-MCP blocks that can be used for disparity vector derivation.
- A1 710 and B1 720 represent DV-MCP (spatial) neighboring blocks of the current CU 700.
- The disparity vector of the current CU 700 may be derived based on A1 710 and B1 720, which are DV-MCP neighboring blocks coded in the skip mode; they may be searched (or scanned), for example, in the order of A1 710 and then B1 720. If a DV-MCP neighboring block coded in the skip mode is found, the ivpMvDisparityX and refViewIdx associated with that neighboring block are returned as the derived disparity vector and view order index, and the disparity vector derivation process is terminated.
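- A compact sketch of this skip-mode DV-MCP scan, using the three variables named above (the field access is a hypothetical interface):

```python
def derive_dv_from_dvmcp(neighbors):
    # Scan spatial neighbors (e.g. in A1 710, B1 720 order) for a
    # skip-coded DV-MCP block and return its stored disparity and
    # associated view order index.
    for blk in neighbors:
        if blk.is_skip_coded and blk.ivpMvFlag == 1:
            return (blk.ivpMvDisparityX, 0), blk.refViewIdx
    return None  # no skip-coded DV-MCP neighbor was found
```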
- If a default view is defined, a zero vector that maps to the default view may be used as the disparity vector.
- the default disparity vector may be treated as an NBDV in the DoNBDV process described later.
- The disparity vector derived through the NBDV derivation process is selected, based on a specific criterion, from among various disparity vector candidates derived from the neighboring blocks of the current CU. It maps to the position having the smallest difference between the neighboring block and the block of the inter-view reference picture, and may differ from the ideal disparity vector that should actually be obtained.
- Since the method rests on the assumption that the characteristics of a neighboring block are similar to those of the current block, the accuracy of the disparity vector may fall when the characteristics of the block from which the disparity vector is derived are in fact not similar to those of the current CU.
- the disparity derived based on the neighboring block may not be accurate enough to be used as the disparity for the current CU.
- Accordingly, the disparity may be corrected or compensated by using the decoded depth map.
- In coding of a dependent view, a decoded depth map of the reference view (e.g., the base view) is available.
- the depth map of the reference view may be utilized to improve derivation of the disparity vector required to code the texture of the dependent view. That is, the NBDV may be corrected based on the depth map, which may be referred to as depth-oriented disparity vector derivation (DoNBDV).
- the disparity vector of the current CU may be corrected through the following process.
- a disparity vector is derived through the NBDV derivation process.
- the disparity vector is used to indicate the position of the corresponding block in the coded depth map of the reference view having the same view order index as the view order index associated with the disparity vector. If the corresponding block is located outside or at the boundary of the depth map, the samples of the corresponding block located outside the boundary of the depth map are clipped and the samples located inside are retained.
- the depth block of the corresponding position of the reference view is assumed to be a "virtual depth block" for the current block of the dependent view.
- the coded depth map in view 0 (view index 0) is labeled Coded D0 830 and the texture to be coded is labeled T1 800.
- The depth block 840 in Coded D0 830 may be derived using the disparity vector estimated through NBDV, and the virtual depth block 850 for the current block is set from the depth block 840.
- the depth value of the sample having the maximum depth value among the four corner samples of the virtual depth block 850 may be converted into the corrected disparity vector.
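- A minimal sketch of this DoNBDV refinement, reusing a hypothetical depth-to-disparity conversion like the one sketched earlier (the clipping and indexing conventions are assumptions):

```python
def refine_nbdv_with_depth(nbdv_x, depth_map, bx, by, w, h, depth_to_disp):
    # Locate the virtual depth block with the NBDV, take the maximum of
    # its four corner samples, and convert that depth value into the
    # refined (corrected) disparity.
    rows, cols = len(depth_map), len(depth_map[0])
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    x0 = clip(bx + nbdv_x, 0, cols - 1)
    y0 = clip(by, 0, rows - 1)
    x1 = clip(x0 + w - 1, 0, cols - 1)
    y1 = clip(y0 + h - 1, 0, rows - 1)
    corners = [depth_map[y0][x0], depth_map[y0][x1],
               depth_map[y1][x0], depth_map[y1][x1]]
    return depth_to_disp(max(corners))
```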
- residual prediction of a current block may be performed by using residual correlation between views. That is, in multi-view video coding, residual prediction may be performed in addition to intra / inter prediction and inter-view prediction for the current block.
- the residual prediction may be called advanced residual prediction (ARP).
- the residual prediction method may be performed by the video encoding apparatus of FIG. 2 and the video decoding apparatus of FIG. 3. More specifically, the residual prediction method may be performed by a prediction unit of each of the video encoding apparatus and the video decoding apparatus.
- The reference blocks (reference samples) used for the residual prediction of the current block 905 are derived, and residual prediction samples for the current block 905 may be generated based on the residual between the derived reference blocks.
- The reference blocks for residual prediction vary depending on whether the current block is inter predicted or inter-view predicted.
- When the current block 905 is inter predicted, the predictor first derives the corresponding block 915 in the reference view Vref corresponding to the current block 905.
- the corresponding block 915 may be derived from the picture 910 belonging to the reference view of the current block 905 among the pictures in the same AU as the current block 905.
- the location of the corresponding block 915 may be specified using the disparity vector 920 within the picture 910 belonging to the reference view.
- the corresponding block 915 may be used as a first reference block (residual prediction block, rpBlock, or rpSamples) for residual prediction of the current block 905.
- The prediction unit derives the reference picture 920 or 930 of the corresponding block 915 in the reference view, and derives the reference block 925 or 935 from the derived reference picture 920 or 930 of the corresponding block 915.
- the reference block 925 or 935 may be used as a second reference block (residual prediction reference block, rpRefBlock or rpRefSamples) for the residual prediction of the current block 905.
- The reference picture 920 or 930 of the corresponding block 915 may be selected as a picture having the same POC value as the reference picture 940 or 950 of the current block 905 in the current view, or may be a reference picture in the reference picture list used for the residual prediction of the current block 905.
- The reference block 925 or 935 of the corresponding block 915 can be specified by performing motion compensation in the reference picture 920 or 930 of the corresponding block 915 using the motion information of the current block 905, for example, the motion vector 960 or 965 of the current block 905.
- The reference picture 940 of the current block 905 is a picture that can be referred to in the forward direction (L0) in inter prediction, and may be, for example, the picture specified by the reference picture index Ref0 in the reference picture list L0.
- The reference picture 950 of the current block 905 is a picture that can be referred to in the backward direction in inter prediction, and may be, for example, the picture specified by the reference picture index Ref1 in the reference picture list L1.
- The prediction unit may use the difference between the first reference block rpBlock and the second reference block rpRefBlock derived for the residual prediction as the residual prediction sample value of the current block 905, as described above.
- the difference value obtained by subtracting the sample value of the reference block 925 or 935 from the sample value of the corresponding block 915 may be derived as the residual prediction sample value of the current block 905.
- the prediction sample value of the current block 905 may be modified based on the residual prediction sample value of the current block 905.
- a weighting factor may be applied to the residual prediction sample value of the current block 905.
- the weight may be transmitted from an encoder to a decoder.
- the weight may be called iv_res_pred_weight_idx.
- the weight may be, for example, one of 0, 0.5 or 1.
- A weight of 0 may indicate that residual prediction is not applied.
- Index information indicating which weight to apply may be transmitted to the decoder in units of blocks during encoding.
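- A minimal sketch of this weighted residual prediction for the inter-predicted case (sample arrays as nested lists; the rpBlock/rpRefBlock names follow the text above, everything else is an assumption):

```python
def apply_arp(pred_block, rp_block, rp_ref_block, weight):
    # Residual prediction sample = rpBlock - rpRefBlock, scaled by the
    # signaled weight (0, 0.5, or 1) and added to the ordinary
    # prediction sample; weight 0 disables residual prediction.
    assert weight in (0, 0.5, 1)
    h, w = len(pred_block), len(pred_block[0])
    return [[pred_block[y][x] + weight * (rp_block[y][x] - rp_ref_block[y][x])
             for x in range(w)] for y in range(h)]
```

- For the inter-view predicted case described next, the same combination applies, with the first reference block taken from the reference picture in the current view and the second from the reference picture of the corresponding block.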
- When the current block 905 is inter-view predicted, the prediction unit derives the reference picture 940 or 950 in the current view and derives the reference block 945 or 955 in the reference picture 940 or 950. For example, the prediction unit may derive the reference block 945 or 955 in the reference picture 940 or 950 based on the derived temporal motion vector and reference picture index.
- the reference block 945 or 955 may be used as a first reference block (residual prediction block, rpBlock, or rpSamples) for residual prediction of the current block 905.
- the prediction unit derives the corresponding block 915 in the reference view corresponding to the current block 905.
- the corresponding block 915 may be derived from the picture 910 belonging to the reference view of the current block 905 among the pictures in the same AU as the current block 905.
- the position of the corresponding block 915 may be specified using the disparity vector 920 of the current block 905 within the picture 910 of the reference view.
- The prediction unit may derive the reference picture 920 or 930 of the corresponding block 915 and derive the reference block 925 or 935 from the derived reference picture 920 or 930 of the corresponding block 915.
- The reference block 925 or 935 may be used as a second reference block (residual prediction reference block, rpRefBlock or rpRefSamples) for the residual prediction of the current block 905.
- The reference picture 920 or 930 of the corresponding block 915 may be selected as a picture having the same POC value as the reference picture 940 or 950 of the current block 905 in the current view, or may be a reference picture in the reference picture list for the reference view used for the residual prediction of the current block 905.
- The reference block 925 or 935 of the corresponding block 915 can be specified by performing motion compensation in the reference picture 920 or 930 of the corresponding block 915 using the motion information of the current block 905, for example, the temporal motion vector of the current block 905.
- The prediction unit may use the difference between the first reference block rpBlock and the second reference block rpRefBlock derived for the residual prediction as the residual prediction sample value of the current block 905, as described above. For example, a value obtained by subtracting the sample value of the reference block 925 or 935 in the reference view from the sample value of the reference block 945 or 955 in the current view may be derived as the residual prediction value of the current block 905.
- the prediction sample value of the current block 905 may be modified based on the residual prediction sample value of the current block 905. In this case, as described above, a weight may be applied to the residual prediction sample value of the current block 905.
- the weight may be, for example, one of 0, 0.5 or 1.
- a weight of 0 may indicate that residual prediction is not applied.
- Index information indicating which weight to apply may be transmitted in units of blocks.
- Encoding/decoding may be performed using NBDV and DoNBDV in various coding tools such as IvMC, VSP, and ARP.
- the disparity vector, target (reference) view, and availability for each CU may be used together.
- the availability may be referred to as disparity availability and may be divided into three cases as follows.
- DISP_AVAILABLE: an NBDV is derived from a neighboring block and is used as the disparity vector.
- DISP_DEFAULT: no NBDV is found, but a default view is specified and the zero vector (default vector) is used as the disparity vector.
- Conventionally, the above-described ARP may be performed only when the availability parameter of the CU is DISP_AVAILABLE; that is, ARP can be performed only when an NBDV is obtained from the neighboring blocks.
- Depending on the coding tool, it may not be necessary to distinguish whether the currently derived disparity vector is an NBDV or the zero vector on the default view. For example, when depth refinement is performed on a disparity vector candidate based on the corresponding depth block, the disparity vector is refined regardless of whether it is an NBDV or the default vector on the default view. However, when ARP is performed only when there is an NBDV derived from a neighboring block, a discrepancy arises: if the NBDV found from the neighboring block is a zero vector, ARP is performed, but if the default disparity vector (also a zero vector) is used, ARP is not performed. By preventing such discrepancies and enabling ARP to be performed even when the default disparity vector is used, prediction and coding efficiency for the current block can be improved.
- To this end, the aforementioned DISP_DEFAULT may be regarded as DISP_AVAILABLE.
- the information (flag) indicating disparity availability only needs to indicate two cases, not three.
- As an example, a flag indicating the disparity availability may be called DispAvailFlag.
- When a default view is set, DispAvailFlag may be set to 1.
- When an NBDV is derived from a neighboring block, dvAvailFlag may be set to 1; when dvAvailFlag is 0, the reference view index for the current block is set to the default view index, and the disparity vector is set to the zero vector (0, 0).
- The ARP may be performed when RpRefPicAvailFlag, which indicates the availability of a reference picture for residual prediction, is present, and RpRefPicAvailFlag may have a value of 1 when the above-described DispAvailFlag is 1.
- the weight iv_res_pred_weight_idx for the residual prediction may be signaled, and ARP may be applied when the iv_res_pred_weight_idx for the current block is not zero.
- That is, according to the present invention, even when a default view is set and the disparity vector of the current block is derived based on the default view and the default vector (zero vector), ARP may be performed on the current block.
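- A minimal sketch of the availability handling described above, using the flags named in the text (the control flow around them is an assumption):

```python
def disparity_for_current_block(nbdv, default_view_idx):
    # Returns (disparity_vector, ref_view_idx, DispAvailFlag).
    # DISP_DEFAULT is treated like DISP_AVAILABLE: when no NBDV is found
    # but a default view is set, the zero vector still counts as available.
    if nbdv is not None:                      # dvAvailFlag == 1
        disparity_vector, ref_view_idx = nbdv
        return disparity_vector, ref_view_idx, 1
    if default_view_idx is not None:          # dvAvailFlag == 0, default set
        return (0, 0), default_view_idx, 1
    return None, None, 0

def arp_applies(disp_avail_flag, iv_res_pred_weight_idx):
    # RpRefPicAvailFlag may be 1 when DispAvailFlag is 1 (and a residual-
    # prediction reference picture exists); ARP is applied when the
    # signaled weight index is non-zero.
    return disp_avail_flag == 1 and iv_res_pred_weight_idx != 0
```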
- FIG. 10 is a flowchart schematically showing an encoding method using residual prediction according to an embodiment of the present invention.
- the encoding apparatus derives a disparity vector for the current block of the current view (S1000).
- the encoding apparatus may derive the disparity vector based on the above-described NBDV derivation method. In this case, the encoding apparatus may correct the disparity vector based on the above-described DoNBDV method.
- the encoding apparatus may derive a zero vector on the default view as a disparity vector for the current block.
- the encoding apparatus generates a residual prediction sample of the current block (S1010).
- the residual prediction sample generation may be performed in units of CUs.
- the current block may be a CU block or a coding block (CB).
- a CU may be partitioned into one or a plurality of PUs, and thus the current block may include a PU.
- The residual prediction sample can be generated not only when the disparity vector for the current block is derived based on the NBDV, but also when it is derived based on the zero vector on the default view. That is, according to the present invention, the residual prediction operation can be performed even when the disparity vector for the current block is not derived from a neighboring block, and instead the default view is used as the reference view and the zero vector as the disparity vector.
- the encoding apparatus may generate the residual prediction sample of the current block based on the residual prediction operation; an illustrative sketch follows.
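- as an illustration only, the weighted residual prediction operation on a CU-sized sample array might look as follows (NumPy sketch; the inputs base and base_ref, i.e. the inter-view corresponding block and its motion-compensated reference, are assumed to be already derived, and the weight mapping 1 -> 0.5, 2 -> 1.0 is an assumption of this sketch):

```python
import numpy as np

# Assumed mapping from iv_res_pred_weight_idx to an ARP weight; index 0
# means ARP is off and never reaches this function.
ARP_WEIGHTS = {1: 0.5, 2: 1.0}

def residual_prediction_samples(base: np.ndarray,
                                base_ref: np.ndarray,
                                weight_idx: int) -> np.ndarray:
    """Weighted difference used as the residual predictor of the current CU."""
    w = ARP_WEIGHTS[weight_idx]
    # Work in a wider integer type to avoid wrap-around on 8/10-bit samples.
    diff = base.astype(np.int32) - base_ref.astype(np.int32)
    return (w * diff).astype(np.int32)

# Example on a 2x2 block.
base = np.array([[120, 130], [110, 100]], dtype=np.uint8)
base_ref = np.array([[118, 126], [112, 100]], dtype=np.uint8)
print(residual_prediction_samples(base, base_ref, weight_idx=1))
# [[ 1  2]
#  [-1  0]]
```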
- the encoding apparatus may encode video information for video decoding (S1020).
- the encoding apparatus may entropy-encode the video information and output it in the form of a bitstream.
- the output bitstream may be transmitted through a network or stored in a storage medium.
- the video information may include information for performing residual prediction (e.g., a weight index for residual prediction).
- the video information may include values of syntax elements for reconstructing the current block as well as residual information about the current block.
- FIG. 11 is a flowchart schematically illustrating a decoding method using residual prediction according to an embodiment of the present invention.
- the decoding apparatus may decode video information included in the bitstream (S1100).
- the decoding apparatus may entropy-decode the bitstream and obtain the video information.
- the video information may include values of syntax elements for reconstructing the current block as well as residual information about the current block.
- the decoding apparatus may decode information about a prediction mode applied to the current block, and may decode a weight index for residual prediction.
- the decoding apparatus derives a disparity vector for the current block of the current view (S1110).
- the decoding apparatus may derive the disparity vector based on the above-described NBDV derivation method. In this case, the decoding apparatus may correct the disparity vector based on the above-described DoNBDV method.
- when no disparity vector is derived from the neighboring blocks, the decoding apparatus may derive the zero vector on the default view as the disparity vector for the current block.
- the decoding apparatus generates residual prediction samples of the current block (S1120).
- the residual prediction sample generation may be performed in units of CUs.
- the current block may be a CU block or a coding block (CB).
- a CU may be partitioned into one or a plurality of PUs, and thus the current block may include a PU.
- the residual prediction sample can be generated not only when the disparity vector for the current block is derived based on the NBDV but also when it is based on the zero vector on the default view. That is, according to the present invention, the residual prediction operation can be performed even when the disparity vector for the current block is not derived from a neighboring block, in which case the default view is used as the reference view and the zero vector is used as the disparity vector.
- the decoding apparatus may generate a residual prediction sample of the current block based on the residual prediction operation.
- the decoding apparatus may generate a modified prediction sample for the current block based on the prediction sample previously generated for the current block and the residual prediction sample, and may generate a reconstructed sample (and a reconstructed picture) based on the modified prediction sample and the residual sample.
- the prediction sample may be derived in a PU or prediction block (PB) unit.
- the residual prediction sample may be derived in a CU or CB unit.
- the residual prediction sample generated in CU units may be partitioned in PU units, whereby the modified prediction sample is generated based on the prediction sample and the residual prediction sample on a PU basis.
- the decoding apparatus may reconstruct the current block by adding the residual sample to the modified prediction sample, as in the sketch below.
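- a decoder-side sketch of these last two steps, assuming hypothetical array inputs and a simple list of PU rectangles, is:

```python
import numpy as np

def reconstruct_cu(pred_cu: np.ndarray,      # prediction samples, assembled CU-wide
                   res_pred_cu: np.ndarray,  # CU-level residual prediction samples
                   resid_cu: np.ndarray,     # decoded residual samples
                   pu_rects,                 # list of (y, x, h, w) PU rectangles
                   bit_depth: int = 8) -> np.ndarray:
    out = pred_cu.astype(np.int32)
    for (y, x, h, w) in pu_rects:
        # Modified prediction sample = prediction sample + residual
        # prediction sample, formed PU by PU within the CU.
        out[y:y + h, x:x + w] += res_pred_cu[y:y + h, x:x + w]
    # Reconstructed sample = modified prediction sample + decoded residual,
    # clipped to the valid sample range.
    out += resid_cu.astype(np.int32)
    return np.clip(out, 0, (1 << bit_depth) - 1)

# Example: one 2x2 CU containing a single PU covering the whole CU.
pred = np.full((2, 2), 100, dtype=np.uint8)
res_pred = np.array([[1, 2], [-1, 0]], dtype=np.int32)
resid = np.array([[3, 0], [0, -2]], dtype=np.int32)
print(reconstruct_cu(pred, res_pred, resid, pu_rects=[(0, 0, 2, 2)]))
# [[104 102]
#  [ 99  98]]
```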
- the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
- the module may be stored in memory and executed by a processor.
- the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
Claims (8)
- A residual prediction method, comprising: deriving a disparity vector for a current block; and performing residual prediction on the current block based on the disparity vector to generate a residual prediction sample, wherein the disparity vector is not derived based on a neighboring block of the current block but is derived from a preset default vector.
- The residual prediction method of claim 1, wherein the default vector is a zero vector.
- The residual prediction method of claim 2, wherein flag information indicating availability of the disparity vector for the current block indicates two states, available and unavailable, and the flag information indicates a value of 1 when the disparity information is derived from the preset default vector.
- The residual prediction method of claim 2, wherein the current block on which the residual prediction sample generation is performed is a coding unit (CU) block.
- The residual prediction method of claim 4, further comprising: generating a prediction sample by performing inter prediction or inter-view prediction on the current block; and generating a modified prediction sample based on the prediction sample and the residual prediction sample.
- The residual prediction method of claim 5, wherein the residual prediction sample generation is performed on the CU block, the CU block is partitioned into one or a plurality of prediction unit (PU) blocks, and the prediction sample generation and the modified prediction sample generation are performed on the one or plurality of PU blocks.
- The residual prediction method of claim 2, further comprising receiving information on residual prediction through a bitstream, wherein the information on residual prediction includes weight information for residual prediction, the weight information is included in the received information on residual prediction when the disparity vector is derived based on a neighboring block of the current block or derived from the default vector, and residual prediction is performed on the current block when the weight information does not indicate 0.
- A residual prediction apparatus, comprising a prediction unit configured to derive a disparity vector for a current block and to perform residual prediction on the current block based on the disparity vector to generate a residual prediction sample, wherein the prediction unit derives the disparity vector not based on a neighboring block of the current block but from a preset default vector.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/510,500 US10587894B2 (en) | 2014-10-08 | 2015-09-07 | Method and device for encoding/decoding 3D video |
KR1020177006804A KR20170065502A (ko) | 2014-10-08 | 2015-09-07 | Method and apparatus for encoding/decoding 3D video |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462061178P | 2014-10-08 | 2014-10-08 | |
US62/061,178 | 2014-10-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016056754A1 true WO2016056754A1 (ko) | 2016-04-14 |
Family
ID=55653326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/009422 WO2016056754A1 (ko) | Method and apparatus for encoding/decoding 3D video | 2014-10-08 | 2015-09-07 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10587894B2 (ko) |
KR (1) | KR20170065502A (ko) |
WO (1) | WO2016056754A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113873239A (zh) * | 2016-06-24 | 2021-12-31 | 株式会社Kt | 用于处理视频信号的方法和设备 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6506036B2 (ja) * | 2015-02-02 | 2019-04-24 | オリンパス株式会社 | 撮像機器 |
KR102459853B1 (ko) * | 2017-11-23 | 2022-10-27 | 삼성전자주식회사 | 디스패리티 추정 장치 및 방법 |
KR102030384B1 (ko) * | 2018-06-19 | 2019-11-08 | 광운대학교 산학협력단 | 잔차 계수 부호화/복호화 방법 및 장치 |
CN112262576A (zh) | 2018-06-11 | 2021-01-22 | 光云大学校产学协力团 | 残差系数编码/解码方法和装置 |
US20210092390A1 (en) * | 2019-09-23 | 2021-03-25 | Alibaba Group Holding Limited | Methods and apparatuses for prediction refinement with optical flow in reference picture resampling |
US11956441B2 (en) * | 2021-12-23 | 2024-04-09 | Ati Technologies Ulc | Identifying long term reference frame using scene detection and perceptual hashing |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000125305A (ja) * | 1998-10-20 | 2000-04-28 | Nippon Hoso Kyokai <Nhk> | Motion vector evaluation method and apparatus |
KR20080007069A (ko) * | 2006-07-14 | 2008-01-17 | LG Electronics Inc. | Video signal decoding method and apparatus |
KR20140049498A (ko) * | 2011-02-10 | 2014-04-25 | Panasonic Corporation | Moving picture encoding method, moving picture encoding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture encoding/decoding apparatus |
WO2014075625A1 (en) * | 2012-11-16 | 2014-05-22 | Mediatek Inc. | Method and apparatus of constrained disparity vector derivation in 3d video coding |
US20140176674A1 (en) * | 2012-12-21 | 2014-06-26 | Qualcomm Incorporated | Disparity vector derivation in three-dimensional video coding |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9503720B2 (en) * | 2012-03-16 | 2016-11-22 | Qualcomm Incorporated | Motion vector coding and bi-prediction in HEVC and its extensions |
US20130258052A1 (en) * | 2012-03-28 | 2013-10-03 | Qualcomm Incorporated | Inter-view residual prediction in 3d video coding |
US9503702B2 (en) * | 2012-04-13 | 2016-11-22 | Qualcomm Incorporated | View synthesis mode for three-dimensional video coding |
US9325990B2 (en) * | 2012-07-09 | 2016-04-26 | Qualcomm Incorporated | Temporal motion vector prediction in video coding extensions |
US9615090B2 (en) * | 2012-12-28 | 2017-04-04 | Qualcomm Incorporated | Parsing syntax elements in three-dimensional video coding |
US9967586B2 (en) * | 2013-01-07 | 2018-05-08 | Mediatek Inc. | Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding |
US9596448B2 (en) * | 2013-03-18 | 2017-03-14 | Qualcomm Incorporated | Simplifications on disparity vector derivation and motion vector prediction in 3D video coding |
US9609347B2 (en) * | 2013-04-04 | 2017-03-28 | Qualcomm Incorporated | Advanced merge mode for three-dimensional (3D) video coding |
US9930363B2 (en) * | 2013-04-12 | 2018-03-27 | Nokia Technologies Oy | Harmonized inter-view and view synthesis prediction for 3D video coding |
US9948915B2 (en) * | 2013-07-24 | 2018-04-17 | Qualcomm Incorporated | Sub-PU motion prediction for texture and depth coding |
WO2015021914A1 (en) * | 2013-08-13 | 2015-02-19 | Mediatek Inc. | Method of deriving default disparity vector in 3d and multiview video coding |
US9948950B2 (en) * | 2014-01-03 | 2018-04-17 | Qualcomm Incorporated | Disparity vector and/or advanced residual prediction for video coding |
US9967592B2 (en) * | 2014-01-11 | 2018-05-08 | Qualcomm Incorporated | Block-based advanced residual prediction for 3D video coding |
US10158884B2 (en) * | 2014-03-19 | 2018-12-18 | Qualcomm Incorporated | Simplified merge list construction process for 3D-HEVC |
CN108632629B9 (zh) * | 2014-03-19 | 2021-06-15 | 株式会社Kt | 针对多视图视频信号生成合并候选列表的方法及解码装置 |
CN107005705B (zh) * | 2014-10-07 | 2021-03-09 | 三星电子株式会社 | 使用层间预测对多层图像进行编码或解码的方法和装置 |
- 2015-09-07: WO PCT/KR2015/009422 (WO2016056754A1), active, Application Filing
- 2015-09-07: KR KR1020177006804A (KR20170065502A), status unknown
- 2015-09-07: US US15/510,500 (US10587894B2), active
Also Published As
Publication number | Publication date |
---|---|
US20170289573A1 (en) | 2017-10-05 |
US10587894B2 (en) | 2020-03-10 |
KR20170065502A (ko) | 2017-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016056754A1 (ko) | | Method and apparatus for encoding/decoding 3D video |
WO2012081879A1 (ko) | | Method for decoding inter-prediction-encoded video |
WO2018070632A1 (ko) | | Image decoding method and apparatus in an image coding system |
WO2016200043A1 (ko) | | Method and apparatus for virtual-reference-picture-based inter prediction in a video coding system |
WO2017188565A1 (ko) | | Image decoding method and apparatus in an image coding system |
WO2016056821A1 (ko) | | Method and apparatus for compressing motion information for 3D video coding |
WO2018021585A1 (ko) | | Intra prediction method and apparatus in an image coding system |
WO2017048008A1 (ko) | | Inter prediction method and apparatus in a video coding system |
WO2016056782A1 (ko) | | Depth picture coding method and apparatus in video coding |
WO2018174357A1 (ko) | | Image decoding method and apparatus in an image coding system |
WO2015057033A1 (ko) | | Method and apparatus for encoding/decoding 3D video |
WO2018128222A1 (ko) | | Image decoding method and apparatus in an image coding system |
WO2016056779A1 (ko) | | Method and apparatus for processing camera parameters in 3D video coding |
WO2020005002A1 (ko) | | Method and apparatus for deriving a template region according to inter prediction in an image coding system |
WO2020141932A1 (ko) | | Inter prediction method and apparatus using CPR-based MMVD |
WO2020141831A2 (ko) | | Image coding method and apparatus using intra block copy prediction |
WO2015057032A1 (ko) | | Method and apparatus for encoding/decoding multi-view video |
WO2020145656A1 (ko) | | Method and apparatus for signaling availability of a TMVP candidate |
WO2018074813A1 (ko) | | Apparatus and method for encoding or decoding an image |
WO2020141884A1 (ko) | | Image coding method and apparatus using CPR-based MMVD |
WO2020004879A1 (ko) | | Image decoding method and apparatus according to inter prediction using a plurality of neighboring blocks in an image coding system |
WO2019225932A1 (ko) | | Image decoding method using DMVR in an image coding system and apparatus therefor |
WO2014098380A1 (ko) | | Method for decoding the intra prediction mode of a chroma block in hierarchical video coding, and apparatus using the method |
WO2021107622A1 (ko) | | Image/video coding method and apparatus |
WO2018128228A1 (ko) | | Image decoding method and apparatus in an image coding system |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15848595; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 20177006804; Country of ref document: KR; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 15510500; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 15848595; Country of ref document: EP; Kind code of ref document: A1 |