WO2016056821A1 - Method and device for compressing motion information for three-dimensional (3D) video coding - Google Patents

Method and device for compressing motion information for three-dimensional (3D) video coding

Info

Publication number
WO2016056821A1
WO2016056821A1 (PCT/KR2015/010554)
Authority
WO
WIPO (PCT)
Prior art keywords
block
prediction
sample
view
picture
Prior art date
Application number
PCT/KR2015/010554
Other languages
English (en)
Korean (ko)
Inventor
남정학
예세훈
서정동
유선미
Original Assignee
엘지전자 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사
Priority to US15/517,712 (published as US20170310993A1)
Priority to KR1020177009466 (published as KR20170066411A)
Publication of WO2016056821A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present invention relates to video coding, and more particularly, to a method and apparatus for compressing, storing, and utilizing motion information in 3D video coding.
  • High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
  • 3D video can provide realism and immersion using a plurality of view channels.
  • 3D video can be used in various areas such as free viewpoint video (FVV), free viewpoint TV (FTV), 3DTV, surveillance, and home entertainment.
  • 3D video (3DV) using multiple views has a high correlation between views having the same picture order count (POC). Since a multi-view image captures the same scene at the same time using multiple adjacent cameras, that is, multiple views, it contains almost the same information except for parallax and slight lighting differences, so the correlation between the different views is high.
  • information necessary for encoding and/or decoding of the current view may be obtained in consideration of the correlation between different views.
  • the current block that is a decoding target of the current view may be predicted or decoded by referring to a block of another view.
  • Each picture in each view may be divided into blocks of different sizes, and storing all related information for every block in the reference picture of another view (including the reference block used for coding the current block) places a heavy burden on the buffers of the encoder and decoder. Therefore, related information such as motion information should be compressed.
  • the present invention provides a method and apparatus for compressing motion information in 3D video coding.
  • the present invention provides a method and apparatus for compressing and storing motion information for each block in a reference picture of a reference view.
  • the present invention derives motion information for a reference block in consideration of motion information compression when coding a current block.
  • motion information of the corresponding block is derived in consideration of motion information compression.
  • the present invention provides a method and apparatus for deriving motion information of a corresponding block when an inter-view merge candidate or residual prediction method is used for a current block.
  • a 3D video decoding method includes deriving a disparity vector for a current block, deriving a position of a corresponding sample on a reference view based on the disparity vector, deriving a position of a reference sample based on the position of the corresponding sample, and deriving motion information of a prediction block covering the position of the reference sample.
  • inter-view prediction or residual prediction for the current block may be performed based on the derived motion information.
  • the position of the corresponding sample may be derived as the upper left position of the corresponding block determined based on the position of the current block and the disparity vector.
  • the position of the corresponding sample may be derived as the center position of the corresponding block determined based on the position of the current block and the disparity vector.
  • the position of the reference sample may be a top-left sample position of a motion compression unit block including the corresponding sample.
  • the position of the reference sample may be derived by performing a shift operation based on the position of the corresponding sample.
  • a 3D video decoding apparatus includes a decoding unit for decoding video information, and a prediction unit for deriving a disparity vector for a current block, determining a position of a corresponding sample on a reference view based on the disparity vector, deriving a position of a reference sample based on the position of the corresponding sample, and deriving motion information of a prediction block covering the position of the reference sample.
  • motion information may be compressed and stored in 3D video coding. Accordingly, when a corresponding block in the reference view is derived using a disparity vector, the motion information of the corresponding block may be derived in consideration of motion information compression, the buffer load of the encoder and decoder may be reduced, and coding efficiency can be improved by reducing the amount of information to be processed.
  • FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video to which the present invention can be applied.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • FIG. 3 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • FIG. 4 is a diagram schematically illustrating an inter-view motion prediction method.
  • FIG. 5 is a diagram schematically illustrating a residual prediction method.
  • FIG. 6 is a diagram schematically illustrating a positional relationship between a current block and a corresponding block.
  • FIG. 7 is a flowchart schematically illustrating a 3D video coding method according to an embodiment of the present invention.
  • the configurations in the drawings described in the present invention are shown independently for convenience of describing different characteristic functions; this does not mean that each configuration is implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
  • Embodiments in which the configurations are integrated and/or separated are also included in the scope of the present invention without departing from its spirit.
  • a picture generally refers to a unit representing one image at a specific time instance, and a slice is a unit constituting a part of a picture in coding.
  • One picture may be composed of a plurality of slices, and, when necessary, the terms picture and slice may be used interchangeably.
  • a pixel or a pel may refer to a minimum unit constituting one picture (or image).
  • 'sample' may be used as a term indicating a value of a specific pixel.
  • the sample may generally represent the value of the pixel, may represent only the pixel value of the luma component, or may represent only the pixel value of the chroma component.
  • a unit represents the basic unit of image processing.
  • the unit may include at least one of a specific region of the picture and information related to the region.
  • the unit may be used interchangeably with terms such as block or area in some cases.
  • an M×N block may represent a set of samples or transform coefficients composed of M columns and N rows.
  • FIG. 1 is a diagram schematically illustrating a process of encoding and decoding 3D video to which the present invention can be applied.
  • the 3D video encoder may encode a video picture, a depth map, and a camera parameter to output a bitstream.
  • the depth map may be composed of distance information (depth information) between a camera and a subject with respect to pixels of a corresponding video picture (texture picture).
  • the depth map may be an image in which depth information is normalized according to bit depth.
  • the depth map may consist of recorded depth information without chroma (color difference) components.
  • the depth map may be called a depth map picture or a depth picture.
  • the disparity information indicating the correlation between views may be derived from the depth information of the depth map using camera parameters.
  • a bitstream including a video picture (texture picture), that is, a general color image, together with a depth map and camera parameters may be transmitted to a decoder through a network or a storage medium.
  • the decoder side can receive the bitstream and reconstruct the video.
  • the 3D video decoder may decode the video picture and the depth map and the camera parameters from the bitstream.
  • a view required for a multi view display may be synthesized based on the decoded video picture, the depth map, and the camera parameter. In this case, if the display used is a stereo display, the 3D image may be displayed using pictures of two views among the reconstructed multi views.
  • the stereo video decoder can reconstruct two pictures that will each be incident on both eyes from the bitstream.
  • a stereoscopic image may be displayed by using a view difference or disparity between a left image incident to the left eye and a right image incident to the right eye.
  • if the multi-view display is used together with the stereo video decoder, different views may be generated based on the two reconstructed pictures to display multiple views.
  • the 2D image may be restored and the image may be output to the 2D display.
  • the decoder may output one of the reconstructed images to the 2D display when using a 3D video decoder or a stereo video decoder.
  • view synthesis may be performed at the decoder side and may be performed at the display side.
  • the decoder and the display may be one device or may be separate devices.
  • the 3D video decoder, the stereo video decoder, and the 2D video decoder are described as separate decoders.
  • one decoding apparatus may perform 3D video decoding, stereo video decoding, and 2D video decoding.
  • the 3D video decoding apparatus may perform 3D video decoding
  • the stereo video decoding apparatus may perform stereo video decoding
  • the 2D video decoding apparatus may perform 2D video decoding.
  • the multi view display may output 2D video or output stereo video.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • the video encoding apparatus 200 may include a picture splitter 205, a predictor 210, a subtractor 215, a transformer 220, a quantizer 225, a reorderer 230, an entropy encoding unit 235, an inverse quantization unit 240, an inverse transform unit 245, an adder 250, a filter unit 255, and a memory 260.
  • the picture dividing unit 205 may divide the input picture into at least one processing unit block.
  • the processing unit block may be a coding unit block, a prediction unit block, or a transform unit block.
  • the coding unit block may be split from the largest coding unit block along a quad-tree structure, and serves as the unit block of coding.
  • the prediction unit block is a block partitioned from the coding unit block and may be the unit block of sample prediction. In this case, the prediction unit block may be divided into sub-blocks.
  • the transform unit block may be split from the coding unit block along the quad-tree structure, and may be a unit block from which transform coefficients are derived or a unit block from which a residual signal is derived from the transform coefficients.
  • hereinafter, a coding unit block may be called a coding block (CB) or a coding unit (CU), a prediction unit block may be called a prediction block (PB) or a prediction unit (PU), and a transform unit block may be called a transform block (TB) or a transform unit (TU).
  • a prediction block or prediction unit may mean a specific area in the form of a block within a picture, and may include an array of prediction samples.
  • a transform block or a transform unit may mean a specific area in a block form within a picture, and may include an array of transform coefficients or residual samples.
  • the prediction unit 210 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a prediction block including prediction samples of the current block.
  • the unit of prediction performed by the prediction unit 210 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 210 may determine whether intra prediction or inter prediction is applied to the current block. As an example, the prediction unit 210 may determine whether intra prediction or inter prediction is applied on a CU basis.
  • the prediction unit 210 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter, the current picture). In this case, the prediction unit 210 may (i) derive a prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample located in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. Case (i) is called the non-directional mode, and case (ii) is called the directional mode. The prediction unit 210 may also determine the prediction mode applied to the current block using the prediction mode applied to a neighboring block.
  • the prediction unit 210 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
  • the prediction unit 210 may derive the prediction sample for the current block by applying any one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode.
  • in the skip mode and the merge mode, the prediction unit 210 may use the motion information of a neighboring block as the motion information of the current block.
  • in the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • in the MVP mode, the motion vector of the current block can be derived using the motion vector of a neighboring block as a motion vector predictor.
  • the neighboring block includes a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • the motion information may include a motion vector and a reference picture. When the motion information of the temporal neighboring block is used in the skip mode and the merge mode, the picture at the top of the reference picture list may be used as the reference picture.
  • views may be divided into independent views and dependent views.
  • in the case of a dependent view, the prediction unit 210 may perform not only inter prediction but also inter-view prediction.
  • the predictor 210 may construct a reference picture list by including pictures of other views.
  • the prediction unit 210 may derive a disparity vector.
  • the disparity vector may specify a block corresponding to the current block in another view of the same access unit (AU) as the current picture.
  • the AU may include video pictures and depth maps that correspond to the same time instance, for example in multiple views.
  • AU may mean a set of pictures having the same Picture Order Count (POC).
  • POC corresponds to the display order of the pictures and may be distinguished from the coding order.
  • the prediction unit 210 may specify a depth block in a depth view based on the disparity vector, and may perform merge list construction, inter-view motion prediction, residual prediction, illumination compensation (IC), and view synthesis.
  • the disparity vector for the current block can be derived from the depth value using camera parameters or from the motion vector or disparity vector of the neighboring block in the current or other view.
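  • [Illustrative addition, not in the patent text] As a rough sketch of the depth-to-disparity conversion just described, assuming the camera parameters have already been folded into a per-view integer scale, offset, and shift (the names, signature, and precomputation are assumptions, not the patent's definitions):

```c
#include <stdint.h>

/* Hedged sketch: convert a depth sample to a horizontal disparity.
 * Assumes camera parameters were precomputed into an integer scale,
 * offset, and shift per reference view, as is common in 3D video
 * codec software; names and layout are illustrative only. */
static int depth_to_disparity(uint8_t depth_sample, int scale, int offset, int shift)
{
    /* larger depth-map values (nearer objects) yield larger disparity */
    return (scale * depth_sample + offset) >> shift;
}
```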
  • the prediction unit 210 may construct a merge candidate list including an inter-view merge candidate (IvMC) corresponding to temporal motion information of a reference view, an inter-view disparity vector candidate (IvDC) corresponding to a disparity vector, a texture merge candidate (T), a disparity derived merge candidate (D), a view synthesis prediction (VSP) candidate, and so on.
  • the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
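  • [Illustrative addition, not in the patent text] A minimal sketch of how such a capped merge candidate list could be assembled; the candidate types mirror the list above (IvMC, spatial neighbors, IvDC, VSP), but the exact 3D-HEVC insertion order and availability checks are more involved, and all names and the cap value here are hypothetical:

```c
#define MAX_MERGE_CAND 6  /* cap on the dependent-view list (value assumed) */

typedef struct { int mv_x, mv_y, ref_idx; } MergeCand;

static int push_cand(MergeCand *list, int n, const MergeCand *c)
{
    if (c && n < MAX_MERGE_CAND)   /* skip unavailable candidates, honor the cap */
        list[n++] = *c;
    return n;
}

static int build_merge_list(MergeCand *list,
                            const MergeCand *ivmc,
                            const MergeCand *spatial, int n_spatial,
                            const MergeCand *ivdc, const MergeCand *vsp)
{
    int n = 0;
    n = push_cand(list, n, ivmc);             /* inter-view merge candidate */
    for (int i = 0; i < n_spatial; i++)
        n = push_cand(list, n, &spatial[i]);  /* spatial neighbors (A1, B1, ...) */
    n = push_cand(list, n, ivdc);             /* inter-view disparity candidate */
    n = push_cand(list, n, vsp);              /* view synthesis prediction candidate */
    return n;
}
```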
  • the prediction unit 210 may apply the inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
  • the prediction unit 210 may derive a disparity vector based on a conversion of the maximum depth value in the corresponding depth block.
  • a block in the reference view including the reference sample specified by the disparity vector may be used as the reference block.
  • the prediction unit 210 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and may use the disparity vector as a candidate disparity vector for disparity compensated prediction (DCP).
  • the subtraction unit 215 generates a residual sample which is a difference between the original sample and the prediction sample.
  • when the skip mode is applied, residual samples may not be generated as described above.
  • the transform unit 220 generates a transform coefficient by transforming the residual sample in units of transform blocks.
  • the quantization unit 225 may quantize the transform coefficients to generate quantized transform coefficients.
  • the reordering unit 230 rearranges the quantized transform coefficients.
  • the reordering unit 230 may reorder the quantized transform coefficients in the form of a block into a one-dimensional vector form through a coefficient scanning method.
  • the entropy encoding unit 235 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
  • the entropy encoding unit 235 may encode information necessary for video reconstruction other than the quantized transform coefficients (for example, a value of a syntax element) together or separately. Entropy encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of bitstreams.
  • the adder 250 reconstructs the picture by combining the residual sample and the predictive sample.
  • the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
  • although the adder 250 has been described as a separate component, it may be part of the predictor 210.
  • the filter unit 255 may apply a deblocking filter and/or a sample adaptive offset to the reconstructed picture. Through deblocking filtering and/or sample adaptive offset, artifacts at block boundaries in the reconstructed picture or distortion from the quantization process can be corrected.
  • the sample adaptive offset may be applied on a sample basis and may be applied after the process of deblocking filtering is completed.
  • the memory 260 may store information necessary for reconstructed pictures or encoding / decoding.
  • the memory 260 may store (reference) pictures used for inter prediction / inter view prediction.
  • pictures used for inter prediction / inter view prediction may be designated by a reference picture set or a reference picture list.
  • although one encoding apparatus has been described as encoding both independent views and dependent views, this is for convenience of description; a separate encoding apparatus may be configured for each view, or separate internal modules (for example, a prediction module for each view) may be configured.
  • FIG. 3 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • the video decoding apparatus 300 includes an entropy decoding unit 310, a reordering unit 320, an inverse quantization unit 330, an inverse transform unit 340, a predictor 350, an adder 360, a filter unit 370, and a memory 380.
  • the video decoding apparatus 300 may reconstruct the video in response to a process in which the video information is processed in the video encoding apparatus.
  • the video decoding apparatus 300 may perform video decoding using a processing unit applied in the video encoding apparatus.
  • the processing unit block of video decoding may be a coding unit block, a prediction unit block, or a transform unit block.
  • the coding unit block may be divided along the quad tree structure from the largest coding unit block as a unit block of decoding.
  • the prediction unit block is a block partitioned from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit block may be divided into sub blocks.
  • the transform unit block may be divided along the quad tree structure from the coding unit block, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
  • the entropy decoding unit 310 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 310 decodes the information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements required for video reconstruction and quantized values of transform coefficients for the residual.
  • the bitstream may be input for each view.
  • information about each view may be multiplexed in the bitstream.
  • the entropy decoding unit 310 may de-multiplex the bitstream and parse the information for each view.
  • the reordering unit 320 may rearrange the quantized transform coefficients in the form of a two-dimensional block.
  • the reordering unit 320 may perform reordering in response to coefficient scanning performed by the encoding apparatus.
  • the inverse quantization unit 330 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
  • information for deriving a quantization parameter may be signaled from the encoding apparatus.
  • the inverse transform unit 340 may inverse-transform the transform coefficients to derive residual samples.
  • the prediction unit 350 may perform prediction on the current block and generate a prediction block including prediction samples for the current block.
  • the unit of prediction performed by the prediction unit 350 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 350 may determine whether to apply intra prediction or inter prediction.
  • a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
  • the unit for generating a prediction sample in inter prediction and intra prediction may also be different.
  • for example, whether to apply inter prediction or intra prediction may be determined in CU units.
  • in inter prediction, a prediction mode may be determined and a prediction sample generated in PU units.
  • in intra prediction, a prediction mode may be determined in PU units and a prediction sample generated in TU units.
  • the prediction unit 350 may derive the prediction sample for the current block based on the neighbor reference samples in the current picture.
  • the prediction unit 350 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighbor reference samples of the current block.
  • the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
  • the prediction unit 350 may derive the prediction sample for the current block based on the sample specified on the reference picture by the motion vector on the reference picture.
  • the prediction unit 350 may derive a prediction sample for the current block by applying any one of the skip mode, the merge mode, and the MVP mode.
  • the motion information of the neighboring block may be used as the motion information of the current block.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • the prediction unit 350 may construct a merge candidate list using the motion information of available neighboring blocks, and use the information indicated by the merge index on the merge candidate list as the motion information of the current block.
  • the merge index may be signaled from the encoding device.
  • the motion information may include a motion vector and a reference picture. When the motion information of the temporal neighboring block is used in the skip mode and the merge mode, the picture at the top of the reference picture list may be used as the reference picture.
  • in the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • in the MVP mode, the motion vector of the current block may be derived using the motion vector of a neighboring block as a motion vector predictor.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • the prediction unit 350 may perform inter view prediction.
  • the prediction unit 350 may configure a reference picture list including pictures of other views.
  • the prediction unit 350 may derive a disparity vector.
  • the prediction unit 350 may specify a depth block in a depth view based on the disparity vector, and may perform merge list construction, inter-view motion prediction, residual prediction, illumination compensation (IC), view synthesis, and the like.
  • the disparity vector for the current block can be derived from the depth value using camera parameters or from the motion vector or disparity vector of the neighboring block in the current or other view. Camera parameters may be signaled from the encoding device.
  • as on the encoding side, the merge candidate list may include an inter-view merge candidate (IvMC), an inter-view disparity vector candidate (IvDC), a texture merge candidate (T), a disparity derived merge candidate (D), and a view synthesis prediction (VSP) merge candidate.
  • the number of candidates included in the merge candidate list applied to the dependent view may be limited to a predetermined value.
  • the prediction unit 350 may apply inter-view motion vector prediction to predict the motion vector of the current block based on the disparity vector.
  • the prediction unit 350 may use a block in the reference view specified by the disparity vector as the reference block.
  • the prediction unit 350 may use the motion vector of the reference block as a candidate motion parameter or motion vector predictor candidate of the current block, and may use the disparity vector as a candidate disparity vector for disparity compensated prediction (DCP).
  • the adder 360 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
  • the adder 360 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
  • although the adder 360 has been described as a separate component, it may be part of the predictor 350.
  • the filter unit 370 may apply deblocking filtering and / or sample adaptive offset to the reconstructed picture.
  • the sample adaptive offset may be applied in units of samples and may be applied after deblocking filtering.
  • the memory 380 may store information necessary for reconstructing pictures or for decoding.
  • the memory 380 may store pictures used for inter prediction / inter view prediction.
  • pictures used for inter prediction / inter view prediction may be designated by a reference picture set or a reference picture list.
  • the reconstructed picture can be used as a reference picture for another picture.
  • the memory 380 may output the reconstructed picture in the output order.
  • the output unit may display a plurality of different views.
  • each decoding apparatus may operate for each view, or one decoding apparatus may include an internal module (e.g., a prediction module) corresponding to each view.
  • Multi-view video coding can enhance video coding efficiency for the current view by coding the current picture using decoding data of another view belonging to the same access unit (AU) as the current picture.
  • views may be coded in AU units, or pictures may be coded in view units; views are coded according to a predetermined order.
  • a view that can be coded without reference to another view can be referred to as a base view or an independent view.
  • a view that can be coded by referring to an independent view or another view after the independent view is coded may be referred to as a dependent view or extended view.
  • the view referred to in coding of the current view may be referred to as a reference view.
  • coding the view includes coding a texture picture, a depth picture, or the like that belongs to the view.
  • the inter-view motion prediction procedure finds a corresponding block in a reference picture of a reference view different from the current view based on the disparity vector, and derives motion information of the corresponding block as motion information of the current block.
  • the motion information of the corresponding block may be used as an inter-view merge candidate (IvMC), and the inter-view merge candidate may be used for generating a prediction sample of the current block.
  • residual prediction of a current block may be performed by using residual correlation between views. That is, in multi-view video coding, residual prediction may be performed in addition to intra / inter prediction and inter-view prediction for the current block.
  • the residual prediction may be called advanced residual prediction (ARP).
  • a corresponding block in a reference picture of a reference view different from the current view may be found based on the disparity vector, and a residual prediction sample of the current block may be generated using another reference block derived based on the corresponding block.
  • FIG. 4 is a diagram schematically illustrating an inter-view motion prediction method.
  • the current block 420 in the current picture 410 is coded (encoded / decoded).
  • the current picture 410 may be a depth picture.
  • the current block 420 may be a prediction block, and may be a block coded based on motion compensated prediction (MCP) or disparity compensated prediction (DCP).
  • the motion information of the current block 420 may be derived based on the motion information of the corresponding block 440 in the inter-view reference picture 430.
  • the corresponding block 440 may be derived based on the disparity vector as described above.
  • FIG. 5 is a diagram schematically illustrating a residual prediction method.
  • reference blocks (reference samples) used for residual prediction of the current block 505 may be derived, and a residual prediction sample for the current block 505 may be generated based on the residual between the derived reference blocks.
  • the reference blocks for residual prediction may vary depending on whether (1) temporal residual prediction or (2) inter-view residual prediction is applied.
  • when the temporal residual prediction method is applied, the predictor first derives the corresponding block 515 in the reference view Vref corresponding to the current block 505.
  • the corresponding block 515 may be derived from the picture 510 belonging to the reference view of the current block 505 among the pictures in the same AU as the current block 505.
  • the location of the corresponding block 515 may be specified using the disparity vector 520 within the picture 510 belonging to the reference view.
  • the corresponding block 515 may be used as a first reference block (residual prediction block; rpBlock or rpSamples) for residual prediction of the current block 505.
  • the prediction unit derives the reference picture 520 or 530 of the corresponding block 515 in the reference view, and derives the reference block 525 or 535 from the reference picture 520 or 530 of the derived corresponding block 515.
  • the reference block 525 or 535 may be used as a second reference block (residual prediction reference block; rpRefBlock or rpRefSamples) for residual prediction of the current block 505.
  • the reference picture 520 or 530 of the corresponding block 515 may be a picture selected to have the same POC value as the reference picture 540 or 550 of the current block 505 in the current view, or may be a reference picture in the reference picture list used for residual prediction of the current block 505.
  • the reference block 525 or 535 of the corresponding block 515 can be specified by performing motion compensation in the reference picture 520 or 530 of the corresponding block 515 using the motion information of the current block 505, for example, the motion vector 560 or 565 of the current block 505.
  • the reference picture 540 of the current block 505 is a picture that can be referred to in the forward direction (L0) in inter prediction, and may be, for example, the picture specified by the reference picture index Ref0 in the reference picture list L0.
  • the reference picture 550 of the current block 505 is a picture that can be referred to in the backward direction (L1) in inter prediction, and may be, for example, the picture specified by the reference picture index Ref1 in the reference picture list L1.
  • the prediction unit may use the difference between the first reference block (rpBlock) and the second reference block (rpRefBlock) derived for residual prediction as the residual prediction sample value of the current block 505, as described above.
  • the difference value obtained by subtracting the sample value of the reference block 525 or 535 from the sample value of the corresponding block 515 may be derived as the residual prediction sample value of the current block 505.
  • a weighting factor may be applied to the residual prediction sample value of the current block 505.
  • the weight may be transmitted from an encoder to a decoder.
  • the weight may be called iv_res_pred_weight_idx.
  • the weight may be, for example, one of 0, 0.5 or 1.
  • a weight of 0 may indicate that residual prediction is not applied.
  • Index information indicating which weight to apply may be transmitted in units of blocks from the encoder to the decoder.
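  • [Illustrative addition, not in the patent text] A minimal sketch of applying the signalled weight to a residual-prediction difference; the index-to-weight mapping (0: off, 1: 1/2, 2: 1) follows the weights named above, while the function name and the integer shift used for 0.5 are implementation assumptions:

```c
/* Hedged sketch: weight a residual-prediction difference sample
 * according to iv_res_pred_weight_idx (0: off, 1: weight 1/2, 2: weight 1). */
static int apply_rp_weight(int rp_diff, int iv_res_pred_weight_idx)
{
    switch (iv_res_pred_weight_idx) {
    case 1:  return rp_diff >> 1;   /* weight 0.5, realized as a right shift */
    case 2:  return rp_diff;        /* weight 1 */
    default: return 0;              /* 0: residual prediction not applied */
    }
}

/* usage sketch: pred[i] += apply_rp_weight(rpBlock[i] - rpRefBlock[i], idx); */
```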
  • the inter-view residual prediction method will be described. For example, when the current block is predicted from the inter-view reference picture, the inter-view residual prediction method may be applied.
  • the prediction unit derives the reference picture 540 or 550 in the current view and derives the reference block 545 or 555 in the reference picture 540 or 550.
  • the prediction unit may derive the reference block 545 or 555 in the reference picture 540 or 550 based on the derived temporal motion vector of the corresponding block 515.
  • the reference block 545 or 555 may be used as a first reference block (residual prediction block, rpBlock, or rpSamples) for residual prediction of the current block 505.
  • the prediction unit derives the corresponding block 515 in the reference view corresponding to the current block 505.
  • the corresponding block 515 may be derived from the picture 510 belonging to the reference view of the current block 505 among the pictures in the same AU as the current block 505.
  • the position of the corresponding block 515 may be specified using the disparity vector 520 of the current block 505 within the picture 510 of the reference view.
  • the prediction unit derives the reference picture 520 or 530 of the corresponding block 515 based on the (temporal) motion vector and the reference picture index of the corresponding block 515, and derives the reference block 525 or 535 from the reference picture 520 or 530 of the corresponding block 515.
  • the reference block 525 or 535 may be used as a second reference block (residual prediction reference block; rpRefBlock or rpRefSamples) for residual prediction of the current block 505.
  • the reference picture 520 or 530 of the corresponding block 515 may be a picture selected to have the same POC value as the reference picture 540 or 550 of the current block 505 in the current view, or may be a reference picture in the reference picture list for the reference view used for residual prediction.
  • the prediction unit may use the difference between the first reference block (rpBlock) and the second reference block (rpRefBlock) derived for residual prediction as the residual prediction sample value of the current block 505, as described above. For example, a value obtained by subtracting the sample value of the reference block 525 or 535 in the reference view from the sample value of the reference block 545 or 555 in the current view may be derived as the residual prediction value of the current block 505. In this case, as described above, a weight may be applied to the residual prediction sample value of the current block 505.
  • the weight may be, for example, one of 0, 0.5 or 1.
  • a weight of 0 may indicate that residual prediction is not applied.
  • Index information indicating which weight to apply may be transmitted in units of blocks.
  • as described above, a corresponding block of the current block is derived on a reference picture of a reference view using a disparity vector, and motion information (e.g., a motion vector) of the corresponding block is derived.
  • the motion vector includes a temporal motion vector.
  • the position of the corresponding block may not exactly coincide with any prediction block on the reference picture of the reference view, and storing the motion information of all prediction blocks in the reference picture of the reference view for coding of the current block places a considerable burden on the buffers of the encoder and decoder.
  • for the motion information of a prediction unit (PU) to be currently encoded, the motion information of a temporally corresponding, already-encoded block may be obtained directly or used as a prediction value. Therefore, even after encoding and decoding of one frame (or picture) are completed, the motion information of the blocks in the frame may be used for encoding and decoding the next frame or picture. In this case, the motion information of the blocks of the previous frame may be compressed at a ratio of 1/4 or 1/16 and stored in a buffer (or memory).
  • in 3D video coding, the frames of other views in the AU having the same time instance as the current frame are highly correlated with the current frame, and thus may be referenced for coding of the current frame more than temporally preceding frames. That is, in coding the current frame, the motion information of frames of other views in the same AU may be used relatively more than the motion information of previous frames in the same view.
  • likewise, the motion information of the frames in the AU may be compressed at a 1/4 or 1/16 ratio.
  • for example, for 1/4 compression, the motion information of the upper-left 4×4 block in each 8×8 block unit is used as the representative motion vector for the four 4×4 blocks in the 8×8 block.
  • for 1/16 compression, the motion information of the upper-left 4×4 block in each 16×16 block unit is used as the representative motion vector for the sixteen 4×4 blocks in the 16×16 block.
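  • [Illustrative addition, not in the patent text] A sketch of the 1/16 compression just described, assuming the motion field stores one entry per 4×4 block; the struct and layout are hypothetical:

```c
#include <stdint.h>

typedef struct { int16_t x, y; int8_t ref_idx; } MvInfo; /* one entry per 4x4 block */

/* Hedged sketch: for each 16x16 region, copy the motion information of
 * its top-left 4x4 block to all sixteen 4x4 blocks (1/16 compression).
 * w4 and h4 are the field dimensions in 4x4-block units. */
static void compress_motion_field(MvInfo *mv, int w4, int h4)
{
    for (int y = 0; y < h4; y += 4)
        for (int x = 0; x < w4; x += 4) {
            MvInfo rep = mv[y * w4 + x];                 /* top-left 4x4 block */
            for (int dy = 0; dy < 4 && y + dy < h4; dy++)
                for (int dx = 0; dx < 4 && x + dx < w4; dx++)
                    mv[(y + dy) * w4 + (x + dx)] = rep;  /* representative MV */
        }
}
```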
  • motion information compression is a method for reducing buffer size and memory bandwidth, which is a bigger advantage in hardware implementations than in software implementations.
  • in a software implementation, the amount of computation required to compress motion information can be more of an issue than the gains from the reduction in buffer size and memory bandwidth.
  • the present invention proposes a method for deriving effective and precise compressed motion information without compressing physical motion information in software implementation.
  • as described above, a corresponding block of the current block is derived on a reference picture of a reference view using a disparity vector, and motion information (e.g., a motion vector) of the corresponding block is obtained.
  • here, the position of the corresponding block may not be aligned with the 4×4 block grid that is the minimum block unit.
  • FIG. 6 is a diagram schematically illustrating a positional relationship between a current block and a corresponding block. FIG. 6 exemplarily shows pictures on a 4×4 block grid.
  • the current picture 600 is a picture on view V1
  • reference picture 630 is a picture on view V0.
  • the corresponding block 640 may be derived using the current block 610 on the current picture 600 and the disparity vector 620.
  • the corresponding block 640 may overlap several blocks on the grid, so a criterion for deriving the motion information of the corresponding block 640 should be defined in consideration of motion information compression.
  • the (prediction) block covering the position of the reference sample may be determined as the representative block, and the motion information of the representative block may be determined as the motion information for the corresponding block 640.
  • the position of the reference sample may be specified as a top-left position of the first block of FIG. 6, and the representative block may be the first block.
  • the position of the reference sample may be determined or calculated based on the position of the corresponding sample, and the position of the corresponding sample may be derived as the top-left position or the center position of the corresponding block determined based on the position of the current block 610 and the disparity vector.
  • since the width and height of the corresponding block are each composed of an even number of samples, four samples adjoin the center point of the corresponding block; in this case, the position of the lower-right sample among the four center samples may be determined as the center position.
  • the position of the current block 610 may indicate the upper left position of the current block 610.
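  • [Illustrative addition, not in the patent text] A small sketch of the center-position rule above; with even width and height, adding half the block size to the top-left position lands on the lower-right sample of the four center samples. The disparity vector is assumed here to be given in full-sample units (in 3D-HEVC it is signalled in fractional units and shifted first), and all names are illustrative:

```c
/* Hedged sketch: corresponding-sample position at the center of the
 * corresponding block. (xPb, yPb) is the top-left luma position of the
 * current block, (w, h) its size, (dv_x, dv_y) the disparity vector. */
static void corresponding_center(int xPb, int yPb, int w, int h,
                                 int dv_x, int dv_y, int *xFull, int *yFull)
{
    *xFull = xPb + dv_x + (w >> 1);  /* lower-right of the four center samples */
    *yFull = yPb + dv_y + (h >> 1);
}
```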
  • the position of the reference sample may be set to the top-left sample position of the motion compression unit block that includes the corresponding sample.
  • that is, the position of the reference sample may be set to the top-left sample position of the motion compression unit block regardless of the detailed position of the corresponding sample within the corresponding block. If the motion compression unit block has an 8×8 size, the representative block may have a 4×4 size.
  • the position of the reference sample may be derived by performing a shift operation based on the position of the corresponding sample.
  • the shift operation includes arithmetic right shift (>>) and arithmetic left shift (<<).
  • the position of the reference sample may be determined based on the following equation.
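  • [Equation 1 is not legible in this text. Reconstructed from the variable definitions below and the 8×8 motion compression unit described above (the shift amount 3 corresponds to that 8×8 unit), it takes the following form:]
    xRef = Clip3( 0, pic_width_in_luma_samples - 1, ( xRefFull >> 3 ) << 3 )
    yRef = Clip3( 0, pic_height_in_luma_samples - 1, ( yRefFull >> 3 ) << 3 )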
  • here, xRef and yRef are the x and y coordinates of the position of the reference sample, xRefFull and yRefFull are the x and y coordinates of the position of the corresponding sample, and pic_width_in_luma_samples and pic_height_in_luma_samples represent the width and height of the picture in luma samples, respectively.
  • the picture here may be the current picture or the reference picture; when the current picture and the reference picture are set to have the same width and height, either may be used.
  • the Clip3 operation can be expressed as Equation 2 below.
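  • [Equation 2 is likewise not legible here. The standard definition of Clip3, consistent with its use in Equation 1, is: Clip3( x, y, z ) = x if z < x; y if z > y; z otherwise.]

A minimal C sketch combining the two reconstructed equations (the shift amount 3 assumes the 8×8 motion compression unit; names are illustrative):

```c
static int clip3(int lo, int hi, int v)   /* Equation 2: Clip3(lo, hi, v) */
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Equation 1 (as reconstructed above): snap the corresponding-sample
 * position to the top-left sample of its 8x8 motion compression unit,
 * clipped to the picture boundary. */
static void derive_ref_sample_pos(int xRefFull, int yRefFull,
                                  int pic_width_in_luma_samples,
                                  int pic_height_in_luma_samples,
                                  int *xRef, int *yRef)
{
    *xRef = clip3(0, pic_width_in_luma_samples  - 1, (xRefFull >> 3) << 3);
    *yRef = clip3(0, pic_height_in_luma_samples - 1, (yRefFull >> 3) << 3);
}
```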
  • alternatively, the (prediction) block that the corresponding block most overlaps may be determined as the representative block.
  • the representative block may be block 4 of FIG. 6.
  • the above-described motion information compression method according to the present invention may be performed by the video encoding apparatus of FIG. 2 and may be performed by the video decoding apparatus of FIG. 3.
  • FIG. 7 is a flowchart schematically illustrating a 3D video coding method according to an embodiment of the present invention. Hereinafter, the decoding apparatus will be described.
  • the decoding apparatus derives a disparity vector for a current block (S700).
  • the disparity vector may be derived from a depth value using a camera parameter, or may be derived from a motion vector or disparity vector of a neighboring block in a current or another view.
  • the disparity vector may be derived based on a spatial or temporal neighboring block of the current block, in which case the neighboring block may be coded based on disparity compensated prediction (DCP).
  • when the picture to which the current block belongs is a texture picture, the disparity vector derived from a neighboring block may be called a disparity vector from neighboring blocks (NBDV).
  • the disparity vector may be derived based on the reference view and a predetermined depth value.
  • the predetermined value may be a middle value of a depth value range.
  • the decoding device may receive video information from the encoding device through the bitstream.
  • the video information may include block partition information, prediction mode information, residual information, and values of syntax elements for reconstructing a current block.
  • the bitstream may be transmitted from the encoding device to the decoding device via a network or a storage medium.
  • the decoding apparatus determines a corresponding sample position on a reference view based on the disparity vector (S710).
  • the position of the corresponding sample may be derived as the upper left position of the corresponding block determined based on the position of the current block and the disparity vector.
  • the position of the corresponding sample may be derived as the center position of the corresponding block.
  • the center position may indicate the position of the lower right sample among the center four samples of the corresponding block.
  • the decoding apparatus derives the position of the reference sample based on the position of the corresponding sample (S720).
  • the position of the reference sample may be a top-left sample position of a motion compression unit block including the corresponding sample.
  • the motion compression unit block may have a size of 8×8.
  • the position of the reference sample may be derived by performing a shift operation based on the position of the corresponding sample. Specifically, for example, the position of the reference sample may be determined based on Equation 1.
  • the decoding apparatus derives motion information of the representative block covering the position of the reference sample (S730).
  • the representative block may be a prediction block covering the position of the reference sample.
  • the representative block may have a 4×4 size.
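  • [Illustrative addition, not in the patent text] Pulling steps S700 to S730 together, a hedged end-to-end sketch reusing the helpers sketched earlier (corresponding_center, derive_ref_sample_pos, and the MvInfo field with one entry per 4×4 block); every name is illustrative and S700 (disparity vector derivation) is assumed already done:

```c
/* Hedged sketch of S710-S730: (dv_x, dv_y) is the derived disparity
 * vector, ref_view_mv the (compressed) motion field of the reference
 * view with one MvInfo entry per 4x4 block, ref_w4 its width in 4x4 units. */
static MvInfo derive_ivmc(const MvInfo *ref_view_mv, int ref_w4,
                          int xPb, int yPb, int w, int h,
                          int dv_x, int dv_y, int pic_w, int pic_h)
{
    int xFull, yFull, xRef, yRef;
    corresponding_center(xPb, yPb, w, h, dv_x, dv_y, &xFull, &yFull); /* S710 */
    derive_ref_sample_pos(xFull, yFull, pic_w, pic_h, &xRef, &yRef);  /* S720 */
    /* S730: motion info of the 4x4-grid block covering (xRef, yRef) */
    return ref_view_mv[(yRef >> 2) * ref_w4 + (xRef >> 2)];
}
```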
  • the decoding apparatus may derive an IvMC for the current block based on the derived motion information. That is, the derived motion information may be used as an IvMC for the current block, and the decoding apparatus may generate a prediction sample (or sample array) for the current block based on the IvMC and generate a reconstructed sample (picture) by adding a residual sample (or sample array) to it.
  • the decoding apparatus may perform residual prediction based on the derived motion information. Specifically, for example, the decoding apparatus derives the first reference block on the current view based on the temporal motion vector of the corresponding block on the reference view, which is itself derived based on the disparity vector, and derives the second reference block on the reference view based on that temporal motion vector. The decoding apparatus may then generate a (residual) prediction sample (or sample array) of the current block based on the first reference block and the second reference block. The temporal motion vector of the corresponding block may be derived based on the motion information of the prediction block covering the position of the reference sample.
  • the motion information of the prediction block may include a motion vector, and the motion vector may be used as the temporal motion vector.
  • the residual prediction may be inter-view residual prediction.
  • the decoding apparatus may generate a reconstructed sample (picture) based on the (residual) prediction sample (or sample array) of the current block. In this case, the decoding apparatus may generate the reconstructed sample (picture) by adding the residual sample (or sample array) to the (residual) prediction sample (or sample array) as needed.
  • the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
  • the module may be stored in memory and executed by a processor.
  • the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
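To make the positional arithmetic above concrete, here is a short C sketch of steps S710 and S720 together with the final prediction-plus-residual reconstruction. It is an illustration under stated assumptions, not the patent's reference implementation: the center-sample rule, the 8×8 motion compression unit, the 4×4 representative block, and the reconstruction by adding residual to prediction come from the list above, while the identifier names (xCb, nCbW, dvX, and so on), the quarter-sample disparity precision, the rounding applied to the disparity vector, and the 8-bit sample depth are assumptions; the normative form of the shift operation is given by Equation 1.

```c
#include <stdio.h>

/* Hypothetical sketch of the positional derivations in steps S710/S720 and
 * of the final reconstruction step. Identifier names (xCb, nCbW, dvX, ...)
 * and the quarter-sample disparity precision are assumptions made for
 * illustration; the normative form of the shift operation is Equation 1. */

typedef struct { int x, y; } Pos;

/* Center sample of the corresponding block on the reference view: the
 * current block's top-left position, plus the disparity vector rounded
 * from quarter-sample to integer precision, plus half the block size
 * (i.e., the lower-right sample of the center four samples). */
static Pos corresponding_center(int xCb, int yCb, int nCbW, int nCbH,
                                int dvX, int dvY)
{
    Pos p;
    p.x = xCb + ((dvX + 2) >> 2) + (nCbW >> 1);
    p.y = yCb + ((dvY + 2) >> 2) + (nCbH >> 1);
    return p;
}

/* Reference sample: the top-left sample of the 8x8 motion compression unit
 * block covering the corresponding sample; the shift pair clears the low
 * three bits of each coordinate. */
static Pos reference_sample(Pos corr)
{
    Pos p;
    p.x = (corr.x >> 3) << 3;
    p.y = (corr.y >> 3) << 3;
    return p;
}

/* Reconstruction: add the residual sample to the prediction sample and
 * clip to the sample range (8-bit depth is an assumption). */
static int reconstruct_sample(int pred, int resid)
{
    int s = pred + resid;
    return s < 0 ? 0 : (s > 255 ? 255 : s);
}

int main(void)
{
    /* Example: a 16x16 current block at (32, 48) with a disparity vector
     * of (14, 5) in quarter-sample units. */
    Pos corr = corresponding_center(32, 48, 16, 16, 14, 5);
    Pos ref  = reference_sample(corr);

    printf("corresponding sample: (%d, %d)\n", corr.x, corr.y); /* (44, 57) */
    printf("reference sample    : (%d, %d)\n", ref.x, ref.y);   /* (40, 56) */

    /* Motion information is then read from the 4x4 prediction block
     * covering the reference sample; since the reference sample is
     * 8-aligned, that block's top-left is the reference sample itself. */

    printf("reconstructed sample: %d\n", reconstruct_sample(200, 80)); /* 255, clipped */
    return 0;
}
```

With the example values shown, the corresponding sample falls at (44, 57) and the reference sample snaps to (40, 56), so the motion information of the 4×4 prediction block whose top-left sample is (40, 56) would serve as the representative motion information for the current block.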

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a three-dimensional (3D) video coding device and method. A decoding method according to the present invention is a 3D video decoding method comprising the steps of: deriving a disparity vector for a current block; deriving the position of a corresponding sample on a reference view based on the disparity vector; deriving the position of a reference sample based on the position of the corresponding sample; and deriving motion information of a prediction block that covers the position of the reference sample. According to the present invention, when a corresponding block in a reference view is derived by means of a disparity vector, motion information of the corresponding block can be derived in accordance with motion information compression, the buffer load of the encoder and decoder can be reduced, and coding efficiency can be improved by reducing the amount of information to be processed.
PCT/KR2015/010554 2014-10-08 2015-10-06 Method and device for compressing motion information for three-dimensional (3D) video coding WO2016056821A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/517,712 US20170310993A1 (en) 2014-10-08 2015-10-06 Movement information compression method and device for 3d video coding
KR1020177009466A KR20170066411A (ko) 2014-10-08 2015-10-06 Motion information compression method and apparatus for 3D video coding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462061151P 2014-10-08 2014-10-08
US62/061,151 2014-10-08

Publications (1)

Publication Number Publication Date
WO2016056821A1 (fr) 2016-04-14

Family

ID=55653371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/010554 WO2016056821A1 (fr) 2014-10-08 2015-10-06 Method and device for compressing motion information for three-dimensional (3D) video coding

Country Status (3)

Country Link
US (1) US20170310993A1 (fr)
KR (1) KR20170066411A (fr)
WO (1) WO2016056821A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018124821A1 (fr) * 2017-01-02 2018-07-05 KT Corporation Method and apparatus for processing video signals
WO2018174541A1 (fr) * 2017-03-21 2018-09-27 KT Corporation Video signal processing method and device

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019050451A (ja) * 2017-09-07 2019-03-28 Canon Inc. Image processing apparatus, control method therefor, program, and image processing system
WO2019194436A1 (fr) * 2018-04-02 2019-10-10 LG Electronics Inc. Motion vector-based image coding method and apparatus therefor
WO2019203517A1 (fr) * 2018-04-15 2019-10-24 LG Electronics Inc. Method and device for deriving motion information in an image coding system
CN115623199A (zh) * 2018-05-12 2023-01-17 Humax Co., Ltd. Video signal processing method and device using reference samples
WO2020141881A1 (fr) * 2019-01-02 2020-07-09 LG Electronics Inc. Method and apparatus for inter prediction based on motion information in sub-block units
CN110008904A (zh) * 2019-04-08 2019-07-12 万维科研有限公司 Method for generating a shape recognition list based on a video file format
CN113709455B (zh) * 2021-09-27 2023-10-24 Beijing Jiaotong University A multi-level image compression method using a Transformer

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080035910A (ko) * 2006-10-20 2008-04-24 Samsung Electronics Co., Ltd. Ratio-independent switched-capacitor amplifier having double precision gain and operating method thereof
KR20080066522A (ko) * 2007-01-11 2008-07-16 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding multi-view images
KR20120080122A (ko) * 2011-01-06 2012-07-16 Samsung Electronics Co., Ltd. Competition-based multi-view video encoding/decoding apparatus and method
WO2014106435A1 (fr) * 2013-01-07 2014-07-10 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
WO2014137596A1 (fr) * 2013-03-08 2014-09-12 Qualcomm Incorporated Inter-view residual prediction in multi-view or three-dimensional video coding

Also Published As

Publication number Publication date
KR20170066411A (ko) 2017-06-14
US20170310993A1 (en) 2017-10-26

Similar Documents

Publication Publication Date Title
WO2016056821A1 (fr) Method and device for compressing motion information for three-dimensional (3D) video coding
WO2018062921A1 (fr) Method and apparatus for block partitioning and intra prediction in an image coding system
WO2018174402A1 (fr) Transform method in an image coding system and apparatus therefor
WO2018056603A1 (fr) Illumination compensation-based inter-prediction method and apparatus in an image coding system
WO2015142054A1 (fr) Method and apparatus for processing multi-view video signals
WO2020197236A1 (fr) Image or video coding based on a sub-picture handling structure
WO2016056822A1 (fr) 3D video coding method and device
WO2016056782A1 (fr) Method and device for coding a depth picture in video coding
WO2016200043A1 (fr) Method and apparatus for inter prediction on the basis of a virtual reference picture in a video coding system
WO2015057033A1 (fr) Method and apparatus for coding/decoding 3D video
WO2019198997A1 (fr) Intra-prediction-based image coding method and apparatus therefor
WO2018056602A1 (fr) Inter-prediction apparatus and method in an image coding system
WO2019112071A1 (fr) Image decoding method and apparatus based on efficient transform of a chroma component in an image coding system
WO2015142057A1 (fr) Method and apparatus for processing multi-view video signals
WO2016056754A1 (fr) Method and device for encoding/decoding 3D video
WO2018174357A1 (fr) Image decoding method and device in an image coding system
WO2019194500A1 (fr) Intra-prediction-based image coding method and device therefor
WO2016056779A1 (fr) Method and device for processing a camera parameter in three-dimensional (3D) video coding
WO2021225338A1 (fr) Image decoding method and apparatus therefor
WO2018128223A1 (fr) Method and apparatus for inter prediction in an image coding system
WO2020141885A1 (fr) Image decoding method and device using deblocking filtering
WO2019212230A1 (fr) Method and apparatus for decoding an image using a transform according to block size in an image coding system
WO2018084344A1 (fr) Image decoding method and device in an image coding system
WO2014171709A1 (fr) Object-based adaptive brightness compensation method and apparatus
WO2015057032A1 (fr) Method and apparatus for encoding/decoding multi-view video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15849705
Country of ref document: EP
Kind code of ref document: A1

ENP Entry into the national phase
Ref document number: 20177009466
Country of ref document: KR
Kind code of ref document: A

WWE Wipo information: entry into national phase
Ref document number: 15517712
Country of ref document: US

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 15849705
Country of ref document: EP
Kind code of ref document: A1