WO2014010918A1 - Method and apparatus for processing a video signal - Google Patents
Method and apparatus for processing a video signal
- Publication number
- WO2014010918A1 (PCT/KR2013/006099)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- block
- interview
- neighboring block
- interview motion
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present invention relates to a method and apparatus for coding a video signal.
- Compression coding refers to a series of signal processing techniques that transmit digitized information through a communication line or store the data in a form suitable for a storage medium.
- the targets of compression encoding include audio, video, text, and the like.
- a technique of performing compression encoding on an image is called video image compression.
- a general feature of a multiview video image is that it has spatial redundancy, temporal redundancy and inter-view redundancy.
- An object of the present invention is to improve the coding efficiency of a video signal.
- the present invention is characterized by deriving the interview motion vector of the current texture block based on at least one of the interview motion vector and the reference interview motion vector of the spatial / temporal neighboring block.
- the present invention is characterized by deriving an interview motion vector in consideration of priorities among candidates including at least one of a spatial / temporal neighboring block and a reference interview motion vector.
- according to the present invention, the neighboring blocks may be searched for a block coded with a reference view motion vector, and the reference interview motion vector corresponding to the reference view motion vector of the found neighboring block may be stored in a candidate list.
- the reference interview motion vector stored in the candidate list described above may be used to derive the interview motion vector of the current texture block.
- inter-view inter prediction using an interview motion vector may be used to increase the accuracy of video data prediction by utilizing the correlation between views.
- an accurate interview motion vector can be derived to increase the accuracy of inter-view inter prediction and reduce the amount of residual data transmitted. As a result, coding efficiency can be improved.
- a reference interview motion vector corresponding to the reference view motion vector is used as a candidate, so that an accurate interview motion vector can be predicted.
- FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- FIG. 2 illustrates a schematic configuration of a motion vector list generator as an embodiment to which the present invention is applied.
- FIG. 3 is a diagram for describing an inter prediction method of a multiview texture image according to an embodiment to which the present invention is applied.
- FIG. 4 is a diagram for describing an interview motion vector and a reference view motion vector according to an embodiment to which the present invention is applied.
- FIG. 5 is a flowchart illustrating a method of deriving a disparity vector of a current texture block using depth data as an embodiment to which the present invention is applied.
- FIG. 6 is a flowchart illustrating a method of deriving an interview motion vector as an embodiment to which the present invention is applied.
- FIG. 7 is a diagram for describing a spatial neighboring block according to an embodiment to which the present invention is applied.
- FIG. 8 is a diagram for describing a temporal neighboring block according to an embodiment to which the present invention is applied.
- FIG. 9 is a flowchart illustrating a method of using reference interview motion vector information as an embodiment to which the present invention is applied.
- FIG. 10 is a flowchart illustrating a method of deriving an interview motion vector according to a prediction mode according to an embodiment to which the present invention is applied.
- the video signal processing method may include: searching for a block coded with a reference view motion vector among the spatial neighboring blocks of a current texture block; storing a reference interview motion vector corresponding to the reference view motion vector of the found spatial neighboring block in a candidate list; searching for a block coded with a reference view motion vector among the temporal neighboring blocks of the current texture block; storing a reference interview motion vector corresponding to the reference view motion vector of the found temporal neighboring block in the candidate list; deriving an interview motion vector of the current texture block using any one of the reference interview motion vectors stored in the candidate list; and performing inter-view inter prediction on the current texture block using the derived interview motion vector.
- the video signal processing apparatus includes an interview motion vector determiner configured to: search for a block coded with a reference view motion vector among the spatial neighboring blocks of the current texture block; store a reference interview motion vector corresponding to the reference view motion vector of the found spatial neighboring block in a candidate list; search for a block coded with a reference view motion vector among the temporal neighboring blocks of the current texture block; and store a reference interview motion vector corresponding to the reference view motion vector of the found temporal neighboring block in the candidate list, wherein the interview motion vector determiner derives an interview motion vector of the current texture block using any one of the reference interview motion vectors stored in the candidate list and performs inter-view inter prediction on the current texture block using the derived interview motion vector.
- Techniques for compression encoding or decoding multi-view video signal data take into account spatial redundancy, temporal redundancy, and redundancy existing between viewpoints.
- a multiview texture image photographed from two or more viewpoints may be coded to implement a 3D image.
- depth data corresponding to a multiview texture image may be further coded as necessary.
- compression coding may be performed in consideration of spatial redundancy, temporal redundancy, or inter-view redundancy.
- the depth data represents distance information between the camera and the corresponding pixel, and within the present specification, the depth data may be flexibly interpreted as information related to depth, such as depth information, a depth image, a depth picture, a depth sequence, and a depth bitstream.
- the term coding in this specification may include both the concepts of encoding and decoding, and may be flexibly interpreted according to the technical spirit and technical scope of the present invention.
- FIG. 1 is a schematic block diagram of a video decoder according to an embodiment to which the present invention is applied.
- a video decoder may include a NAL parser 100, an entropy decoder 200, an inverse quantization / inverse transform unit 300, an intra predictor 400, an in-loop filter unit 500, a decoded picture buffer unit 600, and an inter prediction unit 700.
- the NAL parser 100 may receive a bitstream including multi-view texture data.
- the bitstream including the encoded depth data may be further received.
- the input texture data and the depth data may be transmitted in one bitstream or may be transmitted in separate bitstreams.
- the NAL parser 100 may parse the NAL unit to decode the input bitstream.
- the input bitstream may further include a camera parameter.
- camera parameters may include intrinsic camera parameters and extrinsic camera parameters; the intrinsic camera parameters may include focal length, aspect ratio, principal point, and the like, and the extrinsic camera parameters may include location information of the camera in the world coordinate system.
- the entropy decoding unit 200 may extract quantized transform coefficients, coding information for prediction of a texture picture, and the like through entropy decoding.
- the inverse quantization / inverse transform unit 300 may apply a quantization parameter to the quantized transform coefficients to obtain transform coefficients, and inversely transform the transform coefficients to decode texture data or depth data.
- the decoded texture data or depth data may include residual data according to the prediction process.
- the intra predictor 400 may perform intra prediction using the reconstructed texture data in the current texture picture.
- the coding information used for intra prediction may include intra prediction mode and partition information of intra prediction.
- the in-loop filter unit 500 may apply an in-loop filter to each coded block to reduce block distortion.
- the filter can smooth the edges of the block to improve the quality of the decoded picture.
- the filtered texture pictures may be output or stored in the decoded picture buffer unit 600 for use as a reference picture.
- the decoded picture buffer unit 600 stores or releases previously coded texture pictures in order to perform inter prediction.
- to store or release pictures, the frame_num and POC (Picture Order Count) of each picture may be used.
- the inter prediction unit 700 may perform motion compensation of the current block by using the reference picture and the motion information stored in the decoded picture buffer unit 600.
- the motion information may be understood as a broad concept including a motion vector and reference index information.
- the inter prediction unit 700 may perform temporal inter prediction to perform motion compensation.
- temporal inter prediction may refer to inter prediction using motion information of the current texture block and a reference picture located at the same view as the current texture block but at a different time.
- inter-view inter prediction may be further performed as well as temporal inter prediction.
- inter-view inter prediction may refer to inter prediction using motion information of the current texture block and a reference picture located at a different view from the current texture block. This will be described in detail with reference to FIG. 3.
- motion information used for inter-view inter prediction will be referred to as an interview motion vector and interview reference index information. Therefore, in the present specification, motion information may be flexibly interpreted as a concept including the interview motion vector and the interview reference index information.
- hereinafter, a method of deriving motion information of a current texture block, in particular a motion vector, in the inter prediction unit 700 will be described.
- FIG. 2 illustrates a schematic configuration of a motion vector list generator as an embodiment to which the present invention is applied.
- the motion vector list generator 710 of the present invention may be included in the inter predictor 700 of the decoder.
- the motion vector list generator 710 may be largely comprised of a list initializer 720, a list corrector 730, and a list storage 780.
- the list initializer 720 may generate a motion vector list composed of motion vector candidates.
- the motion vector candidate may mean a set of motion vectors available as the motion vector or the predicted motion vector of the current texture block.
- the motion vector candidate of the present invention may include at least one of a spatial motion vector, a temporal motion vector, a reference view motion vector, and an interview motion vector.
- the spatial motion vector, temporal motion vector, reference view motion vector, and interview motion vector included in the motion vector candidate of the present invention are acquired by the spatial motion vector determiner 740, the temporal motion vector determiner 750, the reference view motion vector determiner 760, and the interview motion vector determiner 770, respectively, which are included in the list initializer 720.
- the spatial motion vector determiner 740 may derive the spatial motion vector from the motion vector of the neighboring block spatially adjacent to the current texture block.
- the spatially adjacent neighboring block may be any one of the blocks located at the left, top, bottom left, top left, and top right of the current texture block. When the reference picture of the current texture block and that of a neighboring block are determined to be the same, the motion vector of the neighboring block having the same reference picture may be added to the motion vector list.
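The same-reference-picture check described above can be sketched as follows. This is an illustrative outline, not the patent's normative process; the data representation (reference picture index paired with a motion vector tuple) is an assumption.

```python
# Sketch: collect spatial motion vector candidates. A neighboring block's
# motion vector is added to the list only when its reference picture is the
# same as that of the current texture block.

def collect_spatial_candidates(current_ref_pic, neighbors):
    """neighbors: iterable of (ref_pic, motion_vector) pairs for the left,
    top, bottom-left, top-left and top-right neighboring blocks."""
    mv_list = []
    for ref_pic, mv in neighbors:
        if ref_pic == current_ref_pic:   # same reference picture check
            mv_list.append(mv)
    return mv_list
```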
- the temporal motion vector determiner 750 may derive the temporal motion vector from the motion vector of the neighboring block temporally adjacent to the current texture block.
- the temporally adjacent neighboring block may correspond to a block located at the same position as the current texture block, or a block adjacent to that co-located block, within a picture located at the same view as the current texture block but at a different time.
- the picture including the temporal neighboring block may be specified by index information.
- the reference view motion vector determiner 760 may derive the reference view motion vector from the motion vector of a corresponding block located at a different view from the current texture block.
- the corresponding block may be a block indicated by the interview motion vector of the current texture block.
- a corresponding block in another view may be specified using the interview motion vector of the current texture block, and the motion vector of the specified corresponding block may be set as a reference view motion vector of the current texture block.
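The derivation just described — locating the corresponding block in the other view via the interview motion vector and adopting its motion vector — can be sketched as below. The dictionary-based lookup and tuple arithmetic are assumptions for illustration only.

```python
# Sketch: derive a reference view motion vector. The interview motion vector
# points from the current texture block to a corresponding block in another
# view; that block's motion vector becomes the reference view motion vector.

def derive_reference_view_mv(cur_x, cur_y, interview_mv, other_view_mvs):
    """other_view_mvs: dict mapping (x, y) block positions in the other view
    to their motion vectors; returns None if no motion vector is found."""
    corr = (cur_x + interview_mv[0], cur_y + interview_mv[1])  # corresponding block
    return other_view_mvs.get(corr)
```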
- the interview motion vector determiner 770 may derive the interview motion vector from a) the interview motion vector of a spatial or temporal neighboring block of the current texture block coded by inter-view inter prediction, b) the interview motion vector used to determine the reference view motion vector, when a neighboring block of the current texture block is coded with a reference view motion vector by temporal inter prediction, or c) the disparity vector indicating a reference block of the current texture block, derived from the depth data corresponding to the current texture block. Meanwhile, a method of deriving an interview motion vector according to the present invention will be described with reference to FIGS. 5 to 10.
- the motion vector list generated by the list initialization unit 720 may be used as a final motion vector list for deriving the motion vector of the current texture block.
- the list corrector 730 may be used to remove redundancy between motion vector candidates, and the motion vector list may be modified through it. For example, the list corrector 730 may check whether the same spatial motion vectors exist in the motion vector list generated by the list initializer 720; when the same spatial motion vectors exist, one of them may be removed from the motion vector list. Furthermore, if the number of motion vector candidates remaining in the motion vector list after removing the redundancy is less than two, a zero motion vector may be added.
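The correction steps described here — duplicate removal, zero-vector padding, and trimming to two candidates — can be sketched as one small routine. The fixed candidate count of two comes from the passage; everything else (tuple vectors, list ordering as the list identification index) is illustrative.

```python
# Sketch: list correction. Remove duplicate candidates, pad with zero motion
# vectors up to two entries, then keep only the first two candidates
# (the ones with the smallest list identification index).

def correct_mv_list(mv_list):
    deduped = []
    for mv in mv_list:
        if mv not in deduped:        # remove redundancy between candidates
            deduped.append(mv)
    while len(deduped) < 2:          # fewer than two remain: add zero vector
        deduped.append((0, 0))
    return deduped[:2]               # keep the two lowest-index candidates
```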
- the remaining motion vector candidates except for the two motion vector candidates may be removed from the motion vector list.
- two motion vector candidates remaining in the motion vector list may be candidates having a relatively small list identification index in the motion vector list.
- the list identification index is assigned to each motion vector candidate included in the motion vector list, and may mean information for identifying each motion vector candidate included in the motion vector list.
- the list storage unit 780 may temporarily store the motion vector list generated by the list initialization unit 720.
- the list storage unit 780 may store the candidate list of the reference interview motion vector in the interview motion vector determiner 770.
- the candidate list may store a reference interview motion vector corresponding to the reference view motion vector.
- a detailed description of the candidate list will be described with reference to FIG. 4.
- the list storage unit 780 is not an essential component of the motion vector list generator 710; it may instead be designed to be included in the list initializer 720, or to be included in each motion vector determiner, such as the spatial motion vector determiner 740.
- motion vector identification information about a current texture block may be extracted from a bitstream.
- the motion vector identification information may be information specifying the motion vector candidate used as the motion vector or the predicted motion vector of the current texture block. That is, the motion vector candidate corresponding to the extracted motion vector identification information may be extracted from the motion vector list and set as the motion vector or the predicted motion vector of the current texture block.
- the motion vector difference value may be used to reconstruct the motion vector of the current texture block.
- the motion vector difference value may mean a difference vector between the decoded motion vector and the predicted motion vector. Accordingly, the motion vector of the current texture block may be decoded using the predicted motion vector obtained from the motion vector list and the motion vector difference value extracted from the bitstream.
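The reconstruction just described — predictor from the list plus the parsed difference — amounts to a single vector addition. A minimal sketch, with illustrative names and tuple vectors:

```python
# Sketch: decode the motion vector of the current texture block from the
# predicted motion vector (selected from the motion vector list by the
# motion vector identification information) and the motion vector
# difference value extracted from the bitstream.

def reconstruct_mv(mv_list, mv_identification_index, mvd):
    mvp = mv_list[mv_identification_index]       # predicted motion vector
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])    # decoded motion vector
```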
- the decoded motion vector and the reference picture list may be used to predict pixel values of the current texture block.
- the reference picture list may include not only a reference picture for temporal inter prediction but also a reference picture for inter-view inter prediction.
- a reference picture for temporal inter prediction and a reference picture for inter prediction between views will be described with reference to FIG. 3.
- the already reconstructed images are composed of an image (V0, t0) at the same view as the currently coded image (V0, t1) but at a different time, and an image (V1, t1) at a different view but at the same time.
- MCP: motion compensated prediction, i.e., temporal inter prediction using the image (V0, t0).
- DCP: disparity compensated prediction, i.e., inter-view inter prediction using the image (V1, t1).
- the interview motion vector can be derived from any one selected from spatial interview motion vectors, temporal interview motion vectors, disparity vectors, and reference interview motion vectors.
- interview motion vector and the reference view motion vector used for inter-view prediction will be described with reference to FIG. 4 for convenience of understanding.
- a corresponding block (x', y') in an image (V1, t1) at a different view may be used for inter prediction of the current texture block (x, y) of the image (V0, t1) currently being coded.
- the vector 410 indicating the corresponding block (x', y') of the other view V1 from the current texture block (x, y) may be defined as the interview motion vector.
- the motion vector 420 of the corresponding block (x', y') of the other view indicated by the interview motion vector may be set as the reference view motion vector 430 of the current texture block and used for inter prediction.
- the spatial interview motion vector of the present invention may be derived from the motion vector of a neighboring block coded with inter-view inter prediction among neighboring blocks spatially adjacent to the current texture block.
- the spatial interview motion vector may be derived using a neighboring block having an interview motion vector among the spatial neighboring blocks of the current texture block.
- the temporal interview motion vector of the present invention may be derived from the motion vector of the neighboring block coded with inter-view inter prediction among temporal neighboring blocks of the current texture block.
- the temporal interview motion vector may be derived using a neighboring block having an interview motion vector among temporal neighboring blocks of the current texture block.
- the temporal neighboring block may mean a block located at the same position and / or adjacent to the current texture block in the reference picture having the same view as the current picture including the current texture block.
- the disparity vector of the present invention may represent inter-view variation in a multiview image.
- inter-view variation may occur according to a camera position, and the disparity vector may compensate for such inter-view variation.
- a method of deriving a disparity vector will be described with reference to FIG. 3.
- the reference interview motion vector of the present invention may be derived from the interview motion vector for determining the reference view motion vector.
- the interview motion vector corresponding to the reference view motion vector may be referred to as a reference interview motion vector.
- FIG. 5 illustrates a method of deriving a disparity vector of a current texture block using depth data as an embodiment to which the present invention is applied.
- depth data indicating distance information between a camera and an object of a current texture block may be obtained (S510).
- the depth data may be transmitted separately from the texture image, such as a depth bitstream, a depth sequence, and a depth picture, or may be coded and transmitted together with a corresponding texture image.
- depth data of the current texture block may be obtained according to a transmission scheme.
- depth data corresponding to corner pixels of the current texture block may be used.
- depth data corresponding to the center pixel of the current texture block may be used.
- among the plurality of depth data corresponding to the plurality of pixels, any one of the maximum value, the minimum value, and the mode value may be selectively used, or the average value of the plurality of depth data may be used.
- the disparity vector of the current texture block may be derived using the acquired depth data and the camera parameter (S520). A detailed derivation method will be described based on Equations 1 and 2 below.
- Z means the distance of the corresponding pixel from the camera.
- D is a quantized value of Z and corresponds to the depth data of the present invention.
- Z_near and Z_far mean the minimum value and the maximum value of Z defined for the view to which the depth data belongs.
- Z_near and Z_far may be extracted from the bitstream through a sequence parameter set, a slice header, or the like, or may be information predefined in the decoder. Therefore, when the distance Z of the pixel from the camera is quantized to 256 levels, Z may be restored using the depth data, Z_near, and Z_far as in Equation 1, and the restored Z may be used to derive the disparity vector of the current texture block as in Equation 2.
- in Equation 2, f means the focal length of the camera and B means the distance between cameras. f and B may be assumed to be the same for all cameras and may thus be information predefined in the decoder.
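Equations 1 and 2 themselves are not reproduced in this text. The sketch below assumes their conventional form for 256-level depth quantization: Z is restored from the depth sample D together with the Z_near/Z_far range, and the disparity then follows from the focal length f and camera distance B. Treat the exact formulas as an assumption consistent with the variable definitions above.

```python
# Sketch (assumed form of Equations 1 and 2): convert an 8-bit depth sample
# into a disparity value using camera parameters.

def depth_to_disparity(d, z_near, z_far, f, b):
    # Assumed Equation 1: restore Z from the quantized depth value d (0..255)
    z = 1.0 / ((d / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    # Assumed Equation 2: disparity from focal length f and camera distance B
    return f * b / z
```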
- a disparity vector map storing a disparity vector may be used.
- the disparity vector map may be a map in which disparity vectors composed of horizontal and vertical components are stored in a two-dimensional array.
- the disparity vector map of the present invention may be expressed in various sizes. For example, if only one disparity vector is used per picture, the map may have a size of 1x1; if a disparity vector is used for every 4x4 block in the picture, the map has a quarter of the width and a quarter of the height of the picture, so the disparity vector map may have 1/16 the size of the picture.
- the size of the current texture block in one picture may be adaptively determined, and a disparity vector may be stored for each texture block.
- the interview motion vector may also be derived using a global disparity vector (GDV) derived from the syntax of the slice or picture.
- the global disparity vector is a vector indicating, in units of a slice or picture including a plurality of blocks, the other view in which the reference picture is located relative to the current view. Since the same global disparity vector is derived for the plurality of texture blocks, an offset vector for compensating the motion vector may additionally be transmitted for each texture block so that the correct reference block can be found when a texture block has a different interview motion vector.
- An interview motion vector obtained through the vector sum of the global disparity vector and the offset vector may be included in the interview motion vector candidate of the current texture block.
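The candidate formed from the global disparity vector, as described above, is simply the vector sum of the slice/picture-level GDV and the per-block offset. A minimal sketch, with the tuple representation as an assumption:

```python
# Sketch: form an interview motion vector candidate as the vector sum of the
# global disparity vector (slice/picture level) and the per-block offset
# vector transmitted to compensate it.

def interview_mv_from_gdv(gdv, offset):
    return (gdv[0] + offset[0], gdv[1] + offset[1])
```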
- as described above, the interview motion vector candidates of the current texture block may include a spatial interview motion vector, a temporal interview motion vector, a disparity vector, and a reference interview motion vector, and the interview motion vector of the current texture block may be derived from any one of them.
- a method of deriving the interview motion vector of the current texture block from the interview motion vector candidate will be described.
- the interview motion vector may be obtained in consideration of the priority among the interview motion vector candidates of the current texture block. Priority among the interview motion vector candidates may be set as shown in Table 1 below.
- the spatial interview motion vector may have the highest priority, followed by the temporal interview motion vector, the reference interview motion vector, and the disparity vector in that order.
- alternatively, the priority may be set in the order of temporal interview motion vector, spatial interview motion vector, disparity vector, reference interview motion vector; in the order of disparity vector, reference interview motion vector, spatial interview motion vector, temporal interview motion vector; or in the order of reference interview motion vector, spatial interview motion vector, temporal interview motion vector, disparity vector.
- the priority may also be set in consideration of how frequently each type of interview motion vector was used for the interview motion vectors of already coded texture blocks.
- a priority may be set between only certain categories. For example, the priority may be set between the spatial interview motion vector and the temporal interview motion vector and the reference interview motion vector, or the priority may be set between the spatial interview motion vector and the reference interview motion vector.
- the spatial interview motion vector and the temporal interview motion vector have a higher priority than the reference interview motion vector; that is, the reference interview motion vector is used only when neither a spatial interview motion vector nor a temporal interview motion vector exists. While searching for the spatial interview motion vector and the temporal interview motion vector, reference interview motion vectors may be stored in the candidate list, and if the search finds no spatial or temporal interview motion vector, a reference interview motion vector stored in the candidate list may be used to derive the interview motion vector of the current texture block.
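The priority scheme just described — direct interview motion vectors first, stored reference interview motion vectors as a fallback — can be sketched as follows. The dictionary-based block records are an illustrative assumption, not the patent's syntax.

```python
# Sketch: derive the interview motion vector of the current texture block.
# Spatial and temporal neighbors coded by inter-view inter prediction (DCP)
# are used directly; reference interview motion vectors of DV-MCP-coded
# neighbors are only stored in a candidate list and used as a fallback.

def derive_interview_mv(spatial_blocks, temporal_blocks):
    candidate_list = []
    for block in spatial_blocks + temporal_blocks:
        if "interview_mv" in block:            # DCP-coded neighbor: use it
            return block["interview_mv"]
        if "reference_interview_mv" in block:  # DV-MCP-coded neighbor: store
            candidate_list.append(block["reference_interview_mv"])
    if candidate_list:                         # fallback to stored candidates
        return candidate_list[0]
    return None
```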
- a block coded by inter-view inter prediction among spatial neighboring blocks may be searched for (S600).
- a block coded by inter-view inter prediction may mean a DCP coded block.
- an example of the spatial neighboring blocks is shown in FIG. 7.
- the spatial neighboring blocks may include at least one of a lower left neighboring block A0, a left neighboring block A1, an upper right neighboring block B0, an upper neighboring block B1, and an upper left neighboring block B2.
- priority among spatial neighboring blocks may be considered.
- when searching for a block coded by inter-view inter prediction, suppose that the priority between spatial neighboring blocks is set as shown in Table 2.
- a search for a block coded by inter-view inter prediction may be performed in the order of the left neighboring block A1, the upper neighboring block B1, the upper right neighboring block B0, the lower left neighboring block A0, and the upper left neighboring block B2.
- if the left neighboring block is coded by inter-view inter prediction, the interview motion vector of the left neighboring block may be set as the interview motion vector of the current texture block, and the search may be terminated.
- when the left neighboring block is not coded by inter-view inter prediction, it may be checked whether the upper neighboring block is coded by inter-view inter prediction.
- alternatively, the search may be performed in the order of the lower left neighboring block, the left neighboring block, the upper right neighboring block, the upper neighboring block, and the upper left neighboring block until a block coded by inter-view inter prediction is found.
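The ordered search over spatial neighbors can be sketched as follows. The A1 → B1 → B0 → A0 → B2 order follows the first priority given above (assumed to correspond to Table 2); the block representation is illustrative.

```python
# Sketch: search the spatial neighboring blocks in priority order and return
# the interview motion vector of the first block coded by inter-view inter
# prediction (DCP); the search terminates at the first hit.

SEARCH_ORDER = ["A1", "B1", "B0", "A0", "B2"]

def search_spatial_neighbors(neighbors):
    """neighbors: dict mapping position name to a block record that carries
    an 'interview_mv' entry only when coded by inter-view inter prediction."""
    for name in SEARCH_ORDER:
        block = neighbors.get(name)
        if block and "interview_mv" in block:
            return block["interview_mv"]     # found: terminate the search
    return None
```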
- the priority between spatial neighboring blocks is not limited to the above embodiment.
- methods of determining whether a block is coded by inter-view inter prediction include a) a method of using an inter-view reference picture list, b) a method of using reference index information, and c) a method of comparing the POC of the picture including the spatial neighboring block with the POC of its reference picture.
- the inter-view reference picture list may mean a list composed of reference pictures located at views different from the view of the corresponding neighboring block.
- b) it may be determined based on the reference index information of the corresponding neighboring block. For example, when the reference index information of the corresponding neighboring block specifies a reference picture located at a different view from the corresponding neighboring block, the corresponding neighboring block may be coded by inter-view inter prediction.
- Under c), it may be determined based on whether the POC of the picture including the neighboring block and the POC of the reference picture of the neighboring block are the same. The POC is output order information, and pictures in the same access unit may have the same POC. Therefore, when the two POCs are the same, the picture including the corresponding neighboring block and its reference picture are located at different views, and in this case the neighboring block may be determined to be coded by inter-view inter prediction.
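- Checks b) and c) can be sketched as follows. The dictionary fields (`view_id`, `ref_view_id`, `poc`, `ref_poc`) are assumptions for illustration, not names from the specification.

```python
# Illustrative sketch of checks b) and c) for deciding whether a
# neighboring block is coded by inter-view inter prediction.

def is_interview_by_ref_index(block):
    # b) The reference index points at a picture in a different view.
    return block["ref_view_id"] != block["view_id"]

def is_interview_by_poc(block):
    # c) Equal POCs mean the reference picture lies in the same access
    # unit (same output time), hence in a different view.
    return block["poc"] == block["ref_poc"]

nb = {"view_id": 1, "ref_view_id": 0, "poc": 8, "ref_poc": 8}
print(is_interview_by_ref_index(nb), is_interview_by_poc(nb))  # True True
```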
- When a spatial neighboring block coded by inter-view inter prediction is found, the interview motion vector of the current texture block may be derived from the interview motion vector of the corresponding spatial neighboring block (S610).
- However, when no spatial neighboring block is coded by inter-view inter prediction, a block coded with a reference view motion vector is searched for among the spatial neighboring blocks (S620), and the reference interview motion vector of the found block may be stored in the candidate list (S630).
- Here, the block coded with a reference view motion vector may mean a block coded with DV-MCP.
- A block coded with DV-MCP may refer to a block coded with motion-compensated prediction (MCP) using a motion vector predicted by inter-view inter prediction (that is, a reference view motion vector).
- The candidate list is a list that can store the reference interview motion vector corresponding to the reference view motion vector of a block having a reference view motion vector.
- The candidate list may be implemented to store only the horizontal component of the reference interview motion vector, or it may be implemented to store the reference interview motion vector in the form of a full vector.
- the candidate list may have a list identification index.
- The list identification index is allocated to each reference interview motion vector included in the candidate list and may mean information for identifying each reference interview motion vector included in the candidate list.
- That is, the interview motion vector used to determine the reference view motion vector (i.e., the reference interview motion vector) may be stored in the candidate list.
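- A minimal sketch of such a candidate list follows. The class and method names are hypothetical; the description only requires that each stored vector receive a list identification index in storage order, that the list may optionally keep only the horizontal component, and that derivation may pick the first-stored (minimum-index) vector.

```python
# Hypothetical candidate list for reference interview motion vectors.

class CandidateList:
    def __init__(self, horizontal_only=False):
        self.entries = []  # position in this list = list identification index
        self.horizontal_only = horizontal_only

    def store(self, ref_interview_mv):
        if self.horizontal_only:
            # Keep only the horizontal component, as one implementation option.
            ref_interview_mv = (ref_interview_mv[0], 0)
        self.entries.append(ref_interview_mv)
        return len(self.entries) - 1  # the assigned list identification index

    def derive(self):
        """First-stored vector (list identification index 0), or None."""
        return self.entries[0] if self.entries else None

cl = CandidateList(horizontal_only=True)
cl.store((5, 2))
cl.store((7, 0))
print(cl.derive())  # -> (5, 0)
```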
- the above-described neighboring block may be a spatial neighboring block or a temporal neighboring block.
- Whether a neighboring block has a reference view motion vector corresponds to whether the corresponding block is coded by temporal inter prediction. Therefore, whether it is coded by temporal inter prediction may be determined based on the motion vector identification information of the corresponding neighboring block.
- When the motion vector identification information designates a motion vector used for temporal inter prediction, it may be determined that the neighboring block is a block coded by temporal inter prediction.
- The order of searching spatial neighboring blocks for a block coded by temporal inter prediction may be the same as the order of searching for inter-view inter prediction.
- For example, blocks coded by temporal inter prediction may be searched for in the order of the left neighboring block A1, the top neighboring block B1, the top-right neighboring block B0, the bottom-left neighboring block A0, and the top-left neighboring block B2.
- However, the spatial neighboring blocks and the search order are not limited thereto.
- In step S620, the block coded with a reference view motion vector is searched for among the spatial neighboring blocks.
- Alternatively, a block coded by inter-view inter prediction and a block coded with a reference view motion vector may be searched for simultaneously.
- In that case, when a block coded with a reference view motion vector is found, the reference interview motion vector corresponding to its reference view motion vector is stored in the candidate list, and the search proceeds to the spatial neighboring block of the next priority.
- As described above, the reference interview motion vector of a spatial neighboring block may be stored in the candidate list (S630), and a block coded by inter-view inter prediction may be searched for among the temporal neighboring blocks (S640). If, as a result of the search in S640, there is a block coded by inter-view inter prediction among the temporal neighboring blocks, the interview motion vector of the current texture block may be derived using the interview motion vector of that temporal neighboring block (S650). However, when no temporal neighboring block is coded by inter-view inter prediction, a block coded with a reference view motion vector is searched for among the temporal neighboring blocks (S660), and the reference interview motion vector of the found block may be stored in the candidate list (S670).
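- The overall flow of steps S600 through S680 can be compressed into one pass per neighbor group, with the same final result: an inter-view coded neighbor wins outright, DV-MCP candidates are collected along the way, and the first candidate serves as the fallback. Field names are illustrative assumptions.

```python
# Hypothetical end-to-end sketch of S600-S680: spatial neighbors are
# examined before temporal ones; inter-view coded blocks yield the
# interview motion vector directly (S610/S650), DV-MCP blocks feed the
# candidate list (S630/S670), and the first-stored candidate is the
# fallback (S680).

def derive_interview_mv(spatial, temporal):
    candidates = []
    for group in (spatial, temporal):
        for blk in group:                      # already in priority order
            if blk.get("is_interview_coded"):
                return blk["interview_mv"]     # S610 / S650
            if "ref_interview_mv" in blk:      # DV-MCP block
                candidates.append(blk["ref_interview_mv"])  # S630 / S670
    return candidates[0] if candidates else None            # S680

spatial = [{"ref_interview_mv": (3, 0)}]
temporal = [{"is_interview_coded": False}]
print(derive_interview_mv(spatial, temporal))  # -> (3, 0)
```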
- An example of a temporal neighboring block is shown in FIG. 8.
- A temporal neighboring block may refer to a block at the same position as the current texture block (hereinafter referred to as the co-located block) within a reference picture of the current texture block.
- Here, the reference picture may mean a picture located at the same view as, and in a different time period from, the current picture including the current texture block.
- The co-located block of the present invention can be defined in three ways, as shown in FIG. 8.
- First, the co-located block may be defined as the block within the reference picture that includes the position corresponding to the position C of the center pixel of the current texture block.
- Second, the co-located block may be defined as the block within the reference picture that includes the position corresponding to the position X of the upper-left pixel of the current texture block.
- However, the temporal neighboring block of the present invention is not limited to the co-located block and may also mean a neighboring block adjacent to the co-located block.
- As shown in FIG. 8, at least one of the bottom-left neighboring block A0, the left neighboring block A1, the top-right neighboring block B0, the top neighboring block B1, and the top-left neighboring block B2 may be used as a neighboring block adjacent to the co-located block.
- Furthermore, neighboring blocks adjacent to the bottom and right sides of the co-located block may also be used as temporal neighboring blocks.
- For example, the bottom-right neighboring block C0, the bottom neighboring block C1, and the right neighboring block C2 may be used as temporal neighboring blocks.
- A block coded by inter-view inter prediction may be searched for among a) the co-located block and b) the neighboring blocks adjacent to the co-located block, mentioned above as candidates of the temporal neighboring block, in consideration of priority. That is, it is determined whether the temporal neighboring block having the highest priority is a block coded by inter-view inter prediction, and when it is not, it is determined whether the temporal neighboring block having the next priority is coded by inter-view inter prediction. This may be repeated until a block coded by inter-view inter prediction is found. For example, suppose that the priority among temporal neighboring blocks is set as shown in Table 3. However, this is only one embodiment, and the present invention is not limited thereto.
- Meanwhile, the temporal neighboring block may be determined according to the prediction mode of the bottom-right block RB of the reference picture, as shown in FIG. 8C. Specifically, it is determined whether the bottom-right block RB of the reference picture is coded by inter prediction; when it is not, a block in the reference picture corresponding to the current texture block may be defined as the temporal neighboring block. For example, when the bottom-right block RB of the reference picture is not coded by inter prediction, the block in the reference picture that includes the position corresponding to the center pixel position C of the current texture block may be set as the temporal neighboring block.
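- The FIG. 8C rule above can be sketched as follows. The function and field names are illustrative assumptions; the text only states the fallback when RB is not inter-coded, so using RB otherwise is the implied complementary case.

```python
# Sketch of the FIG. 8C rule: examine the bottom-right block (RB) of
# the reference picture; if it is not inter-coded, fall back to the
# co-located block containing the current block's center position C.

def select_temporal_neighbor(rb_block, center_block):
    if rb_block is not None and rb_block.get("is_inter_coded"):
        return rb_block      # RB carries usable motion data (implied case)
    return center_block      # otherwise the C-position block (stated case)

rb = {"is_inter_coded": False}
c = {"name": "C-position block"}
print(select_temporal_neighbor(rb, c)["name"])  # -> C-position block
```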
- The search and storage steps for temporal neighboring blocks may be performed in the same manner as the search and storage steps performed for spatial neighboring blocks, so a detailed description thereof is omitted.
- When no block coded by inter-view inter prediction is found among the spatial and temporal neighboring blocks, the interview motion vector of the current texture block may be derived using any one of the reference interview motion vectors stored in the candidate list (S680). On the other hand, if there is no reference interview motion vector in the candidate list, the process may be terminated without deriving the interview motion vector of the current texture block.
- For example, the reference interview motion vectors may be searched in the order in which they were stored in the candidate list, and the first-found (that is, first-stored) reference interview motion vector may be selected as the interview motion vector of the current texture block.
- Since list identification indexes are assigned in the order of storage in the candidate list, the reference interview motion vector whose list identification index is 0 may be selected as the interview motion vector of the current texture block. That is, the vector having the minimum list identification index value may be selected as the interview motion vector of the current texture block.
- Meanwhile, when the interview motion vector of the current texture block is derived using a reference interview motion vector, interview motion vector reuse information indicating this may be activated, which makes it possible to prevent reuse of less accurate interview motion vectors. A detailed description thereof is given with reference to FIG. 9.
- In addition, the derived interview motion vector may be stored so that it can be reused for deriving the interview motion vectors of spatially or temporally adjacent blocks. A detailed description thereof is given with reference to FIG. 10.
- FIG. 9 is a flowchart illustrating a method of using interview motion vector reuse information as an embodiment to which the present invention is applied. FIG. 9 may correspond to an embodiment of step S680 of FIG. 6.
- an interview motion vector of a current texture block may be derived using a reference interview motion vector stored in a candidate list (S900). Then, the interview motion vector reuse information indicating that the interview motion vector of the current block is derived using the reference interview motion vector may be set (S910).
- For example, a flag value included in the interview motion vector reuse information may be set.
- The above-described flag may indicate that the interview motion vector was derived using the reference interview motion vector. For example, by changing the flag value from 0 to 1, the flag may indicate that the interview motion vector was derived using the reference interview motion vector.
- The interview motion vector reuse information may be used when searching neighboring blocks for a block coded with a reference view motion vector (see S620 and S660 of FIG. 6). That is, when the interview motion vector reuse information of a neighboring block being searched indicates that its interview motion vector was derived using a reference interview motion vector, the reference interview motion vector of that neighboring block may not be stored in the candidate list.
- The reference interview motion vector value may be lower in accuracy than an interview motion vector obtained by coding with inter-view inter prediction. Therefore, the spatial and temporal propagation of reference interview motion vectors can be prevented by using the interview motion vector reuse information as described above, thereby improving the efficiency of inter prediction coding.
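- The filtering effect of the reuse flag on the candidate list can be sketched as follows. Field names are illustrative assumptions.

```python
# Sketch of how the interview motion vector reuse flag stops
# propagation (cf. S620/S660): a neighboring block whose own interview
# motion vector was itself derived from a reference interview motion
# vector is skipped when filling the candidate list.

def collect_candidates(neighbors):
    candidates = []
    for blk in neighbors:
        if blk.get("reuse_flag"):      # derived from a reference IVMV:
            continue                   # do not let it propagate further
        if "ref_interview_mv" in blk:
            candidates.append(blk["ref_interview_mv"])
    return candidates

nbs = [{"ref_interview_mv": (2, 0), "reuse_flag": True},
       {"ref_interview_mv": (6, 1), "reuse_flag": False}]
print(collect_candidates(nbs))  # -> [(6, 1)]
```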
- FIG. 10 is a flowchart illustrating a method of deriving an interview motion vector according to a prediction mode, as an embodiment to which the present invention is applied.
- FIG. 10 may correspond to an embodiment of step S680 of FIG. 6.
- an interview motion vector of a current texture block may be derived using a reference interview motion vector stored in a candidate list (S1000).
- Then, the prediction mode of the current texture block may be checked, and if the checked prediction mode is the skip mode, the derived interview motion vector may be stored in the video decoder (S1010).
- Here, the skip mode refers to a mode in which inter prediction is performed using a motion vector list and a list identification index. Specifically, in the skip mode, when the block indicated by the derived interview motion vector is selected, the motion vector of the selected block is reused as it is, and the data of the block indicated by that motion vector may be used directly as the data of the current texture block. That is, no residual data exists (the residual is zero).
- In contrast, the normal mode is an inter prediction mode other than the skip mode. In the normal mode, the motion vector of the selected block is not reused as it is; that is, residual data may exist. The skip mode, on the other hand, reuses the motion vector of the selected block as it is and uses the data of the block indicated by that motion vector directly as the data of the current texture block, so that no residual data exists.
- Since the interview motion vector derived in the skip mode using the reference interview motion vector has high accuracy, it may be separately stored so that it can be reused when deriving the interview motion vector of another neighboring block.
- The interview motion vector may also be stored for reuse in other modes (e.g., the merge mode).
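- The mode-conditional storage of FIG. 10 (S1000-S1010) can be sketched as follows. The function name, mode strings, and store representation are illustrative assumptions; the text names the skip mode and, optionally, other modes such as merge.

```python
# Sketch of S1000-S1010: store the derived interview motion vector for
# later reuse only when the prediction mode warrants it (skip mode per
# the text; possibly merge mode as well).

def maybe_store_for_reuse(derived_mv, prediction_mode, store,
                          modes=("skip",)):
    if prediction_mode in modes:      # e.g. skip only, or skip + merge
        store.append(derived_mv)
        return True
    return False

store = []
maybe_store_for_reuse((4, 0), "skip", store)
maybe_store_for_reuse((9, 9), "normal", store)
print(store)  # -> [(4, 0)]
```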
- The decoding/encoding device to which the present invention is applied may be provided in a multimedia broadcasting transmission/reception device such as a Digital Multimedia Broadcasting (DMB) device and may be used to decode video signals and data signals.
- the multimedia broadcasting transmission / reception apparatus may include a mobile communication terminal.
- In addition, the decoding/encoding method to which the present invention is applied may be produced as a program for execution on a computer and stored in a computer-readable recording medium, and multimedia data having a data structure according to the present invention may also be stored in a computer-readable recording medium.
- The computer-readable recording medium includes all kinds of storage devices in which data readable by a computer system is stored. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also include media implemented in the form of a carrier wave (e.g., transmission over the Internet).
- the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted using a wired / wireless communication network.
- the present invention can be used to code a video signal.
Description
[Table 1]
Priority | Category |
---|---|
0 | Spatial interview motion vector |
1 | Temporal interview motion vector |
2 | Reference interview motion vector |
3 | Disparity vector |
[Table 2]
Priority | Spatial neighboring block |
---|---|
0 | Left neighboring block |
1 | Top neighboring block |
2 | Top-right neighboring block |
3 | Bottom-left neighboring block |
4 | Top-left neighboring block |
[Table 3]
Priority | Temporal neighboring block |
---|---|
0 | Co-located block |
1 | Neighboring block adjacent to the co-located block |
Claims (15)
- A video signal processing method comprising: searching spatial neighboring blocks of a current texture block for a block coded with a reference view motion vector; storing, in a candidate list, a reference interview motion vector corresponding to the reference view motion vector of the found spatial neighboring block; searching temporal neighboring blocks of the current texture block for a block coded with a reference view motion vector; storing, in the candidate list, a reference interview motion vector corresponding to the reference view motion vector of the found temporal neighboring block; deriving an interview motion vector of the current texture block using any one of the reference interview motion vectors stored in the candidate list; and performing inter-view inter prediction on the current texture block using the derived interview motion vector.
- The method of claim 1, wherein the deriving of the interview motion vector of the current texture block using the reference interview motion vector stored in the candidate list is performed only when no block coded by inter-view inter prediction exists among the spatial neighboring blocks and the temporal neighboring blocks.
- The method of claim 1, wherein the deriving of the interview motion vector of the current texture block using the reference interview motion vector stored in the candidate list uses the reference interview motion vector first stored in the candidate list.
- The method of claim 1, further comprising: checking a prediction mode of the current texture block; and storing the derived interview motion vector when the prediction mode is a skip mode, wherein the stored interview motion vector is reused for deriving an interview motion vector of another texture block.
- The method of claim 1, wherein the spatial neighboring blocks include at least one of a bottom-left neighboring block, a left neighboring block, a top-right neighboring block, a top neighboring block, and a top-left neighboring block of the current texture block.
- The method of claim 1, wherein the temporal neighboring blocks include at least one of coding blocks including a co-located block and a neighboring block adjacent to the co-located block.
- The method of claim 1, wherein the reference interview motion vector is an interview motion vector corresponding to the reference view motion vector when at least one of the spatial neighboring block and the temporal neighboring block is derived with the reference view motion vector.
- A video signal processing apparatus comprising an interview motion vector determination unit that searches spatial neighboring blocks of a current texture block for a block coded with a reference view motion vector, stores a reference interview motion vector corresponding to the reference view motion vector of the found spatial neighboring block in a candidate list, searches temporal neighboring blocks of the current texture block for a block coded with a reference view motion vector, and stores a reference interview motion vector corresponding to the reference view motion vector of the found temporal neighboring block in the candidate list, wherein the interview motion vector determination unit derives an interview motion vector of the current texture block using any one of the reference interview motion vectors stored in the candidate list and performs inter-view inter prediction on the current texture block using the derived interview motion vector.
- The apparatus of claim 8, wherein the interview motion vector determination unit derives the interview motion vector using the reference interview motion vector stored in the candidate list only when no block coded by inter-view inter prediction exists among the spatial neighboring blocks and the temporal neighboring blocks.
- The apparatus of claim 8, wherein the interview motion vector determination unit performs the derivation using the reference interview motion vector first stored in the candidate list.
- The apparatus of claim 8, wherein the interview motion vector determination unit checks a prediction mode of the current texture block and stores the derived interview motion vector when the prediction mode is a skip mode, and wherein the stored interview motion vector is reused for deriving an interview motion vector of another texture block.
- The apparatus of claim 8, wherein the spatial neighboring blocks include at least one of a bottom-left neighboring block, a left neighboring block, a top-right neighboring block, a top neighboring block, and a top-left neighboring block of the current texture block.
- The apparatus of claim 8, wherein the temporal neighboring blocks include at least one of coding blocks including a co-located block and a neighboring block adjacent to the co-located block.
- The apparatus of claim 8, wherein the reference interview motion vector is an interview motion vector corresponding to the reference view motion vector when at least one of the spatial neighboring block and the temporal neighboring block is derived with the reference view motion vector.
- A non-transitory computer-readable recording medium storing a program implementing the video signal processing method of claim 1.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20157003576A KR20150043319A (ko) | 2012-07-10 | 2013-07-09 | 비디오 신호 처리 방법 및 장치 |
EP13816796.0A EP2874390A4 (en) | 2012-07-10 | 2013-07-09 | METHOD AND DEVICE FOR PROCESSING A VIDEO SIGNAL |
US14/413,139 US20150208083A1 (en) | 2012-07-10 | 2013-07-09 | Method and device for processing video signal |
JP2015521545A JP2015527805A (ja) | 2012-07-10 | 2013-07-09 | ビデオ信号処理方法及び装置 |
CN201380036652.4A CN104429078A (zh) | 2012-07-10 | 2013-07-09 | 用于处理视频信号的方法和设备 |
BR112015000240A BR112015000240A2 (pt) | 2012-07-10 | 2013-07-09 | método e dispositivo para o processamento de sinal de vídeo |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261670121P | 2012-07-10 | 2012-07-10 | |
US61/670,121 | 2012-07-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014010918A1 true WO2014010918A1 (ko) | 2014-01-16 |
Family
ID=49916296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/006099 WO2014010918A1 (ko) | 2012-07-10 | 2013-07-09 | 비디오 신호 처리 방법 및 장치 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20150208083A1 (ko) |
EP (1) | EP2874390A4 (ko) |
JP (1) | JP2015527805A (ko) |
KR (1) | KR20150043319A (ko) |
CN (1) | CN104429078A (ko) |
BR (1) | BR112015000240A2 (ko) |
WO (1) | WO2014010918A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2014013880A1 (ja) * | 2012-07-18 | 2016-06-30 | ソニー株式会社 | 画像処理装置および方法 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014077573A2 (ko) * | 2012-11-13 | 2014-05-22 | 엘지전자 주식회사 | 비디오 신호 처리 방법 및 장치 |
US9288507B2 (en) * | 2013-06-21 | 2016-03-15 | Qualcomm Incorporated | More accurate advanced residual prediction (ARP) for texture coding |
CN108235031B (zh) * | 2016-12-15 | 2019-11-05 | 华为技术有限公司 | 一种运动矢量解码方法及解码器 |
WO2019061537A1 (en) * | 2017-09-30 | 2019-04-04 | Huawei Technologies Co., Ltd. | METHOD AND APPARATUS FOR DERIVING CANDIDATES FROM TIME MOVEMENT DATA IN VIDEO CODING |
CN111316646B (zh) * | 2019-01-02 | 2023-06-23 | 北京大学 | 候选mv列表构建方法和装置 |
WO2020219952A1 (en) * | 2019-04-25 | 2020-10-29 | Op Solutions, Llc | Candidates in frames with global motion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008147951A (ja) * | 2006-12-08 | 2008-06-26 | Toshiba Corp | 補間フレーム作成方法および補間フレーム作成装置 |
US20090180539A1 (en) * | 2008-01-11 | 2009-07-16 | Arun Shankar Kudana | Interpolated Skip Mode Decision in Video Compression |
US20100208814A1 (en) * | 2007-10-15 | 2010-08-19 | Huawei Technologies Co., Ltd. | Inter-frame prediction coding method and device |
KR20120066593A (ko) * | 2010-12-14 | 2012-06-22 | 오수미 | 인터 예측 부호화된 동영상 복호화 방법 |
KR20120068743A (ko) * | 2010-12-17 | 2012-06-27 | 한국전자통신연구원 | 인터 예측 방법 및 그 장치 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2149262A4 (en) * | 2007-04-25 | 2010-09-01 | Lg Electronics Inc | METHOD AND DEVICE FOR DECODING / CODING A VIDEO SIGNAL |
EP2721825A4 (en) * | 2011-06-15 | 2014-12-24 | Mediatek Inc | METHOD AND DEVICE FOR MOTION AND DISPARITY VECTOR PREDICTION AND COMPENSATION IN 3D VIDEO CODING |
WO2013068548A2 (en) * | 2011-11-11 | 2013-05-16 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Efficient multi-view coding using depth-map estimate for a dependent view |
US9525861B2 (en) * | 2012-03-14 | 2016-12-20 | Qualcomm Incorporated | Disparity vector prediction in video coding |
US9445076B2 (en) * | 2012-03-14 | 2016-09-13 | Qualcomm Incorporated | Disparity vector construction method for 3D-HEVC |
WO2014005503A1 (en) * | 2012-07-02 | 2014-01-09 | Mediatek Inc. | Method and apparatus of inter-view candidate derivation in 3d video coding |
2013
- 2013-07-09 WO PCT/KR2013/006099 patent/WO2014010918A1/ko active Application Filing
- 2013-07-09 CN CN201380036652.4A patent/CN104429078A/zh active Pending
- 2013-07-09 BR BR112015000240A patent/BR112015000240A2/pt not_active IP Right Cessation
- 2013-07-09 US US14/413,139 patent/US20150208083A1/en not_active Abandoned
- 2013-07-09 KR KR20157003576A patent/KR20150043319A/ko not_active Application Discontinuation
- 2013-07-09 JP JP2015521545A patent/JP2015527805A/ja active Pending
- 2013-07-09 EP EP13816796.0A patent/EP2874390A4/en not_active Ceased
Also Published As
Publication number | Publication date |
---|---|
EP2874390A1 (en) | 2015-05-20 |
BR112015000240A2 (pt) | 2017-08-08 |
US20150208083A1 (en) | 2015-07-23 |
CN104429078A (zh) | 2015-03-18 |
EP2874390A4 (en) | 2016-02-24 |
JP2015527805A (ja) | 2015-09-17 |
KR20150043319A (ko) | 2015-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013169031A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014010918A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015142054A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 | |
WO2014010935A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2010068020A2 (ko) | 다시점 영상 부호화, 복호화 방법 및 그 장치 | |
WO2014168443A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013176485A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013191436A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2010087589A2 (ko) | 경계 인트라 코딩을 이용한 비디오 신호 처리 방법 및 장치 | |
WO2013162273A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014107083A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015142057A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 | |
WO2016056822A1 (ko) | 3d 비디오 코딩 방법 및 장치 | |
WO2014107029A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2016056782A1 (ko) | 비디오 코딩에서 뎁스 픽처 코딩 방법 및 장치 | |
WO2013133627A1 (ko) | 비디오 신호 처리 방법 | |
WO2013133587A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014058207A1 (ko) | 다시점 비디오 신호의 인코딩 방법, 디코딩 방법 및 이에 대한 장치 | |
WO2016003209A1 (ko) | 다시점 비디오 신호 처리 방법 및 장치 | |
WO2014073877A1 (ko) | 다시점 비디오 신호의 처리 방법 및 이에 대한 장치 | |
WO2015009098A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014054896A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2015009091A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014054897A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2014109547A1 (ko) | 비디오 신호 처리 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13816796 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14413139 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2015521545 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013816796 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20157003576 Country of ref document: KR Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112015000240 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112015000240 Country of ref document: BR Kind code of ref document: A2 Effective date: 20150106 |