WO2015194915A1 - Method and apparatus for transmitting prediction mode of depth image for interlayer video encoding and decoding - Google Patents
Method and apparatus for transmitting prediction mode of depth image for interlayer video encoding and decoding
- Publication number
- WO2015194915A1 (PCT/KR2015/006283)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction
- depth image
- flag
- depth
- unit
- Prior art date
Classifications
- H04N19/597 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N19/176 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/187 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a scalable video layer
- H04N19/503 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/70 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/107 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding; Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
Definitions
- The present invention relates to an interlayer video encoding method and decoding method, and more particularly, to an encoding method and a decoding method for the prediction mode of a depth image.
- A stereoscopic image refers to a 3D image that provides shape information about depth and space together with the image information.
- Unlike stereo images, which simply provide different images to the left and right eyes, stereoscopic images present views seen from different directions whenever the viewer changes the viewing position. Therefore, images captured at many different viewpoints are required to generate a stereoscopic image.
- Images captured at many viewpoints for stereoscopic video amount to an enormous quantity of data. Considering the available network infrastructure and terrestrial bandwidth, it is practically infeasible to deliver such video even when it is compressed with an encoder optimized for single-view video coding such as MPEG-2, H.264/AVC, or HEVC.
- A multi-view video codec may therefore improve the compression rate by compressing the base view with single-view video compression and encoding the extended views with reference to the base view.
- In addition, ancillary data such as a depth image may be encoded together with the multi-view video.
- The depth image is used to synthesize intermediate-view images rather than being displayed directly to the user.
- Accordingly, the multi-view video codec needs to compress not only the multi-view video but also the depth image efficiently.
- The interlayer video decoding and encoding apparatus and method may efficiently encode or decode the prediction mode of a depth image, thereby lowering the complexity of the apparatus and effectively generating images at a synthesis viewpoint.
- FIG. 1A is a block diagram of an interlayer video encoding apparatus, according to an embodiment.
- FIG. 1B is a flowchart of a video encoding method, according to an embodiment.
- FIG. 2A is a block diagram of an interlayer video decoding apparatus, according to an embodiment.
- FIG. 2B is a flowchart of a video decoding method, according to an embodiment.
- FIG. 3 illustrates an interlayer prediction structure, according to an embodiment.
- FIG. 4A illustrates an SPS 3D extension syntax, according to an embodiment.
- FIG. 5 illustrates coding_unit syntax, according to an embodiment.
- FIG. 6 illustrates an intra_mode_ext syntax for receiving a DMM parameter.
- FIG. 7 is a block diagram of a video encoding apparatus based on coding units having a tree structure, according to an embodiment.
- FIG. 8 is a block diagram of a video decoding apparatus based on coding units having a tree structure, according to an embodiment.
- FIG. 9 illustrates a concept of coding units, according to an embodiment of the present invention.
- FIG. 10 is a block diagram of an image encoder based on coding units, according to an embodiment of the present invention.
- FIG. 11 is a block diagram of an image decoder based on coding units, according to an embodiment of the present invention.
- FIG. 12 is a diagram of deeper coding units according to depths, and partitions, according to an embodiment of the present invention.
- FIG. 13 illustrates a relationship between a coding unit and transformation units, according to an embodiment of the present invention.
- FIG. 14 is a diagram of deeper encoding information, according to an embodiment.
- FIG. 15 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention.
- FIGS. 16, 17, and 18 illustrate a relationship between coding units, prediction units, and transformation units, according to an embodiment of the present invention.
- FIG. 19 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to encoding mode information of Table 1.
- FIG. 20 illustrates a physical structure of a disc in which a program is stored, according to an embodiment.
- FIG. 21 shows a disc drive for recording and reading a program by using the disc.
- FIG. 22 illustrates the overall structure of a content supply system for providing a content distribution service.
- FIGS. 23 and 24 illustrate an external structure and an internal structure of a mobile phone to which the video encoding method and the video decoding method of the present invention are applied, according to an embodiment.
- FIG. 25 illustrates a digital broadcasting system employing a communication system, according to the present invention.
- FIG. 26 illustrates a network structure of a cloud computing system using a video encoding apparatus and a video decoding apparatus, according to an embodiment of the present invention.
- An interlayer video decoding method according to an embodiment may include obtaining prediction mode information of a current block constituting a depth image from a bitstream, generating a prediction block of the current block based on the obtained prediction mode information, and decoding the depth image by using the prediction block.
- Obtaining the prediction mode information of the current block from the bitstream may include receiving a first flag indicating whether the current block is allowed to be predicted by dividing it into two or more partitions according to a pattern, a second flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary, and a third flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a contour as a boundary, and receiving a fourth flag indicating the type of method by which the current block is divided into two or more partitions according to the pattern only when a predetermined condition determined based on the first flag, the second flag, and the third flag is satisfied.
- The second flag may further indicate whether the depth image allows prediction of blocks constituting the depth image by using an intra simplified depth coding (SDC) mode.
- The fourth flag may specify one of a method of predicting the current block by dividing it into two or more partitions with a wedgelet as a boundary and a method of predicting the current block by dividing it into two or more partitions with a contour as a boundary.
- The third flag may indicate that the depth image does not allow the method of predicting blocks constituting the depth image by dividing them into two or more partitions with a contour as a boundary.
- Acquiring the prediction mode information of the depth image may include determining that the predetermined condition is satisfied only when the first flag indicates that the current block is allowed to be predicted by dividing it into two or more partitions according to a pattern, the second flag indicates that the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary, and the third flag indicates that the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a contour as a boundary.
- When the predetermined condition is not satisfied, if the second flag indicates that the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary and the third flag indicates that the depth image does not allow prediction by dividing such blocks into two or more partitions with a contour as a boundary, it may be determined that the current block is predicted by dividing it into two or more partitions with a wedgelet as a boundary.
- Conversely, when the predetermined condition is not satisfied, if the second flag indicates that the depth image does not allow blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary and the third flag indicates that the depth image allows prediction by dividing such blocks into two or more partitions with a contour as a boundary, it may be determined that the current block is predicted by dividing it into two or more partitions with a contour as a boundary.
- An interlayer video encoding method according to an embodiment may include determining a prediction mode of a current block constituting a depth image, generating a prediction block of the current block using the determined prediction mode, and encoding the depth image by using the prediction block to generate a bitstream.
- Determining the prediction mode of the current block may include generating a first flag indicating whether the current block is allowed to be predicted by dividing it into two or more partitions according to a pattern, a second flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary, and a third flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a contour as a boundary, and generating a fourth flag indicating the type of method by which the current block is divided into two or more partitions according to the pattern only when a predetermined condition determined based on the first flag, the second flag, and the third flag is satisfied.
- An interlayer video decoding apparatus according to an embodiment may include a prediction mode determiner that obtains prediction mode information of a current block constituting a depth image from a bitstream, a prediction block generator that generates a prediction block of the current block based on the obtained prediction mode information, and a decoder that decodes the depth image by using the prediction block.
- The prediction mode determiner may receive a first flag indicating whether the current block is allowed to be predicted by dividing it into two or more partitions according to a pattern, a second flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary, and a third flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a contour as a boundary, and may receive a fourth flag indicating the type of method by which the current block is divided into two or more partitions according to the pattern only when a predetermined condition determined based on the received first flag, second flag, and third flag is satisfied.
- An interlayer video encoding apparatus according to an embodiment may include a prediction mode determiner that determines a prediction mode of a current block constituting a depth image, a prediction block generator that generates a prediction block of the current block by using the determined prediction mode, and an encoder that generates a bitstream by encoding the depth image using the prediction block.
- The prediction mode determiner may generate a first flag indicating whether the current block is allowed to be predicted by dividing it into two or more partitions according to a pattern, a second flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a wedgelet as a boundary, and a third flag indicating whether the depth image allows blocks constituting the depth image to be predicted by dividing them into two or more partitions with a contour as a boundary, and may generate a fourth flag indicating the type of method by which the current block is divided into two or more partitions according to the pattern only when a predetermined condition determined based on the first flag, the second flag, and the third flag is satisfied.
- A computer-readable recording medium having recorded thereon a program for executing the interlayer video decoding method or the interlayer video encoding method according to an embodiment may be provided.
- Hereinafter, a video encoding technique and a video decoding technique based on coding units having a tree structure, which are applicable to the interlayer video encoding and decoding techniques proposed above, are disclosed according to an embodiment. Also, with reference to FIGS. 20 through 26, embodiments to which the above-described video encoding method and video decoding method are applicable are disclosed.
- Hereinafter, an 'image' may refer to a still image of a video or to a moving picture, that is, the video itself.
- Hereinafter, a 'sample' means data allocated to a sampling position of an image, that is, the data to be processed.
- For example, pixels of an image in the spatial domain may be samples.
- A 'current block' may mean a unit block of a depth image to be encoded or decoded.
- FIG. 1A is a block diagram of an interlayer video encoding apparatus 10, according to an embodiment.
- FIG. 1B is a flowchart of a video encoding method, according to an embodiment.
- the interlayer video encoding apparatus 10 may include a prediction mode determiner 12, a prediction block generator 14, a residual data generator 16, and an encoder 18.
- The interlayer video encoding apparatus 10 may include a central processor (not shown) that collectively controls the prediction mode determiner 12, the prediction block generator 14, the residual data generator 16, and the encoder 18.
- Alternatively, the prediction mode determiner 12, the prediction block generator 14, the residual data generator 16, and the encoder 18 may each be operated by their own processor (not shown), and the interlayer video encoding apparatus 10 may operate as a whole as these processors operate organically with one another. Alternatively, the prediction mode determiner 12, the prediction block generator 14, the residual data generator 16, and the encoder 18 may be controlled under the control of an external processor (not shown) of the interlayer video encoding apparatus 10.
- The interlayer video encoding apparatus 10 may include one or more data storage units (not shown) for storing the input/output data of the prediction mode determiner 12, the prediction block generator 14, the residual data generator 16, and the encoder 18.
- the interlayer video encoding apparatus 10 may include a memory controller (not shown) that controls data input / output of the data storage unit (not shown).
- the interlayer video encoding apparatus 10 may perform a video encoding operation including transformation by operating in conjunction with an internal video encoding processor or an external video encoding processor to output a video encoding result.
- The internal video encoding processor of the interlayer video encoding apparatus 10 may be a separate processor that implements the video encoding operation, or the interlayer video encoding apparatus 10, a central processing unit, or a graphics processing unit may include a video encoding processing module that implements a basic video encoding operation.
- The interlayer video encoding apparatus 10 may classify a plurality of image sequences by layer according to a scalable video coding scheme, encode each of them, and output separate streams containing the data encoded for each layer. The interlayer video encoding apparatus 10 may encode a first layer image sequence and a second layer image sequence into different layers.
- low resolution images may be encoded as first layer images, and high resolution images may be encoded as second layer images.
- An encoding result of the first layer images may be output as a first layer stream, and an encoding result of the second layer images may be output as a second layer stream.
- a multiview video may be encoded according to a scalable video coding scheme.
- the center view images may be encoded as first layer images
- the left view images and right view images may be encoded as second layer images referring to the first layer image.
- As another example, the interlayer video encoding apparatus 10 may encode a multiview video into three or more layers, such as a first layer, a second layer, and a third layer: the center view images may be encoded as first layer images, the left view images as second layer images, and the right view images as third layer images.
- However, the configuration is not necessarily limited thereto; the layers into which the center view, left view, and right view images are encoded, and the layers they reference, may be changed.
- a scalable video coding scheme may be performed according to temporal hierarchical prediction based on temporal scalability.
- a first layer stream including encoding information generated by encoding images of a base frame rate may be output.
- Temporal levels may be classified according to frame rates, and each temporal layer may be encoded into each layer.
- the second layer stream including the encoding information of the high frame rate may be output by further encoding the high frame rate images by referring to the images of the base frame rate.
- In addition, scalable video coding may be performed on a first layer and a plurality of second layers. When there are K second layers, the first layer images and the first, second, ..., K-th second layer images may each be encoded. Accordingly, the encoding result of the first layer images is output as a first layer stream, and the encoding results of the first, second, ..., K-th second layer images are output as first, second, ..., K-th second layer streams, respectively.
- the interlayer video encoding apparatus 10 may perform inter prediction to predict a current image by referring to images of a single layer. Through inter prediction, a motion vector representing motion information between the current picture and the reference picture and a residual component between the current picture and the reference picture may be generated.
- the interlayer video encoding apparatus 10 may perform inter-layer prediction for predicting second layer images by referring to the first layer images.
- When three or more layers are allowed, interlayer prediction between a first layer image and a third layer image and interlayer prediction between a second layer image and a third layer image may be performed according to a multilayer prediction structure.
- Through interlayer prediction, a position difference (disparity) component between the current image and a reference image of another layer and a residual component between the current image and that reference image may be generated.
- the interlayer prediction structure will be described later with reference to FIG. 3.
- the interlayer video encoding apparatus 10 encodes each block of each image of the video for each layer.
- the type of block may be square or rectangular, and may be any geometric shape. It is not limited to data units of a certain size.
- the block may be a maximum coding unit, a coding unit, a prediction unit, a transformation unit, or the like among coding units having a tree structure.
- The maximum coding unit including coding units of a tree structure may also be variously referred to as a coding tree unit, a coding block tree, a block tree, a root block tree, a coding tree, a coding root, or a tree trunk.
- Video encoding and decoding methods based on coding units having a tree structure will be described later with reference to FIGS. 7 to 19.
- When the interlayer video encoding apparatus 10 encodes a multiview video, auxiliary data such as a depth image may additionally be encoded so that images of more viewpoints than those input can be generated at the decoding stage.
- the depth image is used to synthesize an image of an intermediate view, rather than being directly displayed to a user, and thus whether or not the depth image is degraded may affect the quality of the synthesized image.
- the amount of change in the depth value of the depth image is large near the boundary of the object and relatively small inside the object or in the background area. Therefore, minimizing an error occurring at a boundary of an object having a large difference in depth value may be directly connected to minimizing an error of the synthesized image. In addition, reducing the amount of data relative to the inside of an object or a background area having a small amount of change in depth may increase the coding efficiency of the depth image.
- The interlayer video encoding apparatus 10 may encode the current block by using an intra prediction mode such as the DC, planar, or angular mode.
- In addition, the interlayer video encoding apparatus 10 may encode the depth image using prediction modes such as the depth modeling mode (DMM), simplified depth coding (SDC), and chain coding mode (CCD).
- the interlayer video encoding apparatus 10 may generate a flag for each layer including information on whether the above-described prediction mode is used.
- the interlayer video encoding apparatus 10 may generate a prediction block based on a predetermined prediction mode, and generate differential data, that is, residual data between the generated prediction block and the current block to be encoded.
- All of the residual data generated using the predetermined prediction mode may not be encoded or only a part of the residual data may be encoded.
- the interlayer video encoding apparatus 10 may encode an average value of the residual data.
- For example, the interlayer video encoding apparatus 10 may calculate a DC value (hereinafter referred to as an average value) of the block to be encoded and determine an index by mapping the calculated average value to a depth information lookup table.
- the depth information lookup table represents a table in which an index and a depth value that a depth image may have are matched.
- In this case, the interlayer video encoding apparatus 10 may transmit to the decoding apparatus only the difference between the index obtained by mapping the average value of the original block to the depth information lookup table and the index obtained by mapping the average value of the prediction block.
- the difference value of the index may be encoded.
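- As a rough illustration of this index-difference signaling, the sketch below maps block averages to depth lookup table (DLT) indices and forms the index delta to be coded; the table contents and the function names are hypothetical and only indicative of the idea.

```c
#include <stdlib.h>

/* Hypothetical depth lookup table (DLT): each index maps to a depth value
 * that actually occurs in the depth image (real tables are built per
 * sequence; these entries are placeholders). */
static const int dlt[] = { 0, 16, 50, 52, 128, 200, 255 };
static const int dltSize = sizeof(dlt) / sizeof(dlt[0]);

/* Map an average depth value to the index of the closest DLT entry. */
static int depthValueToIndex(int avg)
{
    int best = 0;
    for (int i = 1; i < dltSize; i++)
        if (abs(dlt[i] - avg) < abs(dlt[best] - avg))
            best = i;
    return best;
}

/* Encoder side: only the difference between the index of the original
 * block's average and the index of the prediction block's average is
 * signalled in the bitstream. */
static int indexDeltaToSignal(int origAvg, int predAvg)
{
    return depthValueToIndex(origAvg) - depthValueToIndex(predAvg);
}
```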
- the prediction mode determiner 12 may determine a prediction mode for the current block constituting the depth image.
- For example, the prediction mode may include the DC, planar, angular, depth modeling mode (DMM), and simplified depth coding (SDC) modes.
- the DC mode is an intra prediction mode using a method of filling prediction samples of a prediction block with an average value of neighboring reference samples of the current block.
- The planar mode is an intra prediction mode in which the prediction samples predSamples[x][y] (where x and y range from 0 to nTbS − 1) are calculated from the reference samples according to Equation 1 below.
- nTbS represents the horizontal or vertical size of the prediction block.
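- Equation 1 itself does not appear in this text. For reference, the HEVC planar intra prediction, to which the equation presumably corresponds, computes each prediction sample from the reference samples p as follows (this is the standard HEVC formula, not reproduced from the patent):
- predSamples[ x ][ y ] = ( ( nTbS − 1 − x ) * p[ −1 ][ y ] + ( x + 1 ) * p[ nTbS ][ −1 ] + ( nTbS − 1 − y ) * p[ x ][ −1 ] + ( y + 1 ) * p[ −1 ][ nTbS ] + nTbS ) >> ( Log2( nTbS ) + 1 )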
- The angular mode refers to an intra prediction mode in which a prediction sample is determined from the reference samples in consideration of the directionality of modes 2 through 34 among the intra prediction modes.
- DMM prediction mode is a depth modeling mode technique for accurately and efficiently expressing object boundaries of depth images.
- the DMM prediction mode is a mode for performing prediction by dividing the current block into at least two regions according to a pattern.
- In the DMM prediction mode, the current block is divided into regions using a wedgelet or a contour, and an average value may be calculated for each region.
- the DMM prediction mode may include a type of DMM mode-1 (also called DMM_WFULL mode or INTRA_DEP_WEDGE) and DMM mode-4 (also called DMM_CPREDTEX mode or INTRA_DEP_CONTOUR).
- DMM mode-1 refers to a wedgelet mode in which the interlayer video encoding apparatus 10 applies various boundary lines to the current block to divide it into two regions and then selects the most suitable boundary line for the division.
- the wedgelet refers to an oblique line and the wedgelet partition refers to two or more partitions partitioned from the current prediction coding block with respect to the oblique line.
- the DMM mode-4 is a mode for dividing the prediction block into at least two regions according to the texture pattern of the current block.
- the DMM mode-4 refers to a mode in which a contour partition can be determined using a block of a texture image.
- Here, a contour means a curve encompassing an arbitrary shape, and a contour partition means two or more partitions partitioned from the current prediction coding block with a contour line as the boundary.
- DMM mode-1 and DMM mode-4 will be described in detail later with reference to FIG. 4B.
- The SDC prediction mode is a mode in which the residual data is encoded in DC form, or not encoded at all, exploiting the fact that the depth values inside an object and in the background region change little.
- Here, the DC component of the residual data may be determined as an average of all or some of the pixel values of the residual block.
- the SDC prediction mode may include an SDC intra prediction mode and an SDC inter prediction mode.
- The SDC intra prediction mode may include the DC, DMM mode-1, DMM mode-4, and planar prediction modes. The interlayer video encoding apparatus 10 may predict and encode the current block using the most suitable representative mode among the representative modes included in the SDC intra prediction mode.
- The SDC inter prediction mode may be configured using a predetermined prediction mode and may be configured differently according to the partition mode. For example, the SDC inter prediction mode may be allowed only when the partition mode is 2Nx2N, and may not be allowed when the partition mode is 2NxN, Nx2N, or NxN.
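- A minimal sketch of such a partition-mode restriction is shown below; the enum names follow common HEVC reference-software conventions and are assumptions, not wording from this description.

```c
typedef enum { PART_2Nx2N, PART_2NxN, PART_Nx2N, PART_NxN } PartMode;

/* In this sketch the inter SDC mode is permitted only for the 2Nx2N
 * partition mode; other partition modes use normal residual coding. */
static int sdcInterAllowed(PartMode partMode)
{
    return partMode == PART_2Nx2N;
}
```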
- the prediction mode determiner 12 may generate a flag including information about a current depth image and a prediction mode used by the current block constituting the current depth image.
- the flag including the prediction mode information will be described later with reference to FIGS. 4 to 6.
- the prediction block generator 14 may generate a prediction block of the current block based on the determined prediction mode.
- the residual data generator 16 may generate residual data that is a difference between the current block and the prediction block. According to an exemplary embodiment, the residual data generator 16 may not transmit the residual data to the encoder 18 or may average all or part of the residual data and transmit the average data to the encoder 18.
- For example, the residual data generator 16 may calculate an average value using the upper left, upper right, lower left, and lower right pixel values of the residual block, which is the difference between the current block and the prediction block, and may transmit the calculated average value to the encoder 18.
- Alternatively, the residual data generator 16 may calculate the average value using all of the pixel values in the residual block, or may perform a weighted sum, instead of using only the upper left, upper right, lower left, and lower right pixel values.
- Alternatively, an average value for the residual block may be predicted using at least one pixel value per pixel position (for example, using four upper left pixel values and four upper right pixel values).
- The residual data generator 16 may also obtain the average value of the residual block differently according to the prediction mode. For example, when the prediction block has been predicted in the DC or planar mode, the residual data generator 16 may calculate the average value of the residual block using the average of the upper left, upper right, lower left, and lower right pixel values of the residual block and transmit it to the encoder 18.
- When the prediction block has been predicted in the DMM prediction mode, the residual data generator 16 may predict an average value for each divided region of the residual block by using the upper left, upper right, lower left, and lower right pixel values of each divided region.
- That is, the residual data generator 16 may predict the average value of the residual block by using pixel values at different positions according to the prediction mode of the current block.
- the residual data generator 16 may not transmit the residual data to the encoder 18 when the prediction block is predicted in the horizontal or vertical prediction mode among the angular modes.
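- The corner-sample averaging described above could look like the following sketch; the linear-array block layout with a stride and the rounding are illustrative assumptions.

```c
/* Average of the four corner samples of an nTbS x nTbS residual block,
 * usable when the block was predicted in the DC or planar mode. */
static int residualDcFromCorners(const int *resid, int stride, int nTbS)
{
    int topLeft     = resid[0];
    int topRight    = resid[nTbS - 1];
    int bottomLeft  = resid[(nTbS - 1) * stride];
    int bottomRight = resid[(nTbS - 1) * stride + (nTbS - 1)];
    return (topLeft + topRight + bottomLeft + bottomRight + 2) >> 2;
}
```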
- the interlayer video encoding apparatus 10 may generate a bitstream including information about a prediction mode and residual data in order to encode a depth image.
- FIG. 2A is a block diagram of an interlayer video decoding apparatus 20 according to an embodiment.
- the interlayer video decoding apparatus 20 may include a parser 22, a prediction block generator 24, a residual data generator 26, and a decoder 28.
- The interlayer video decoding apparatus 20 may include a central processor (not shown) that collectively controls the parser 22, the prediction block generator 24, the residual data generator 26, and the decoder 28.
- Alternatively, the parser 22, the prediction block generator 24, the residual data generator 26, and the decoder 28 may each be operated by their own processor (not shown), and the interlayer video decoding apparatus 20 may operate as a whole as these processors operate organically with one another.
- Alternatively, under the control of an external processor (not shown) of the interlayer video decoding apparatus 20, the parser 22, the prediction block generator 24, the residual data generator 26, and the decoder 28 may be controlled.
- The interlayer video decoding apparatus 20 may include one or more data storage units (not shown) for storing the input/output data of the parser 22, the prediction block generator 24, the residual data generator 26, and the decoder 28.
- the interlayer video decoding apparatus 20 may include a memory controller (not shown) that controls data input / output of the data storage unit (not shown).
- The interlayer video decoding apparatus 20 may perform a video decoding operation including inverse transformation by operating in conjunction with an internal video decoding processor or an external video decoding processor to reconstruct video through video decoding.
- The internal video decoding processor of the interlayer video decoding apparatus 20 may be a separate processor, or the interlayer video decoding apparatus 20, a central processing unit, or a graphics processing unit may include a video decoding processing module that implements a basic video decoding operation.
- the interlayer video decoding apparatus 20 may receive bitstreams for each layer according to a scalable encoding method.
- the number of layers of the bitstreams received by the interlayer video decoding apparatus 20 is not limited.
- the interlayer video decoding apparatus 20 may receive a stream in which image sequences having different resolutions are encoded in different layers.
- the low resolution image sequence may be reconstructed by decoding the first layer stream, and the high resolution image sequence may be reconstructed by decoding the second layer stream.
- a multiview video may be decoded according to a scalable video coding scheme.
- left view images may be reconstructed by decoding the first layer stream.
- Right-view images may be reconstructed by further decoding the second layer stream in addition to the first layer stream.
- the center view images may be reconstructed by decoding the first layer stream.
- Left view images may be reconstructed by further decoding a second layer stream in addition to the first layer stream.
- Right-view images may be reconstructed by further decoding the third layer stream in addition to the first layer stream.
- a scalable video coding scheme based on temporal scalability may be performed. Images of the base frame rate may be reconstructed by decoding the first layer stream. The high frame rate images may be reconstructed by further decoding the second layer stream in addition to the first layer stream.
- first layer images may be reconstructed from the first layer stream, and second layer images may be further reconstructed by further decoding the second layer stream with reference to the first layer reconstructed images.
- the K-th layer images may be further reconstructed by further decoding the K-th layer stream with reference to the second layer reconstruction image.
- The interlayer video decoding apparatus 20 obtains encoded data of the first layer images and the second layer images from the first layer stream and the second layer stream, and may further obtain the motion vectors generated by inter prediction and the prediction information generated by interlayer prediction.
- the interlayer video decoding apparatus 20 may decode inter-predicted data for each layer and may decode inter-layer predicted data among a plurality of layers. Reconstruction through motion compensation and inter-layer decoding may be performed based on a coding unit or a prediction unit.
- images may be reconstructed by performing motion compensation for the current image with reference to reconstructed images predicted through inter prediction of the same layer.
- Motion compensation refers to an operation of reconstructing a reconstructed image of the current image by synthesizing the reference image determined using the motion vector of the current image and the residual component of the current image.
- interlayer video decoding apparatus 20 may perform interlayer decoding with reference to prediction information of the first layer images in order to decode a second layer image predicted through interlayer prediction.
- Inter-layer decoding refers to an operation of reconstructing prediction information of the current image using prediction information of a reference block of another layer to determine prediction information of the current image.
- the interlayer video decoding apparatus 20 may perform interlayer decoding for reconstructing third layer images predicted with reference to the second layer images.
- the interlayer prediction structure will be described in detail later with reference to FIG. 3.
- the interlayer video decoding apparatus 20 decodes each block of each image of the video.
- the block may be a maximum coding unit, a coding unit, a prediction unit, a transformation unit, or the like among coding units having a tree structure.
- a video encoding and decoding method based on coding units having a tree structure will be described later with reference to FIGS. 7 to 20.
- When the interlayer video decoding apparatus 20 according to an exemplary embodiment reconstructs a multiview video, it may generate images of more viewpoints than those input by additionally decoding auxiliary data such as a depth image.
- the depth image is used to synthesize an image of an intermediate view, rather than being directly displayed to a user, and thus whether or not the depth image is degraded may affect the quality of the synthesized image.
- the amount of change in the depth value of the depth image is large near the boundary of the object and relatively small inside the object. Therefore, minimizing an error occurring at a boundary of an object having a large difference in depth value may be directly connected to minimizing an error of the synthesized image. Also, for the inside of an object having a small change in depth value, reducing the amount of data can increase the decoding efficiency of the depth image.
- The interlayer video decoding apparatus 20 may decode the depth image by using an intra prediction mode such as the DC, planar, or angular mode.
- In addition, the interlayer video decoding apparatus 20 may decode the depth image using prediction modes such as the depth modeling mode (DMM), simplified depth coding (SDC), and chain coding mode (CCD).
- The interlayer video decoding apparatus 20 may obtain a flag including information about the prediction mode used for the depth image and for the current block (that is, a decoding unit) constituting the depth image.
- For example, the interlayer video decoding apparatus 20 may receive a flag including the prediction mode information from a Video Parameter Set (VPS) NAL unit containing parameter information commonly used to decode the encoded data of the base layer and the enhancement layers.
- the interlayer video decoding apparatus 20 may receive a flag including prediction mode information from a Sequence Parameter Set (SPS) NAL unit or a Picture Parameter Set (PPS) NAL unit.
- PPS is a parameter set for at least one picture.
- the PPS is a parameter set including parameter information commonly used to encode image coded data of at least one picture.
- the PPS NAL unit is a NAL unit containing a PPS.
- SPS is a set of parameters for a sequence.
- a sequence is a collection of at least one picture.
- For example, the SPS may include parameter information commonly used to encode pictures that are encoded with reference to at least one PPS.
- the interlayer video decoding apparatus 20 may generate a prediction block for the current block using intra prediction modes such as DC, planar, and angular to decode the depth image.
- In addition, the interlayer video decoding apparatus 20 may generate the prediction block for the current block by using the depth modeling mode (DMM), simplified depth coding (SDC), or chain coding mode (CCD).
- the interlayer video decoding apparatus 20 may receive difference data, that is, residual data, between the generated prediction block and the current block to be decoded from the bitstream.
- In addition, the interlayer video decoding apparatus 20 may determine an index by calculating a DC value (hereinafter referred to as an average value) of the prediction block and mapping the calculated average value to a depth information lookup table.
- the interlayer video decoding apparatus 20 may receive an index difference value between a reconstruction index corresponding to the average value for the reconstruction block and the prediction index corresponding to the average value for the prediction block through the bitstream.
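- A minimal decoder-side sketch of how such an index difference could be applied is shown below; the lookup-table handling, clipping, and names are assumptions for illustration.

```c
/* The reconstructed block average is obtained by adding the received index
 * difference to the index derived from the prediction block's average and
 * mapping the result back through the depth lookup table (DLT). */
static int reconstructAverage(const int *dlt, int dltSize,
                              int predIdx, int signalledIdxDelta)
{
    int reconIdx = predIdx + signalledIdxDelta;
    if (reconIdx < 0) reconIdx = 0;                  /* clip to a valid index */
    if (reconIdx > dltSize - 1) reconIdx = dltSize - 1;
    return dlt[reconIdx];
}
```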
- the parser 22 may obtain prediction mode information about a current block constituting the depth image from the bitstream.
- the prediction mode information is information used for reconstructing the depth image and the current block and may include whether to use DC, planar, angular, depth modeling mode (DMM), and simplified depth coding (SDC) mode.
- the parser 22 may parse a flag including prediction mode information.
- For example, the parser 22 may receive from the bitstream flags indicating whether the depth image allows blocks constituting the depth image to be predicted using DMM mode-1 or DMM mode-4, and a flag indicating whether the DMM prediction mode is allowed for the current block to be decoded.
- The interlayer video decoding apparatus 20 may reduce complexity by additionally receiving from the bitstream a flag indicating the type of DMM prediction mode for the current block only when a predetermined condition is satisfied, the condition being determined using the flag indicating whether the DMM prediction mode is allowed for the current block and the flags indicating whether prediction of blocks constituting the depth image using DMM mode-1 or DMM mode-4 is allowed.
- the flag including the prediction mode information will be described later in detail with reference to FIGS. 4A to 6.
- the prediction block generator 24 may generate a prediction block of the current block based on the obtained prediction mode information.
- The residual data generator 26 may obtain the residual data from the bitstream. However, when the prediction block has been generated in a predetermined prediction mode, the residual data may not be decoded.
- the decoder 28 may decode the current block and the depth image by using the prediction block.
- Hereinafter, an interlayer prediction structure that may be performed by the interlayer video encoding apparatus 10 according to an embodiment will be described with reference to FIG. 3.
- FIG. 3 illustrates an interlayer prediction structure, according to an embodiment.
- The interlayer video encoding apparatus 10 may predict and encode base view images, left view images, and right view images according to the reproduction order 30 of the multiview video prediction structure illustrated in FIG. 3.
- In the reproduction order 30, images of the same view are arranged in the horizontal direction. Therefore, the left view images labeled 'Left' are arranged in a row in the horizontal direction, the base view images labeled 'Center' are arranged in a row in the horizontal direction, and the right view images labeled 'Right' are arranged in a row in the horizontal direction.
- the base view images may be center view images, in contrast to left / right view images.
- images having the same POC order are arranged in the vertical direction.
- the POC order of an image indicates a reproduction order of images constituting the video.
- 'POC X' displayed in the multiview video prediction structure 30 indicates the relative reproduction order of the images located in the corresponding column; the smaller the value of X, the earlier the reproduction order, and the larger the value, the later the reproduction order.
- the left view images labeled 'Left' are arranged in the horizontal direction according to the POC order (playing order), and the base view images labeled 'Center' These images are arranged in the horizontal direction according to the POC order (playing order), and right-view images marked as 'Right' are arranged in the horizontal direction according to the POC order (playing order).
- both the left view image and the right view image located in the same column as the base view image are images having different viewpoints but having the same POC order (playing order).
- Each GOP includes images between successive anchor pictures and one anchor picture.
- An anchor picture is a random access point.
- When a video is reproduced from an arbitrary position, that is, when a playback position is randomly selected from among the images arranged in the reproduction order (POC order) of the video, the anchor picture whose POC order is closest to the playback position is reproduced.
- The base view images include base view anchor pictures 31, 32, 33, 34, and 35, the left view images include left view anchor pictures 131, 132, 133, 134, and 135, and the right view images include right view anchor pictures 231, 232, 233, 234, and 235.
- Multi-view images may be played back in GOP order and predicted (restored).
- For example, the images included in GOP 0 may be reproduced first, and then the images included in GOP 1 may be reproduced. That is, the images included in each GOP may be reproduced in the order of GOP 0, GOP 1, GOP 2, and GOP 3.
- Also, according to the coding order, the images included in GOP 0 may be predicted (reconstructed) first, and then the images included in GOP 1 may be predicted (reconstructed). That is, the images included in each GOP may be predicted (reconstructed) in the order of GOP 0, GOP 1, GOP 2, and GOP 3.
- According to the reproduction order 30 of the multiview video prediction structure, both inter-view prediction (interlayer prediction) and inter prediction are performed on the images. In the prediction structure, an image from which an arrow starts is a reference image, and an image at which an arrow ends is an image predicted by using the reference image.
- the prediction result of the base view images may be encoded and output in the form of a base view image stream, and the prediction result of the additional view images may be encoded and output in the form of a layer bitstream.
- the prediction encoding result of the left view images may be output as the first layer bitstream, and the prediction encoding result of the right view images may be output as the second layer bitstream.
- B-picture type images are predicted with reference to an I-picture type anchor picture that precedes them in POC order and an I-picture type anchor picture that follows them. b-picture type images are predicted with reference to an I-picture type anchor picture that precedes them in POC order and a B-picture type image that follows them, or with reference to a B-picture type image that precedes them in POC order and an I-picture type anchor picture that follows them.
- For the left view images and the right view images, inter-view prediction (interlayer prediction) referring to images of a different view and inter prediction referring to images of the same view are performed, respectively.
- For the left view anchor pictures 131, 132, 133, 134, and 135, inter-view prediction (interlayer prediction) may be performed with reference to the base view anchor pictures 31, 32, 33, 34, and 35 having the same POC order, respectively.
- For the right view anchor pictures 231, 232, 233, 234, and 235, inter-view prediction may be performed with reference to the base view images 31, 32, 33, 34, and 35 or the left view anchor pictures 131, 132, 133, 134, and 135 having the same POC order.
- For the remaining images other than the anchor pictures 131, 132, 133, 134, 135, 231, 232, 233, 234, and 235 among the left view images and the right view images, inter-view prediction (interlayer prediction) referring to images of another view having the same POC may also be performed.
- the remaining images other than the anchor pictures 131, 132, 133, 134, 135, 231, 232, 233, 234, and 235 among the left view images and the right view images are predicted with reference to the same view images.
- left view images and the right view images may not be predicted with reference to the anchor picture having the playback order that precedes the additional view images of the same view. That is, for inter prediction of the current left view image, left view images other than a left view anchor picture having a playback order preceding the current left view image may be referenced. Similarly, for inter prediction of a current right view point image, right view images except for a right view anchor picture whose reproduction order precedes the current right view point image may be referred to.
- However, for inter prediction of the current left view image, a left view image that belongs to a previous GOP preceding the current GOP of the current left view image is not referenced; prediction is performed with reference only to a left view image that belongs to the current GOP and is reconstructed before the current left view image. The same applies to the right view images.
- The interlayer video decoding apparatus 20 may reconstruct the base view images, the left view images, and the right view images according to the reproduction order 30 of the multiview video prediction structure illustrated in FIG. 3.
- the left view images may be reconstructed through inter-view disparity compensation referring to the base view images and inter motion compensation referring to the left view images.
- the right view images may be reconstructed through inter-view disparity compensation referring to the base view images and the left view images and inter motion compensation referring to the right view images.
- Reference images must be reconstructed first for disparity compensation and motion compensation of left view images and right view images.
- the left view images may be reconstructed through inter motion compensation referring to the reconstructed left view reference image.
- the right view images may be reconstructed through inter motion compensation referring to the reconstructed right view reference image.
- In this case, it is preferable that a left view image belonging to a previous GOP preceding the current GOP of the current left view image not be referenced, and that only a left view image that belongs to the current GOP and is reconstructed before the current left view image be referenced. The same applies to the right view images.
- the interlayer video decoding apparatus 20 may receive a flag indicating whether the depth image allows inter SDC in the SDC mode. In addition, the interlayer video decoding apparatus 20 may receive a flag indicating whether the depth image allows DMM mode-4 (DMM_CPREDTEX) in the DMM prediction mode. In addition, the interlayer video decoding apparatus 20 may receive a flag indicating whether the depth image allows the DMM mode-1 (DMM_WFULL) mode or the intra SDC mode among the DMM prediction modes.
- intra_contour_enabled_flag [d] 410 of FIG. 4A shows a flag intra_contour_enabled_flag [d] indicating whether to use DMM mode-4 in the DMM prediction mode. If intra_contour_enabled_flag [d] 410 is a value of 1, the current depth image may allow prediction of blocks constituting the depth image using a prediction mode of DMM mode-4. Therefore, the current depth image may be generated and decoded with a prediction block using DMM mode-4. On the contrary, when intra_contour_enabled_flag [d] 410 has a value of 0, the current depth image cannot be decoded using DMM mode-4. If intra_contour_enabled_flag [d] 410 is not defined, the value of intra_contour_enabled_flag [d] 410 may be estimated as zero.
- Reference numeral 420 of FIG. 4A shows a flag intra_dc_only_wedge_enabled_flag[d] indicating whether to allow the use of DMM mode-1 among the DMM prediction modes and whether to allow the intra SDC mode. If intra_dc_only_wedge_enabled_flag[d] 420 has a value of 1, the current depth image may allow prediction of blocks constituting the current depth image using at least one of DMM mode-1 and the intra SDC mode. Therefore, for the current depth image, a prediction block may be generated using at least one of DMM mode-1 and the intra SDC mode and used for decoding.
- On the contrary, if intra_dc_only_wedge_enabled_flag[d] 420 has a value of 0, the current depth image cannot be decoded using DMM mode-1 or the intra SDC mode. If intra_dc_only_wedge_enabled_flag[d] 420 is not defined, the value of intra_dc_only_wedge_enabled_flag[d] 420 may be estimated as zero.
- inter_dc_only_enabled_flag [d] 430 of FIG. 4A shows a flag inter_dc_only_enabled_flag [d] indicating whether inter SDC mode is allowed in the DMM prediction mode.
- If inter_dc_only_enabled_flag[d] 430 has a value of 1, the current depth image may allow prediction of blocks constituting the current depth image using the inter SDC mode. Therefore, for the current depth image, a prediction block may be generated using the inter SDC mode and used for decoding.
- inter_dc_only_enabled_flag [d] 430 has a value of 0, the current depth image cannot be decoded using the inter SDC mode. If inter_dc_only_enabled_flag [d] 430 is not defined, the value of inter_dc_only_enabled_flag [d] 430 may be estimated to be zero.
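- A sketch of how a decoder might read these three per-depth-layer flags and apply the inference to zero when they are absent is shown below; the bit-reader and structure names are hypothetical, and a real sps_3d_extension carries additional fields.

```c
/* Minimal bit reader over a byte buffer (illustration only). */
typedef struct { const unsigned char *buf; int bitPos; } BitReader;

static int readBit(BitReader *br)
{
    int bit = (br->buf[br->bitPos >> 3] >> (7 - (br->bitPos & 7))) & 1;
    br->bitPos++;
    return bit;
}

typedef struct {
    int intraContourEnabledFlag;        /* DMM mode-4 (contour) allowed   */
    int intraDcOnlyWedgeEnabledFlag;    /* DMM mode-1 / intra SDC allowed */
    int interDcOnlyEnabledFlag;         /* inter SDC allowed              */
} DepthCodingFlags;

/* Read the three per-depth-layer flags when present; when a flag is not
 * signalled in the bitstream its value is estimated (inferred) to be 0. */
static DepthCodingFlags readDepthCodingFlags(BitReader *br, int present)
{
    DepthCodingFlags f = { 0, 0, 0 };   /* inferred values when absent */
    if (present) {
        f.intraContourEnabledFlag     = readBit(br);
        f.intraDcOnlyWedgeEnabledFlag = readBit(br);
        f.interDcOnlyEnabledFlag      = readBit(br);
    }
    return f;
}
```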
- the depth modeling mode (DMM) prediction mode is a depth modeling mode technique and is a method for accurately and efficiently expressing an object boundary of a depth image.
- the DMM prediction mode is a mode for performing prediction by dividing the current block into at least two regions according to a pattern.
- the DMM prediction mode divides the current block into two or more regions using Wedgelet and Contour. The average value can be calculated for each area.
- the DMM prediction mode may include a type of DMM mode-1 (also called DMM_WFULL mode or INTRA_DEP_WEDGE) and DMM mode-4 (also called DMM_CPREDTEX mode or INTRA_DEP_CONTOUR).
- For example, the interlayer video decoding apparatus 20 may generate the prediction block for the current block and reconstruct the current block by using DMM mode-1, the wedgelet mode in which several boundary lines are applied to the current block, the block is divided into two regions, and the division based on the most suitable boundary line is used, or by using DMM mode-4.
- the wedgelet refers to an oblique line
- the wedgelet partition refers to two or more partitions divided from the current block on the oblique line boundary.
- Hereinafter, an example in which the interlayer video decoding apparatus 20 reconstructs the current block by using the DMM prediction mode is described; the corresponding encoding may be performed by the interlayer video encoding apparatus 10.
- the interlayer video decoding apparatus 20 may divide the current block 440 into the P1 442 and the P2 444 by using the Wedgelet 443. In addition, the interlayer video decoding apparatus 20 may divide the current block 460 into P1 462 and P2 464 using the contour 463.
- DMM mode-1 is a method in which information about the wedgelet 443 is transmitted directly in the bitstream; it is a prediction mode in which the positions of the start point and end point of the wedgelet 443 are expressed using a predefined table and transmitted.
- the interlayer video decoding apparatus 20 may divide the current block 440 into P1 442 and P2 444 by obtaining positions of a start point and an end point of the Wedgelet 443 from the bitstream.
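- For illustration, a minimal sketch (not the normative DMM mode-1 pattern table) of how a Wedgelet boundary defined by a start point and an end point could split a block into the two partitions P1 and P2:

```python
import numpy as np

def wedgelet_partition(block_size, start, end):
    """Split a block into two regions along the oblique line from start to end.

    start, end: (x, y) points on the block boundary. Returns a mask in which
    0 marks samples of P1 and 1 marks samples of P2.
    """
    (x0, y0), (x1, y1) = start, end
    ys, xs = np.mgrid[0:block_size, 0:block_size]
    # The sign of the cross product tells on which side of the line a sample lies.
    side = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)
    return (side > 0).astype(np.uint8)

mask = wedgelet_partition(8, start=(0, 2), end=(7, 6))   # P1: mask == 0, P2: mask == 1
```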
- the DMM mode-4 is a prediction mode that refers to a co-located texture luma block (CTLB), which is a texture image block 480 at the same position as the current block 460, to obtain information about the contour 463.
- the interlayer video decoding apparatus 20 calculates a luminance average of the texture image block 480, splits the texture image block 480 using the calculated luminance average as a threshold value, and then applies the same split information to the current block 460.
- the interlayer video decoding apparatus 20 may divide the current block 460 into P1 462 and P2 464 by using split information of the corresponding texture image block 480.
- the interlayer video decoding apparatus 20 may predict using one DC value for each region divided into a wedgelet or a contour. For example, all pixel values belonging to P1 442 may be predicted as DC values of P1 442, and all pixel values belonging to P2 444 may be predicted as DC values of P2 444.
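- The following is a minimal sketch (hypothetical helper names, simplified DC values) of the two steps just described: deriving a contour partition by thresholding the co-located texture block with its luminance average, and predicting every sample of each region with that region's single DC value:

```python
import numpy as np

def contour_partition_from_texture(texture_block):
    """DMM mode-4 style split: threshold the co-located texture luma block by its mean."""
    threshold = texture_block.mean()
    return (texture_block > threshold).astype(np.uint8)   # 0 -> P1, 1 -> P2

def dc_prediction(mask, dc_p1, dc_p2):
    """Predict each region with a single DC value per region."""
    return np.where(mask == 0, dc_p1, dc_p2)

texture = np.random.randint(0, 256, (8, 8))
mask = contour_partition_from_texture(texture)
pred_block = dc_prediction(mask, dc_p1=90, dc_p2=160)    # example DC values
```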
- FIG. 5 illustrates a portion of coding_unit syntax according to an embodiment.
- FIG. 5 shows a conditional statement that checks whether the DMM prediction mode is allowed for the current depth image before performing intra_mode_ext, which obtains the parameters of the DMM prediction mode. That is, if any one of the flags IntraContourEnabledFlag and IntraDCOnlyWedgeEnabledFlag is 1, intra_mode_ext for acquiring the parameters for performing prediction on the current block using the DMM prediction mode may be performed. IntraDCOnlyWedgeEnabledFlag and IntraContourEnabledFlag will be described later with reference to [Equation 2].
- no_dim_flag [x0 + i] [y0 + j] 570 is a flag indicating whether the DMM prediction mode is allowed for the current block. If no_dim_flag [x0 + i] [y0 + j] 570 has a value of 1, the DMM prediction mode is not allowed in the current block corresponding to no_dim_flag [x0 + i] [y0 + j]; conversely, if no_dim_flag [x0 + i] [y0 + j] 570 has a value of 0, the DMM prediction mode is allowed in the current block corresponding to no_dim_flag [x0 + i] [y0 + j].
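- A minimal sketch of the parse flow implied by FIG. 5 (hypothetical function and argument names; the actual parsing is defined by the coding_unit and intra_mode_ext syntax tables):

```python
def parse_depth_intra(cu, IntraContourEnabledFlag, IntraDCOnlyWedgeEnabledFlag, read_flag):
    """intra_mode_ext is entered only when at least one DMM-related tool is enabled
    for the current depth image; no_dim_flag then signals per block whether the
    DMM prediction mode is used (1: not allowed, 0: allowed)."""
    if IntraContourEnabledFlag or IntraDCOnlyWedgeEnabledFlag:
        cu["no_dim_flag"] = read_flag()
        if cu["no_dim_flag"] == 0:
            pass  # the remaining DMM parameters of intra_mode_ext would be parsed here
    return cu

cu = parse_depth_intra({}, IntraContourEnabledFlag=1, IntraDCOnlyWedgeEnabledFlag=0,
                       read_flag=lambda: 0)   # -> {'no_dim_flag': 0}
```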
- FIG. 6 illustrates an intra_mode_ext syntax for receiving a DMM parameter.
- the interlayer video decoding apparatus 20 may additionally receive a flag depth_intra_mode_idx_flag [x0] [y0] 650 indicating a type of the DMM prediction mode used in the current block.
- if depth_intra_mode_idx_flag [x0] [y0] 650 has a value of 0, this indicates that the current block can be predicted using DMM mode-1, and if it has a value of 1, this indicates that the current block can be predicted using DMM mode-4. However, the present embodiment is not limited thereto.
- Condition 630 represents a predetermined condition for receiving depth_intra_mode_idx_flag [x0] [y0] 650. For example, depth_intra_mode_idx_flag [x0] [y0] 650 may be received only when no_dim_flag 570 is 0, IntraDCOnlyWedgeEnabledFlag is 1, and IntraContourEnabledFlag is 1.
- IntraDCOnlyWedgeEnabledFlag and IntraContourEnabledFlag may indicate whether to allow the prediction of the current block using DMM mode-1 and DMM mode-4 for the current block, respectively.
- IntraDCOnlyWedgeEnabledFlag and IntraContourEnabledFlag may be defined as shown in Equation 2 below.
- IntraContourEnabledFlag = intra_contour_enabled_flag[ DepthFlag ] && in_comp_pred_flag
- IntraDCOnlyWedgeEnabledFlag = intra_dc_only_wedge_enabled_flag[ DepthFlag ]
- IntraContourEnabledFlag is determined by the above-described intra_contour_enabled_flag [d] and the flag in_comp_pred_flag.
- IntraDCOnlyWedgeEnabledFlag is the same as intra_dc_only_wedge_enabled_flag [d] described above.
- If IntraDCOnlyWedgeEnabledFlag has a value of 1, this indicates that the depth image to which the current block belongs may be predicted using DMM mode-1 or the intra SDC mode. If it has a value of 0, this may indicate that prediction using DMM mode-1 and the intra SDC mode is not allowed for the depth image to which the current block belongs.
- If IntraContourEnabledFlag has a value of 1, this indicates that the depth image to which the current block belongs may be predicted using DMM mode-4. If it has a value of 0, this may indicate that prediction using DMM mode-4 is not allowed for the depth image to which the current block belongs.
- Accordingly, when only one type of the DMM prediction mode is allowed, the complexity of the apparatus can be reduced by not receiving the flag indicating the type information of the DMM prediction mode for the current block, as sketched below.
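- A minimal sketch of this behaviour (hypothetical function name; the flag values mirror [Equation 2] and Condition 630):

```python
def intra_mode_ext(read_flag, no_dim_flag,
                   intra_contour_enabled_flag, in_comp_pred_flag,
                   intra_dc_only_wedge_enabled_flag):
    """Return which DMM type the current block uses, receiving the type flag only
    when both DMM types are allowed, and inferring it otherwise."""
    IntraContourEnabledFlag = intra_contour_enabled_flag and in_comp_pred_flag
    IntraDCOnlyWedgeEnabledFlag = intra_dc_only_wedge_enabled_flag

    if not (IntraContourEnabledFlag or IntraDCOnlyWedgeEnabledFlag):
        return None                    # intra_mode_ext is not invoked at all
    if no_dim_flag:
        return None                    # DMM prediction is not used for this block
    if IntraDCOnlyWedgeEnabledFlag and IntraContourEnabledFlag:
        # Condition 630: both DMM types are possible, so the type flag is received.
        return "DMM mode-4" if read_flag() else "DMM mode-1"
    # Only one DMM type is enabled, so the type flag is not received but inferred.
    return "DMM mode-1" if IntraDCOnlyWedgeEnabledFlag else "DMM mode-4"

mode = intra_mode_ext(read_flag=lambda: 1, no_dim_flag=0,
                      intra_contour_enabled_flag=1, in_comp_pred_flag=1,
                      intra_dc_only_wedge_enabled_flag=1)   # -> "DMM mode-4"
```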
- the interlayer video decoding apparatus 20 may receive, from the interlayer video encoding apparatus 10, a bitstream encoded by CABAC (Context-based Adaptive Binary Arithmetic Coding) and including no_dim_flag 570, and may decode the received bitstream. In this case, the flag no_dim_flag 570 may be transmitted using an independent context model without referring to neighboring block information.
- For convenience of description, FIGS. 4 to 6 describe in detail only the operations performed by the interlayer video decoding apparatus 20, and the operations in the interlayer video encoding apparatus 10 are omitted. However, a person skilled in the art to which the present embodiment belongs may readily understand that corresponding operations may be performed in the interlayer video encoding apparatus 10.
- As described above, blocks into which video data is divided are divided into coding units having a tree structure, and coding units, prediction units, and transformation units are sometimes used for inter-layer prediction or inter prediction.
- a video encoding method and apparatus therefor, a video decoding method, and an apparatus based on coding units and transformation units of a tree structure according to an embodiment will be described with reference to FIGS. 7 to 19.
- FIG. 7 is a block diagram of a video encoding apparatus 100 based on coding units having a tree structure, according to an embodiment.
- the video encoding apparatus 100 is an embodiment of the interlayer video encoding apparatus 10 described above with reference to FIG. 1A. Therefore, even if omitted below, the above description of the interlayer video encoding apparatus 10 may be applied to the video encoding apparatus 100.
- the video encoding apparatus 100 including video prediction based on coding units having a tree structure may include a maximum coding unit splitter 110, a coding unit determiner 120, and an outputter 130.
- the video encoding apparatus 100 that includes video prediction based on coding units having a tree structure is abbreviated as “video encoding apparatus 100”.
- the maximum coding unit splitter 110 may partition the current picture based on the maximum coding unit that is a coding unit of the maximum size for the current picture of the image. If the current picture is larger than the maximum coding unit, image data of the current picture may be split into at least one maximum coding unit.
- the maximum coding unit may be a data unit having a size of 32x32, 64x64, 128x128, 256x256, or the like, and may be a square data unit whose horizontal and vertical sizes are each a power of 2.
- the image data may be output to the coding unit determiner 120 for at least one maximum coding unit.
- the coding unit according to an embodiment may be characterized by a maximum size and depth.
- the depth indicates the number of times the coding unit is spatially divided from the maximum coding unit, and as the depth increases, the coding unit for each depth may be split from the maximum coding unit to the minimum coding unit.
- the depth of the largest coding unit is the highest depth and the minimum coding unit may be defined as the lowest coding unit.
- Since the size of the coding unit according to depths decreases from the maximum coding unit as the depth increases, a coding unit of a higher depth may include coding units of a plurality of lower depths.
- the image data of the current picture may be divided into maximum coding units according to the maximum size of the coding unit, and each maximum coding unit may include coding units divided by depths. Since the maximum coding unit is divided according to depths, image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.
- the maximum depth and the maximum size of the coding unit that limit the total number of times of hierarchically dividing the height and the width of the maximum coding unit may be preset.
- the coding unit determiner 120 encodes at least one divided region obtained by dividing the region of the largest coding unit for each depth, and determines a depth at which the final encoding result is output for each of the at least one divided region. That is, the coding unit determiner 120 encodes the image data in coding units according to depths for each maximum coding unit of the current picture, and selects a depth at which the smallest coding error occurs to determine the coding depth. The determined coded depth and the image data for each maximum coding unit are output to the outputter 130.
- Image data in the largest coding unit is encoded based on coding units according to depths according to at least one depth less than or equal to the maximum depth, and encoding results based on the coding units for each depth are compared. As a result of comparing the encoding error of the coding units according to depths, a depth having the smallest encoding error may be selected. At least one coding depth may be determined for each maximum coding unit.
- the coding unit is divided hierarchically and the number of coding units increases.
- a coding error of each data is measured and it is determined whether to divide into lower depths. Therefore, even in the data included in one largest coding unit, since the encoding error for each depth is different according to the position, the coding depth may be differently determined according to the position. Accordingly, one or more coding depths may be set for one maximum coding unit, and data of the maximum coding unit may be partitioned according to coding units of one or more coding depths.
- the coding unit determiner 120 may determine coding units having a tree structure included in the current maximum coding unit.
- the coding units having a tree structure according to an embodiment include coding units having a depth determined as a coding depth among all deeper coding units included in the maximum coding unit.
- the coding unit of the coding depth may be hierarchically determined according to the depth in the same region within the maximum coding unit, and may be independently determined for the other regions.
- the coded depth for the current region may be determined independently of the coded depth for the other region.
- the maximum depth according to an embodiment is an index related to the number of divisions from the maximum coding unit to the minimum coding unit.
- the first maximum depth according to an embodiment may represent the total number of divisions from the maximum coding unit to the minimum coding unit.
- the second maximum depth according to an embodiment may represent the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of the coding unit obtained by splitting the maximum coding unit once may be set to 1, and the depth of the coding unit split twice may be set to 2. In this case, if the coding unit split four times from the maximum coding unit is the minimum coding unit, depth levels 0, 1, 2, 3, and 4 exist, and thus the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
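- As a worked illustration of the example above (assuming a 64x64 maximum coding unit and a 4x4 minimum coding unit):

```python
def coding_unit_sizes(max_size=64, min_size=4):
    """Each split halves the height and width, so 64x64 down to 4x4 gives
    4 splits (first maximum depth) and 5 depth levels (second maximum depth)."""
    sizes, size = [], max_size
    while size >= min_size:
        sizes.append(size)
        size //= 2
    return sizes

sizes = coding_unit_sizes()              # [64, 32, 16, 8, 4]
first_maximum_depth = len(sizes) - 1     # 4 divisions
second_maximum_depth = len(sizes)        # 5 depth levels (0, 1, 2, 3, 4)
```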
- Predictive encoding and transformation of the largest coding unit may be performed. Similarly, prediction encoding and transformation are performed based on depth-wise coding units for each maximum coding unit and for each depth less than or equal to the maximum depth.
- encoding including prediction encoding and transformation should be performed on all the coding units for each depth generated as the depth deepens.
- the prediction encoding and the transformation will be described based on the coding unit of the current depth among at least one maximum coding unit.
- the video encoding apparatus 100 may variously select a size or shape of a data unit for encoding image data.
- the encoding of the image data is performed through prediction encoding, transforming, entropy encoding, and the like.
- the same data unit may be used in every step, or the data unit may be changed in steps.
- the video encoding apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit in order to perform predictive encoding of the image data in the coding unit.
- prediction encoding may be performed based on the coding unit of the coded depth, that is, based on the coding unit that is no longer split, according to an embodiment.
- Hereinafter, the coding unit that is no longer split and serves as the basis of prediction encoding is referred to as a 'prediction unit'.
- A partition obtained by splitting the prediction unit may include the prediction unit itself and a data unit obtained by splitting at least one of the height and the width of the prediction unit.
- the partition may be a data unit in which the prediction unit of the coding unit is split, and the prediction unit may be a partition having the same size as the coding unit.
- the partition type may include not only symmetric partitions obtained by dividing the height or width of the prediction unit in a symmetric ratio, but also, selectively, partitions divided in an asymmetric ratio such as 1:n or n:1, partitions divided in a geometric form, partitions of an arbitrary shape, and the like.
- the prediction mode of the prediction unit may be at least one of an intra mode, an inter mode, and a skip mode.
- the intra mode and the inter mode may be performed on partitions having sizes of 2N ⁇ 2N, 2N ⁇ N, N ⁇ 2N, and N ⁇ N.
- the skip mode may be performed only for partitions having a size of 2N ⁇ 2N.
- the encoding may be performed independently for each prediction unit within the coding unit to select a prediction mode having the smallest encoding error.
- the video encoding apparatus 100 may perform conversion of image data of a coding unit based on not only a coding unit for encoding image data, but also a data unit different from the coding unit.
- the transformation may be performed based on a transformation unit having a size smaller than or equal to the coding unit.
- the transformation unit may include a data unit for intra mode and a transformation unit for inter mode.
- the transformation unit in the coding unit may also be recursively split into smaller transformation units, so that the residual data of the coding unit may be partitioned according to transformation units having a tree structure according to transformation depths.
- a transformation depth indicating the number of times the height and the width of the coding unit are split to reach the transformation unit may be set. For example, if the size of the transformation unit of the current coding unit of size 2Nx2N is 2Nx2N, the transformation depth is 0; if the size of the transformation unit is NxN, the transformation depth is 1; and if the size of the transformation unit is N/2xN/2, the transformation depth is 2. That is, a transformation unit having a tree structure may also be set according to the transformation depth.
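- A small illustration of the transformation depth described above (hypothetical helper name):

```python
import math

def transform_depth(coding_unit_size, transform_unit_size):
    """Number of times the coding unit's height and width are halved to reach the
    transformation unit: 2Nx2N -> 0, NxN -> 1, N/2xN/2 -> 2."""
    return int(math.log2(coding_unit_size // transform_unit_size))

assert transform_depth(32, 32) == 0
assert transform_depth(32, 16) == 1
assert transform_depth(32, 8) == 2
```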
- the encoded information for each coded depth requires not only the coded depth but also prediction related information and transformation related information. Accordingly, the coding unit determiner 120 may determine not only the coded depth that generated the minimum coding error, but also a partition type obtained by dividing a prediction unit into partitions, a prediction mode for each prediction unit, and a size of a transformation unit for transformation.
- a method of determining a coding unit, a prediction unit / partition, and a transformation unit according to a tree structure of a maximum coding unit according to an embodiment will be described later in detail with reference to FIGS. 7 to 19.
- the coding unit determiner 120 may measure a coding error of coding units according to depths using a Lagrangian Multiplier-based rate-distortion optimization technique.
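- For illustration, a minimal sketch of such a Lagrangian multiplier-based rate-distortion comparison (hypothetical function names and example numbers):

```python
def rd_cost(distortion, rate, lagrange_multiplier):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lagrange_multiplier * rate

def choose_depth(candidates, lagrange_multiplier):
    """Pick the depth whose measured (distortion, rate) pair has the smallest RD cost.

    candidates: list of (depth, distortion, rate) tuples, one per tested depth."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lagrange_multiplier))[0]

best = choose_depth([(0, 1200.0, 300), (1, 900.0, 420), (2, 850.0, 700)], 2.0)  # -> 1
```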
- the output unit 130 outputs the image data of the maximum coding unit encoded based on the at least one coded depth determined by the coding unit determiner 120 and the information about the encoding modes according to depths in the form of a bit stream.
- the encoded image data may be a result of encoding residual data of the image.
- the information about the encoding modes according to depths may include encoding depth information, partition type information of a prediction unit, prediction mode information, size information of a transformation unit, and the like.
- the coded depth information may be defined using depth-specific segmentation information indicating whether to encode to a coding unit of a lower depth without encoding to the current depth. If the current depth of the current coding unit is a coding depth, since the current coding unit is encoded in a coding unit of the current depth, split information of the current depth may be defined so that it is no longer divided into lower depths. On the contrary, if the current depth of the current coding unit is not the coding depth, encoding should be attempted using the coding unit of the lower depth, and thus split information of the current depth may be defined to be divided into coding units of the lower depth.
- encoding is performed on the coding unit divided into the coding units of the lower depth. Since at least one coding unit of a lower depth exists in the coding unit of the current depth, encoding may be repeatedly performed for each coding unit of each lower depth, and recursive coding may be performed for each coding unit of the same depth.
- Since coding units having a tree structure are determined in one maximum coding unit and information about at least one encoding mode must be determined for each coding unit of a coded depth, information about at least one encoding mode may be determined for one maximum coding unit.
- the coding depth may be different for each location, and thus information about the coded depth and the coding mode may be set for the data.
- the output unit 130 may allocate encoding information about a corresponding coding depth and an encoding mode to at least one of a coding unit, a prediction unit, and a minimum unit included in the maximum coding unit. .
- the minimum unit according to an embodiment is a square data unit having a size obtained by dividing the minimum coding unit, which is the lowest coding depth, into four divisions.
- the minimum unit according to an embodiment may be a square data unit having a maximum size that may be included in all coding units, prediction units, partition units, and transformation units included in the maximum coding unit.
- the encoding information output through the output unit 130 may be classified into encoding information according to depth coding units and encoding information according to prediction units.
- the encoding information for each coding unit according to depth may include prediction mode information and partition size information.
- the encoding information transmitted for each prediction unit may include information about an estimation direction of the inter mode, information about a reference image index of the inter mode, information about a motion vector, information about a chroma component of the intra mode, information about an interpolation method of the intra mode, and the like.
- Information about the maximum size and information about the maximum depth of the coding unit defined for each picture, slice, or GOP may be inserted into a header, a sequence parameter set, or a picture parameter set of the bitstream.
- the information on the maximum size of the transform unit and the minimum size of the transform unit allowed for the current video may also be output through a header, a sequence parameter set, a picture parameter set, or the like of the bitstream.
- the output unit 130 may encode and output reference information, prediction information, unidirectional prediction information, slice type information including a fourth slice type, etc. related to the prediction described above with reference to FIGS. 1 to 6.
- a coding unit according to depths is a coding unit having a size in which a height and a width of a coding unit of one layer higher depth are divided by half. That is, if the size of the coding unit of the current depth is 2Nx2N, the size of the coding unit of the lower depth is NxN.
- the current coding unit having a size of 2N ⁇ 2N may include up to four lower depth coding units having a size of N ⁇ N.
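- As an illustration only (hypothetical coordinates and helper name), splitting one coding unit into its four lower-depth coding units of half the height and width:

```python
def split_into_lower_depth(x, y, size):
    """A 2Nx2N coding unit contains up to four NxN coding units of the next depth."""
    half = size // 2
    return [(x, y, half), (x + half, y, half),
            (x, y + half, half), (x + half, y + half, half)]

children = split_into_lower_depth(0, 0, 64)   # four 32x32 coding units
```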
- the video encoding apparatus 100 may configure coding units having an optimal shape and size for each maximum coding unit, based on the size and the maximum depth of the maximum coding unit determined in consideration of the characteristics of the current picture. In addition, since each maximum coding unit may be encoded in various prediction modes and transformation methods, an optimal encoding mode may be determined in consideration of the image characteristics of coding units of various image sizes.
- the video encoding apparatus may adjust the coding unit in consideration of the image characteristics while increasing the maximum size of the coding unit in consideration of the size of the image, thereby increasing image compression efficiency.
- the video encoding apparatus 100 of FIG. 7 may perform an operation of the video encoding apparatus 10 described above with reference to FIG. 1.
- the coding unit determiner 120 may perform an operation of the intra predictor 12 of the video encoding apparatus 10. For each largest coding unit, a prediction unit for intra prediction may be determined for each coding unit having a tree structure, and intra prediction may be performed for each prediction unit.
- the output unit 130 may perform an operation of the symbol encoder 14 of the video encoding apparatus 10.
- The output unit 130 may encode an MPM flag indicating whether the intra prediction mode of the current prediction unit is the same as at least one of the intra prediction modes of the left and top prediction units. Regardless of whether the left intra prediction mode and the top intra prediction mode are the same or different, a fixed plural number of candidate intra prediction modes may be determined, and current intra mode information for the current prediction unit may be determined and encoded based on the candidate intra prediction modes.
- the output unit 130 may determine the number of candidate intra prediction modes for each picture. Similarly, the number of candidate intra prediction modes may be determined per slice, per maximum coding unit, per coding unit, or per prediction unit. Without being limited thereto, the number of candidate intra prediction modes may be determined again for each data unit.
- the output unit 130 may encode information indicating the number of candidate intra prediction modes at the level of the data unit in which the number of candidate intra prediction modes is updated, such as a picture parameter set (PPS), a slice parameter set (SPS), a maximum coding unit level, a coding unit level, or a prediction unit level.
- However, even if the number of candidate intra prediction modes is determined for every predetermined data unit, information indicating the number of candidate intra prediction modes is not necessarily always encoded.
- FIG. 8 is a block diagram of a video decoding apparatus 200 based on coding units according to a tree structure, according to an exemplary embodiment.
- the video decoding apparatus 200 is an embodiment of the interlayer video decoding apparatus 20 described above with reference to FIG. 2A. Therefore, even if omitted below, the above description of the interlayer video decoding apparatus 20 may be applied to the video decoding apparatus 200.
- a video decoding apparatus 200 including video prediction based on coding units having a tree structure includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230.
- the video decoding apparatus 200 that includes video prediction based on coding units having a tree structure is abbreviated as “video decoding apparatus 200”.
- Definitions of various terms, such as a coding unit, a depth, a prediction unit, a transformation unit, and information about various encoding modes, for the decoding operation of the video decoding apparatus 200 according to an embodiment are the same as those described above with reference to FIG. 7 and the video encoding apparatus 100.
- the receiver 210 receives and parses a bitstream of an encoded video.
- the image data and encoding information extractor 220 extracts image data encoded for each coding unit from the parsed bitstream according to coding units having a tree structure for each maximum coding unit, and outputs the encoded image data to the image data decoder 230.
- the image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of the current picture from a header, a sequence parameter set, or a picture parameter set for the current picture.
- the image data and encoding information extractor 220 extracts information about a coded depth and an encoding mode for the coding units having a tree structure for each maximum coding unit, from the parsed bitstream.
- the extracted information about the coded depth and the coding mode is output to the image data decoder 230. That is, the image data of the bit string may be divided into maximum coding units so that the image data decoder 230 may decode the image data for each maximum coding unit.
- the information about the coded depth and the encoding mode for each maximum coding unit may be set for one or more pieces of coded depth information, and the information about the encoding mode according to coded depths may include partition type information of the corresponding coding unit, prediction mode information, size information of the transformation unit, and the like.
- split information for each depth may be extracted as the coded depth information.
- the information about the coded depth and the encoding mode according to the maximum coding units, which is extracted by the image data and encoding information extractor 220, is information determined by performing encoding for each deeper coding unit according to depths for each maximum coding unit, as in the video encoding apparatus 100 according to an embodiment. In addition, the image data and encoding information extractor 220 may extract the information about the coded depth and the encoding mode for each predetermined data unit. If the information about the coded depth and the encoding mode of the corresponding maximum coding unit is recorded for each predetermined data unit, the predetermined data units having the same information about the coded depth and the encoding mode may be inferred to be the data units included in the same maximum coding unit.
- the image data decoder 230 reconstructs the current picture by decoding image data of each maximum coding unit based on the information about the coded depth and the encoding mode for each maximum coding unit. That is, the image data decoder 230 may decode the encoded image data based on the read partition type, the prediction mode, and the transformation unit for each coding unit among the coding units having the tree structure included in the maximum coding unit. Can be.
- the decoding process may include a prediction process including intra prediction and motion compensation, and an inverse transform process.
- the image data decoder 230 may perform intra prediction or motion compensation according to each partition and prediction mode for each coding unit based on partition type information and prediction mode information of the prediction unit of the coding unit for each coding depth. .
- the image data decoder 230 may read transform unit information having a tree structure for each coding unit, and perform inverse transform based on the transformation unit for each coding unit, for inverse transformation for each largest coding unit. Through inverse transformation, the pixel value of the spatial region of the coding unit may be restored.
- the image data decoder 230 may determine the coded depth of the current maximum coding unit by using the split information for each depth. If the split information indicates that the split information is no longer split at the current depth, the current depth is the coded depth. Therefore, the image data decoder 230 may decode the coding unit of the current depth using the partition type, the prediction mode, and the transformation unit size information of the prediction unit with respect to the image data of the current maximum coding unit.
- In other words, the image data decoder 230 may gather data units for which the same encoding information is set and regard them as one data unit to be decoded in the same encoding mode.
- the decoding of the current coding unit may be performed by obtaining information about an encoding mode for each coding unit determined in this way.
- the video decoding apparatus 200 of FIG. 8 may perform an operation of the video decoding apparatus 20 described above with reference to FIG. 2.
- the receiver 210 may perform an operation of the parser 22 of the video decoding apparatus 20.
- the image data and encoding information extractor 220 and the image data decoder 230 may perform an operation of the intra predictor 24 of the video decoding apparatus 20.
- the parser 22 may parse the MPM flag for prediction of the intra prediction mode from the bitstream for each prediction unit. Without the need to determine whether the left intra prediction mode and the top intra prediction mode are the same or different from each other, the current intra mode information can be parsed from the bitstream in succession to the MPM flag.
- the image data and encoding information extractor 220 may reconstruct the current intra prediction mode from the parsed information after completing parsing of symbols of blocks including the MPM flag and the intra mode information.
- the current intra prediction mode may be predicted using a fixed number of candidate intra prediction modes.
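- A minimal sketch of such a fixed-size candidate list (the fill rule with default modes is a hypothetical example, not the disclosed derivation):

```python
def candidate_intra_modes(left_mode, above_mode, num_candidates=3):
    """Always return the same number of candidate intra prediction modes,
    regardless of whether the left and above intra modes are equal."""
    defaults = [0, 1, 26]                    # e.g. Planar, DC, vertical as fallbacks
    candidates = []
    for mode in [left_mode, above_mode] + defaults:
        if mode is not None and mode not in candidates:
            candidates.append(mode)
        if len(candidates) == num_candidates:
            break
    return candidates

assert len(candidate_intra_modes(10, 10)) == 3   # fixed size even when modes match
```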
- the image data decoder 230 may perform intra prediction on the current prediction unit by using the reconstructed current intra prediction mode and the residual data.
- the image data and encoding information extractor 220 may re-determine the number of candidate intra prediction modes for each picture.
- the parser 22 may parse information indicating a fixed number of candidate intra prediction modes from various data unit levels of the bitstream, such as a picture parameter set (PPS), a slice parameter set (SPS), a maximum coding unit level, a coding unit level, and a prediction unit level.
- the image data and encoding information extractor 220 may determine as many candidate intra prediction modes as the number of pieces of the parsed information for each data unit corresponding to the level at which the information is parsed.
- the image data and encoding information extractor 220 may update the number of candidate intra prediction modes for every predetermined data unit, such as a slice, a maximum coding unit, a coding unit, or a prediction unit, even without parsing information indicating the number of candidate intra prediction modes.
- the video decoding apparatus 200 may obtain information about a coding unit that generates a minimum coding error by recursively encoding each maximum coding unit in the encoding process, and use the same to decode the current picture. That is, decoding of encoded image data of coding units having a tree structure determined as an optimal coding unit for each maximum coding unit can be performed.
- That is, the image data may be efficiently decoded and reconstructed according to the coding unit size and the encoding mode that are adaptively determined according to the characteristics of the image, by using the information about the optimal encoding mode transmitted from the encoding end.
- FIG 9 illustrates a concept of coding units, according to an embodiment of the present invention.
- a size of a coding unit may be expressed by a width x height, and may include 32x32, 16x16, and 8x8 from a coding unit having a size of 64x64.
- A coding unit of size 64x64 may be split into partitions of size 64x64, 64x32, 32x64, and 32x32; a coding unit of size 32x32 into partitions of size 32x32, 32x16, 16x32, and 16x16; a coding unit of size 16x16 into partitions of size 16x16, 16x8, 8x16, and 8x8; and a coding unit of size 8x8 into partitions of size 8x8, 8x4, 4x8, and 4x4.
- In the video data 310, the resolution is set to 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is 2.
- In the video data 320, the resolution is set to 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is 3.
- In the video data 330, the resolution is set to 352x288, the maximum size of the coding unit is 16, and the maximum depth is 1.
- the maximum depth illustrated in FIG. 9 represents the total number of divisions from the maximum coding unit to the minimum coding unit.
- When the resolution is high or the amount of data is large, it is advantageous for the maximum size of the coding unit to be relatively large, not only to improve coding efficiency but also to accurately reflect the image characteristics. Accordingly, the maximum size of the coding unit of the video data 310 or 320, which has a higher resolution than the video data 330, may be selected as 64.
- the coding units 315 of the video data 310 are split twice from the maximum coding unit having a long-axis size of 64, and the depth is deepened by two layers, so that coding units having long-axis sizes of 32 and 16 may be included.
- the coding units 335 of the video data 330 are split once from the maximum coding unit having a long-axis size of 16, and the depth is deepened by one layer, so that coding units having a long-axis size of 8 may be included.
- the coding units 325 of the video data 320 are split three times from the maximum coding unit having a long-axis size of 64, and the depth is deepened by three layers, so that coding units having long-axis sizes of 32, 16, and 8 may be included. As the depth increases, the ability to express detailed information may be improved.
- FIG. 10 is a block diagram of an image encoder 400 based on coding units, according to an embodiment of the present invention.
- the image encoder 400 includes operations performed by the coding unit determiner 120 of the video encoding apparatus 100 to encode image data. That is, the intra predictor 410 performs intra prediction on coding units of the intra mode in the current frame 405, and the motion estimator 420 and the motion compensator 425 perform inter estimation and motion compensation on coding units of the inter mode by using the current frame 405 and the reference frame 495.
- Data output from the intra predictor 410, the motion estimator 420, and the motion compensator 425 is output as a quantized transform coefficient through the transform unit 430 and the quantization unit 440.
- the quantized transform coefficients are reconstructed into data of the spatial domain through the inverse quantizer 460 and the inverse transformer 470, and the reconstructed data of the spatial domain is post-processed through the deblocking unit 480 and the loop filtering unit 490 and output as the reference frame 495.
- the quantized transform coefficients may be output to the bitstream 455 via the entropy encoder 450.
- In order to be applied to the video encoding apparatus 100 according to an embodiment, all of the components of the image encoder 400, that is, the intra predictor 410, the motion estimator 420, the motion compensator 425, the transform unit 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transform unit 470, the deblocking unit 480, and the loop filtering unit 490, must perform operations based on each coding unit among the coding units having a tree structure in consideration of the maximum depth for each maximum coding unit.
- In particular, the intra predictor 410, the motion estimator 420, and the motion compensator 425 determine a partition and a prediction mode of each coding unit among the coding units having a tree structure in consideration of the maximum size and the maximum depth of the current maximum coding unit, and the transform unit 430 must determine the size of the transformation unit in each coding unit among the coding units having a tree structure.
- the intra predictor 410 may perform an operation of the intra predictor 12 of the video encoding apparatus 10. For each largest coding unit, a prediction unit for intra prediction may be determined for each coding unit having a tree structure, and intra prediction may be performed for each prediction unit.
- the entropy encoder 450 may encode, for each prediction unit, the MPM flag and the current intra mode information determined for the current prediction unit based on the candidate intra prediction modes.
- FIG. 11 is a block diagram of an image decoder 500 based on coding units, according to an embodiment of the present invention.
- the bitstream 505 is parsed through the parsing unit 510, and the encoded image data to be decoded and information about encoding necessary for decoding are parsed.
- the encoded image data is output as inverse quantized data through the entropy decoding unit 520 and the inverse quantization unit 530, and the image data of the spatial domain is restored through the inverse transformation unit 540.
- With respect to the image data of the spatial domain, the intra predictor 550 performs intra prediction on coding units of the intra mode, and the motion compensator 560 performs motion compensation on coding units of the inter mode by using the reference frame 585.
- Data in the spatial domain that has passed through the intra predictor 550 and the motion compensator 560 may be post-processed through the deblocking unit 570 and the loop filtering unit 580 to be output to the reconstructed frame 595.
- the post-processed data through the deblocking unit 570 and the loop filtering unit 580 may be output as the reference frame 585.
- step-by-step operations after the parser 510 of the image decoder 500 may be performed.
- In order to be applied to the video decoding apparatus 200 according to an embodiment, all of the components of the image decoder 500, that is, the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transform unit 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, must perform operations based on coding units having a tree structure for each maximum coding unit.
- In particular, the intra predictor 550 and the motion compensator 560 determine a partition and a prediction mode for each coding unit having a tree structure, and the inverse transform unit 540 must determine the size of the transformation unit for each coding unit.
- the parser 510 may parse the MPM flag for prediction of the intra prediction mode from the bitstream for each prediction unit. Without the need to determine whether the left intra prediction mode and the top intra prediction mode are the same or different from each other, the current intra mode information can be parsed from the bitstream in succession to the MPM flag.
- the entropy decoder 520 may reconstruct the current intra prediction mode from the parsed information after completing parsing of symbols of blocks including the MPM flag and the current intra mode information.
- the intra predictor 550 may perform intra prediction on the current prediction unit by using the reconstructed current intra prediction mode and the residual data.
- FIG. 12 is a diagram of deeper coding units according to depths, and partitions, according to an embodiment of the present invention.
- the video encoding apparatus 100 according to an embodiment and the video decoding apparatus 200 according to an embodiment use hierarchical coding units to consider image characteristics.
- the maximum height, width, and maximum depth of the coding unit may be adaptively determined according to the characteristics of the image, and may be variously set according to a user's request. According to the maximum size of the preset coding unit, the size of the coding unit for each depth may be determined.
- the hierarchical structure 600 of a coding unit illustrates a case in which a maximum height and a width of a coding unit are 64 and a maximum depth is four.
- the maximum depth indicates the total number of divisions from the maximum coding unit to the minimum coding unit. Since the depth deepens along the vertical axis of the hierarchical structure 600 of the coding unit according to an embodiment, the height and the width of the coding unit for each depth are divided.
- A prediction unit and partitions, which are the bases for prediction encoding of each deeper coding unit, are shown along the horizontal axis of the hierarchical structure 600 of the coding unit.
- the coding unit 610 has a depth of 0 as the largest coding unit of the hierarchical structure 600 of the coding unit, and the size, ie, the height and width, of the coding unit is 64x64.
- As the depth deepens along the vertical axis, there exist a coding unit 620 of depth 1 having a size of 32x32, a coding unit 630 of depth 2 having a size of 16x16, a coding unit 640 of depth 3 having a size of 8x8, and a coding unit 650 of depth 4 having a size of 4x4.
- a coding unit 650 having a depth of 4 having a size of 4 ⁇ 4 is a minimum coding unit.
- Prediction units and partitions of the coding units are arranged along the horizontal axis for each depth. That is, if the coding unit 610 of size 64x64 having a depth of 0 is a prediction unit, the prediction unit may be split into partitions included in the coding unit 610 of size 64x64: a partition 610 of size 64x64, partitions 612 of size 64x32, partitions 614 of size 32x64, and partitions 616 of size 32x32.
- Likewise, the prediction unit of the coding unit 620 of size 32x32 having a depth of 1 may be split into partitions included in the coding unit 620 of size 32x32: a partition 620 of size 32x32, partitions 622 of size 32x16, partitions 624 of size 16x32, and partitions 626 of size 16x16.
- The prediction unit of the coding unit 630 of size 16x16 having a depth of 2 may be split into partitions included in the coding unit 630 of size 16x16: a partition 630 of size 16x16, partitions 632 of size 16x8, partitions 634 of size 8x16, and partitions 636 of size 8x8.
- The prediction unit of the coding unit 640 of size 8x8 having a depth of 3 may be split into partitions included in the coding unit 640 of size 8x8: a partition 640 of size 8x8, partitions 642 of size 8x4, partitions 644 of size 4x8, and partitions 646 of size 4x4.
- the coding unit 650 of size 4x4 having a depth of 4 is the minimum coding unit and the coding unit of the lowest depth, and the corresponding prediction unit may also be set only as the partition 650 having a size of 4x4.
- the coding unit determiner 120 of the video encoding apparatus 100 may determine a coding depth of the maximum coding unit 610.
- The number of deeper coding units according to depths needed to include data of the same range and size increases as the depth increases. For example, four coding units of depth 2 are required for the data included in one coding unit of depth 1. Therefore, in order to compare the encoding results of the same data according to depths, one coding unit of depth 1 and four coding units of depth 2 should each be encoded.
- In order to perform encoding for each depth, encoding may be performed for each prediction unit of the deeper coding units along the horizontal axis of the hierarchical structure 600 of the coding unit, and a representative encoding error, which is the smallest encoding error at the corresponding depth, may be selected. Also, as the depth deepens along the vertical axis of the hierarchical structure 600 of the coding unit, encoding may be performed for each depth, and the minimum encoding error may be searched for by comparing the representative encoding errors according to depths.
- the depth and the partition in which the minimum coding error occurs in the maximum coding unit 610 may be selected as the coding depth and the partition type of the maximum coding unit 610.
- FIG. 13 illustrates a relationship between a coding unit and transformation units, according to an embodiment of the present invention.
- the video encoding apparatus 100 encodes or decodes an image in coding units having a size smaller than or equal to the maximum coding unit for each maximum coding unit.
- the size of a transformation unit for transformation in the encoding process may be selected based on a data unit that is not larger than each coding unit.
- For example, if the size of the current coding unit 710 is 64x64, transformation may be performed using the transformation unit 720 of size 32x32.
- In addition, the data of the coding unit 710 of size 64x64 may be transformed with each of the transformation units of size 32x32, 16x16, 8x8, and 4x4, which are smaller than or equal to 64x64, and encoded, and then the transformation unit having the least error with respect to the original may be selected.
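- A minimal sketch of this selection (the round trip through transform and quantization is replaced by a placeholder; names are hypothetical):

```python
import numpy as np

def best_transform_unit_size(coding_block, candidate_sizes=(32, 16, 8, 4),
                             code_and_decode=lambda tile: np.round(tile)):
    """Tile the coding unit with each candidate transformation unit size, reconstruct
    each tile, and keep the size giving the smallest error against the original."""
    best_size, best_error = None, float("inf")
    n = coding_block.shape[0]
    for size in candidate_sizes:
        if size > n:
            continue
        error = 0.0
        for y in range(0, n, size):
            for x in range(0, n, size):
                tile = coding_block[y:y + size, x:x + size]
                error += float(np.sum((tile - code_and_decode(tile)) ** 2))
        if error < best_error:
            best_size, best_error = size, error
    return best_size

tu_size = best_transform_unit_size(np.random.rand(64, 64) * 255)
```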
- FIG. 14 is a diagram of deeper encoding information according to an embodiment.
- the output unit 130 of the video encoding apparatus 100 may encode and transmit, as information about the encoding mode, information 800 about a partition type, information 810 about a prediction mode, and information 820 about the size of the transformation unit, for each coding unit of each coded depth.
- the information about the partition type 800 is a data unit for predictive encoding of the current coding unit and indicates information about a partition type in which the prediction unit of the current coding unit is divided.
- For example, the current coding unit CU_0 of size 2Nx2N may be split into and used as any one of a partition 802 of size 2Nx2N, a partition 804 of size 2NxN, a partition 806 of size Nx2N, and a partition 808 of size NxN. In this case, the information 800 about the partition type of the current coding unit is set to indicate one of the partition 802 of size 2Nx2N, the partition 804 of size 2NxN, the partition 806 of size Nx2N, and the partition 808 of size NxN.
- Information 810 about the prediction mode indicates the prediction mode of each partition. For example, through the information 810 about the prediction mode, it may be set whether the partition indicated by the information 800 about the partition type is encoded in one of the intra mode 812, the inter mode 814, and the skip mode 816.
- the information about the transform unit size 820 indicates whether to transform the current coding unit based on the transform unit.
- For example, the transformation unit may be one of a first intra transformation unit size 822, a second intra transformation unit size 824, a first inter transformation unit size 826, and a second inter transformation unit size 828.
- the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract the information 800 about the partition type, the information 810 about the prediction mode, and the information 820 about the transformation unit size for each deeper coding unit, and use them for decoding.
- FIG. 15 is a diagram of deeper coding units according to depths, according to an embodiment of the present invention.
- Segmentation information may be used to indicate a change in depth.
- the split information indicates whether a coding unit of a current depth is split into coding units of a lower depth.
- the prediction unit 910 for prediction encoding of the coding unit 900 having a depth of 0 and a size of 2N_0x2N_0 may include a partition type 912 of size 2N_0x2N_0, a partition type 914 of size 2N_0xN_0, a partition type 916 of size N_0x2N_0, and a partition type 918 of size N_0xN_0. Although only the partitions 912, 914, 916, and 918 obtained by symmetrically splitting the prediction unit are illustrated, as described above, the partition type is not limited thereto and may include asymmetric partitions, arbitrary-shaped partitions, geometric partitions, and the like.
- For each partition type, prediction encoding must be repeatedly performed on one partition of size 2N_0x2N_0, two partitions of size 2N_0xN_0, two partitions of size N_0x2N_0, and four partitions of size N_0xN_0. For the partitions of size 2N_0x2N_0, size N_0x2N_0, size 2N_0xN_0, and size N_0xN_0, prediction encoding may be performed in the intra mode and the inter mode. The skip mode may be performed only for prediction encoding on the partition of size 2N_0x2N_0.
- the depth 0 is changed to 1 and splitting is performed (920), and encoding is repeatedly performed on the coding units 930 of depth 2 having a partition type of size N_0xN_0.
- the prediction unit 940 for prediction encoding of the coding unit 930 having a depth of 1 and a size of 2N_1x2N_1 may include a partition type 942 of size 2N_1x2N_1, a partition type 944 of size 2N_1xN_1, a partition type 946 of size N_1x2N_1, and a partition type 948 of size N_1xN_1.
- the depth 1 is changed to 2 and splitting is performed (950), and encoding may be repeatedly performed on the coding units 960 of depth 2 and size N_2xN_2 to search for a minimum encoding error.
- When the maximum depth is d, deeper coding units according to depths may be set until the depth d-1, and split information may be set up to the depth d-2. That is, when encoding is performed from the depth d-2 up to the depth d-1, the prediction unit 990 for prediction encoding of the coding unit 980 of depth d-1 and size 2N_(d-1)x2N_(d-1) may include a partition type 992 of size 2N_(d-1)x2N_(d-1), a partition type 994 of size 2N_(d-1)xN_(d-1), a partition type 996 of size N_(d-1)x2N_(d-1), and a partition type 998 of size N_(d-1)xN_(d-1).
- Prediction encoding is repeatedly performed on one partition of size 2N_(d-1)x2N_(d-1), two partitions of size 2N_(d-1)xN_(d-1), two partitions of size N_(d-1)x2N_(d-1), and four partitions of size N_(d-1)xN_(d-1), so that a partition type having a minimum encoding error may be searched for.
- Since the coding unit CU_(d-1) of the depth d-1 is no longer split into lower depths, the coded depth of the current maximum coding unit 900 may be determined as the depth d-1, and the partition type may be determined as N_(d-1)xN_(d-1), without going through a splitting process into a lower depth.
- split information is not set for the coding unit 952 having the depth d-1.
- the data unit 999 may be referred to as a 'minimum unit' for the current maximum coding unit.
- the minimum unit may be a square data unit having a size obtained by dividing the minimum coding unit, which is the lowest coding depth, into four divisions.
- the video encoding apparatus 100 compares the encoding errors for each depth of the coding unit 900, selects a depth at which the smallest encoding error occurs, and determines a coding depth.
- the partition type and the prediction mode may be set to the encoding mode of the coded depth.
- In this way, the minimum encoding errors of all the depths 0, 1, ..., d-1, d are compared, and the depth having the smallest error may be selected and determined as the coded depth.
- the coded depth, the partition type of the prediction unit, and the prediction mode may be encoded and transmitted as information about an encoding mode.
- Since the coding unit must be split from the depth 0 to the coded depth, only the split information of the coded depth is set to '0', and the split information for each depth other than the coded depth should be set to '1'.
- the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract the information about the coded depth and the prediction unit for the coding unit 900 and use it to decode the coding unit 912.
- the video decoding apparatus 200 may identify, as the coded depth, the depth whose split information is '0' by using the split information according to depths, and may perform decoding by using the information about the encoding mode for the corresponding depth.
- 16, 17, and 18 illustrate a relationship between coding units, prediction units, and transformation units, according to an embodiment of the present invention.
- the coding units 1010 are coding units according to coding depths determined by the video encoding apparatus 100 according to an embodiment with respect to the maximum coding unit.
- the prediction units 1060 are partitions of the prediction units of the coding units according to coded depths among the coding units 1010, and the transformation units 1070 are transformation units of the coding units according to coded depths.
- In the deeper coding units 1010, the maximum coding unit has a depth of 0, the coding units 1012 and 1054 have a depth of 1, the coding units 1014, 1016, 1018, 1028, 1050, and 1052 have a depth of 2, the coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 have a depth of 3, and the coding units 1040, 1042, 1044, and 1046 have a depth of 4.
- partitions 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 of the prediction units 1060 are obtained by splitting coding units. That is, partitions 1014, 1022, 1050, and 1054 are partition types of 2NxN, partitions 1016, 1048, and 1052 are partition types of Nx2N, and partitions 1032 are partition types of NxN. Prediction units and partitions of the coding units 1010 according to depths are smaller than or equal to each coding unit.
- the image data of the part 1052 of the transformation units 1070 is transformed or inversely transformed into a data unit having a smaller size than the coding unit.
- the transformation units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are data units having different sizes or shapes when compared to corresponding prediction units and partitions among the prediction units 1060. That is, the video encoding apparatus 100 according to an embodiment and the video decoding apparatus 200 according to an embodiment may be intra prediction / motion estimation / motion compensation operations and transform / inverse transform operations for the same coding unit. Each can be performed on a separate data unit.
- coding is performed recursively for each coding unit having a hierarchical structure for each largest coding unit to determine an optimal coding unit.
- coding units having a recursive tree structure may be configured.
- the encoding information may include split information about a coding unit, partition type information, prediction mode information, and transformation unit size information. Table 2 below shows an example that can be set in the video encoding apparatus 100 and the video decoding apparatus 200 according to an embodiment.
- the output unit 130 of the video encoding apparatus 100 according to an embodiment outputs encoding information about coding units having a tree structure, and the image data and encoding information extractor 220 of the video decoding apparatus 200 according to an embodiment may extract the encoding information about the coding units having a tree structure from the received bitstream.
- the split information indicates whether the current coding unit is split into coding units of a lower depth. If the split information of the current depth d is 0, the depth at which the current coding unit is no longer split into lower coding units is the coded depth, and thus the partition type information, the prediction mode, and the transformation unit size information may be defined for the coded depth. If the current coding unit is to be further split according to the split information, encoding should be performed independently for each of the four split coding units of the lower depth.
- the prediction mode may be represented by one of an intra mode, an inter mode, and a skip mode.
- Intra mode and inter mode can be defined in all partition types, and skip mode can be defined only in partition type 2Nx2N.
- the partition type information indicates the symmetric partition types 2Nx2N, 2NxN, Nx2N and NxN, in which the height or width of the prediction unit is divided by the symmetrical ratio, and the asymmetric partition types 2NxnU, 2NxnD, nLx2N, nRx2N, which are divided by the asymmetrical ratio.
- the asymmetric partition types 2NxnU and 2NxnD are obtained by dividing the height at ratios of 1:3 and 3:1, respectively, and the asymmetric partition types nLx2N and nRx2N are obtained by dividing the width at ratios of 1:3 and 3:1, respectively.
- the transform unit size may be set to two kinds of sizes in the intra mode and two kinds of sizes in the inter mode. That is, if the transform unit split information is 0, the size of the transform unit is set to 2Nx2N, the size of the current coding unit. If the transform unit split information is 1, a transform unit having a size obtained by splitting the current coding unit may be set. In addition, if the partition type of the current coding unit having a size of 2Nx2N is a symmetric partition type, the size of the transform unit may be set to NxN, and if the partition type is asymmetric, it may be set to N/2xN/2.
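- As a non-normative illustration, the rule just described can be expressed as a small helper; the enumeration and function names below are assumptions made for the sketch, not identifiers from the apparatus itself.

```cpp
// Partition types of a 2Nx2N coding unit: four symmetric and four asymmetric types.
enum class PartitionType {
    SIZE_2Nx2N, SIZE_2NxN, SIZE_Nx2N, SIZE_NxN,    // symmetric
    SIZE_2NxnU, SIZE_2NxnD, SIZE_nLx2N, SIZE_nRx2N // asymmetric
};

bool isSymmetric(PartitionType p) {
    return p == PartitionType::SIZE_2Nx2N || p == PartitionType::SIZE_2NxN ||
           p == PartitionType::SIZE_Nx2N  || p == PartitionType::SIZE_NxN;
}

// cuSize is the width (and height) 2N of the current coding unit.
// Transform unit split information 0 keeps the 2Nx2N size; split information 1
// yields NxN for symmetric partition types and N/2xN/2 for asymmetric ones.
int transformUnitSize(int cuSize, int tuSplitInfo, PartitionType p) {
    if (tuSplitInfo == 0) return cuSize;
    return isSymmetric(p) ? cuSize / 2 : cuSize / 4;
}
```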
- Encoding information of coding units having a tree structure may be allocated to at least one of a coding unit, a prediction unit, and a minimum unit of the coded depth.
- the coding unit of the coding depth may include at least one prediction unit and at least one minimum unit having the same encoding information.
- Accordingly, if the encoding information held by each of adjacent data units is checked, it can be determined whether the adjacent data units are included in a coding unit having the same coded depth.
- In addition, since the coding unit of the corresponding coded depth can be identified by using the encoding information held by a data unit, the distribution of coded depths within the maximum coding unit may be inferred.
- In this case, the encoding information of data units in the deeper coding units adjacent to the current coding unit may be directly referred to and used.
- In another embodiment, when prediction encoding is performed on the current coding unit by referring to neighboring coding units, data adjacent to the current coding unit within the deeper coding units may be searched by using the encoding information of the adjacent deeper coding units, and the neighboring coding units may thereby be referred to.
- FIG. 19 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to encoding mode information of Table 1.
- the maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of a coded depth. Since one coding unit 1318 is a coding unit of a coded depth, split information may be set to zero.
- the partition type information of the coding unit 1318 having a size of 2Nx2N may be set to one of the partition types 2Nx2N (1322), 2NxN (1324), Nx2N (1326), NxN (1328), 2NxnU (1332), 2NxnD (1334), nLx2N (1336), and nRx2N (1338).
- the transform unit split information (TU size flag) is a type of transform index, and a size of a transform unit corresponding to the transform index may be changed according to a prediction unit type or a partition type of a coding unit.
- When the partition type information is set to one of the symmetric partition types 2Nx2N (1322), 2NxN (1324), Nx2N (1326), and NxN (1328), a transform unit 1342 of size 2Nx2N is set if the transform unit split information (TU size flag) is 0, and a transform unit 1344 of size NxN may be set if the transform unit split information is 1.
- When the partition type information is set to one of the asymmetric partition types 2NxnU (1332), 2NxnD (1334), nLx2N (1336), and nRx2N (1338), a transform unit 1352 of size 2Nx2N is set if the transform unit split information (TU size flag) is 0, and a transform unit 1354 of size N/2xN/2 may be set if the transform unit split information is 1.
- The transform unit split information (TU size flag) described above with reference to FIG. 19 is a flag having a value of 0 or 1, but the transform unit split information according to an embodiment is not limited to a 1-bit flag and may increase as 0, 1, 2, 3, and so on, so that
- the transform unit may be split hierarchically.
- The transform unit split information may be used as an embodiment of a transform index.
- In this case, when the transform unit split information is used together with the maximum size and the minimum size of the transform unit, the size of the transform unit actually used may be expressed.
- the video encoding apparatus 100 may encode maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information.
- the encoded maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information may be inserted into the SPS.
- the video decoding apparatus 200 may use the encoded maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information for video decoding.
- For example, if the maximum transform unit split information is defined as 'MaxTransformSizeIndex', the minimum transform unit size as 'MinTransformSize', and the transform unit size when the transform unit split information is 0 as 'RootTuSize', the minimum transform unit size 'CurrMinTuSize' possible in the current coding unit can be defined as in relation (1) below.
- CurrMinTuSize = max(MinTransformSize, RootTuSize / (2^MaxTransformSizeIndex)) ......... (1)
- Compared to the minimum transform unit size 'CurrMinTuSize' possible in the current coding unit, 'RootTuSize', the transform unit size when the transform unit split information is 0, may indicate the maximum transform unit size that can be adopted in the system. That is, according to relation (1), 'RootTuSize / (2^MaxTransformSizeIndex)' is the transform unit size obtained by splitting 'RootTuSize' the number of times indicated by the maximum transform unit split information, and 'MinTransformSize' is the minimum transform unit size, so the larger of these two values may be the minimum transform unit size 'CurrMinTuSize' possible in the current coding unit.
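- A minimal sketch of relation (1) follows, assuming transform unit sizes are powers of two; the function name is illustrative only.

```cpp
#include <algorithm>

// CurrMinTuSize is bounded below by the system-wide minimum transform unit size
// and by the size reached after the maximum allowed number of transform splits.
int currMinTuSize(int rootTuSize, int maxTransformSizeIndex, int minTransformSize) {
    // rootTuSize >> maxTransformSizeIndex == RootTuSize / 2^MaxTransformSizeIndex
    return std::max(minTransformSize, rootTuSize >> maxTransformSizeIndex);
}
```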
- the maximum transform unit size RootTuSize may vary depending on a prediction mode.
- For example, if the current prediction mode is an inter mode, 'RootTuSize' may be determined according to relation (2) below.
- 'MaxTransformSize' represents the maximum transform unit size
- 'PUSize' represents the current prediction unit size.
- RootTuSize = min(MaxTransformSize, PUSize) ......... (2)
- 'RootTuSize' which is a transform unit size when the transform unit split information is 0, may be set to a smaller value among the maximum transform unit size and the current prediction unit size.
- If the prediction mode of the partition unit is an intra mode, 'RootTuSize' may be determined according to relation (3) below.
- 'PartitionSize' represents the size of the current partition unit.
- RootTuSize = min(MaxTransformSize, PartitionSize) ........... (3)
- That is, if the current prediction mode is the intra mode, 'RootTuSize', the transform unit size when the transform unit split information is 0, may be set to the smaller of the maximum transform unit size and the current partition unit size.
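- Relations (2) and (3) can be summarized in one illustrative helper, assuming the prediction mode cleanly selects between the two cases; all names are placeholders for the sketch.

```cpp
#include <algorithm>

enum class PredMode { INTER, INTRA };

// When the transform unit split information is 0, RootTuSize is capped by the
// maximum transform unit size and by the current prediction unit size (inter
// mode, relation (2)) or the current partition unit size (intra mode, relation (3)).
int rootTuSize(PredMode mode, int maxTransformSize, int puSize, int partitionSize) {
    return (mode == PredMode::INTER)
               ? std::min(maxTransformSize, puSize)          // relation (2)
               : std::min(maxTransformSize, partitionSize);  // relation (3)
}
```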
- However, the current maximum transform unit size 'RootTuSize', which varies according to the prediction mode of the partition unit, is only an embodiment, and the factor that determines the current maximum transform unit size is not limited thereto.
- According to the video encoding technique based on coding units of a tree structure described above, image data of the spatial domain is encoded for each coding unit of the tree structure, and according to the video decoding technique based on coding units of a tree structure,
- decoding is performed for each largest coding unit so that image data of the spatial domain may be reconstructed, thereby reconstructing pictures and a video that is a sequence of pictures.
- the reconstructed video can be played back by a playback device, stored in a storage medium, or transmitted over a network.
- the above-described embodiments of the present invention can be written as a program that can be executed in a computer, and can be implemented in a general-purpose digital computer that operates the program using a computer-readable recording medium.
- the computer-readable recording medium may include a storage medium such as a magnetic storage medium (eg, a ROM, a floppy disk, a hard disk, etc.) and an optical reading medium (eg, a CD-ROM, a DVD, etc.).
- the interlayer video encoding method and/or the video encoding method described above with reference to FIGS. 1A through 19 will be collectively referred to as the video encoding method of the present invention.
- the interlayer video decoding method and/or the video decoding method described above with reference to FIGS. 1A to 19 will be referred to as the video decoding method of the present invention.
- the video encoding apparatus composed of the interlayer video encoding apparatus, the video encoding apparatus, or the video encoding unit described above with reference to FIGS. 1A to 19 is collectively referred to as the "video encoding apparatus of the present invention."
- the video decoding apparatus including the interlayer video decoding apparatus, the video decoding apparatus, or the video decoding unit described above with reference to FIGS. 1A to 19 is collectively referred to as the video decoding apparatus of the present invention.
- a computer-readable storage medium in which a program is stored according to an embodiment of the present invention will be described in detail below.
- the disk 26000 described above as a storage medium may be a hard drive, a CD-ROM disk, a Blu-ray disk, or a DVD disk.
- the disk 26000 is composed of a plurality of concentric tracks tr, and the tracks are divided into a predetermined number of sectors Se in the circumferential direction.
- a program that implements the quantization parameter determination method, the video encoding method, and the video decoding method described above may be allocated to and stored in a specific region of the disc 26000 according to the above-described embodiment.
- a computer system achieved using a storage medium storing a program for implementing the above-described video encoding method and video decoding method will be described below with reference to FIG. 21.
- the computer system 26700 may store a program for implementing at least one of the video encoding method and the video decoding method of the present invention on the disc 26000 using the disc drive 26800.
- the program may be read from the disk 26000 by the disk drive 26800, and the program may be transferred to the computer system 26700.
- a program for implementing at least one of the video encoding method and the video decoding method may also be stored in a memory card, a ROM cassette, or a solid state drive (SSD).
- FIG. 22 illustrates the overall structure of a content supply system 11000 for providing a content distribution service.
- the service area of the communication system is divided into cells of a predetermined size, and wireless base stations 11700, 11800, 11900, and 12000 that serve as base stations are installed in each cell.
- the content supply system 11000 includes a plurality of independent devices.
- independent devices such as a computer 12100, a personal digital assistant (PDA) 12200, a video camera 12300, and a mobile phone 12500 are connected to the Internet 11100 via an Internet service provider 11200, a communication network 11400, and the wireless base stations 11700, 11800, 11900, and 12000.
- the content supply system 11000 is not limited to the structure shown in FIG. 24, and devices may be selectively connected.
- the independent devices may be directly connected to the communication network 11400 without passing through the wireless base stations 11700, 11800, 11900, and 12000.
- the video camera 12300 is an imaging device capable of capturing video images like a digital video camera.
- the mobile phone 12500 may adopt at least one communication scheme among various protocols such as Personal Digital Communications (PDC), code division multiple access (CDMA), wideband code division multiple access (W-CDMA), Global System for Mobile Communications (GSM), and Personal Handyphone System (PHS).
- the video camera 12300 may be connected to the streaming server 11300 through the wireless base station 11900 and the communication network 11400.
- the streaming server 11300 may stream and transmit the content transmitted by the user using the video camera 12300 through real time broadcasting.
- Content received from the video camera 12300 may be encoded by the video camera 12300 or the streaming server 11300.
- Video data captured by the video camera 12300 may be transmitted to the streaming server 11300 via the computer 12100.
- Video data captured by the camera 12600 may also be transmitted to the streaming server 11300 via the computer 12100.
- the camera 12600 is an imaging device capable of capturing both still and video images, like a digital camera.
- Video data received from the camera 12600 may be encoded by the camera 12600 or the computer 12100.
- Software for video encoding and decoding may be stored in a computer-readable recording medium such as a CD-ROM disk, a floppy disk, a hard disk drive, an SSD, or a memory card accessible by the computer 12100.
- video data may be received from the mobile phone 12500.
- the video data may be encoded by a large scale integrated circuit (LSI) system installed in the video camera 12300, the mobile phone 12500, or the camera 12600.
- Content recorded by a user using the video camera 12300, the camera 12600, the mobile phone 12500, or another imaging device is encoded and transmitted to the streaming server 11300.
- the streaming server 11300 may stream and transmit content data to other clients who have requested the content data.
- the clients are devices capable of decoding the encoded content data, and may be, for example, a computer 12100, a PDA 12200, a video camera 12300, or a mobile phone 12500.
- the content supply system 11000 allows clients to receive and play encoded content data.
- the content supply system 11000 enables clients to receive and decode and reproduce encoded content data in real time, thereby enabling personal broadcasting.
- the video encoding apparatus and the video decoding apparatus of the present invention may be applied to encoding and decoding operations of independent devices included in the content supply system 11000.
- the mobile phone 12500 is not limited in functionality and may be a smart phone that can change or expand a substantial portion of its functions through an application program.
- the mobile phone 12500 includes a built-in antenna 12510 for exchanging RF signals with the wireless base station 12000, and a display screen 12520, such as an LCD (Liquid Crystal Display) or OLED (Organic Light Emitting Diodes) screen, for displaying images captured by the camera 12530 or images received via the antenna 12510 and decoded.
- the mobile phone 12500 includes an operation panel 12540 including a control button and a touch panel. When the display screen 12520 is a touch screen, the operation panel 12540 further includes a touch sensing panel of the display screen 12520.
- the mobile phone 12500 includes a speaker 12580 or another type of sound output unit for outputting voice and sound, and a microphone 12550 or another type of sound input unit for inputting voice and sound.
- the mobile phone 12500 further includes a camera 12530, such as a CCD camera, for capturing video and still images.
- the mobile phone 12500 may also include a storage medium 12570 for storing encoded or decoded data, such as video or still images captured by the camera 12530, received by e-mail, or obtained in another form, and a slot 12560 for mounting the storage medium 12570 in the mobile phone 12500.
- the storage medium 12570 may be an SD card or another type of flash memory, such as an electrically erasable and programmable read only memory (EEPROM) embedded in a plastic case.
- FIG. 24 illustrates an internal structure of the mobile phone 12500.
- the power supply circuit 12700, the operation input controller 12640, the image encoder 12720, the camera interface 12630, the LCD controller 12620, the image decoder 12690, the multiplexer/demultiplexer 12680, the recording/reading unit 12670, the modulation/demodulation unit 12660, and the sound processor 12650 are connected to the central controller 12710 through a synchronization bus 12730.
- the power supply circuit 12700 supplies power to each part of the mobile phone 12500 from a battery pack, so that the mobile phone 12500 may be set to an operating mode.
- the central controller 12710 includes a CPU, a read only memory (ROM), and a random access memory (RAM).
- a digital signal is generated in the mobile phone 12500 under the control of the central controller 12710; for example, a digital sound signal is generated in the sound processor 12650.
- the video encoder 12720 may generate a digital video signal, and text data of the message may be generated through the operation panel 12540 and the operation input controller 12640.
- the modulator/demodulator 12660 modulates a frequency band of the digital signal, and the communication circuit 12610 performs digital-to-analog (D/A) conversion and frequency conversion on the band-modulated digital signal.
- the transmission signal output from the communication circuit 12610 may be transmitted to the voice communication base station or the radio base station 12000 through the antenna 12510.
- the sound signal acquired by the microphone 12550 is converted into a digital sound signal by the sound processor 12650 under the control of the central controller 12710.
- the generated digital sound signal may be converted into a transmission signal through the modulation / demodulation unit 12660 and the communication circuit 12610 and transmitted through the antenna 12510.
- the text data of the message is input using the operation panel 12540, and the text data is transmitted to the central controller 12710 through the operation input controller 12640.
- the text data is converted into a transmission signal through the modulator / demodulator 12660 and the communication circuit 12610, and transmitted to the radio base station 12000 through the antenna 12510.
- the image data captured by the camera 12530 is provided to the image encoder 12720 through the camera interface 12630.
- the image data captured by the camera 12530 may also be directly displayed on the display screen 12520 through the camera interface 12630 and the LCD controller 12620.
- the structure of the image encoder 12720 may correspond to the structure of the video encoding apparatus as described above.
- the image encoder 12720 encodes the image data provided from the camera 12530 according to the video encoding method of the present invention described above, converts the image data into compression-encoded image data, and outputs the encoded image data to the multiplexer/demultiplexer 12680.
- the sound signal obtained by the microphone 12550 of the mobile phone 12500 while the camera 12530 is recording is also converted into digital sound data through the sound processor 12650, and the digital sound data may be delivered to the multiplexer/demultiplexer 12680.
- the multiplexer / demultiplexer 12680 multiplexes the encoded image data provided from the image encoder 12720 together with the acoustic data provided from the sound processor 12650.
- the multiplexed data may be converted into a transmission signal through the modulation / demodulation unit 12660 and the communication circuit 12610 and transmitted through the antenna 12510.
- the signal received through the antenna 12510 is converted into a digital signal through frequency recovery and analog-to-digital (A/D) conversion.
- the modulator / demodulator 12660 demodulates the frequency band of the digital signal.
- the band demodulated digital signal is transmitted to the video decoder 12690, the sound processor 12650, or the LCD controller 12620 according to the type.
- When the mobile phone 12500 is in the call mode, it amplifies a signal received through the antenna 12510 and generates a digital sound signal through frequency conversion and analog-to-digital conversion.
- the received digital sound signal is converted into an analog sound signal through the modulator / demodulator 12660 and the sound processor 12650 under the control of the central controller 12710, and the analog sound signal is output through the speaker 12580. .
- a signal received from the radio base station 12000 via the antenna 12510 is converted into multiplexed data as a result of the processing of the modulator / demodulator 12660.
- the multiplexed data thus output is transmitted to the multiplexer/demultiplexer 12680.
- the multiplexer / demultiplexer 12680 demultiplexes the multiplexed data to separate the encoded video data stream and the encoded audio data stream.
- the encoded video data stream is provided to the video decoder 12690, and the encoded audio data stream is provided to the sound processor 12650.
- the structure of the image decoder 12690 may correspond to the structure of the video decoding apparatus as described above.
- the image decoder 12690 generates reconstructed video data by decoding the encoded video data using the video decoding method of the present invention described above, and provides the reconstructed video data to the display screen 12520 through the LCD controller 12620.
- Accordingly, video data of a video file accessed from a website on the Internet can be displayed on the display screen 12520.
- the sound processor 12650 may convert the audio data into an analog sound signal and provide the analog sound signal to the speaker 12580. Accordingly, audio data contained in a video file accessed from a website on the Internet can also be reproduced through the speaker 12580.
- the mobile phone 12500 or another type of communication terminal may be a transmitting/receiving terminal including both the video encoding apparatus and the video decoding apparatus of the present invention, a transmitting terminal including only the video encoding apparatus of the present invention described above, or a receiving terminal including only the video decoding apparatus of the present invention.
- FIG. 26 illustrates a digital broadcasting system employing a communication system, according to an exemplary embodiment.
- the digital broadcasting system may receive a digital broadcast transmitted through a satellite or terrestrial network using the video encoding apparatus and the video decoding apparatus.
- the broadcast station 12890 transmits the video data stream to the communication satellite or the broadcast satellite 12900 through radio waves.
- the broadcast satellite 12900 transmits a broadcast signal, and the broadcast signal is received by a satellite broadcast receiver via the antenna 12860 in each home.
- the encoded video stream may be decoded and played back by the TV receiver 12810, the set-top box 12870, or another device.
- the playback device 12230 can read and decode the encoded video stream recorded on the storage medium 12020 such as a disk and a memory card.
- the reconstructed video signal may thus be reproduced in the monitor 12840, for example.
- the video decoding apparatus of the present invention may also be mounted in the set-top box 12870 connected to the antenna 12860 for satellite / terrestrial broadcasting or the cable antenna 12850 for cable TV reception. Output data of the set-top box 12870 may also be reproduced by the TV monitor 12880.
- the video decoding apparatus of the present invention may be mounted on the TV receiver 12810 instead of the set top box 12870.
- An automobile 12920 with an appropriate antenna 12910 may receive signals from satellite 12800 or radio base station 11700.
- the decoded video may be played on the display screen of the car navigation system 12930 mounted on the car 12920.
- the video signal may be encoded by the video encoding apparatus of the present invention and recorded and stored in a storage medium.
- the video signal may be stored in the DVD disk 12960 by the DVD recorder, or the video signal may be stored in the hard disk by the hard disk recorder 12950.
- the video signal may be stored in the SD card 12970. If the hard disk recorder 12950 includes the video decoding apparatus of the present invention according to an embodiment, the video signal recorded on the DVD disc 12960, the SD card 12970, or another type of storage medium may be reproduced on the monitor 12880.
- the car navigation system 12930 may not include the camera 12530, the camera interface 12630, and the video encoder 12720 of FIG. 26.
- Likewise, the computer 12100 and the TV receiver 12810 may not include the camera 12530, the camera interface 12630, and the video encoder 12720 of FIG. 26.
- FIG. 26 illustrates a network structure of a cloud computing system using a video encoding apparatus and a video decoding apparatus, according to an embodiment.
- the cloud computing system of the present invention may include a cloud computing server 14100, a user DB 14100, a computing resource 14200, and a user terminal.
- the cloud computing system provides an on demand outsourcing service of computing resources through an information communication network such as the Internet at the request of a user terminal.
- service providers integrate the computing resources of data centers located at physically different locations by using virtualization technology, and provide users with the services they need.
- the service user does not install computing resources such as applications, storage, an operating system, and security in his or her own terminal, but may select and use services in a virtual space created through virtualization technology, at a desired point in time and as much as desired.
- a user terminal of a specific service user accesses the cloud computing server 14100 through an information communication network including the Internet and a mobile communication network.
- the user terminals may be provided with a cloud computing service, particularly a video playback service, from the cloud computing server 14100.
- the user terminal may be any electronic device capable of accessing the Internet, such as a desktop PC 14300, a smart TV 14400, a smartphone 14500, a notebook 14600, a portable multimedia player (PMP) 14700, a tablet PC 14800, and the like. It can be a device.
- the cloud computing server 14100 may integrate and provide a plurality of computing resources 14200 distributed in a cloud network to a user terminal.
- the plurality of computing resources 14200 include various data services and may include data uploaded from a user terminal.
- the cloud computing server 14100 integrates a video database distributed in various places into a virtualization technology to provide a service required by a user terminal.
- the user DB 14100 stores user information of users subscribed to the cloud computing service.
- the user information may include login information and personal credit information such as an address and a name.
- the user information may include an index of the video.
- the index may include a list of videos that have been played, a list of videos being played, and a stop time of the videos being played.
- Information about a video stored in the user DB 14100 may be shared among user devices.
- the playback history of the predetermined video service is stored in the user DB 14100.
- the cloud computing server 14100 searches for and plays a predetermined video service with reference to the user DB 14100.
- When the smartphone 14500 receives the video data stream through the cloud computing server 14100, the operation of decoding the video data stream and playing the video is similar to the operation of the mobile phone 12500 described above with reference to FIG. 24.
- the cloud computing server 14100 may refer to a playback history of a predetermined video service stored in the user DB 14100. For example, the cloud computing server 14100 receives, from a user terminal, a playback request for a video stored in the user DB 14100. If the video was played before, the streaming method of the cloud computing server 14100 may differ depending on whether the video is played from the beginning or from the previous stop point, according to the user terminal's selection. For example, when the user terminal requests playback from the beginning, the cloud computing server 14100 streams the video to the user terminal from its first frame. On the other hand, when the user terminal requests playback to continue from the previous stop point, the cloud computing server 14100 streams the video to the user terminal from the frame at the stop point.
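- A hypothetical sketch of the resume behavior described above follows; the user DB lookup and the types are simplified placeholders, not part of the cloud computing server 14100's actual interface.

```cpp
#include <map>
#include <string>
#include <utility>

// Per-user, per-video playback history kept in the user DB: the frame at which
// playback previously stopped (0 if the video was never paused).
struct PlaybackRecord { int stopFrame = 0; };
using UserDb = std::map<std::pair<std::string, std::string>, PlaybackRecord>;

// Returns the frame from which streaming should start: the first frame when the
// terminal requests playback from the beginning, or the stored stop point when
// the terminal requests to continue from where playback previously stopped.
int startFrameForRequest(const UserDb& db, const std::string& userId,
                         const std::string& videoId, bool resumeRequested) {
    auto it = db.find({userId, videoId});
    if (resumeRequested && it != db.end()) return it->second.stopFrame;
    return 0;
}
```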
- the user terminal may include the video decoding apparatus as described above with reference to FIGS. 1A through 19.
- the user terminal may include the video encoding apparatus as described above with reference to FIGS. 1A through 19.
- the user terminal may include both the video encoding apparatus and the video decoding apparatus as described above with reference to FIGS. 1A through 19.
- FIGS. 20A to 26B illustrate embodiments in which the video encoding method, the video decoding method, the video encoding apparatus, and the video decoding apparatus described above with reference to FIGS. 1A through 19 are utilized.
- However, embodiments in which the video encoding method and the video decoding method described above with reference to FIGS. 1A through 19 are stored in a storage medium, or in which the video encoding apparatus and the video decoding apparatus are implemented in a device, are not limited to the embodiments illustrated in FIGS. 20 to 26.
- the methods, processes, devices, products and / or systems according to the present invention are simple, cost effective, and not complicated and are very versatile and accurate.
- efficient and economical manufacturing, application and utilization can be realized while being readily available.
- Another important aspect of the present invention is that it is in line with current trends that call for cost reduction, system simplification and increased performance. Useful aspects found in such embodiments of the present invention may consequently increase the level of current technology.
Abstract
Description
Claims (10)
- An interlayer video decoding method comprising: obtaining prediction mode information of a current block constituting a depth image from a bitstream; generating a prediction block of the current block based on the obtained prediction mode information; and decoding the depth image by using the prediction block, wherein the obtaining of the prediction mode information of the current block from the bitstream comprises: receiving a first flag indicating whether predicting the current block by splitting it into two or more partitions according to a pattern is allowed, a second flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is allowed for the depth image, and a third flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is allowed for the depth image; and receiving, from the bitstream, a fourth flag indicating information about a type of method of splitting the current block into two or more partitions according to a pattern, only when a predetermined condition determined based on the first flag, the second flag, and the third flag is satisfied.
- The interlayer video decoding method of claim 1, wherein the second flag further indicates whether predicting the blocks constituting the depth image by using an intra SDC (Simplified Depth Coding) mode is allowed for the depth image.
- The interlayer video decoding method of claim 1, wherein the fourth flag specifies one of a method of predicting the current block by splitting it into two or more partitions by using a wedgelet and a method of predicting the current block by splitting it into two or more partitions by using a contour.
- The interlayer video decoding method of claim 1, wherein, when the depth image is not allowed to refer to texture images corresponding to the respective blocks constituting the depth image, the third flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is not allowed.
- The interlayer video decoding method of claim 1, wherein the obtaining of the prediction mode information of the depth image comprises determining that the predetermined condition is satisfied only when the first flag indicates that the method of predicting the current block by splitting it into two or more partitions according to a pattern is allowed, the second flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is allowed, and the third flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is allowed.
- The interlayer video decoding method of claim 1, wherein the obtaining of the prediction mode information of the depth image comprises: determining to predict the current block by splitting it into two or more partitions by using a wedgelet when the predetermined condition is not satisfied, the second flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is allowed, and the third flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is not allowed; and determining to predict the current block by splitting it into two or more partitions by using a contour when the predetermined condition is not satisfied, the second flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is not allowed, and the third flag indicates that the method of predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is allowed.
- An interlayer video encoding method comprising: determining a prediction mode of a current block constituting a depth image; generating a prediction block of the current block by using the determined prediction mode; and generating a bitstream by encoding the depth image using the prediction block, wherein the determining of the prediction mode of the current block comprises: generating a first flag indicating whether predicting the current block by splitting it into two or more partitions according to a pattern is allowed, a second flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is allowed for the depth image, and a third flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is allowed for the depth image; and generating a fourth flag indicating information about a type of method of splitting the current block into two or more partitions according to a pattern, only when a predetermined condition determined based on the first flag, the second flag, and the third flag is satisfied.
- An interlayer video decoding apparatus comprising: a prediction mode determiner configured to obtain prediction mode information of a current block constituting a depth image from a bitstream; a prediction block generator configured to generate a prediction block of the current block based on the obtained prediction mode information; and a decoder configured to decode the depth image by using the prediction block, wherein the prediction mode determiner receives a first flag indicating whether predicting the current block by splitting it into two or more partitions according to a pattern is allowed, a second flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is allowed for the depth image, and a third flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is allowed for the depth image, and receives, from the bitstream, a fourth flag indicating information about a type of method of splitting the current block into two or more partitions according to a pattern, only when a predetermined condition determined based on the received first flag, second flag, and third flag is satisfied.
- An interlayer video encoding apparatus comprising: a prediction mode determiner configured to determine a prediction mode of a current block constituting a depth image; a prediction block generator configured to generate a prediction block of the current block by using the determined prediction mode; and an encoder configured to generate a bitstream by encoding the depth image using the prediction block, wherein the prediction mode determiner generates a first flag indicating whether predicting the current block by splitting it into two or more partitions according to a pattern is allowed, a second flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a wedgelet boundary is allowed for the depth image, and a third flag indicating whether predicting the blocks constituting the depth image by splitting them into two or more partitions along a contour boundary is allowed for the depth image, and generates a fourth flag indicating information about a type of method of splitting the current block into two or more partitions according to a pattern, only when a predetermined condition determined based on the first flag, the second flag, and the third flag is satisfied.
- A computer-readable recording medium having recorded thereon a program for causing a computer to execute the method of any one of claims 1 to 7.
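- For illustration only, the flag handling recited in claims 1, 5, and 6 can be sketched as follows; every identifier is a hypothetical placeholder, and the sketch is not the normative bitstream syntax. The mapping of the fourth flag's value to a particular partitioning method is an assumption made for the sketch.

```cpp
// Partitioning methods with which a block of the depth image may be predicted.
enum class DepthPartitionMethod { NONE, WEDGELET, CONTOUR };

struct DepthFlags {
    bool partitionPredictionAllowed; // first flag: pattern-based partitioning allowed
    bool wedgeletAllowed;            // second flag: wedgelet-boundary partitioning allowed
    bool contourAllowed;             // third flag: contour-boundary partitioning allowed
};

// Placeholder for reading the fourth flag from the bitstream.
bool readFourthFlag() { return false; }

DepthPartitionMethod decidePartitionMethod(const DepthFlags& f) {
    // Predetermined condition of claim 5: all three flags allow the respective methods.
    bool conditionSatisfied =
        f.partitionPredictionAllowed && f.wedgeletAllowed && f.contourAllowed;
    if (conditionSatisfied) {
        // Claim 1: only now is the fourth flag received; it selects the partitioning type
        // (value-to-method mapping assumed for this sketch).
        return readFourthFlag() ? DepthPartitionMethod::CONTOUR
                                : DepthPartitionMethod::WEDGELET;
    }
    // Claim 6: when the condition is not satisfied, the method is inferred from
    // the second and third flags without receiving the fourth flag.
    if (f.wedgeletAllowed && !f.contourAllowed) return DepthPartitionMethod::WEDGELET;
    if (!f.wedgeletAllowed && f.contourAllowed) return DepthPartitionMethod::CONTOUR;
    return DepthPartitionMethod::NONE;
}
```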
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020167035686A KR20170023000A (ko) | 2014-06-20 | 2015-06-22 | 인터 레이어 비디오 부복호화를 위한 깊이 영상의 예측 모드 전송 방법 및 장치 |
US15/320,538 US10368098B2 (en) | 2014-06-20 | 2015-06-22 | Method and device for transmitting prediction mode of depth image for interlayer video encoding and decoding |
JP2016573764A JP2017523682A (ja) | 2014-06-20 | 2015-06-22 | インターレイヤビデオ符号化/復号のためのデプス映像の予測モード伝送方法及びその装置 |
CN201580033274.3A CN106464908A (zh) | 2014-06-20 | 2015-06-22 | 用于传输深度图像的预测模式以供层间视频编码和解码的方法和装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462014811P | 2014-06-20 | 2014-06-20 | |
US62/014,811 | 2014-06-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015194915A1 true WO2015194915A1 (ko) | 2015-12-23 |
Family
ID=54935817
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/006283 WO2015194915A1 (ko) | 2014-06-20 | 2015-06-22 | 인터 레이어 비디오 부복호화를 위한 깊이 영상의 예측 모드 전송 방법 및 장치 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10368098B2 (ko) |
JP (1) | JP2017523682A (ko) |
KR (1) | KR20170023000A (ko) |
CN (1) | CN106464908A (ko) |
WO (1) | WO2015194915A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107071478B (zh) * | 2017-03-30 | 2019-08-20 | 成都图必优科技有限公司 | 基于双抛物线分区模板的深度图编码方法 |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1993012135A1 (en) | 1991-12-12 | 1993-06-24 | Gilead Sciences, Inc. | Nuclease stable and binding competent oligomers and methods for their use |
WO2016056550A1 (ja) * | 2014-10-08 | 2016-04-14 | シャープ株式会社 | 画像復号装置 |
JP2018506106A (ja) * | 2014-12-22 | 2018-03-01 | トムソン ライセンシングThomson Licensing | 再帰的階層処理を使用して外挿画像を生成するための装置および方法 |
JP2018050091A (ja) * | 2015-02-02 | 2018-03-29 | シャープ株式会社 | 画像復号装置、画像符号化装置および予測ベクトル導出装置 |
US11463689B2 (en) | 2015-06-18 | 2022-10-04 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US10142627B2 (en) | 2015-06-18 | 2018-11-27 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US10841593B2 (en) | 2015-06-18 | 2020-11-17 | Qualcomm Incorporated | Intra prediction and intra mode coding |
CN113473122A (zh) * | 2016-07-05 | 2021-10-01 | 株式会社Kt | 对视频进行解码或编码的方法和计算机可读介质 |
RU2722495C1 (ru) * | 2017-04-11 | 2020-06-01 | Долби Лэборетериз Лайсенсинг Корпорейшн | Восприятия многослойных дополненных развлечений |
CN108234987A (zh) * | 2018-01-23 | 2018-06-29 | 西南石油大学 | 一种用于深度图像边界拟合的双抛物线分区模板优化方法 |
US11277644B2 (en) | 2018-07-02 | 2022-03-15 | Qualcomm Incorporated | Combining mode dependent intra smoothing (MDIS) with intra interpolation filter switching |
US11303885B2 (en) | 2018-10-25 | 2022-04-12 | Qualcomm Incorporated | Wide-angle intra prediction smoothing and interpolation |
RU2767513C1 (ru) | 2018-12-28 | 2022-03-17 | Телефонактиеболагет Лм Эрикссон (Пабл) | Способ и оборудование для проведения выбора преобразования в кодере и декодере |
CN116260979A (zh) * | 2019-02-15 | 2023-06-13 | 华为技术有限公司 | 从帧内子划分译码模式工具限制子分区的尺寸的编码器、解码器、及对应方法 |
CN114402593A (zh) * | 2019-06-24 | 2022-04-26 | 交互数字Vc控股法国有限公司 | 用于视频编码和解码的帧内预测 |
EP4052469A4 (en) * | 2019-12-03 | 2023-01-25 | Huawei Technologies Co., Ltd. | METHOD, DEVICE, CODING SYSTEM WITH FUSION MODE |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120068743A (ko) * | 2010-12-17 | 2012-06-27 | 한국전자통신연구원 | 인터 예측 방법 및 그 장치 |
WO2013042884A1 (ko) * | 2011-09-19 | 2013-03-28 | 엘지전자 주식회사 | 영상 부호화/복호화 방법 및 그 장치 |
KR20130047650A (ko) * | 2011-10-28 | 2013-05-08 | 삼성전자주식회사 | 비디오의 인트라 예측 방법 및 장치 |
KR20130079261A (ko) * | 2011-12-30 | 2013-07-10 | (주)휴맥스 | 3차원 영상 부호화 방법 및 장치, 및 복호화 방법 및 장치 |
KR20140043243A (ko) * | 2011-07-22 | 2014-04-08 | 퀄컴 인코포레이티드 | 심도 범위 변동을 갖는 모션 심도 맵들의 코딩 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ZA200805337B (en) | 2006-01-09 | 2009-11-25 | Thomson Licensing | Method and apparatus for providing reduced resolution update mode for multiview video coding |
ES2625902T3 (es) | 2006-01-09 | 2017-07-20 | Dolby International Ab | Procedimientos y aparatos para la compensación de iluminación y color en la codificación de vídeo de múltiples vistas |
KR102185765B1 (ko) * | 2010-08-11 | 2020-12-03 | 지이 비디오 컴프레션, 엘엘씨 | 멀티-뷰 신호 코덱 |
CN101945288B (zh) * | 2010-10-19 | 2011-12-21 | 浙江理工大学 | 一种基于h.264压缩域图像深度图生成方法 |
CN102857763B (zh) | 2011-06-30 | 2016-02-17 | 华为技术有限公司 | 一种基于帧内预测的解码方法和解码装置 |
KR102468287B1 (ko) * | 2011-11-11 | 2022-11-18 | 지이 비디오 컴프레션, 엘엘씨 | 분할 코딩을 이용한 효과적인 예측 |
CN109257596B (zh) * | 2011-11-11 | 2023-06-13 | Ge视频压缩有限责任公司 | 解码器、编码器及重构、编码、解码、传送和处理方法 |
EP2777284B1 (en) | 2011-11-11 | 2018-09-05 | GE Video Compression, LLC | Effective wedgelet partition coding using spatial prediction |
CN103067716B (zh) | 2013-01-10 | 2016-06-29 | 华为技术有限公司 | 深度图像的编解码方法和编解码装置 |
CN103237214B (zh) * | 2013-04-12 | 2016-06-08 | 华为技术有限公司 | 深度图像的编解码方法和编解码装置 |
US10404999B2 (en) * | 2013-09-27 | 2019-09-03 | Qualcomm Incorporated | Residual coding for depth intra prediction modes |
US9756359B2 (en) * | 2013-12-16 | 2017-09-05 | Qualcomm Incorporated | Large blocks and depth modeling modes (DMM'S) in 3D video coding |
CN104010196B (zh) * | 2014-03-14 | 2017-02-15 | 北方工业大学 | 基于hevc的3d质量可伸缩视频编码 |
-
2015
- 2015-06-22 JP JP2016573764A patent/JP2017523682A/ja active Pending
- 2015-06-22 US US15/320,538 patent/US10368098B2/en active Active
- 2015-06-22 WO PCT/KR2015/006283 patent/WO2015194915A1/ko active Application Filing
- 2015-06-22 CN CN201580033274.3A patent/CN106464908A/zh not_active Withdrawn
- 2015-06-22 KR KR1020167035686A patent/KR20170023000A/ko unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120068743A (ko) * | 2010-12-17 | 2012-06-27 | 한국전자통신연구원 | 인터 예측 방법 및 그 장치 |
KR20140043243A (ko) * | 2011-07-22 | 2014-04-08 | 퀄컴 인코포레이티드 | 심도 범위 변동을 갖는 모션 심도 맵들의 코딩 |
WO2013042884A1 (ko) * | 2011-09-19 | 2013-03-28 | 엘지전자 주식회사 | 영상 부호화/복호화 방법 및 그 장치 |
KR20130047650A (ko) * | 2011-10-28 | 2013-05-08 | 삼성전자주식회사 | 비디오의 인트라 예측 방법 및 장치 |
KR20130079261A (ko) * | 2011-12-30 | 2013-07-10 | (주)휴맥스 | 3차원 영상 부호화 방법 및 장치, 및 복호화 방법 및 장치 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107071478B (zh) * | 2017-03-30 | 2019-08-20 | 成都图必优科技有限公司 | 基于双抛物线分区模板的深度图编码方法 |
Also Published As
Publication number | Publication date |
---|---|
US20170251224A1 (en) | 2017-08-31 |
JP2017523682A (ja) | 2017-08-17 |
KR20170023000A (ko) | 2017-03-02 |
US10368098B2 (en) | 2019-07-30 |
CN106464908A (zh) | 2017-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015194915A1 (ko) | 인터 레이어 비디오 부복호화를 위한 깊이 영상의 예측 모드 전송 방법 및 장치 | |
WO2015137783A1 (ko) | 인터 레이어 비디오의 복호화 및 부호화를 위한 머지 후보 리스트 구성 방법 및 장치 | |
WO2014163461A1 (ko) | 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 | |
WO2014163467A1 (ko) | 랜덤 엑세스를 위한 멀티 레이어 비디오 부호화 방법 및 그 장치, 랜덤 엑세스를 위한 멀티 레이어 비디오 복호화 방법 및 그 장치 | |
WO2013162311A1 (ko) | 다시점 비디오 예측을 위한 참조픽처세트를 이용하는 다시점 비디오 부호화 방법 및 그 장치, 다시점 비디오 예측을 위한 참조픽처세트를 이용하는 다시점 비디오 복호화 방법 및 그 장치 | |
WO2014109594A1 (ko) | 휘도차를 보상하기 위한 인터 레이어 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 | |
WO2015102441A1 (ko) | 효율적인 파라미터 전달을 사용하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 | |
WO2014163460A1 (ko) | 계층 식별자 확장에 따른 비디오 스트림 부호화 방법 및 그 장치, 계층 식별자 확장에 따른 따른 비디오 스트림 복호화 방법 및 그 장치 | |
WO2014163458A1 (ko) | 인터 레이어 복호화 및 부호화 방법 및 장치를 위한 인터 예측 후보 결정 방법 | |
WO2015009113A1 (ko) | 인터 레이어 비디오 복호화 및 부호화 장치 및 방법을 위한 깊이 영상의 화면내 예측 방법 | |
WO2014175647A1 (ko) | 시점 합성 예측을 이용한 다시점 비디오 부호화 방법 및 그 장치, 다시점 비디오 복호화 방법 및 그 장치 | |
WO2015053597A1 (ko) | 멀티 레이어 비디오 부호화 방법 및 장치, 멀티 레이어 비디오 복호화 방법 및 장치 | |
WO2015012622A1 (ko) | 움직임 벡터 결정 방법 및 그 장치 | |
WO2013022281A2 (ko) | 다시점 비디오 예측 부호화 방법 및 그 장치, 다시점 비디오 예측 복호화 방법 및 그 장치 | |
WO2015056945A1 (ko) | 깊이 인트라 부호화 방법 및 그 장치, 복호화 방법 및 그 장치 | |
WO2013162251A1 (ko) | 다시점 비디오 예측을 위한 참조리스트를 이용하는 다시점 비디오 부호화 방법 및 그 장치, 다시점 비디오 예측을 위한 참조리스트를 이용하는 다시점 비디오 복호화 방법 및 그 장치 | |
WO2014171769A1 (ko) | 시점 합성 예측을 이용한 다시점 비디오 부호화 방법 및 그 장치, 다시점 비디오 복호화 방법 및 그 장치 | |
WO2014129872A1 (ko) | 메모리 대역폭 및 연산량을 고려한 스케일러블 비디오 부호화 장치 및 방법, 스케일러블 비디오 복호화 장치 및 방법 | |
WO2015005749A1 (ko) | 인터 레이어 비디오 복호화 및 부호화 장치 및 방법을 위한 블록 기반 디스패리티 벡터 예측 방법 | |
WO2014163465A1 (ko) | 깊이맵 부호화 방법 및 그 장치, 복호화 방법 및 그 장치 | |
WO2015093920A1 (ko) | 휘도 보상을 이용한 인터 레이어 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 | |
WO2015137736A1 (ko) | 인터 레이어 비디오 부복호화를 위한 깊이 영상의 예측 모드 전송 방법 및 장치 | |
WO2015053593A1 (ko) | 부가 영상을 부호화하기 위한 스케일러블 비디오 부호화 방법 및 장치, 부가 영상을 복호화하기 위한 스케일러블 비디오 복호화 방법 및 장치 | |
WO2015009041A1 (ko) | 적응적 휘도 보상을 위한 인터 레이어 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 | |
WO2015102439A1 (ko) | 멀티 레이어 비디오의 복호화 및 부호화를 위한 버퍼 관리 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15809154 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016573764 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20167035686 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15320538 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15809154 Country of ref document: EP Kind code of ref document: A1 |