WO2015142075A1 - Method for performing filtering at a partition boundary of a block related to a 3D image - Google Patents
Method for performing filtering at a partition boundary of a block related to a 3D image
- Publication number
- WO2015142075A1 WO2015142075A1 PCT/KR2015/002675 KR2015002675W WO2015142075A1 WO 2015142075 A1 WO2015142075 A1 WO 2015142075A1 KR 2015002675 W KR2015002675 W KR 2015002675W WO 2015142075 A1 WO2015142075 A1 WO 2015142075A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- filtering
- image
- unit
- depth
- block
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H04N19/21—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with binary alpha-plane coding for video objects, e.g. context-based arithmetic encoding [CAE]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- the present invention relates to encoding and decoding of video data including performing filtering around partition boundaries of a block.
- the video content is encoded by transforming and quantizing the residual signal, which is obtained by subtracting the prediction signal from the original signal.
- the video content can be played back by decoding the encoded video content.
- a partition of a block corresponding to the original signal, the prediction signal, and the residual signal may be determined.
- since the residual signal, which corresponds to the difference between the original signal and the prediction signal, is transformed, quantized, and transmitted as a bitstream in the encoding process of the video content, if the difference in the original signal or the prediction signal between adjacent partitions is large, the difference in the residual signals corresponding to the adjacent partitions can also be large.
- in this case, degradation of the quality of the video content to be reproduced, such as blocking artifacts, may occur. Therefore, it is necessary to alleviate the abrupt change of the residual signal at the partition boundary.
- One embodiment can provide a method and apparatus for obtaining a residual signal having a small difference between partitions through a filtering process at a partition boundary in a process of encoding or decoding video content.
- a method of decoding an image including: obtaining a residual signal of a block of an image from a bitstream; Determining at least one partition of the block of the image based on a maskmap, which is information on partitioning of the block; Performing filtering at the boundary of the at least one partition; And generating a reconstruction signal based on a result of the filtering, wherein the filtering may be performed using at least one of the residual signal and a prediction signal corresponding to the residual signal.
- a boundary having a large difference between samples of a block may be detected flexibly and accurately.
- errors such as blocking artifacts occurring at the boundary of the image to be decoded or encoded can be reduced, and the image can be smoothly reproduced.
- FIG. 1 illustrates a multiview video system according to an embodiment.
- FIG. 2 is a diagram illustrating texture images and depth images configuring a multiview video.
- 3A is a block diagram of an image decoding apparatus 30 according to an embodiment.
- 3B is a flowchart illustrating a process of performing an image decoding method, according to an embodiment.
- FIG. 4A is a block diagram of an image encoding apparatus 40 according to an embodiment.
- 4B is a flowchart illustrating a process of performing an image encoding method, according to an embodiment.
- 5A illustrates a process of using mask map information in dividing a block of a color image into at least one partition.
- 5B is a flowchart of a process of determining a mask map for a partition shape of a block of a depth image, according to an exemplary embodiment.
- FIG. 6 illustrates a process of performing filtering in a plurality of directions on one sample.
- FIG. 7 is a block diagram of a video encoding apparatus based on coding units having a tree structure, according to an embodiment.
- FIG. 8 is a block diagram of a video decoding apparatus based on coding units according to a tree structure, according to an embodiment.
- FIG 9 illustrates a concept of coding units, according to an embodiment.
- FIG. 10 is a block diagram of an image encoder based on coding units, according to an embodiment.
- FIG. 11 is a block diagram of an image decoder based on coding units, according to an embodiment.
- FIG. 12 is a diagram of deeper coding units according to depths, and partitions, according to an embodiment.
- FIG. 13 illustrates a relationship between a coding unit and transformation units, according to an embodiment.
- FIG. 14 is a diagram of deeper encoding information, according to an embodiment.
- FIG. 15 is a diagram of deeper coding units according to depths, according to an exemplary embodiment.
- 16, 17, and 18 illustrate a relationship between a coding unit, a prediction unit, and a transformation unit, according to an embodiment.
- FIG. 19 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit according to encoding mode information of Table 1.
- FIG. 20 illustrates a physical structure of a disk in which a program is stored, according to an embodiment.
- 21 shows a disc drive for recording and reading a program by using the disc.
- FIG. 22 illustrates the overall structure of a content supply system for providing a content distribution service.
- 23 and 24 illustrate an external structure and an internal structure of a mobile phone to which a video encoding method and a video decoding method according to an embodiment are applied.
- 25 illustrates a digital broadcasting system employing a communication system, according to an embodiment.
- FIG. 26 illustrates a network structure of a cloud computing system using a video encoding apparatus and a video decoding apparatus, according to an embodiment.
- An image decoding method includes: obtaining a residual signal of a block of an image from a bitstream; Determining at least one partition of the block of the image based on a maskmap, which is information on partitioning of the block; Performing filtering at the boundary of the at least one partition; And generating a reconstruction signal based on a result of the filtering, wherein the filtering may be performed using at least one of the residual signal and a prediction signal corresponding to the residual signal.
- the mask map corresponding to the image may be related to a partition form of a block of a depth image or of a block of a color image.
- the mask map may be divided, based on an average value of a plurality of samples of the block, into a region where the values of the samples are greater than or equal to the average value and a region where they are not.
- the performing of the filtering may include performing filtering based on a filtering coefficient, and the filtering coefficient may be adaptive according to an image characteristic.
- the filtering may include performing horizontal filtering when the block is divided into a plurality of partitions whose boundary has a vertical direction, and performing vertical filtering when the block is divided into a plurality of partitions whose boundary has a horizontal direction.
- the filtering may include, when the block is divided into a plurality of partitions whose boundary has both a vertical direction and a horizontal direction, performing both horizontal filtering and vertical filtering on a sample adjacent to the vertical boundary and the horizontal boundary.
- the filtering may include: detecting a plurality of samples on a mask map corresponding to a position of a sample of the image to be filtered; Comparing the detected plurality of samples with each other; And when it is determined that the detected plurality of samples are different from each other, performing filtering on the samples of the image corresponding to the positions of the detected plurality of samples.
- the image decoding method may further include scaling to change the size of the mask map to be the same as the block when the size of the mask map is different from the size of the block.
- an image encoding method may include: obtaining an original signal of a block of an image; Determining at least one partition of the block of the image based on a mask map which is information about the division of the block; Performing filtering at the boundary of the at least one partition; And generating a filtering residual signal based on a result of the filtering, wherein the filtering may be performed using at least one of the original signal, the prediction signal corresponding to the original signal, and the residual signal related to the original signal and the prediction signal.
- the mask map may be obtained based on a partition form of a block of a depth image or a block of a color image.
- the mask map may be divided, based on an average value of a plurality of samples of the block, into a region where the values of the samples are larger than the average value and a region where they are not.
- the performing of the filtering may include performing filtering based on a filtering coefficient, and the filtering coefficient may be adaptive according to an image characteristic.
- the filtering may include performing horizontal filtering when the block is divided into a plurality of partitions whose boundary has a vertical direction, and performing vertical filtering when the block is divided into a plurality of partitions whose boundary has a horizontal direction.
- the filtering may include, when the block is divided into a plurality of partitions whose boundary has both a vertical direction and a horizontal direction, performing both horizontal filtering and vertical filtering on a sample adjacent to the vertical boundary and the horizontal boundary.
- the filtering may include: detecting a plurality of samples on a mask map corresponding to a position of a sample of the image to be filtered; Comparing the detected plurality of samples with each other; And if it is determined that the detected plurality of samples are different from each other, performing filtering on the samples of the image corresponding to the positions of the detected plurality of samples.
- the image encoding method may further include scaling to change the size of the mask map to be the same as the block when the size of the mask map is different from the size of the block.
- an image decoding apparatus includes: a residual signal obtaining unit obtaining a residual signal associated with a block constituting the image from a bitstream; A partition determination unit determining at least one partition of the block of the image; A filtering unit which performs filtering at a boundary of the at least one partition; And a decoder configured to generate a reconstruction signal based on at least one of the prediction signal and the residual signal on which the filtering is performed.
- an image encoding apparatus may include: a partition determiner configured to determine at least one partition of a block of the image; A filtering unit which performs filtering at a boundary of the at least one partition; And an encoder configured to generate a filtering residual signal based on at least one of the original signal, the prediction signal, and the residual signal from which the filtering is performed.
- a computer readable recording medium having stored thereon a program for implementing an image decoding method according to another embodiment may be provided.
- a computer readable recording medium having stored thereon a program for implementing an image encoding method may be provided.
- a depth image decoding technique and a depth image encoding technique are proposed according to various embodiments. With reference to FIGS. 7 to 19, a video encoding technique and a video decoding technique based on coding units having a tree structure according to various embodiments, which are applicable to the above-described depth image decoding and encoding techniques, are disclosed. With reference to FIGS. 20 to 26, various embodiments to which the proposed video encoding method and video decoding method may be applied are disclosed.
- the "picture” may be a still picture of the video or a moving picture, that is, the video itself.
- a sample means data allocated to a sampling position of an image, that is, the data to be processed.
- the pixels in the spatial domain image may be samples.
- a “layer image” refers to images of a specific viewpoint or the same type.
- one layer image represents color images or depth images input at a specific viewpoint.
- FIG. 1 illustrates a multiview video system according to an embodiment.
- the multiview video system 10 includes a multiview video encoding apparatus 12 that generates a bitstream by encoding a multiview video image acquired through two or more multiview cameras 11, a depth image of the multiview image acquired through a depth camera 14, and camera parameter information associated with the multiview cameras 11, and a multiview video decoding apparatus 13 that decodes the bitstream and provides the decoded multiview video frames in various forms according to the request of a viewer.
- the multi-view camera 11 is configured by combining a plurality of cameras having different viewpoints and provides a multi-view video image every frame.
- a color image acquired for each viewpoint according to a predetermined color format such as YUV, YCbCr, or the like, may be referred to as a texture image.
- the depth camera 14 provides a depth image representing depth information of a scene as an 8-bit image in 256 steps.
- the number of bits for representing one pixel of the depth image may be changed rather than eight bits.
- the depth camera 14 may provide a depth image having a value proportional to or inversely proportional to the distance by measuring the distance from the camera to the subject and the background using an infrared light.
- the image of one viewpoint includes a texture image and a depth image.
- the multiview video decoding apparatus 13 uses the multiview texture image and the depth image provided in the bitstream.
- the bitstream of the multi-view video data may include information indicating whether a data packet also includes information about the depth image, and information indicating the image type, that is, whether each data packet is for a texture image or a depth image.
- the multiview video decoding apparatus 13 decodes the multiview video using the received depth image when the depth image is used to restore the multiview video; if the receiving-side hardware does not support decoding of the multiview video and thus the depth image cannot be utilized, the received data packets associated with the depth image may be discarded. As described above, when the multi-view video decoding apparatus 13 cannot display the multi-view image on the receiving side, an image of any one viewpoint among the multi-view images may be displayed as a 2D image.
- since the amount of multi-view video data to be encoded increases in proportion to the number of viewpoints, and a depth image for realizing a three-dimensional effect must also be encoded, a very large amount of multi-view video data needs to be compressed efficiently to implement a multi-view video system as shown in FIG. 1.
- FIG. 2 is a diagram illustrating texture images and depth images configuring a multiview video.
- FIG. 2 illustrates a texture picture v0 (21) at a first view (view 0) and a corresponding depth image picture d0 (24), a texture picture v1 (22) at a second view (view 1) and a corresponding depth image picture d1 (25), and a texture picture v2 (23) at a third view (view 2) and a corresponding depth image picture d2 (26).
- the multi-view texture pictures v0, v1, and v2 (21, 22, 23) and the corresponding depth image pictures d0, d1, and d2 (24, 25, 26) are all acquired at the same time and thus are pictures with the same POC (picture order count).
- for example, a picture group 1500 composed of the multi-view texture pictures v0, v1, and v2 (21, 22, 23) and the corresponding depth image pictures d0, d1, and d2 (24, 25, 26) having a POC value of n may be referred to as an nth picture group.
- Picture groups having the same POC may constitute one access unit.
- the coding order of the access units is not necessarily the same as the capture order (acquisition order) or display order of the image, and the coding order of the access units may be different from the capture order or the display order in consideration of a reference relationship.
- a view identifier ViewId which is a view order index may be used.
- the texture image and the depth image of the same view have the same view identifier.
- the view identifier may be used to determine the encoding order.
- the multi-view video encoding apparatus 12 may encode a multi-view video in order of the values of the viewpoint identifiers from the smallest to the largest. That is, the multi-view video encoding apparatus 12 may encode a texture image and a depth image having a ViewId of 0 and then encode a texture image and a depth image having a ViewId of 1.
- the multiview video decoding apparatus 13 may identify whether an error occurs in the received data using the view identifier in an environment where an error is likely to occur.
- the order of encoding / decoding of each view image may be changed without depending on the size order of the view identifiers.
- the image decoding apparatus 30 may include a residual signal acquirer 32, a partition determiner 34, a filter 36, and a decoder 38.
- FIG. 3B is a flowchart illustrating a process of performing an image decoding method, according to an embodiment.
- the image decoding method may be performed by the image decoding apparatus 30 of FIG. 3A.
- an image decoding method performed by the image decoding apparatus 30 will be described.
- the image decoding apparatus 30 may determine at least one partition of a block of an image based on a mask map which is information about partitioning of the block.
- the image decoding apparatus 30 may determine at least one partition of the block associated with the boundary on which the filtering is to be performed, based on the boundary of at least one partition of the mask map.
- the residual signal acquisition unit 32 may obtain a residual signal related to the image to be decoded from the bitstream.
- the residual signal may be a signal from which filtering is performed in the encoding process, or may be a signal from which filtering is not performed.
- the partition determiner 34 may determine at least one partition for dividing a block of an image.
- a block of an image may be a coding unit.
- one coding unit may be divided into at least one partition.
- in the following description, it is assumed that a block is a coding unit for convenience of description.
- the partition determiner 34 may determine at least one partition for dividing a block of an image based on a maskmap.
- the mask map may be information about a layer image different from the image to be decoded.
- when decoding a color image, the mask map may be information about a depth image related to the corresponding color image.
- when decoding a depth image, the mask map may be information about a color image related to the depth image.
- the mask map required to decode the color image may relate to the partition type of the block of the depth image.
- the depth image may be a depth image that corresponds to the same access unit as the color image to be decoded and has the same POC (Picture Order Count).
- the partition determiner 34 may determine a form of dividing the block of the color image by using mask map information for dividing the block of the depth image.
- the partition determiner 34 may split the block 51 of the color image into two partitions by referring to the mask map 52, which includes information about the boundary 53.
- FIG. 5B is a flowchart of a process of determining a mask map for a partition shape of a block of a depth image, according to an exemplary embodiment.
- the block 56 of the depth image at a position corresponding to the position of the block of the color image may be used.
- the block 56 of the depth image may be divided into samples R1, whose values are larger than the mean m of the values of the four samples 55a, 55b, 55c, and 55d at the vertices of the block 56, and samples R2, whose values are not larger than m.
- the boundary between the two samples 56a and 56b may be determined as a boundary for dividing the mask map 57. Based on this comparison, the mask map 57 may be divided into partitions separated by a boundary 58 between the samples having the value R1 and the samples having the value R2.
- the description is not limited to the use of a mask map including two partitions divided based on one average value m, and the number of partitions of the mask map may be three or more. That is, the number of reference values for dividing the mask map may be two or more, and accordingly, the mask map may be divided into three or more partitions.
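- as a non-normative illustration of the procedure of FIG. 5B, the following Python sketch derives a two-region mask map from a depth block by comparing each sample with the mean of the four corner samples; the function name and the use of 0/1 labels in place of R2/R1 are assumptions made for the example only.

```python
import numpy as np

def derive_mask_map(depth_block: np.ndarray) -> np.ndarray:
    """Hedged sketch: split a depth block into two regions using the mean m of
    the four corner samples, as described for FIG. 5B."""
    h, w = depth_block.shape
    corner_sum = (int(depth_block[0, 0]) + int(depth_block[0, w - 1]) +
                  int(depth_block[h - 1, 0]) + int(depth_block[h - 1, w - 1]))
    m = corner_sum / 4.0
    # 1 marks samples larger than the mean (region R1), 0 marks the rest (region R2).
    # The boundary between 0s and 1s plays the role of the boundary 58 of the mask map 57.
    return (depth_block > m).astype(np.uint8)
```

- in this sketch a single reference value (the corner mean) is used; as noted above, two or more reference values could equally be used to produce three or more partitions.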
- the partition determiner 34 may determine the partition boundaries of the block of the color image to be decoded by using the mask map 57, which is associated with the partition information of the block 56 of the depth image determined through the above process. Referring to FIG. 5A again, at least one partition of the block 54 of the color image may be determined using the mask map determined through the process of FIG. 5B, and filtering is performed according to the boundary 55 that divides the at least one partition.
- the filtering unit 36 may filter the samples of the boundary 55 of at least one partition that divides the block 54 of the color image based on the mask map.
- for example, filtering is performed on the sample 54b. Since the sample 54a has the value R1 and the sample 54c has the value R2 on the mask map, the two samples have different values, and therefore filtering is performed on the sample 54b of the block 54 of the color image.
- the filtering unit 36 may determine the filtering direction according to the shape of the boundary 55.
- for example, the filtering unit 36 performs vertical filtering on the samples 54b and 54c when the boundary 55 dividing the block 54 of the color image is horizontal. As another example, the filtering unit 36 performs horizontal filtering on the samples 54f and 54g when the boundary 55 dividing the block 54 of the color image is vertical.
- the filtering unit 36 may apply filtering to the samples of the boundary 55 of the at least one partition by using the filtering coefficients.
- the block to be filtered by the filtering unit 36 may include a residual signal obtained from the bitstream or a prediction signal corresponding to the residual signal.
- the filtering unit 36 may perform filtering on at least one of the residual signal or the prediction signal corresponding to the residual signal.
- the prediction signal may be a signal on which prediction related to the residual signal is performed.
- the residual signal and the prediction signal may be used to generate a reconstruction signal related to the residual signal. Further, filtering may be performed on the reconstructed signal generated using the residual signal and the prediction signal.
- the sizes of the filtering coefficients a, b, and c and the number of filtering coefficients, which determine the degree of filtering, may be fixed, or may be set adaptively depending on the characteristics of the image. For example, in a block corresponding to a highly complex part of the image, the number of filtering coefficients may be reduced and the ratio of the sizes of the filtering coefficients may be adjusted so that the degree of filtering is weak. Further, the neighboring samples used for filtering may be samples adjacent to the sample to be filtered, but other pixels may also be used, since the filtering is not limited thereto. In addition, the filtering coefficients may be set adaptively according to the size of the block to be filtered.
- the offset value may be a value added to prevent a sudden change in the value of the sample before and after filtering.
- d is a value capable of adjusting the degree of filtering according to an embodiment, and may be determined based on the magnitudes of the filtering coefficients a, b, and c; the operator related to d may include various operators, including a shift operator.
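- since Equation 1 itself is not reproduced above, the following sketch only illustrates one possible way of combining coefficients a, b, c, an offset, and a shift amount d into a boundary filter; the concrete values a=1, b=2, c=1, offset=2, d=2 are assumptions for the example and are not taken from the specification.

```python
def filter_sample(p_prev: int, p: int, q: int,
                  a: int = 1, b: int = 2, c: int = 1,
                  offset: int = 2, d: int = 2) -> int:
    """Hedged sketch of a 3-tap boundary filter built from coefficients a, b, c,
    an offset, and a shift amount d (one possible operator related to d)."""
    return (a * p_prev + b * p + c * q + offset) >> d

# With the assumed values, the filtered sample is the rounded weighted average
# (p_prev + 2*p + q) / 4, which pulls p toward its neighbors across the boundary.
```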
- the filtering unit 36 may perform horizontal filtering because the direction of the boundary 55 dividing the samples 54f and 54g is vertical.
- a detailed description of the horizontal filtering process is omitted, since it may correspond to the vertical filtering process.
- the filtering unit 36 may perform both vertical filtering and horizontal filtering on the sample 60b of the block 60 of the color image when the direction of the boundary 60h dividing the block 60 has both horizontal and vertical components.
- in the following, it is assumed for convenience of description that the horizontal filtering is performed after the vertical filtering. If the index of the sample 60a is p-1, the index of the sample 60b is p, and the index of the sample 60c is q, then p', which is the result of vertically filtering the sample 60b, may be determined as in Equation 1 above.
- for the horizontal filtering, the index of the sample 60e may be regarded as r-1, the index of the sample 60b as p', and the index of the sample 60f as s. That is, since the sample 60b has already been subjected to the vertical filtering, p' may be used in performing the horizontal filtering. Therefore, p'', which is the result of performing the horizontal filtering after the vertical filtering of the sample 60b, may be determined as in Equation 2.
- the filtering unit 36 may perform filtering on the sample by performing vertical filtering after horizontal filtering, but the order of the filtering is not limited thereto; that is, the horizontal filtering may be performed after the vertical filtering, or the two may be performed simultaneously. As another example, only one of the vertical filtering and the horizontal filtering may be performed, based on the mode, the partition type, or the block size.
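- the following sketch combines the detection step (comparing the mask-map samples around the sample to be filtered) with the two filtering directions; vertical filtering is applied first and its result is reused by the horizontal filtering, matching the p' to p'' ordering described above. The 3-tap weights are the same assumed values as in the previous sketch, and border samples are simply left unfiltered in this example.

```python
import numpy as np

def filter_boundary_samples(block: np.ndarray, mask_map: np.ndarray) -> np.ndarray:
    """Hedged sketch: filter samples whose mask-map neighbors fall into
    different partitions, vertically first and then horizontally."""
    out = block.astype(np.int32)
    h, w = block.shape

    def tap(prev, cur, nxt):  # assumed (1, 2, 1) weights with rounding
        return (prev + 2 * cur + nxt + 2) >> 2

    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Vertical filtering: the samples above and below lie in different partitions.
            if mask_map[y - 1, x] != mask_map[y + 1, x]:
                out[y, x] = tap(out[y - 1, x], out[y, x], out[y + 1, x])
            # Horizontal filtering: the samples left and right lie in different partitions.
            # If vertical filtering was applied, its result p' is reused here.
            if mask_map[y, x - 1] != mask_map[y, x + 1]:
                out[y, x] = tap(out[y, x - 1], out[y, x], out[y, x + 1])
    return out.astype(block.dtype)
```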
- in operation 303, the decoder 38 may generate a reconstruction signal using the residual signal obtained from the bitstream and the prediction signal corresponding to the residual signal, at least one of which has been filtered by the filtering unit 36 in operation 302 at the boundary of the at least one partition determined by the partition determiner 34 in operation 301.
- the residual signal obtained from the bitstream may be a filtering residual signal, that is, the result of the filtering performed by the image encoding apparatus described below; alternatively, it may be a residual signal on which no filtering has been performed.
- the decoding unit 38 may generate a reconstruction signal using the filtered prediction signal and the residual signal obtained from the bitstream.
- the decoder 38 may generate a reconstruction signal by using the filtered residual signal and the prediction signal.
- the decoder 38 may generate a reconstruction signal using the filtered prediction signal and the filtered residual signal.
- a detailed description of the filtering process for each signal is omitted, since it may correspond to the description of the filtering process of the filtering unit 36 related to FIG. 5A.
- the reconstruction signal generation process of the decoder 38 may correspond to a 2D video decoding process, and thus a detailed description thereof is also omitted.
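- a minimal sketch of the three cases above (filtered prediction, filtered residual, or both) is given below; the two boolean flags are an illustrative device and not bitstream syntax, the 8-bit sample range is an assumption, and filter_boundary_samples refers to the earlier sketch.

```python
import numpy as np

def reconstruct(prediction: np.ndarray, residual: np.ndarray, mask_map: np.ndarray,
                filter_prediction: bool = True, filter_residual: bool = False) -> np.ndarray:
    """Hedged sketch: optionally filter the prediction and/or the residual at the
    partition boundary, then add them to form the reconstruction signal."""
    if filter_prediction:
        prediction = filter_boundary_samples(prediction, mask_map)
    if filter_residual:
        residual = filter_boundary_samples(residual, mask_map)
    recon = prediction.astype(np.int32) + residual.astype(np.int32)
    return np.clip(recon, 0, 255).astype(np.uint8)  # assumed 8-bit samples
```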
- the filtering unit 36, which performs filtering on a block of the color image, uses a mask map for the depth image at a position corresponding to the position of the block of the color image on which the filtering is performed. If the sizes of the mask map and the block of the color image are different from each other, it is difficult to compare the positions on the mask map and in the block of the color image exactly. Therefore, to compensate for this, the mask map associated with the depth image may be scaled to match the size of the block of the color image.
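- the specification only states that scaling is performed; as one possible realization, the following sketch uses nearest-neighbour scaling to bring a mask map to the size of the color-image block.

```python
import numpy as np

def scale_mask_map(mask_map: np.ndarray, block_h: int, block_w: int) -> np.ndarray:
    """Hedged sketch: nearest-neighbour scaling of the mask map to the block size."""
    src_h, src_w = mask_map.shape
    ys = (np.arange(block_h) * src_h) // block_h   # source row for each target row
    xs = (np.arange(block_w) * src_w) // block_w   # source column for each target column
    return mask_map[np.ix_(ys, xs)]
```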
- the image decoding apparatus 30 may determine whether to perform filtering based on rate-distortion optimization for efficient filtering. According to another embodiment, the image decoding apparatus 30 may determine whether to perform filtering based on obtaining a flag for the presence or absence of filtering from the bitstream and based on the value of the acquired flag.
- the image encoding apparatus 40 may include an original signal acquirer 42, a partition determiner 44, a filter 46, and an encoder 48.
- FIG. 4B is a flowchart illustrating a process of performing an image encoding method, according to an embodiment.
- the image encoding method may be performed by the image encoding apparatus 40 of FIG. 4A.
- an image encoding method performed by the image encoding apparatus 40 will be described.
- the image encoding apparatus 40 may determine at least one partition of a block of an image based on a mask map that is information about partitioning of the block.
- the image encoding apparatus 40 may determine at least one partition of the block associated with the boundary to be filtered based on the boundary of at least one partition of the mask map.
- the original signal acquisition unit 42 may acquire an original signal related to the image to be encoded.
- the partition determiner 44 may determine at least one partition for dividing a block of an image to be encoded.
- a block of an image may be a coding unit. In the following description, it is assumed that a block is a coding unit for convenience of description.
- the partition determiner 44 may determine at least one partition for dividing a block of an image based on a mask map.
- the mask map may be information about a layer different from the image to be encoded.
- when encoding a color image, the mask map may be information about a depth image associated with the corresponding color image.
- when encoding a depth image, the mask map may be information about a color image related to the depth image.
- the mask map necessary for encoding the color image may relate to the partition type of the block of the depth image.
- the depth image referred to by the partition determiner 44 may correspond to the same access unit as the color image to be encoded and have the same POC.
- the partition determiner 44 may determine a form of dividing the block of the color image by using mask map information for dividing the block of the depth image.
- 5A illustrates a process of using mask map information in dividing a block of a color image into at least one partition.
- the mask map 52 includes information about a boundary 53 that divides a block of the depth image into two distinct partitions, and the block 51 of the color image is divided with reference to the mask map 52.
- the partition determiner 44 may divide the block 51 of the color image into two partitions by referring to the mask map 52, which includes the information about the boundary 53.
- FIG. 5B illustrates a process of determining a mask map for a divided form of a block of a depth image, according to an exemplary embodiment.
- the block 56 of the depth image at a position corresponding to the position of the block of the color image may be used.
- the block 56 of the depth image may be divided into samples R1, whose values are larger than the mean m of the values of the four samples 55a, 55b, 55c, and 55d at the vertices of the block 56, and samples R2, whose values are not larger than m.
- the boundary between the two samples 56a and 56b may be determined as a boundary for dividing the mask map 57.
- the mask map 57 may be divided into partitions separated by a boundary 58 of samples having a value of R1 or R2.
- the description is not limited to the use of a mask map including two partitions divided based on one average value m, and the number of partitions of the mask map may be three or more.
- that is, the number of reference values for dividing the mask map may be two or more, and accordingly, the mask map may be divided into three or more partitions.
- the partition determiner 44 may determine the partition boundaries of the block of the color image to be encoded by using the mask map 57, which is associated with the partition information of the block 56 of the depth image determined through the above process. Referring to FIG. 5A again, at least one partition of the block 54 of the color image may be determined using the mask map determined through the process of FIG. 5B, and filtering is performed according to the boundary 55 that divides the at least one partition.
- the filtering unit 46 may perform filtering on samples of the boundary 55 of at least one partition that divides the block 54 of the color image based on the mask map.
- the block to be filtered by the filtering unit 46 may include the original signal obtained by the original signal acquisition unit 42, a prediction signal corresponding to the original signal, or a residual signal corresponding to the original signal. That is, the filtering unit 46 may perform filtering on at least one of the original signal, the prediction signal, and the residual signal.
- for example, filtering is performed on the sample 54b, as in the decoding process described above.
- the filtering unit 46 may determine the filtering direction according to the shape of the boundary 55. For example, the filtering unit 46 performs vertical filtering on the samples 54b and 54c when the boundary 55 dividing the block 54 of the color image is horizontal. As another example, the filtering unit 46 performs horizontal filtering on the samples 54f and 54g when the boundary 55 dividing the block 54 of the color image is vertical.
- the filtering unit 46 may apply filtering to the samples of the boundary 55 of the at least one partition by using the filtering coefficients. Since the operation of the filtering unit 46 of the image encoding apparatus 40 in this regard may correspond to the operation of the filtering unit 36 of the image decoding apparatus 30 described above, a detailed description thereof will be omitted.
- in operation 403, the encoder 48 may generate a bitstream including a filtering residual signal, that is, a filtered residual signal, by using the result of the filtering performed by the filtering unit 46 in operation 402 on at least one of the original signal, the prediction signal, and the residual signal at the boundary of the at least one partition determined by the partition determiner 44 in operation 401.
- the image encoding apparatus 40 may generate a residual signal by using an original signal and a prediction signal corresponding to the original signal.
- when the filtering unit 46 performs filtering on the original signal, the encoder 48 may generate a filtering residual signal based on the filtered original signal and the prediction signal corresponding to the original signal, and may generate a bitstream including the generated filtering residual signal.
- when the filtering unit 46 performs filtering on the prediction signal corresponding to the original signal, the encoder 48 may generate a filtering residual signal based on the filtered prediction signal and the original signal, and may generate a bitstream including the generated filtering residual signal.
- when the filtering unit 46 performs filtering on the residual signal, the encoder 48 may generate a bitstream including the filtering residual signal, which is the result of filtering the residual signal.
- as another example, the encoder 48 may generate a filtering residual signal based on the filtered original signal and the filtered prediction signal, and may generate a bitstream including the generated filtering residual signal.
- a detailed description of the filtering process for each signal is omitted, since it may correspond to the description of the filtering process of the filtering unit 46 related to FIG. 5A. Likewise, the process in which the encoder 48 encodes the filtering residual signal to generate a bitstream may correspond to the bitstream generation process of a general video encoding process, and thus a detailed description thereof is also omitted.
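- the encoder-side cases above can be summarized in a short sketch: the filtering residual signal is the difference between the (possibly filtered) original signal and the (possibly filtered) prediction signal. The flags are again an illustrative device, filter_boundary_samples refers to the decoder-side sketch, and filtering the residual signal itself, as also described above, would simply apply the same function to the computed difference.

```python
import numpy as np

def filtering_residual(original: np.ndarray, prediction: np.ndarray, mask_map: np.ndarray,
                       filter_original: bool = False, filter_prediction: bool = True) -> np.ndarray:
    """Hedged sketch: build the filtering residual signal from optionally
    filtered original and prediction signals."""
    if filter_original:
        original = filter_boundary_samples(original, mask_map)
    if filter_prediction:
        prediction = filter_boundary_samples(prediction, mask_map)
    return original.astype(np.int32) - prediction.astype(np.int32)
```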
- the filtering unit 46, which performs filtering on a block of the color image, uses a mask map for the depth image at a position corresponding to the position of the block of the color image on which the filtering is performed. If the sizes of the mask map and the block of the color image are different from each other, it is difficult to compare the positions on the mask map and in the block of the color image exactly. Therefore, to compensate for this, the mask map associated with the depth image may be scaled to match the size of the block of the color image.
- the image encoding apparatus 40 may determine whether to perform filtering based on the rate-distortion optimization. In the encoding step, whether the maximum coding unit is divided into a plurality of coding units for an optimal coding effect may be determined based on the rate-distortion optimization. The image encoding apparatus 40 according to an embodiment may determine whether to perform filtering based on rate-distortion optimization for efficient filtering. According to another exemplary embodiment, the image encoding apparatus 40 may generate a bitstream including a flag for filtering.
- FIG. 7 is a block diagram of a video encoding apparatus 100 based on coding units having a tree structure, according to an embodiment.
- the video encoding apparatus 100 including video prediction based on coding units having a tree structure includes a coding unit determiner 120 and an output unit 130.
- the video encoding apparatus 100 that includes video prediction based on coding units having a tree structure is abbreviated as “video encoding apparatus 100”.
- the coding unit determiner 120 may partition the current picture based on a maximum coding unit that is a coding unit having a maximum size for the current picture of the image. If the current picture is larger than the maximum coding unit, image data of the current picture may be split into at least one maximum coding unit.
- the maximum coding unit may be a data unit having a size of 32x32, 64x64, 128x128, 256x256, or the like, and may be a square data unit whose horizontal and vertical sizes are powers of two.
- the coding unit according to an embodiment may be characterized by a maximum size and depth.
- the depth indicates the number of times the coding unit is spatially divided from the maximum coding unit, and as the depth increases, the coding unit for each depth may be split from the maximum coding unit to the minimum coding unit.
- the depth of the largest coding unit is the highest depth and the minimum coding unit may be defined as the lowest coding unit.
- since the size of the coding unit for each depth decreases as the depth from the maximum coding unit increases, a coding unit of a higher depth may include coding units of a plurality of lower depths.
- the image data of the current picture may be divided into maximum coding units according to the maximum size of the coding unit, and each maximum coding unit may include coding units divided by depths. Since the maximum coding unit is divided according to depths, image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.
- the maximum depth and the maximum size of the coding unit that limit the total number of times of hierarchically dividing the height and the width of the maximum coding unit may be preset.
- the coding unit determiner 120 encodes at least one divided region obtained by dividing the region of the largest coding unit for each depth, and determines a depth at which the final encoding result is output for each of the at least one divided region. That is, the coding unit determiner 120 encodes the image data in coding units according to depths for each maximum coding unit of the current picture, and selects the depth at which the smallest coding error occurs to determine the final depth. The determined final depth and the image data for each maximum coding unit are output to the outputter 130.
- Image data in the largest coding unit is encoded based on coding units according to depths according to at least one depth less than or equal to the maximum depth, and encoding results based on the coding units for each depth are compared. As a result of comparing the encoding error of the coding units according to depths, a depth having the smallest encoding error may be selected. At least one final depth may be determined for each maximum coding unit.
- as the depth increases, the coding unit is divided hierarchically and the number of coding units increases.
- a coding error of each data is measured and it is determined whether to divide into lower depths. Therefore, even in the data included in one largest coding unit, since the encoding error for each depth is different according to the position, the final depth may be differently determined according to the position. Accordingly, one or more final depths may be set for one maximum coding unit, and data of the maximum coding unit may be partitioned according to coding units of one or more final depths.
- the coding unit determiner 120 may determine coding units having a tree structure included in the current maximum coding unit.
- the coding units according to the tree structure according to an embodiment include coding units having a depth determined as a final depth among all deeper coding units included in the current maximum coding unit.
- the coding unit of the final depth may be determined hierarchically according to the depth in the same region within the maximum coding unit, and may be independently determined for the other regions.
- the final depth for the current area can be determined independently of the final depth for the other area.
- the maximum depth according to an embodiment is an index related to the number of divisions from the maximum coding unit to the minimum coding unit.
- the first maximum depth according to an embodiment may represent the total number of divisions from the maximum coding unit to the minimum coding unit.
- the second maximum depth according to an embodiment may represent the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the largest coding unit is 0, the depth of the coding unit obtained by dividing the largest coding unit once may be set to 1, and the depth of the coding unit divided twice may be set to 2. In this case, if the coding unit divided four times from the maximum coding unit is the minimum coding unit, depth levels of 0, 1, 2, 3, and 4 exist, and thus the first maximum depth may be set to 4 and the second maximum depth may be set to 5.
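- as a small worked example of the two definitions, assuming a 64x64 maximum coding unit and a 4x4 minimum coding unit (sizes chosen for illustration only):

```python
import math

def max_depths(max_cu_size: int, min_cu_size: int):
    """Hedged sketch: the first maximum depth counts the splits, the second
    maximum depth counts the depth levels from the maximum to the minimum coding unit."""
    total_splits = int(math.log2(max_cu_size // min_cu_size))
    return total_splits, total_splits + 1

print(max_depths(64, 4))   # -> (4, 5): four splits, five depth levels 0..4
```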
- Predictive encoding and transformation of the largest coding unit may be performed. Similarly, prediction encoding and transformation are performed based on depth-wise coding units for each maximum coding unit and for each depth less than or equal to the maximum depth.
- encoding including prediction encoding and transformation should be performed on all the coding units for each depth generated as the depth deepens.
- the prediction encoding and the transformation will be described based on the coding unit of the current depth among at least one maximum coding unit.
- the video encoding apparatus 100 may variously select a size or shape of a data unit for encoding image data.
- the encoding of the image data is performed through prediction encoding, transforming, entropy encoding, and the like.
- the same data unit may be used in every step, or the data unit may be changed in steps.
- the video encoding apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit in order to perform predictive encoding of the image data in the coding unit.
- prediction encoding may be performed based on coding units of a final depth, that is, coding units that are no longer split.
- a partition obtained by dividing the coding unit may include the coding unit itself and a data unit obtained by dividing at least one of the height and the width of the coding unit.
- that is, the partition may be a data unit obtained by splitting the coding unit, or a data unit having the same size as the coding unit.
- the partition on which the prediction is based may be referred to as a 'prediction unit'.
- the partition mode may selectively include not only symmetric partitions, in which the height or width of the prediction unit is divided in a symmetric ratio, but also partitions divided in an asymmetric ratio such as 1:n or n:1, partitions divided in a geometric form, partitions of arbitrary shapes, and the like.
- the prediction mode of the prediction unit may be at least one of an intra mode, an inter mode, and a skip mode.
- the intra mode and the inter mode may be performed on partitions having sizes of 2N ⁇ 2N, 2N ⁇ N, N ⁇ 2N, and N ⁇ N.
- the skip mode may be performed only for partitions having a size of 2N ⁇ 2N.
- the encoding may be performed independently for each prediction unit within the coding unit to select a prediction mode having the smallest encoding error.
- the video encoding apparatus 100 may perform conversion of image data of a coding unit based on not only a coding unit for encoding image data, but also a data unit different from the coding unit.
- the transformation may be performed based on a transformation unit having a size smaller than or equal to the coding unit.
- the transformation unit may include a data unit for intra mode and a transformation unit for inter mode.
- similarly to the coding unit, the transformation unit in the coding unit may also be recursively divided into smaller transformation units, so that the residual data of the coding unit may be partitioned according to transformation units having a tree structure according to the transformation depth.
- a transform depth indicating the number of times the height and width of the coding unit are divided to reach the transform unit may be set. For example, if the size of the transform unit of a current coding unit of size 2Nx2N is 2Nx2N, the transform depth is 0; if the size of the transform unit is NxN, the transform depth is 1; and if the size of the transform unit is N/2xN/2, the transform depth is 2. That is, transformation units having a tree structure may also be set according to the transformation depth.
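- the transform-depth example above can be expressed directly; 2N = 32 below is an assumed size used only to make the example concrete.

```python
import math

def transform_depth(coding_unit_size: int, transform_unit_size: int) -> int:
    """Hedged sketch: number of halvings of the coding unit's side length
    needed to reach the transform unit's side length."""
    return int(math.log2(coding_unit_size // transform_unit_size))

print(transform_depth(32, 32))  # 2Nx2N transform unit   -> depth 0
print(transform_depth(32, 16))  # NxN transform unit     -> depth 1
print(transform_depth(32, 8))   # N/2xN/2 transform unit -> depth 2
```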
- the split information for each depth requires not only depth but also prediction related information and transformation related information. Accordingly, the coding unit determiner 120 may determine not only the depth that generated the minimum encoding error, but also a partition mode in which the prediction unit is divided into partitions, a prediction mode for each prediction unit, and a size of a transformation unit for transformation.
- a method of determining a coding unit, a prediction unit / partition, and a transformation unit according to a tree structure of a maximum coding unit according to an embodiment will be described in detail with reference to FIGS. 9 to 19.
- the coding unit determiner 120 may measure a coding error of coding units according to depths using a Lagrangian Multiplier-based rate-distortion optimization technique.
- the output unit 130 outputs the image data and the split information according to depths of the maximum coding unit, which are encoded based on at least one depth determined by the coding unit determiner 120, in a bitstream form.
- the encoded image data may be a result of encoding residual data of the image.
- the split information for each depth may include depth information, partition mode information of a prediction unit, prediction mode information, split information of a transformation unit, and the like.
- the final depth information may be defined using split information according to depths, which indicates whether encoding is performed in a coding unit of a lower depth rather than in the coding unit of the current depth. If the current depth of the current coding unit is the final depth, since the current coding unit is encoded in the coding unit of the current depth, the split information of the current depth may be defined so that the current coding unit is no longer split into lower depths. On the contrary, if the current depth of the current coding unit is not the final depth, encoding should be attempted using a coding unit of a lower depth, and thus the split information of the current depth may be defined so that the current coding unit is split into coding units of the lower depth.
- encoding is performed on the coding unit divided into the coding units of the lower depth. Since at least one coding unit of a lower depth exists in the coding unit of the current depth, encoding may be repeatedly performed for each coding unit of each lower depth, and recursive coding may be performed for each coding unit of the same depth.
- since coding units having a tree structure are determined in one largest coding unit and at least one piece of split information should be determined for each coding unit of a depth, at least one piece of split information may be determined for one maximum coding unit.
- since the data of the largest coding unit is partitioned hierarchically according to the depth, the depth may be different for each position, and thus the depth and split information may be set for the data.
- the output unit 130 may allocate encoding information about a corresponding depth and an encoding mode to at least one of a coding unit, a prediction unit, and a minimum unit included in the maximum coding unit.
- the minimum unit according to an embodiment is a square data unit having a size obtained by dividing a minimum coding unit, which is the lowest depth, into four divisions.
- the minimum unit according to an embodiment may be a square data unit having a maximum size that may be included in all coding units, prediction units, partition units, and transformation units included in the maximum coding unit.
- the encoding information output through the output unit 130 may be classified into encoding information according to depth coding units and encoding information according to prediction units.
- the encoding information for each coding unit according to depth may include prediction mode information and partition size information.
- the encoding information transmitted for each prediction unit may include information about an estimation direction of the inter mode, information about a reference image index of the inter mode, information about a motion vector, information about a chroma component of the intra mode, information about an interpolation method of the intra mode, and the like.
- Information about the maximum size and information about the maximum depth of the coding unit defined for each picture, slice, or GOP may be inserted into a header, a sequence parameter set, or a picture parameter set of the bitstream.
- the information on the maximum size of the transform unit and the minimum size of the transform unit allowed for the current video may also be output through a header, a sequence parameter set, a picture parameter set, or the like of the bitstream.
- the output unit 130 may encode and output reference information, prediction information, slice type information, and the like related to prediction.
- a coding unit according to depths is a coding unit having a size in which a height and a width of a coding unit of one layer higher depth are divided by half. That is, if the size of the coding unit of the current depth is 2Nx2N, the size of the coding unit of the lower depth is NxN.
- the current coding unit having a size of 2N ⁇ 2N may include up to four lower depth coding units having a size of N ⁇ N.
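- As a minimal illustration (not part of the disclosure), the quad-tree relationship described above can be sketched as follows; the function name and the coordinate convention are assumptions for the example only.

```python
def split_coding_unit(x, y, size):
    """Split a 2Nx2N coding unit at (x, y) into its four NxN lower-depth
    coding units (quad-tree split), returned as (x, y, size) tuples."""
    half = size // 2
    return [(x,        y,        half),
            (x + half, y,        half),
            (x,        y + half, half),
            (x + half, y + half, half)]

# A 64x64 coding unit at the picture origin yields four 32x32 coding units.
print(split_coding_unit(0, 0, 64))
```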
- the video encoding apparatus 100 may configure coding units having an optimal shape and size for each maximum coding unit, based on the size and maximum depth of the maximum coding unit determined in consideration of the characteristics of the current picture. In addition, since each maximum coding unit may be encoded in various prediction modes and transformation methods, an optimal encoding mode may be determined in consideration of the image characteristics of coding units of various image sizes.
- the video encoding apparatus may adjust the coding unit in consideration of the image characteristics while increasing the maximum size of the coding unit in consideration of the size of the image, thereby increasing image compression efficiency.
- FIG. 8 is a block diagram of a video decoding apparatus 200 based on coding units having a tree structure, according to various embodiments.
- a video decoding apparatus 200 that performs video prediction based on coding units having a tree structure includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230.
- the video decoding apparatus 200 that includes video prediction based on coding units having a tree structure is abbreviated as “video decoding apparatus 200”.
- the receiver 210 receives and parses a bitstream of an encoded video.
- the image data and encoding information extractor 220 extracts image data encoded for each coding unit from the parsed bitstream according to coding units having a tree structure for each maximum coding unit, and outputs the encoded image data to the image data decoder 230.
- the image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of the current picture from a header, a sequence parameter set, or a picture parameter set for the current picture.
- the image data and encoding information extractor 220 extracts the final depth and the split information of the coding units having a tree structure for each maximum coding unit from the parsed bitstream.
- the extracted final depth and split information are output to the image data decoder 230. That is, the image data of the bit string may be divided into maximum coding units so that the image data decoder 230 may decode the image data for each maximum coding unit.
- the depth and split information for each largest coding unit may be set for one or more pieces of depth information, and the split information for each depth may include partition mode information, prediction mode information, split information of a transform unit, and the like, of the corresponding coding unit.
- split information according to depths may also be extracted.
- the depth and split information for each largest coding unit extracted by the image data and encoding information extractor 220 are the depth and split information determined to generate a minimum encoding error by repeatedly performing encoding for each deeper coding unit, as in the video encoding apparatus 100 according to an embodiment. Therefore, the video decoding apparatus 200 may reconstruct an image by decoding data according to the encoding method that generates the minimum encoding error.
- since the depth and split information may be allocated to predetermined data units, the image data and encoding information extractor 220 may extract the depth and split information for each predetermined data unit. If the depth and split information of a corresponding largest coding unit are recorded for each predetermined data unit, predetermined data units having the same depth and split information may be inferred to be data units included in the same largest coding unit.
- the image data decoder 230 reconstructs the current picture by decoding the image data of each largest coding unit based on the depth and split information for each largest coding unit. That is, the image data decoder 230 may decode the encoded image data based on the read partition mode, prediction mode, and transform unit for each coding unit among the coding units having a tree structure included in the largest coding unit.
- the decoding process may include a prediction process including intra prediction and motion compensation, and an inverse transform process.
- the image data decoder 230 may perform intra prediction or motion compensation according to each partition and prediction mode for each coding unit, based on the partition mode information and the prediction mode information of the prediction unit of the coding unit according to depths.
- the image data decoder 230 may read transform unit information having a tree structure for each coding unit, and perform inverse transform based on the transformation unit for each coding unit, for inverse transformation for each largest coding unit. Through inverse transformation, the pixel value of the spatial region of the coding unit may be restored.
- the image data decoder 230 may determine the depth of the current largest coding unit by using the split information for each depth. If the split information indicates that no further splitting is performed at the current depth, the current depth is the final depth. Therefore, the image data decoder 230 may decode the coding unit of the current depth for the image data of the current largest coding unit by using the partition mode, the prediction mode, and the transform unit size information of the prediction unit.
- in other words, the image data decoder 230 may collect the data units holding encoding information that includes the same split information, by observing the encoding information allocated to predetermined data units, and may regard them as one data unit to be decoded in the same encoding mode.
- the decoding of the current coding unit may be performed by obtaining information about an encoding mode for each coding unit determined in this way.
- the image decoding apparatus 30 described above with reference to FIG. 3A may decode the received first layer image stream and second layer image stream to reconstruct the first layer images and the second layer images; in this case, as many video decoding apparatuses 200 as the number of viewpoints may be included.
- the image data decoder 230 of the video decoding apparatus 200 may split the samples of the first layer images, which are extracted from the first layer image stream by the extractor 220, into coding units having a tree structure of the largest coding unit. The image data decoder 230 may reconstruct the first layer images by performing motion compensation on each coding unit of the tree structure of the samples of the first layer images, for each prediction unit for inter-image prediction.
- the image data decoder 230 of the video decoding apparatus 200 may split the samples of the second layer images, which are extracted from the second layer image stream by the extractor 220, into coding units having a tree structure of the largest coding unit. The image data decoder 230 may reconstruct the second layer images by performing motion compensation for each prediction unit for inter-image prediction on each coding unit of the samples of the second layer images.
- the extractor 220 may obtain information related to the luminance error from the bitstream in order to compensate for the luminance difference between the first layer image and the second layer image. However, whether to perform luminance compensation may be determined according to the encoding mode of the coding unit. For example, luminance compensation may be performed only for prediction units having a size of 2Nx2N.
- the video decoding apparatus 200 may obtain information about a coding unit that generates a minimum coding error by recursively encoding each maximum coding unit in the encoding process, and use the same to decode the current picture. That is, decoding of encoded image data of coding units having a tree structure determined as an optimal coding unit for each maximum coding unit can be performed.
- the image data can be efficiently decoded and restored according to the size and encoding mode of a coding unit adaptively determined according to the characteristics of the image, by using the optimal split information transmitted from the encoding end.
- FIG. 9 illustrates a concept of coding units, according to various embodiments.
- a size of a coding unit may be expressed by a width x height, and may include 32x32, 16x16, and 8x8 from a coding unit having a size of 64x64.
- Coding units of size 64x64 may be split into partitions of size 64x64, 64x32, 32x64, and 32x32; coding units of size 32x32 into partitions of size 32x32, 32x16, 16x32, and 16x16; coding units of size 16x16 into partitions of size 16x16, 16x8, 8x16, and 8x8; and coding units of size 8x8 into partitions of size 8x8, 8x4, 4x8, and 4x4.
- the resolution is set to 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is 2.
- the resolution is set to 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is 3.
- the resolution is set to 352x288, the maximum size of the coding unit is 16, and the maximum depth is 1.
- the maximum depth illustrated in FIG. 10 represents the total number of divisions from the maximum coding unit to the minimum coding unit.
- it is preferable that the maximum size of the coding unit is relatively large, not only to improve coding efficiency but also to accurately reflect the image characteristics. Accordingly, the video data 310 or 320, which has a higher resolution than the video data 330, may be selected to have a maximum coding unit size of 64.
- the coding unit 315 of the video data 310 is split twice from the largest coding unit having a long axis size of 64, and the depth is deepened by two layers, so that coding units having long axis sizes of 32 and 16 may be included.
- the coding unit 335 of the video data 330 is split once from the coding unit having a long axis size of 16, and the depth is deepened by one layer, so that coding units having a long axis size of 8 may be included.
- the coding unit 325 of the video data 320 is split three times from the largest coding unit having a long axis size of 64, and the depth is deepened by three layers, so that coding units having long axis sizes of 32, 16, and 8 may be included. As the depth deepens, the capability to express detailed information may improve.
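- A small sketch of the relationship between the maximum coding unit size, the maximum depth, and the long-axis sizes listed above; the helper name and the shift-based halving are illustrative assumptions, not the apparatus itself.

```python
def long_axis_sizes(max_cu_size, max_depth):
    """Long-axis sizes of the coding units that can appear when the largest
    coding unit is split 'max_depth' times (maximum depth = total number of splits)."""
    return [max_cu_size >> d for d in range(max_depth + 1)]

# max CU 64, max depth 2 -> [64, 32, 16]; max CU 64, max depth 3 -> [64, 32, 16, 8];
# max CU 16, max depth 1 -> [16, 8]
print(long_axis_sizes(64, 2), long_axis_sizes(64, 3), long_axis_sizes(16, 1))
```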
- FIG. 10 is a block diagram of an image encoder 400 based on coding units, according to various embodiments.
- the image encoder 400 performs the operations that the picture encoder 120 of the video encoding apparatus 100 performs to encode image data. That is, the intra prediction unit 420 performs intra prediction on the coding units of the intra mode in the current image 405, for each prediction unit, and the inter prediction unit 415 performs inter prediction on the prediction units of the coding units of the inter mode by using the current image 405 and the reference image obtained from the reconstructed picture buffer 410.
- the current image 405 may be divided into largest coding units and then sequentially encoded. In this case, encoding may be performed on the coding units into which the largest coding unit is split according to a tree structure.
- Residual data is generated by subtracting the prediction data for the coding unit of each mode output from the intra prediction unit 420 or the inter prediction unit 415 from the data for the encoding unit of the current image 405, and
- the residual data is output as transform coefficients quantized for each transform unit, through the transform unit 425 and the quantization unit 430.
- the quantized transform coefficients are reconstructed into residue data in the spatial domain through the inverse quantizer 445 and the inverse transformer 450.
- Residual data of the reconstructed spatial domain is added to the prediction data of the coding unit of each mode output from the intra predictor 420 or the inter predictor 415, so that the data of the spatial domain for the coding unit of the current image 405 is restored.
- the reconstructed spatial region data is generated as a reconstructed image through the deblocking unit 455 and the SAO performing unit 460.
- the generated reconstructed image is stored in the reconstructed picture buffer 410.
- the reconstructed images stored in the reconstructed picture buffer 410 may be used as reference images for inter prediction of another image.
- the transform coefficients quantized by the transformer 425 and the quantizer 430 may be output as the bitstream 440 through the entropy encoder 435.
- the inter predictor 415, the intra predictor 420, and the transformer 425 may perform operations based on each coding unit among the coding units having a tree structure for each largest coding unit.
- the intra prediction unit 420 and the inter prediction unit 415 determine the partition mode and the prediction mode of each coding unit among the coding units having a tree structure in consideration of the maximum size and the maximum depth of the current maximum coding unit.
- the transform unit 425 may determine whether to split the transform unit according to the quad tree in each coding unit among the coding units having the tree structure.
- FIG. 11 is a block diagram of an image decoder 500 based on coding units, according to various embodiments.
- the entropy decoding unit 515 parses the encoded image data to be decoded from the bitstream 505 and encoding information necessary for decoding.
- the encoded image data consists of quantized transform coefficients, and the inverse quantizer 520 and the inverse transform unit 525 reconstruct residue data from the quantized transform coefficients.
- the intra prediction unit 540 performs intra prediction for each prediction unit with respect to the coding unit of the intra mode.
- the inter prediction unit 535 performs inter prediction on the coding units of the inter mode in the current picture, for each prediction unit, using the reference image obtained from the reconstructed picture buffer 530.
- by adding the residue data to the prediction data of the coding unit of each mode output from the intra predictor 540 or the inter predictor 535, the data of the spatial domain for the coding unit of the current image 405 is reconstructed.
- the data of the space area may be output as a reconstructed image 560 via the deblocking unit 545 and the SAO performing unit 550.
- the reconstructed images stored in the reconstructed picture buffer 530 may be output as reference images.
- in order for the image data to be decoded by the image data decoder 230 of the video decoding apparatus 200, step-by-step operations following the entropy decoder 515 of the image decoder 500 may be performed.
- the entropy decoder 515, the inverse quantizer 520, the inverse transformer 525, the intra prediction unit 540, the inter prediction unit 535, the deblocking unit 545, and the SAO performer 550 may perform operations based on each coding unit among the coding units having a tree structure for each largest coding unit.
- the intra predictor 540 and the inter predictor 535 determine a partition mode and a prediction mode for each coding unit among the coding units having a tree structure, and the inverse transformer 525 may determine whether to split the transform unit according to a quad tree structure for each coding unit.
- the encoding operation of FIG. 10 and the decoding operation of FIG. 11 describe the video stream encoding operation and the decoding operation in a single layer, respectively. Therefore, if the image encoding apparatus 40 of FIG. 4A encodes a video stream of two or more layers, the image encoding unit 400 may be included for each layer. Similarly, if the decoding apparatus 30 of FIG. 3A decodes video streams of two or more layers, the image decoding unit 500 may be included for each layer.
- FIG. 12 is a diagram illustrating deeper coding units according to depths, and partitions, according to various embodiments.
- the video encoding apparatus 100 according to an embodiment and the video decoding apparatus 200 according to an embodiment use hierarchical coding units to consider image characteristics.
- the maximum height, width, and maximum depth of the coding unit may be adaptively determined according to the characteristics of the image, and may be variously set according to a user's request. According to the maximum size of the preset coding unit, the size of the coding unit for each depth may be determined.
- the hierarchical structure 600 of a coding unit illustrates a case in which a maximum height and a width of a coding unit are 64 and a maximum depth is three.
- the maximum depth indicates the total number of divisions from the maximum coding unit to the minimum coding unit. Since the depth deepens along the vertical axis of the hierarchical structure 600 of the coding unit according to an embodiment, the height and the width of the coding unit for each depth are divided.
- prediction units and partitions, which are the basis of prediction encoding of each deeper coding unit, are shown along the horizontal axis of the hierarchical structure 600 of the coding unit.
- the coding unit 610 has a depth of 0 as the largest coding unit of the hierarchical structure 600 of the coding unit, and the size, ie, the height and width, of the coding unit is 64x64.
- a depth deeper along the vertical axis includes a coding unit 620 of depth 1 having a size of 32x32, a coding unit 630 of depth 2 having a size of 16x16, and a coding unit 640 of depth 3 having a size of 8x8.
- a coding unit 640 of depth 3 having a size of 8 ⁇ 8 is a minimum coding unit.
- Prediction units and partitions of the coding units are arranged along the horizontal axis for each depth. That is, if the coding unit 610 of size 64x64 having a depth of 0 is a prediction unit, the prediction unit may be split into a partition 610 of size 64x64, partitions 612 of size 64x32, partitions 614 of size 32x64, and partitions 616 of size 32x32, which are included in the coding unit 610 of size 64x64.
- the prediction unit of the coding unit 620 of size 32x32 having a depth of 1 may be split into a partition 620 of size 32x32, partitions 622 of size 32x16, partitions 624 of size 16x32, and partitions 626 of size 16x16, which are included in the coding unit 620 of size 32x32.
- the prediction unit of the coding unit 630 of size 16x16 having a depth of 2 may be split into a partition 630 of size 16x16, partitions 632 of size 16x8, partitions 634 of size 8x16, and partitions 636 of size 8x8, which are included in the coding unit 630 of size 16x16.
- the prediction unit of the coding unit 640 of size 8x8 having a depth of 3 may be split into a partition 640 of size 8x8, partitions 642 of size 8x4, partitions 644 of size 4x8, and partitions 646 of size 4x4, which are included in the coding unit 640 of size 8x8.
- in order to determine the depth of the maximum coding unit 610, the coding unit determiner 120 of the video encoding apparatus 100 must perform encoding for each coding unit of each depth included in the maximum coding unit 610.
- the number of deeper coding units according to depths for including data having the same range and size increases as the depth increases. For example, four coding units of depth 2 are required for data included in one coding unit of depth 1. Therefore, in order to compare the encoding results of the same data for each depth, each of the coding units having one depth 1 and four coding units having four depths 2 should be encoded.
- encoding may be performed for each prediction unit of the coding units according to depths along the horizontal axis of the hierarchical structure 600 of the coding unit, and a representative coding error, which is the smallest coding error at the corresponding depth, may be selected.
- as the depth deepens along the vertical axis of the hierarchical structure 600 of the coding unit, encoding may be performed for each depth, and the minimum coding error may be searched for by comparing the representative coding errors of the respective depths.
- the depth and partition in which the minimum coding error occurs in the maximum coding unit 610 may be selected as the depth and partition mode of the maximum coding unit 610.
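- The depth selection described above (encode at each depth, keep the smallest representative error, compare against the cost of splitting further) can be sketched roughly as below. The callback 'cost_at_depth' is a hypothetical stand-in for the actual encoding-error measurement, which the text does not specify.

```python
def decide_split(cu, depth, max_depth, cost_at_depth):
    """Recursively decide whether a coding unit stays at the current depth or
    is split into four lower-depth coding units, by comparing coding errors.

    'cu' is an (x, y, size) tuple; 'cost_at_depth(cu, depth)' is assumed to
    return the representative coding error of encoding 'cu' without further
    splitting. Returns (total_error, split-information tree).
    """
    own_cost = cost_at_depth(cu, depth)
    if depth == max_depth:
        return own_cost, {"split": 0}
    x, y, size = cu
    h = size // 2
    children = [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
    results = [decide_split(c, depth + 1, max_depth, cost_at_depth) for c in children]
    split_cost = sum(r[0] for r in results)
    if split_cost < own_cost:
        return split_cost, {"split": 1, "children": [r[1] for r in results]}
    return own_cost, {"split": 0}

# Toy usage: the per-depth costs are hypothetical numbers that favour one split.
cost = lambda cu, depth: {0: 400.0, 1: 90.0, 2: 30.0}[depth]
print(decide_split((0, 0, 64), 0, 2, cost))
```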
- FIG. 13 illustrates a relationship between a coding unit and transformation units, according to various embodiments.
- the video encoding apparatus 100 encodes or decodes an image in coding units having a size smaller than or equal to the maximum coding unit for each maximum coding unit.
- the size of a transformation unit for transformation in the encoding process may be selected based on a data unit that is not larger than each coding unit.
- for example, when the size of the current coding unit 710 is 64x64, the transformation may be performed using a transform unit 720 of size 32x32.
- the data of the 64x64 coding unit 710 may be transformed with each of the 32x32, 16x16, 8x8, and 4x4 transform units of size 64x64 or less and then encoded, and the transform unit having the least error with respect to the original may then be selected.
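- A rough sketch of "transform with each candidate size and keep the one with the least error". The per-block mean approximation only stands in for the real transform and quantization, which are not specified here; the function name, the candidate list, and the squared-error measure are assumptions.

```python
import numpy as np

def pick_transform_size(block, candidate_sizes=(32, 16, 8, 4)):
    """Among candidate transform-unit sizes, pick the one whose approximate
    reconstruction error against the original block is smallest."""
    best_size, best_err = None, float("inf")
    for size in candidate_sizes:
        err = 0.0
        for ty in range(0, block.shape[0], size):
            for tx in range(0, block.shape[1], size):
                tu = block[ty:ty + size, tx:tx + size]
                approx = np.full_like(tu, tu.mean())  # crude stand-in for transform + quantization
                err += float(((tu - approx) ** 2).sum())
        if err < best_err:
            best_size, best_err = size, err
    return best_size, best_err

block = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
print(pick_transform_size(block))
```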
- FIG. 14 is a diagram of deeper encoding information according to depths, according to various embodiments.
- the output unit 130 of the video encoding apparatus 100 may encode and transmit, as split information, the information 800 about a partition mode, the information 810 about a prediction mode, and the information 820 about a transform unit size for each coding unit of each depth.
- the information 800 about the partition mode indicates the shape of the partitions into which the prediction unit of the current coding unit is split, as a data unit for prediction encoding of the current coding unit.
- for example, the current coding unit CU_0 of size 2Nx2N may be split into and used as any one of a partition 802 of size 2Nx2N, a partition 804 of size 2NxN, a partition 806 of size Nx2N, and a partition 808 of size NxN.
- in this case, the information 800 about the partition mode of the current coding unit is set to indicate one of the partition 802 of size 2Nx2N, the partition 804 of size 2NxN, the partition 806 of size Nx2N, and the partition 808 of size NxN.
- Information 810 about the prediction mode indicates the prediction mode of each partition. For example, through the information 810 about the prediction mode, it may be set whether the partition indicated by the information 800 about the partition mode is prediction-encoded in one of the intra mode 812, the inter mode 814, and the skip mode 816.
- the information about the transform unit size 820 indicates whether to transform the current coding unit based on the transform unit.
- the transform unit may be one of a first intra transform unit size 822, a second intra transform unit size 824, a first inter transform unit size 826, and a second inter transform unit size 828.
- the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract the information 800 about the partition mode, the information 810 about the prediction mode, and the information 820 about the transform unit size for each deeper coding unit, and use them for decoding.
- FIG. 15 is a diagram of deeper coding units according to depths, according to various embodiments.
- Segmentation information may be used to indicate a change in depth.
- the split information indicates whether a coding unit of a current depth is split into coding units of a lower depth.
- the prediction unit 910 for prediction encoding of the coding unit 900 having a depth of 0 and a size of 2N_0x2N_0 may include a partition mode 912 of size 2N_0x2N_0, a partition mode 914 of size 2N_0xN_0, a partition mode 916 of size N_0x2N_0, and a partition mode 918 of size N_0xN_0. Although only the partitions 912, 914, 916, and 918 obtained by splitting the prediction unit in symmetrical ratios are illustrated, as described above, the partition mode is not limited thereto and may include asymmetric partitions, partitions of an arbitrary shape, partitions of a geometric shape, and the like.
- For each partition mode, prediction encoding must be performed repeatedly on one partition of size 2N_0x2N_0, two partitions of size 2N_0xN_0, two partitions of size N_0x2N_0, and four partitions of size N_0xN_0.
- For partitions of size 2N_0x2N_0, size N_0x2N_0, size 2N_0xN_0, and size N_0xN_0, prediction encoding may be performed in the intra mode and the inter mode.
- the skip mode may be performed only for prediction encoding on partitions having a size of 2N_0x2N_0.
- the depth 0 is changed to 1 and the coding unit is split (operation 920), and encoding is repeatedly performed on the coding units 930 of depth 2 and partition mode of size N_0xN_0 to search for a minimum encoding error.
- the depth 1 is changed to depth 2 and the coding unit is split (operation 950), and encoding is repeatedly performed on the coding units 960 of depth 2 and size N_2xN_2 to search for a minimum encoding error.
- deeper coding units according to depths may be set until the depth d-1, and split information may be set up to the depth d-2. That is, when encoding is performed from the depth d-2 up to the depth d-1, the prediction unit 990 for prediction encoding of the coding unit 980 having the depth d-1 and the size 2N_(d-1)x2N_(d-1) may include a partition mode 992 of size 2N_(d-1)x2N_(d-1), a partition mode 994 of size 2N_(d-1)xN_(d-1), a partition mode 996 of size N_(d-1)x2N_(d-1), and a partition mode 998 of size N_(d-1)xN_(d-1).
- among the partition modes, prediction encoding must be repeatedly performed on one partition of size 2N_(d-1)x2N_(d-1), two partitions of size 2N_(d-1)xN_(d-1), two partitions of size N_(d-1)x2N_(d-1), and four partitions of size N_(d-1)xN_(d-1), so that a partition mode in which a minimum encoding error occurs may be searched for.
- the coding unit CU_(d-1) of the depth d-1 is no longer split into lower depths; the depth of the current largest coding unit 900 may be determined as d-1, and the partition mode may be determined as N_(d-1)xN_(d-1).
- split information is not set for the coding unit 952 having the depth d-1.
- the data unit 999 may be referred to as a 'minimum unit' for the current maximum coding unit.
- the minimum unit may be a square data unit having a size obtained by dividing the minimum coding unit, which is the lowest depth, into four segments.
- the video encoding apparatus 100 compares the encoding errors according to depths of the coding unit 900, selects the depth at which the smallest encoding error occurs, and determines it as the final depth.
- the partition mode and the prediction mode may be set to the encoding mode of the depth.
- depths with the smallest error can be determined by comparing the minimum coding errors for all depths of depths 0, 1, ..., d-1, and d.
- the depth, the partition mode of the prediction unit, and the prediction mode may be encoded and transmitted as split information.
- since the coding unit must be split from depth 0 down to the final depth, only the split information of the final depth is set to '0', and the split information of every depth other than the final depth should be set to '1'.
- the image data and encoding information extractor 220 of the video decoding apparatus 200 may extract the information about the depth and the prediction unit of the coding unit 900, and use it to decode the coding unit 912.
- the video decoding apparatus 200 may grasp a depth having split information of '0' as a depth using split information for each depth, and may use the split information for the corresponding depth for decoding.
- FIGS. 16, 17, and 18 illustrate a relationship between coding units, prediction units, and transformation units, according to various embodiments.
- the coding units 1010 are deeper coding units determined by the video encoding apparatus 100 according to an embodiment with respect to the largest coding unit.
- the prediction unit 1060 is partitions of prediction units of each deeper coding unit among the coding units 1010, and the transform unit 1070 is transform units of each deeper coding unit.
- when the depth of the largest coding unit is 0 among the depth-based coding units 1010,
- the coding units 1012 and 1054 have a depth of 1
- the coding units 1014, 1016, 1018, 1028, 1050, and 1052 have a depth of 2,
- coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 have a depth of three
- coding units 1040, 1042, 1044, and 1046 have a depth of four.
- partitions 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 of the prediction units 1060 are obtained by splitting coding units. That is, partitions 1014, 1022, 1050, and 1054 are 2NxN partition modes, partitions 1016, 1048, and 1052 are Nx2N partition modes, and partitions 1032 are NxN partition modes. Prediction units and partitions of the coding units 1010 according to depths are smaller than or equal to each coding unit.
- the image data of the part 1052 of the transformation units 1070 is transformed or inversely transformed into a data unit having a smaller size than the coding unit.
- the transformation units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are data units having different sizes or shapes when compared to the corresponding prediction units and partitions among the prediction units 1060. That is, the video encoding apparatus 100 according to an embodiment and the video decoding apparatus 200 according to an embodiment may perform intra prediction / motion estimation / motion compensation operations and transform / inverse transform operations for the same coding unit, each on a separate data unit.
- coding is performed recursively for each coding unit having a hierarchical structure for each largest coding unit to determine an optimal coding unit.
- coding units having a recursive tree structure may be configured.
- the encoding information may include split information about the coding unit, partition mode information, prediction mode information, and transformation unit size information. Table 1 below shows an example that can be set in the video encoding apparatus 100 and the video decoding apparatus 200 according to an embodiment.
- the output unit 130 of the video encoding apparatus 100 outputs encoding information about coding units having a tree structure
- the encoding information extractor 220 of the video decoding apparatus 200 according to an embodiment may extract encoding information about coding units having a tree structure from the received bitstream.
- the split information indicates whether the current coding unit is split into coding units of a lower depth. If the split information of the current depth d is 0, the current coding unit is no longer split into lower coding units, and so the current depth is a depth for which partition mode information, prediction mode information, and transform unit size information may be defined. If the coding unit is to be further split according to the split information, encoding should be performed independently on each of the four split coding units of the lower depth.
- the prediction mode may be represented by one of an intra mode, an inter mode, and a skip mode.
- Intra mode and inter mode can be defined in all partition modes, and skip mode can only be defined in partition mode 2Nx2N.
- the partition mode information indicates symmetric partition modes 2Nx2N, 2NxN, Nx2N, and NxN, in which the height or width of the prediction unit is divided by symmetrical ratios, and asymmetric partition modes 2NxnU, 2NxnD, nLx2N, nRx2N, divided by asymmetrical ratios.
- the asymmetric partition modes 2NxnU and 2NxnD have heights divided in ratios of 1:3 and 3:1, respectively, and the asymmetric partition modes nLx2N and nRx2N have widths divided in ratios of 1:3 and 3:1, respectively.
- the transform unit size may be set to two kinds of sizes in the intra mode and two kinds of sizes in the inter mode. That is, if the transform unit split information is 0, the size of the transform unit is set to the size 2Nx2N of the current coding unit. If the transform unit split information is 1, a transform unit having a size obtained by dividing the current coding unit may be set. In addition, if the partition mode of the current coding unit of size 2Nx2N is a symmetric partition mode, the size of the transform unit may be set to NxN, and to N/2xN/2 if it is an asymmetric partition mode.
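- The partition geometries named above (the symmetric modes and the 1:3 / 3:1 asymmetric modes) can be tabulated as in the following sketch; the function and mode-string names are illustrative only.

```python
def partition_dims(mode, two_n):
    """Width x height of each partition of a 2Nx2N prediction unit for the
    symmetric and asymmetric partition modes described above. The 1:3 / 3:1
    ratios follow the text; e.g. 2NxnU splits the height in a 1:3 ratio."""
    n = two_n // 2
    q = two_n // 4   # the short side of the 1:3 split
    modes = {
        "2Nx2N": [(two_n, two_n)],
        "2NxN":  [(two_n, n)] * 2,
        "Nx2N":  [(n, two_n)] * 2,
        "NxN":   [(n, n)] * 4,
        "2NxnU": [(two_n, q), (two_n, two_n - q)],
        "2NxnD": [(two_n, two_n - q), (two_n, q)],
        "nLx2N": [(q, two_n), (two_n - q, two_n)],
        "nRx2N": [(two_n - q, two_n), (q, two_n)],
    }
    return modes[mode]

# For a 32x32 prediction unit, 2NxnU gives a 32x8 partition above a 32x24 one.
print(partition_dims("2NxnU", 32))
```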
- Encoding information of coding units having a tree structure may be allocated to at least one of a coding unit, a prediction unit, and a minimum unit of the corresponding depth.
- the coding unit of the depth may include at least one prediction unit and at least one minimum unit having the same encoding information.
- if the encoding information held by each adjacent data unit is checked, it may be determined whether the adjacent data units are included in a coding unit having the same depth.
- since the coding unit of the corresponding depth may be identified by using the encoding information held by the data unit, the distribution of depths within the largest coding unit may be inferred.
- the encoding information of the data unit in the depth-specific coding unit adjacent to the current coding unit may be directly referred to and used.
- when prediction encoding is performed by referring to a neighboring coding unit, data adjacent to the current coding unit within the deeper coding units may be searched for by using the encoding information of the adjacent deeper coding units, and the searched neighboring coding units may be referred to.
- FIG. 19 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to encoding mode information of Table 1.
- the maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of depths. Since the coding unit 1318 is a coding unit of a final depth, its split information may be set to 0.
- the partition mode information of the coding unit 1318 of size 2Nx2N may be set to one of the partition modes 2Nx2N 1322, 2NxN 1324, Nx2N 1326, NxN 1328, 2NxnU 1332, 2NxnD 1334, nLx2N 1336, and nRx2N 1338.
- the transform unit split information (TU size flag) is a type of transform index, and a size of a transform unit corresponding to the transform index may be changed according to a prediction unit type or a partition mode of the coding unit.
- when the partition mode information is set to one of the symmetric partition modes 2Nx2N 1322, 2NxN 1324, Nx2N 1326, and NxN 1328, if the transform unit split information is 0, a transform unit 1342 of size 2Nx2N is set, and if the transform unit split information is 1, a transform unit 1344 of size NxN may be set.
- when the partition mode information is set to one of the asymmetric partition modes 2NxnU 1332, 2NxnD 1334, nLx2N 1336, and nRx2N 1338, if the transform unit split information (TU size flag) is 0, a transform unit 1352 of size 2Nx2N is set, and if the transform unit split information is 1, a transform unit 1354 of size N/2xN/2 may be set.
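- The mapping from TU size flag and partition mode to transform-unit size described in the last two items can be sketched as follows; the function name and the integer-division convention are assumptions for illustration.

```python
def transform_unit_size(cu_size, partition_mode, tu_split_info):
    """Transform-unit side length implied by the TU split information: split
    info 0 keeps the coding-unit size 2Nx2N; split info 1 gives NxN for the
    symmetric partition modes and N/2xN/2 for the asymmetric ones.
    'cu_size' is the coding unit's side length (2N)."""
    symmetric = {"2Nx2N", "2NxN", "Nx2N", "NxN"}
    if tu_split_info == 0:
        return cu_size
    if partition_mode in symmetric:
        return cu_size // 2          # NxN
    return cu_size // 4              # N/2 x N/2 for 2NxnU, 2NxnD, nLx2N, nRx2N

print(transform_unit_size(32, "Nx2N", 1), transform_unit_size(32, "2NxnU", 1))  # 16 8
```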
- the transform unit split information (TU size flag) described above with reference to FIG. 19 is a flag having a value of 0 or 1, but the transform unit split information according to an embodiment is not limited to a 1-bit flag; according to a setting, it may increase as 0, 1, 2, 3, etc., and the transform unit may be split hierarchically.
- the transform unit split information may be used as an embodiment of the transform index. In this case, by using the transform unit split information together with the maximum size and the minimum size of the transform unit, the size of the transform unit actually used may be expressed.
- the video encoding apparatus 100 may encode maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information.
- the encoded maximum transform unit size information, minimum transform unit size information, and maximum transform unit split information may be inserted into the SPS.
- the video decoding apparatus 200 may use the maximum transform unit size information, the minimum transform unit size information, and the maximum transform unit split information to use for video decoding.
- the maximum transform unit split information is defined as 'MaxTransformSizeIndex'
- the minimum transform unit size is 'MinTransformSize'
- and the transform unit size when the transform unit split information is 0 is defined as 'RootTuSize', then the minimum transform unit size 'CurrMinTuSize' possible in the current coding unit can be defined as in relation (1) below.
- CurrMinTuSize = max(MinTransformSize, RootTuSize / (2^MaxTransformSizeIndex)) ......... (1)
- 'RootTuSize', which is the transform unit size when the transform unit split information is 0, may indicate the maximum transform unit size that can be adopted in the system. That is, in relation (1), 'RootTuSize/(2^MaxTransformSizeIndex)' is the transform unit size obtained by splitting 'RootTuSize', the transform unit size when the transform unit split information is 0, the number of times corresponding to the maximum transform unit split information, and 'MinTransformSize' is the minimum transform unit size; the minimum transform unit size 'CurrMinTuSize' possible in the current coding unit is determined from these two values.
- the maximum transform unit size RootTuSize may vary depending on a prediction mode.
- for example, if the current prediction mode is the inter mode, RootTuSize may be determined according to relation (2) below.
- 'MaxTransformSize' represents the maximum transform unit size
- 'PUSize' represents the current prediction unit size.
- RootTuSize = min(MaxTransformSize, PUSize) ......... (2)
- 'RootTuSize' which is a transform unit size when the transform unit split information is 0, may be set to a smaller value among the maximum transform unit size and the current prediction unit size.
- if the prediction mode of the current partition unit is the intra mode, 'RootTuSize' may be determined according to relation (3) below.
- 'PartitionSize' represents the size of the current partition unit.
- RootTuSize = min(MaxTransformSize, PartitionSize) ........... (3)
- in this case, the transform unit size 'RootTuSize' when the transform unit split information is 0 may be set to the smaller of the maximum transform unit size and the current partition unit size.
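- Relations (1) to (3) can be combined in a small numeric sketch. Reading relation (1) as taking the larger of 'MinTransformSize' and 'RootTuSize' split 'MaxTransformSizeIndex' times is an assumption of this sketch, as is every identifier below.

```python
def current_min_tu_size(root_tu_size, max_tu_split_info, min_transform_size):
    """Relation (1): the smallest transform unit reachable in the current coding
    unit, i.e. RootTuSize split 'MaxTransformSizeIndex' times, but never below
    the system-wide minimum transform unit size (assumed reading)."""
    return max(min_transform_size, root_tu_size // (2 ** max_tu_split_info))

def root_tu_size_inter(max_transform_size, pu_size):
    """Relation (2): RootTuSize = min(MaxTransformSize, PUSize)."""
    return min(max_transform_size, pu_size)

def root_tu_size_intra(max_transform_size, partition_size):
    """Relation (3): RootTuSize = min(MaxTransformSize, PartitionSize)."""
    return min(max_transform_size, partition_size)

# e.g. MaxTransformSize 32 and a 64-wide PU give RootTuSize 32; with
# MaxTransformSizeIndex 2 and MinTransformSize 4, CurrMinTuSize is 8.
root = root_tu_size_inter(32, 64)
print(root, current_min_tu_size(root, 2, 4))
```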
- however, the current maximum transform unit size 'RootTuSize', which changes according to the prediction mode of the partition unit, is only an embodiment, and the factor that determines the current maximum transform unit size is not limited thereto.
- according to the video encoding method based on coding units of a tree structure, the image data of the spatial domain is encoded for each coding unit of the tree structure, and according to the video decoding method based on coding units of the tree structure, decoding is performed for each largest coding unit so that the image data of the spatial domain may be reconstructed, thereby reconstructing a picture and a video that is a sequence of pictures.
- the reconstructed video can be played back by a playback device, stored in a storage medium, or transmitted over a network.
- the above-described embodiments can be written as a program that can be executed in a computer, and can be implemented in a general-purpose digital computer which operates the program using a computer-readable recording medium.
- the computer-readable recording medium may include a storage medium such as a magnetic storage medium (eg, a ROM, a floppy disk, a hard disk, etc.) and an optical reading medium (eg, a CD-ROM, a DVD, etc.).
- the image encoding method and/or video encoding method described above with reference to FIGS. 1 to 19 will be collectively referred to as a video encoding method.
- the image decoding method and / or video decoding method described above with reference to FIGS. 1 to 19 will be referred to as a video decoding method.
- the video encoding apparatus composed of the image encoding apparatus 40, the video encoding apparatus 100, or the image encoding unit 400 described above with reference to FIGS. 1 to 19 is collectively referred to as a “video encoding apparatus”.
- a video decoding apparatus including the image decoding apparatus 30, the video decoding apparatus 200, or the image decoding unit 500 described above with reference to FIGS. 1 to 19 is referred to as a “video decoding apparatus”.
- a computer-readable storage medium in which a program is stored according to an embodiment of the present invention will be described in detail below.
- the disk 26000 described above as a storage medium may be a hard drive, a CD-ROM disk, a Blu-ray disk, or a DVD disk.
- the disk 26000 is composed of a plurality of concentric tracks tr, and the tracks are divided into a predetermined number of sectors Se in the circumferential direction.
- a program for implementing the above-described quantization parameter determination method, video encoding method, and video decoding method may be allocated and stored in a specific region of the disc 26000 which stores the program according to the above-described embodiment.
- a computer system achieved using a storage medium storing a program for implementing the above-described video encoding method and video decoding method will be described below with reference to FIG. 22.
- the computer system 26700 may store a program for implementing at least one of a video encoding method and a video decoding method on the disc 26000 using the disc drive 26800.
- the program may be read from the disk 26000 by the disk drive 26800, and the program may be transferred to the computer system 26700.
- a program for implementing at least one of a video encoding method and a video decoding method according to an embodiment may be stored in a memory card, a ROM cassette, and a solid state drive (SSD) as well as the disk 26000 illustrated in FIGS. 20 and 21. Can be.
- FIG. 22 illustrates the overall structure of a content supply system 11000 for providing a content distribution service.
- the service area of the communication system is divided into cells of a predetermined size, and wireless base stations 11700, 11800, 11900, and 12000 that serve as base stations are installed in each cell.
- the content supply system 11000 includes a plurality of independent devices.
- independent devices such as a computer 12100, a personal digital assistant (PDA) 12200, a camera 12300, and a mobile phone 12500 are connected to the Internet 11100 via an Internet service provider 11200, a communication network 11400, and the wireless base stations 11700, 11800, 11900, and 12000.
- the content supply system 11000 is not limited to the structure shown in FIG. 24, and devices may be selectively connected.
- the independent devices may be directly connected to the communication network 11400 without passing through the wireless base stations 11700, 11800, 11900, and 12000.
- the video camera 12300 is an imaging device capable of capturing video images like a digital video camera.
- the mobile phone 12500 may adopt at least one communication scheme among various protocols such as Personal Digital Communications (PDC), code division multiple access (CDMA), wideband code division multiple access (W-CDMA), Global System for Mobile Communications (GSM), and Personal Handyphone System (PHS).
- the video camera 12300 may be connected to the streaming server 11300 through the wireless base station 11900 and the communication network 11400.
- the streaming server 11300 may stream and transmit the content transmitted by the user using the video camera 12300 through real time broadcasting.
- Content received from the video camera 12300 may be encoded by the video camera 12300 or the streaming server 11300.
- Video data captured by the video camera 12300 may be transmitted to the streaming server 11300 via the computer 12100.
- Video data captured by the camera 12600 may also be transmitted to the streaming server 11300 via the computer 12100.
- the camera 12600 is an imaging device capable of capturing both still and video images, like a digital camera.
- Video data received from the camera 12600 may be encoded by the camera 12600 or the computer 12100.
- Software for video encoding and decoding may be stored in a computer readable recording medium such as a CD-ROM disk, a floppy disk, a hard disk drive, an SSD, or a memory card that the computer 12100 may access.
- video data may be received from the mobile phone 12500.
- the video data may be encoded by a large scale integrated circuit (LSI) system installed in the video camera 12300, the mobile phone 12500, or the camera 12600.
- Content recorded by a user using the video camera 12300, the camera 12600, the mobile phone 12500, or another imaging device is encoded and sent to the streaming server 11300.
- the streaming server 11300 may stream and transmit content data to other clients who have requested the content data.
- the clients are devices capable of decoding the encoded content data, and may be, for example, a computer 12100, a PDA 12200, a video camera 12300, or a mobile phone 12500.
- the content supply system 11000 allows clients to receive and play encoded content data.
- the content supply system 11000 enables clients to receive and decode and reproduce encoded content data in real time, thereby enabling personal broadcasting.
- the video encoding apparatus and the video decoding apparatus may be applied to encoding and decoding operations of independent devices included in the content supply system 11000.
- the mobile phone 12500 is not limited in functionality and may be a smart phone that can change or expand a substantial portion of its functions through an application program.
- the mobile phone 12500 includes a built-in antenna 12510 for exchanging RF signals with the wireless base station 12000, and a display screen 12520, such as an LCD (Liquid Crystal Display) or OLED (Organic Light Emitting Diodes) screen, for displaying images captured by the camera 1530 or images received via the antenna 12510 and decoded.
- the mobile phone 12500 includes an operation panel 12540 including a control button and a touch panel. When the display screen 12520 is a touch screen, the operation panel 12540 further includes a touch sensing panel of the display screen 12520.
- the mobile phone 12500 includes a speaker 12580 or another type of audio output unit for outputting voice and sound, and a microphone 12550 or another type of audio input unit for inputting voice and sound.
- the mobile phone 12500 further includes a camera 1530, such as a CCD camera, for capturing video and still images.
- the mobile phone 12500 also includes a storage medium 12570 for storing encoded or decoded data, such as video or still images captured by the camera 1530, received by e-mail, or obtained in another form, and a slot 12560 for mounting the storage medium 12570 to the mobile phone 12500.
- the storage medium 12570 may be another type of flash memory such as an electrically erasable and programmable read only memory (EEPROM) embedded in an SD card or a plastic case.
- FIG. 24 shows the internal structure of the mobile phone 12500.
- the power supply circuit 12700, the operation input controller 12640, the image encoder 12720, the camera interface 12630, the LCD controller 12620, the image decoder 12690, the multiplexer/demultiplexer 12680, the recording/reading unit 12670, the modulation/demodulation unit 12660, and the sound processor 12650 are connected to the central controller 12710 through the synchronization bus 1730.
- the power supply circuit 12700 supplies power to each part of the mobile phone 12500 from the battery pack, so that the mobile phone 12500 may be set to an operation mode.
- the central controller 12710 includes a CPU, a read only memory (ROM), and a random access memory (RAM).
- a digital signal is generated in the mobile phone 12500 under the control of the central controller 12710; for example, a digital sound signal is generated in the sound processor 12650.
- the image encoder 12720 may generate a digital image signal, and text data of the message may be generated through the operation panel 12540 and the operation input controller 12640.
- the modulator/demodulator 12660 modulates the frequency band of the digital signal, and the communication circuit 12610 performs digital-to-analog conversion and frequency conversion on the band-modulated digital sound signal.
- the transmission signal output from the communication circuit 12610 may be transmitted to the voice communication base station or the radio base station 12000 through the antenna 12510.
- the sound signal acquired by the microphone 12550 is converted into a digital sound signal by the sound processor 12650 under the control of the central controller 12710.
- the generated digital sound signal may be converted into a transmission signal through the modulation / demodulation unit 12660 and the communication circuit 12610 and transmitted through the antenna 12510.
- the text data of the message is input using the operation panel 12540, and the text data is transmitted to the central controller 12710 through the operation input controller 12640.
- the text data is converted into a transmission signal through the modulator / demodulator 12660 and the communication circuit 12610, and transmitted to the radio base station 12000 through the antenna 12510.
- the image data photographed by the camera 1530 is provided to the image encoder 12720 through the camera interface 12630.
- the image data photographed by the camera 1252 may be directly displayed on the display screen 12520 through the camera interface 12630 and the LCD controller 12620.
- the structure of the image encoder 12720 may correspond to the structure of the video encoding apparatus according to the above-described embodiment.
- the image encoder 12720 encodes the image data provided from the camera 1252 according to the above-described video encoding method, converts it into compression-encoded image data, and provides the encoded image data to the multiplexer/demultiplexer 12680.
- the sound signal obtained by the microphone 12550 of the mobile phone 12500 is also converted into digital sound data through the sound processor 12650 during recording by the camera 1250, and the digital sound data may be delivered to the multiplexer/demultiplexer 12680.
- the multiplexer / demultiplexer 12680 multiplexes the encoded image data provided from the image encoder 12720 together with the acoustic data provided from the sound processor 12650.
- the multiplexed data may be converted into a transmission signal through the modulation / demodulation unit 12660 and the communication circuit 12610 and transmitted through the antenna 12510.
- the signal received through the antenna 12510 is converted into a digital signal through frequency recovery and analog-to-digital conversion.
- the modulator / demodulator 12660 demodulates the frequency band of the digital signal.
- the band demodulated digital signal is transmitted to the video decoder 12690, the sound processor 12650, or the LCD controller 12620 according to the type.
- when the mobile phone 12500 is in the call mode, it amplifies the signal received through the antenna 12510 and generates a digital sound signal through frequency conversion and analog-to-digital conversion.
- the received digital sound signal is converted into an analog sound signal through the modulator/demodulator 12660 and the sound processor 12650 under the control of the central controller 12710, and the analog sound signal is output through the speaker 12580.
- a signal received from the radio base station 12000 via the antenna 12510 is converted into multiplexed data as a result of the processing of the modulator / demodulator 12660.
- the output multiplexed data is transmitted to the multiplexer/demultiplexer 12680.
- the multiplexer / demultiplexer 12680 demultiplexes the multiplexed data to separate the encoded video data stream and the encoded audio data stream.
- the encoded video data stream is provided to the video decoder 12690, and the encoded audio data stream is provided to the sound processor 12650.
- the structure of the image decoder 12690 may correspond to the structure of the video decoding apparatus described above.
- the image decoder 12690 decodes the encoded video data by using the above-described video decoding method to generate reconstructed video data, and provides the reconstructed video data to the display screen 12520 via the LCD controller 12620.
- accordingly, the video data of a video file accessed from a website on the Internet can be displayed on the display screen 12520.
- at the same time, the sound processor 12650 may convert the audio data into an analog sound signal and provide it to the speaker 12580. Accordingly, audio data contained in a video file accessed from a website on the Internet can also be reproduced through the speaker 12580.
- the mobile phone 12500 or another type of communication terminal may be a transmitting/receiving terminal including both the video encoding apparatus and the video decoding apparatus according to an embodiment, a transmitting terminal including only the video encoding apparatus, or a receiving terminal including only the video decoding apparatus.
- FIG. 25 illustrates a digital broadcasting system employing a communication system, according to various embodiments.
- the digital broadcasting system according to the embodiment of FIG. 25 may receive a digital broadcast transmitted through a satellite or terrestrial network using the video encoding apparatus and the video decoding apparatus according to the embodiment.
- the broadcast station 12890 transmits the video data stream to the communication satellite or the broadcast satellite 12900 through radio waves.
- the broadcast satellite 12900 transmits a broadcast signal, and the broadcast signal is received by a satellite broadcast receiver via the antenna 12860 in each home.
- the encoded video stream may be decoded and played back by the TV receiver 12610, set-top box 12870, or other device.
- the playback apparatus 12230 may read and decode the encoded video stream recorded on the storage medium 12620 such as a disk and a memory card.
- the reconstructed video signal may thus be reproduced in the monitor 12840, for example.
- the video decoding apparatus may also be mounted in the set-top box 12870 connected to the antenna 12860 for satellite / terrestrial broadcasting or the cable antenna 12850 for cable TV reception. Output data of the set-top box 12870 may also be reproduced by the TV monitor 12880.
- the video decoding apparatus may be mounted in the TV receiver 12810 instead of the set top box 12870.
- An automobile 12920 with an appropriate antenna 12910 may receive signals from satellite 12800 or radio base station 11700.
- the decoded video may be played on the display screen of the car navigation system 12930 mounted on the car 12920.
- the video signal may be encoded by the video encoding apparatus according to an embodiment and recorded and stored in a storage medium.
- the video signal may be stored in the DVD disk 12960 by the DVD recorder, or the video signal may be stored in the hard disk by the hard disk recorder 12950.
- the video signal may be stored in the SD card 12970.
- if the hard disk recorder 12950 includes the video decoding apparatus according to an embodiment, a video signal recorded on the DVD disk 12960, the SD card 12970, or another type of storage medium may be reproduced on the monitor.
- the vehicle navigation system 12930 may not include the camera 1530, the camera interface 12630, and the image encoder 12720 of FIG. 26.
- the computer 12100 and the TV receiver 12610 may not include the camera 1250, the camera interface 12630, and the image encoder 12720 of FIG. 26.
- FIG. 26 is a diagram illustrating a network structure of a cloud computing system using a video encoding apparatus and a video decoding apparatus, according to various embodiments.
- a cloud computing system may include a cloud computing server 14100, a user DB 14100, a computing resource 14200, and a user terminal.
- the cloud computing system provides an on demand outsourcing service of computing resources through an information communication network such as the Internet at the request of a user terminal.
- service providers integrate the computing resources of data centers located in different physical locations into virtualization technology to provide users with the services they need.
- the service user does not install computing resources such as applications, storage, an operating system, and security in each user's own terminal, but may select and use services in a virtual space created through virtualization technology, as much as desired at the desired time.
- a user terminal of a specific service user accesses the cloud computing server 14100 through an information communication network including the Internet and a mobile communication network.
- the user terminals may be provided with a cloud computing service, particularly a video playback service, from the cloud computing server 14100.
- the user terminal may be any electronic device capable of accessing the Internet, such as a desktop PC 14300, a smart TV 14400, a smartphone 14500, a notebook 14600, a portable multimedia player (PMP) 14700, a tablet PC 14800, and the like. It can be a device.
- the cloud computing server 14100 may integrate and provide a plurality of computing resources 14200 distributed in a cloud network to a user terminal.
- the plurality of computing resources 14200 include various data services and may include data uploaded from a user terminal.
- the cloud computing server 14100 integrates, by using virtualization technology, video databases distributed in various locations and provides the result as a service requested by a user terminal.
- the user DB 14100 stores user information of users who have subscribed to the cloud computing service.
- the user information may include login information and personal credit information such as an address and a name.
- the user information may include an index of videos.
- the index may include a list of videos that have been played, a list of videos being played, and a stop time of the videos being played.
- Information about a video stored in the user DB 14100 may be shared among user devices.
- the playback history of the predetermined video service is stored in the user DB 14100.
- the cloud computing server 14100 searches for and plays a predetermined video service with reference to the user DB 14100.
- when the smartphone 14500 receives the video data stream through the cloud computing server 14100, the operation of decoding the video data stream and playing the video is similar to the operation of the mobile phone 12500 described above with reference to FIG. 24.
- the cloud computing server 14100 may refer to the playback history of a predetermined video service stored in the user DB 14100. For example, the cloud computing server 14100 receives, from a user terminal, a playback request for a video stored in the user DB 14100. If the video was previously being played, the streaming method of the cloud computing server 14100 differs depending on whether the user terminal selects playback from the beginning or from the previous pause point. For example, when the user terminal requests playback from the beginning, the cloud computing server 14100 streams the video to the user terminal starting from the first frame. On the other hand, when the user terminal requests playback to resume from the previous pause point, the cloud computing server 14100 streams the video to the user terminal starting from the frame at the pause point. A minimal sketch of this resume decision is given below.
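For illustration only, the following is a minimal sketch of the pause-and-resume decision described above, assuming a hypothetical per-user record in the user DB; the names UserRecord and choose_start_frame and the fixed frame rate are illustrative assumptions, not elements of this disclosure.

```python
# Hypothetical sketch of the pause-and-resume decision described above.
# UserRecord, choose_start_frame and the frame-rate constant are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserRecord:
    """Per-user index kept in the user DB: playback history and pause points."""
    played: List[str] = field(default_factory=list)     # videos already played
    playing: List[str] = field(default_factory=list)    # videos currently being played
    pause_points: Dict[str, float] = field(default_factory=dict)  # video id -> seconds

def choose_start_frame(record: UserRecord, video_id: str,
                       resume: bool, frame_rate: float = 30.0) -> int:
    """Return the frame index from which streaming should start."""
    pause = record.pause_points.get(video_id)
    if resume and pause is not None:
        return int(pause * frame_rate)   # continue from the stored pause point
    return 0                             # otherwise stream from the first frame

# Usage example with a hypothetical user DB entry.
user_db = {"user-1": UserRecord(playing=["movie-42"], pause_points={"movie-42": 95.5})}
print(choose_start_frame(user_db["user-1"], "movie-42", resume=True))   # 2865
print(choose_start_frame(user_db["user-1"], "movie-42", resume=False))  # 0
```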
- the user terminal may include a video decoding apparatus according to an embodiment described above with reference to FIGS. 1 to 19.
- the user terminal may include a video encoding apparatus according to an embodiment described above with reference to FIGS. 1 to 20.
- the user terminal may include both the video encoding apparatus and the video decoding apparatus according to the above-described embodiments with reference to FIGS. 1 to 19.
- Various examples in which the image encoding method, the image decoding method, the image encoding apparatus, and the image decoding apparatus described above with reference to FIGS. 1 to 19 are used have been described above with reference to FIGS. 20 to 26. However, embodiments in which the video encoding method and the video decoding method described above with reference to FIGS. 1 to 19 are stored in a storage medium, or in which the video encoding apparatus and the video decoding apparatus are implemented in a device, are not limited to the embodiments of FIGS. 20 to 26.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (20)
- An image decoding method comprising: obtaining a residual signal of a block of an image from a bitstream; determining at least one partition of the block of the image based on a maskmap, which is information about splitting of the block; performing filtering at a boundary of the at least one partition; and generating a reconstruction signal based on a result of performing the filtering, wherein the filtering is performed by using at least one of the residual signal and a prediction signal corresponding to the residual signal.
- The image decoding method of claim 1, wherein, when the image is a color image or a depth image, the corresponding maskmap is related to a split form of a block of a depth image or a block of a color image, respectively.
- The image decoding method of claim 1, wherein the maskmap is split, based on an average value of a plurality of samples of the block, into a region in which the values of the plurality of samples are greater than the average value and a region in which the values are not greater than the average value.
- The image decoding method of claim 1, wherein the performing of the filtering comprises performing filtering based on a filtering coefficient, and the filtering coefficient is adaptive according to image characteristics.
- The image decoding method of claim 1, wherein the performing of the filtering comprises performing horizontal-direction filtering when the block is split into a plurality of partitions whose boundary direction is vertical, and performing vertical-direction filtering when the block is split into a plurality of partitions whose boundary direction is horizontal.
- The image decoding method of claim 1, wherein the performing of the filtering comprises, when the block is split into a plurality of partitions whose boundary directions are vertical and horizontal, performing filtering in at least one of the horizontal direction and the vertical direction on samples adjacent to the vertical-direction boundary and the horizontal-direction boundary.
- The image decoding method of claim 1, wherein the performing of the filtering comprises: detecting a plurality of samples on the maskmap corresponding to a position of a sample of the image on which filtering is to be performed; comparing the detected plurality of samples with one another; and performing filtering on the sample of the image corresponding to the positions of the detected plurality of samples when the detected plurality of samples are determined to be different from one another as a result of the comparing.
- The image decoding method of claim 1, further comprising, when a size of the maskmap is different from a size of the block, performing scaling to change the size of the maskmap to be equal to the size of the block.
- An image encoding method comprising: obtaining an original signal of a block of an image; determining at least one partition of the block of the image based on a maskmap, which is information about splitting of the block; performing filtering at a boundary of the at least one partition; and generating a filtered residual signal based on a result of performing the filtering, wherein the filtering is performed by using at least one of the original signal, a prediction signal corresponding to the original signal, and a residual signal related to the original signal and the prediction signal.
- The image encoding method of claim 9, wherein the maskmap may be obtained based on a split form of a block of a depth image or a block of a color image.
- The image encoding method of claim 9, wherein the maskmap is split, based on an average value of a plurality of samples of the block, into a region in which the values of the plurality of samples are greater than the average value and a region in which the values are not greater than the average value.
- The image encoding method of claim 9, wherein the performing of the filtering comprises performing filtering based on a filtering coefficient, and the filtering coefficient is adaptive according to image characteristics.
- The image encoding method of claim 9, wherein the performing of the filtering comprises performing horizontal-direction filtering when the block is split into a plurality of partitions whose boundary direction is vertical, and performing vertical-direction filtering when the block is split into a plurality of partitions whose boundary direction is horizontal.
- The image encoding method of claim 13, wherein the performing of the filtering comprises, when the block is split into a plurality of partitions whose boundary directions are vertical and horizontal, performing filtering in at least one of the horizontal direction and the vertical direction on samples adjacent to the vertical-direction boundary and the horizontal-direction boundary.
- The image encoding method of claim 9, wherein the performing of the filtering comprises: detecting a plurality of samples on the maskmap corresponding to a position of a sample of the image on which filtering is to be performed; comparing the detected plurality of samples with one another; and performing filtering on the sample of the image corresponding to the positions of the detected plurality of samples when the detected plurality of samples are determined to be different from one another as a result of the comparing.
- The image encoding method of claim 9, further comprising, when a size of the maskmap is different from a size of the block, performing scaling to change the size of the maskmap to be equal to the size of the block.
- An image decoding apparatus comprising: a residual signal obtainer configured to obtain, from a bitstream, a residual signal related to a block constituting the image; a partition determiner configured to determine at least one partition of the block of the image; a filtering unit configured to perform filtering at a boundary of the at least one partition; and a decoder configured to generate a reconstruction signal based on at least one of a prediction signal and a residual signal on which the filtering has been performed.
- An image encoding apparatus comprising: a partition determiner configured to determine at least one partition of a block of an image; a filtering unit configured to perform filtering at a boundary of the at least one partition; and an encoder configured to generate a filtered residual signal based on at least one of an original signal, a prediction signal, and a residual signal on which the filtering has been performed.
- A computer-readable recording medium storing a program for implementing the image decoding method of any one of claims 1 and 8.
- A computer-readable recording medium storing a program for implementing the image encoding method of any one of claims 9 and 16.
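As a concrete, non-normative illustration of claims 3, 5 to 8 and their encoder-side counterparts (claims 11, 13 to 16), the sketch below derives a maskmap from the sample mean of a block, rescales it to the block size, and smooths samples adjacent to partition boundaries. The 3-tap kernel, the weight w, the nearest-neighbour scaling, and all function names are assumptions made for illustration rather than the filter actually specified in this disclosure.

```python
# Illustrative sketch only: a simplified, non-normative reading of the
# maskmap-based partition-boundary filtering in the claims above.
import numpy as np

def maskmap_from_mean(block: np.ndarray) -> np.ndarray:
    """Split the block into a region above the sample mean and a region not
    above it (cf. claims 3 and 11)."""
    return (block > block.mean()).astype(np.uint8)

def scale_maskmap(mask: np.ndarray, size: int) -> np.ndarray:
    """Resize the maskmap to the block size when the two differ
    (cf. claims 8 and 16); nearest-neighbour sampling is an assumption."""
    ys = (np.arange(size) * mask.shape[0]) // size
    xs = (np.arange(size) * mask.shape[1]) // size
    return mask[np.ix_(ys, xs)]

def filter_partition_boundaries(signal: np.ndarray, mask: np.ndarray,
                                w: float = 0.25) -> np.ndarray:
    """Smooth each sample whose right or lower maskmap neighbour differs:
    horizontal filtering across vertical boundaries, vertical filtering across
    horizontal boundaries (a simplified reading of claims 5 to 7)."""
    out = signal.astype(np.float64)
    height, width = signal.shape
    for y in range(height):
        for x in range(width):
            # Vertical partition boundary between (y, x) and (y, x + 1):
            # apply a horizontal [w, 1 - 2w, w] kernel at (y, x).
            if 0 < x < width - 1 and mask[y, x] != mask[y, x + 1]:
                out[y, x] = (w * signal[y, x - 1] + (1 - 2 * w) * signal[y, x]
                             + w * signal[y, x + 1])
            # Horizontal partition boundary between (y, x) and (y + 1, x):
            # apply a vertical [w, 1 - 2w, w] kernel at (y, x).
            if 0 < y < height - 1 and mask[y, x] != mask[y + 1, x]:
                out[y, x] = (w * signal[y - 1, x] + (1 - 2 * w) * signal[y, x]
                             + w * signal[y + 1, x])
    return out

# Usage on an 8x8 depth-style block with a vertical step edge.
block = np.zeros((8, 8))
block[:, 4:] = 100
mask = scale_maskmap(maskmap_from_mean(block), 8)  # sizes already match here
filtered = filter_partition_boundaries(block, mask)
```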
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15765411.2A EP3119090A4 (en) | 2014-03-19 | 2015-03-19 | Method for performing filtering at partition boundary of block related to 3d image |
US15/127,105 US20180176559A1 (en) | 2014-03-19 | 2015-03-19 | Method for performing filtering at partition boundary of block related to 3d image |
KR1020167025549A KR102457810B1 (ko) | 2014-03-19 | 2015-03-19 | Method for performing filtering at partition boundary of block related to 3d image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461955349P | 2014-03-19 | 2014-03-19 | |
US61/955,349 | 2014-03-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015142075A1 true WO2015142075A1 (ko) | 2015-09-24 |
Family
ID=54144962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/002675 WO2015142075A1 (ko) | 2014-03-19 | 2015-03-19 | 3d 영상에 관련된 블록의 파티션 경계에서 필터링 수행 방법 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20180176559A1 (ko) |
EP (1) | EP3119090A4 (ko) |
KR (1) | KR102457810B1 (ko) |
WO (1) | WO2015142075A1 (ko) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108111851A (zh) * | 2016-11-25 | 2018-06-01 | 华为技术有限公司 | 一种去块滤波方法及终端 |
CN112352427A (zh) * | 2018-06-18 | 2021-02-09 | 交互数字Vc控股公司 | 基于图像块的非对称二元分区的视频编码和解码的方法和装置 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101978194B1 (ko) * | 2014-11-14 | 2019-05-14 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 디지털 이미지를 처리하는 방법 및 시스템 |
CN107211131B (zh) | 2014-11-14 | 2020-07-21 | 华为技术有限公司 | 对数字图像块进行基于掩码的处理的系统和方法 |
BR112017010007B1 (pt) * | 2014-11-14 | 2023-04-11 | Huawei Technologies Co., Ltd | Aparelho adaptado para gerar um conjunto de coeficientes de transformada, método para gerar um conjunto de coeficientes de transformada, aparelho adaptado para decodificar um bloco de um quadro, e método para reconstruir um bloco de um quadro |
US10469841B2 (en) * | 2016-01-29 | 2019-11-05 | Google Llc | Motion vector prediction using prior frame residual |
US10306258B2 (en) | 2016-01-29 | 2019-05-28 | Google Llc | Last frame motion vector partitioning |
EP3340624B1 (en) | 2016-12-20 | 2019-07-03 | Axis AB | Encoding a privacy masked image |
US20190020888A1 (en) * | 2017-07-11 | 2019-01-17 | Google Llc | Compound intra prediction for video coding |
CN113139980A (zh) * | 2021-05-13 | 2021-07-20 | 广西柳工机械股份有限公司 | 图像边界的确定方法、装置及存储介质 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120082606A (ko) * | 2011-01-14 | 2012-07-24 | 삼성전자주식회사 | 깊이 영상의 부호화/복호화 장치 및 방법 |
KR20130038360A (ko) * | 2010-11-25 | 2013-04-17 | 엘지전자 주식회사 | 영상 정보의 시그널링 방법 및 이를 이용한 영상 정보의 복호화 방법 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8050329B2 (en) * | 1998-06-26 | 2011-11-01 | Mediatek Inc. | Method and apparatus for generic scalable shape coding |
KR101484280B1 (ko) * | 2009-12-08 | 2015-01-20 | 삼성전자주식회사 | 임의적인 파티션을 이용한 움직임 예측에 따른 비디오 부호화 방법 및 장치, 임의적인 파티션을 이용한 움직임 보상에 따른 비디오 복호화 방법 및 장치 |
JP6407423B2 (ja) * | 2014-06-26 | 2018-10-17 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | 高効率映像符号化においてデプスベースのブロック分割を提供するための方法および装置 |
-
2015
- 2015-03-19 US US15/127,105 patent/US20180176559A1/en not_active Abandoned
- 2015-03-19 WO PCT/KR2015/002675 patent/WO2015142075A1/ko active Application Filing
- 2015-03-19 KR KR1020167025549A patent/KR102457810B1/ko active IP Right Grant
- 2015-03-19 EP EP15765411.2A patent/EP3119090A4/en not_active Withdrawn
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130038360A (ko) * | 2010-11-25 | 2013-04-17 | 엘지전자 주식회사 | 영상 정보의 시그널링 방법 및 이를 이용한 영상 정보의 복호화 방법 |
KR20120082606A (ko) * | 2011-01-14 | 2012-07-24 | 삼성전자주식회사 | 깊이 영상의 부호화/복호화 장치 및 방법 |
Non-Patent Citations (4)
Title |
---|
FABIAN JAGER ET AL.: "CE3-related: Depth-based Block Partitioning", JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSIONS OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 5TH MEETING, 21 July 2013 (2013-07-21), Vienna, AT, pages 1 - 8, XP030131132 * |
JIAN-LIANG LIN ET AL.: "3D-CE5 related: Removal of boundary filters for depth intra prediction", JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSIONS OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 7TH MEETING, 6 January 2014 (2014-01-06), San Jose, US, pages 1 - 5, XP030131797 * |
See also references of EP3119090A4 * |
YUNSEOK SONG ET AL.: "3D-CE3.a results on depth boundary filtering", JOINT COLLABORATIVE TEAM ON 3D VIDEO CODING EXTENSION DEVELOPMENT OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 1ST MEETING, 12 July 2012 (2012-07-12), Stockholm, SE, pages 1 - 5, XP030054302 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108111851A (zh) * | 2016-11-25 | 2018-06-01 | 华为技术有限公司 | 一种去块滤波方法及终端 |
CN112352427A (zh) * | 2018-06-18 | 2021-02-09 | 交互数字Vc控股公司 | 基于图像块的非对称二元分区的视频编码和解码的方法和装置 |
CN112352427B (zh) * | 2018-06-18 | 2024-04-09 | 交互数字Vc控股公司 | 基于图像块的非对称二元分区的视频编码和解码的方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
EP3119090A4 (en) | 2017-08-30 |
US20180176559A1 (en) | 2018-06-21 |
EP3119090A1 (en) | 2017-01-18 |
KR102457810B1 (ko) | 2022-10-21 |
KR20160136310A (ko) | 2016-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2013022296A2 (ko) | Method and apparatus for encoding multi-view video data, and method and apparatus for decoding same | |
- WO2013022297A2 (ko) | Method and apparatus for encoding a depth map of multi-view video data, and method and apparatus for decoding same | |
- WO2015053594A1 (ko) | Video encoding method and apparatus using intra block copy prediction, and video decoding method and apparatus therefor | |
- WO2015142075A1 (ko) | Method for performing filtering at partition boundary of block related to 3d image | |
- WO2015137783A1 (ko) | Method and apparatus for constructing a merge candidate list for decoding and encoding of inter-layer video | |
- WO2013062391A1 (ko) | Inter prediction method and apparatus therefor, and motion compensation method and apparatus therefor | |
- WO2015194915A1 (ko) | Method and apparatus for transmitting a prediction mode of a depth image for inter-layer video encoding and decoding | |
- WO2014007521A1 (ko) | Motion vector prediction method and apparatus for video encoding or video decoding | |
- WO2014112830A1 (ko) | Video encoding method and apparatus for decoder configuration, and video decoding method and apparatus based on decoder configuration | |
- WO2013062389A1 (ko) | Intra prediction method and apparatus for video | |
- WO2013039357A2 (ko) | Video encoding and decoding method and apparatus | |
- WO2015102441A1 (ko) | Video encoding method and apparatus using efficient parameter delivery, and video decoding method and apparatus therefor | |
- WO2014007518A1 (ko) | Video encoding method and apparatus for determining a reference picture list for inter prediction according to block size, and video decoding method and apparatus therefor | |
- WO2014163460A1 (ko) | Video stream encoding method and apparatus according to layer identifier extension, and video stream decoding method and apparatus according to layer identifier extension | |
- WO2014175647A1 (ko) | Multi-view video encoding method and apparatus using view synthesis prediction, and multi-view video decoding method and apparatus therefor | |
- WO2015009113A1 (ko) | Intra prediction method of a depth image for inter-layer video decoding and encoding apparatus and method | |
- WO2016072753A1 (ko) | Apparatus and method for sample-unit predictive encoding | |
- WO2014163465A1 (ko) | Depth map encoding method and apparatus therefor, and decoding method and apparatus therefor | |
- WO2013022281A2 (ko) | Multi-view video prediction encoding method and apparatus therefor, and multi-view video prediction decoding method and apparatus therefor | |
- WO2015152605A1 (ko) | Method and apparatus for encoding or decoding a depth image | |
- WO2015137736A1 (ko) | Method and apparatus for transmitting a prediction mode of a depth image for inter-layer video encoding and decoding | |
- WO2015102439A1 (ko) | Buffer management method and apparatus for decoding and encoding of multi-layer video | |
- WO2015056945A1 (ko) | Depth intra encoding method and apparatus therefor, and decoding method and apparatus therefor | |
- WO2015009108A1 (ko) | Video encoding method and apparatus using video format parameter delivery, and video decoding method and apparatus therefor | |
- WO2015194852A1 (ko) | Multi-view image encoding/decoding method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15765411 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20167025549 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15127105 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2015765411 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015765411 Country of ref document: EP |