CN114342388A - Method and apparatus for image encoding and image decoding using region segmentation - Google Patents

Method and apparatus for image encoding and image decoding using region segmentation

Info

Publication number
CN114342388A
CN114342388A (application number CN202080059045.XA)
Authority
CN
China
Prior art keywords
block
information
target
picture
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080059045.XA
Other languages
Chinese (zh)
Inventor
方健
李振荣
林雄
金晖容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute ETRI filed Critical Electronics and Telecommunications Research Institute ETRI
Priority claimed from PCT/KR2020/008081 external-priority patent/WO2020256522A1/en
Publication of CN114342388A publication Critical patent/CN114342388A/en

Classifications

    • H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television; H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/129: Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N19/503: Predictive coding involving temporal prediction
    • H04N19/593: Predictive coding involving spatial prediction techniques

Abstract

Disclosed are a method and apparatus for image encoding and image decoding using region segmentation. A picture is segmented into a plurality of sub-pictures, and encoding and/or decoding is performed on the plurality of sub-pictures resulting from the segmentation. Encoding and/or decoding of the sub-pictures may be performed independently of each other. Merging and filtering are applied to the reconstructed sub-pictures generated by decoding, and a reconstructed picture is generated after the reconstructed sub-pictures are merged and filtered.

Description

Method and apparatus for image encoding and image decoding using region segmentation
Technical Field
The following embodiments generally relate to a video decoding method and apparatus and a video encoding method and apparatus, and more particularly, to an image decoding method and apparatus and an image encoding method and apparatus using region segmentation.
The present application claims the benefit of Korean patent application No. 10-2019-.
Background
With the continuous development of the information and communication industry, broadcasting services supporting High Definition (HD) resolution have been popularized throughout the world. Through this popularity, a large number of users have become accustomed to high resolution and high definition images and/or videos.
In order to meet users' demand for high definition, many institutions have accelerated the development of next-generation imaging devices. In addition to High-Definition TV (HDTV) and Full High-Definition (FHD) TV, user interest in Ultra-High-Definition (UHD) TV, whose resolution is more than four times that of FHD TV, has also increased. With this increase in interest, image encoding/decoding techniques for images having even higher resolution and higher definition are now required.
As an image compression technique, there are various techniques (such as an inter-prediction technique, an intra-prediction technique, a transform, a quantization technique, and an entropy coding technique).
The inter prediction technique is a technique for predicting values of pixels included in a current picture using a picture before the current picture and/or a picture after the current picture. The intra prediction technique is a technique for predicting values of pixels included in a current picture using information on the pixels in the current picture. The transform and quantization techniques may be techniques for compressing the energy of the residual signal. Entropy coding techniques are techniques for assigning short codewords to frequently occurring values and long codewords to less frequently occurring values.
By utilizing these image compression techniques, data on an image can be efficiently compressed, transmitted, and stored.
Disclosure of Invention
Technical problem
Embodiments are directed to providing an apparatus and method for generating one or more regions by dividing an image and for independently performing encoding on the one or more regions.
Embodiments are directed to providing an apparatus and method for generating one or more regions by dividing an image and for independently performing decoding on the one or more regions.
Embodiments are directed to providing an apparatus and method for performing efficient decoding for an application providing output (such as 360 ° video) in a particular viewport by independently performing decoding on one or more regions.
Technical scheme
According to an aspect, there is provided a decoding method including: determining a target sub-picture that is a portion of the target picture; generating a reconstructed target sub-picture for the target sub-picture; and generating a reconstructed target picture using the reconstructed target sub-picture.
In generating the reconstructed target picture, filtering may be applied to the reconstructed target sub-picture.
The filtering may be applied to a boundary line between the reconstructed target sub-picture and another sub-picture.
The filter used in the filtering may be a loop filter.
For the filtering, filtering information may be used.
The filtering information may include information indicating whether filtering of the target sub-picture is to be disabled.
The filtering information may be included in a Sequence Parameter Set (SPS).
The filtering information may be applied to pictures that reference the SPS.
When the filtering information is not explicitly signaled by a bitstream, a default value for the filtering information may be set.
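As an informal illustration of the behavior described above (filtering information carried in the SPS, with a default value assumed when it is not explicitly signaled), consider the following sketch. The field name loop_filter_across_subpic_enabled and the default value used here are hypothetical placeholders, not the actual syntax of the embodiments.

```python
# Minimal sketch: deciding whether filtering applies to a sub-picture based on
# SPS-level filtering information. The syntax-element name and the default
# value are hypothetical; they only illustrate the "default when not signaled"
# behavior described above.
def filtering_enabled_for_subpicture(sps: dict, default: bool = True) -> bool:
    """Return True if filtering of the sub-picture is enabled.

    If the flag is absent (i.e., not explicitly signaled in the bitstream),
    a default value is used instead.
    """
    return sps.get("loop_filter_across_subpic_enabled", default)

print(filtering_enabled_for_subpicture({"loop_filter_across_subpic_enabled": False}))  # False
print(filtering_enabled_for_subpicture({}))  # True: default used when not signaled
```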
According to another aspect, there is provided an encoding method including: determining a target sub-picture that is a portion of the target picture; generating a reconstructed target sub-picture for the target sub-picture; and generating a reconstructed target picture using the reconstructed target sub-picture.
In generating the reconstructed target picture, filtering may be applied to the reconstructed target sub-picture.
The filtering may be applied to a boundary line between the reconstructed target sub-picture and another sub-picture.
The filter used in the filtering may be a loop filter.
Filtering information for the filtering may be generated.
The filtering information may include information indicating whether filtering of the target sub-picture is to be disabled.
The filtering information may be included in a Sequence Parameter Set (SPS).
The filtering information may be applied to pictures that reference the SPS.
According to another aspect, there is provided a storage medium storing a bitstream generated by the encoding method.
According to yet another aspect, there is provided a computer-readable storage medium storing a bitstream for decoding a target picture, wherein the bitstream includes information on a sub-picture, and is used to determine a target sub-picture that is a part of the target picture, to generate a reconstructed target sub-picture for the target sub-picture, and to generate a reconstructed target picture using the reconstructed target sub-picture.
In generating the reconstructed target picture, filtering may be applied to the reconstructed target sub-picture.
The filtering may be applied to a boundary line between the reconstructed target sub-picture and another sub-picture.
The filter used in the filtering may be a loop filter.
For the filtering, filtering information may be used.
The filtering information may include information indicating whether filtering of the target sub-picture is to be disabled.
The filtering information may be included in a Sequence Parameter Set (SPS).
The filtering information may be applied to pictures that reference the SPS.
Advantageous effects
An apparatus and method for generating one or more regions by dividing an image and for independently performing encoding on the one or more regions are provided.
An apparatus and method for generating one or more regions by dividing an image and for independently performing decoding on the one or more regions are provided.
An apparatus and method for performing efficient decoding for an application that provides output (such as 360° video) in a particular viewport, by independently performing decoding on one or more regions, are provided.
Drawings
Fig. 1 is a block diagram showing a configuration of an embodiment of an encoding apparatus to which the present disclosure is applied;
fig. 2 is a block diagram showing a configuration of an embodiment of a decoding apparatus to which the present disclosure is applied;
fig. 3 is a diagram schematically showing a partition structure of an image when the image is encoded and decoded;
fig. 4 is a diagram illustrating a form of a Prediction Unit (PU) that a Coding Unit (CU) can include;
fig. 5 is a diagram illustrating a form of a Transform Unit (TU) that can be included in an encoding unit (CU);
FIG. 6 illustrates partitioning of a block according to an example;
FIG. 7 is a diagram for explaining an embodiment of an intra prediction process;
fig. 8 is a diagram illustrating reference samples used in an intra prediction process;
fig. 9 is a diagram for explaining an embodiment of an inter prediction process;
FIG. 10 illustrates spatial candidates according to an embodiment;
fig. 11 illustrates an order of adding motion information of spatial candidates to a merge list according to an embodiment;
FIG. 12 illustrates a transform and quantization process according to an example;
FIG. 13 illustrates a diagonal scan according to an example;
FIG. 14 shows a horizontal scan according to an example;
FIG. 15 shows a vertical scan according to an example;
fig. 16 is a configuration diagram of an encoding device according to an embodiment;
fig. 17 is a configuration diagram of a decoding apparatus according to an embodiment;
fig. 18 illustrates division of a picture in raster scan stripe mode according to an example;
fig. 19 illustrates division of a picture in a rectangular stripe mode according to an example;
FIG. 20 shows parallel blocks (tiles), partitions (bricks), and rectangular stripes in a picture according to an example;
fig. 21 illustrates a picture coding method according to an embodiment;
fig. 22 illustrates a picture decoding method according to an embodiment;
fig. 23 illustrates division of a picture according to an example;
FIG. 24 illustrates a sub-picture to which padding is applied according to an example;
FIG. 25 illustrates filtering across a boundary line between sub-pictures upon merging of the sub-pictures according to an example;
fig. 26 illustrates a picture divided into six sub-pictures according to an example;
fig. 27 shows the padding of six sub-pictures according to an example;
fig. 28 illustrates filtering performed after combining six sub-pictures according to an example;
fig. 29 illustrates a first syntax for providing division information of a picture according to an example;
fig. 30 illustrates a second syntax for providing division information of a picture according to an example; and
fig. 31 illustrates a third syntax for providing division information of a picture according to an example.
Detailed Description
The present invention may be variously modified and may have various embodiments, and specific embodiments will be described in detail below with reference to the accompanying drawings. It should be understood, however, that these embodiments are not intended to limit the invention to the particular forms disclosed, but to include all changes, equivalents, and modifications encompassed within the spirit and scope of the invention.
The following exemplary embodiments will be described in detail with reference to the accompanying drawings showing specific embodiments. These embodiments are described so that those of ordinary skill in the art to which this disclosure pertains will be readily able to practice them. It should be noted that the various embodiments are different from one another, but are not necessarily mutually exclusive. For example, particular shapes, structures, and characteristics described herein may be implemented as one embodiment without departing from the spirit and scope of other embodiments associated with the other embodiments. Further, it is to be understood that the location or arrangement of individual components within each disclosed embodiment can be modified without departing from the spirit and scope of the embodiments. Therefore, the appended detailed description is not intended to limit the scope of the disclosure, and the scope of exemplary embodiments is defined only by the appended claims and equivalents thereof, as they are properly described.
In the drawings, like numerals are used to designate the same or similar functions in various respects. The shapes, sizes, and the like of components in the drawings may be exaggerated for clarity of the description.
Terms such as "first" and "second" may be used to describe various components, but the components are not limited by the terms. The term is used only to distinguish one component from another. For example, a first component may be termed a second component without departing from the scope of the present description. Similarly, the second component may be referred to as the first component. The term "and/or" may include a combination of multiple related descriptive items or any of multiple related descriptive items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, the two elements can be directly connected or coupled to each other or intervening elements may be present between the two elements. On the other hand, it will be understood that when components are referred to as being "directly connected or coupled", there are no intervening components between the two components.
Further, components described in the embodiments are independently illustrated to indicate different feature functions, but this does not mean that each component is formed of a separate piece of hardware or software. That is, a plurality of components are individually arranged and included for convenience of description. For example, at least two of the plurality of components may be integrated into a single component. Instead, one component may be divided into a plurality of components. Embodiments in which a plurality of components are integrated or embodiments in which some components are separated are included in the scope of the present specification as long as they do not depart from the essence of the present specification.
Furthermore, in exemplary embodiments, the expression that a component "includes" a specific component means that another component may be included within the scope of practical or technical spirit of the exemplary embodiments, but does not exclude the presence of components other than the specific component.
The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Singular references include plural references unless the context specifically indicates the contrary. In this specification, it is to be understood that terms such as "including" or "having" are only intended to indicate that there are features, numbers, steps, operations, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, operations, components, parts, or combinations thereof will be present or added. That is, in the present invention, the expression that a component described "includes" a specific component means that another component may be included in the scope of the practice of the present invention or the technical spirit of the present invention, but does not exclude the presence of components other than the specific component.
Some components of the present invention are not essential components for performing essential functions but may be optional components only for improving performance. An embodiment may be implemented using only the necessary components to implement the essence of the embodiment. For example, a structure including only necessary components (not including only optional components for improving performance) is also included in the scope of the embodiments.
The embodiments will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the embodiments pertain can easily implement the embodiments. In the following description of the embodiments, a detailed description of known functions or configurations incorporated herein will be omitted. In addition, the same reference numerals are used to designate the same components throughout the drawings, and repeated description of the same components will be omitted.
Hereinafter, "image" may represent a single picture constituting a video, or may represent the video itself. For example, "encoding and/or decoding of an image" may mean "encoding and/or decoding of a video", and may also mean "encoding and/or decoding of any one of a plurality of images constituting a video".
Hereinafter, the terms "video" and "moving picture" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the target image may be an encoding target image that is a target to be encoded and/or a decoding target image that is a target to be decoded. Further, the target image may be an input image input to the encoding apparatus or an input image input to the decoding apparatus. Also, the target image may be a current image, i.e., a target that is currently to be encoded and/or decoded. For example, the terms "target image" and "current image" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "image", "picture", "frame", and "screen" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the target block may be an encoding target block (i.e., a target to be encoded) and/or a decoding target block (i.e., a target to be decoded). Furthermore, the target block may be a current block, i.e., a target that is currently to be encoded and/or decoded. Here, the terms "target block" and "current block" may be used to have the same meaning and may be used interchangeably with each other. The current block may represent an encoding target block that is an encoding target during encoding and/or a decoding target block that is a decoding target during decoding. Further, the current block may be at least one of an encoding block, a prediction block, a residual block, and a transform block.
Hereinafter, the terms "block" and "unit" may be used to have the same meaning and may be used interchangeably with each other. Alternatively, "block" may represent a particular unit.
Hereinafter, the terms "region" and "fragment" are used interchangeably with each other.
Hereinafter, the specific signal may be a signal indicating a specific block. For example, the original signal may be a signal indicating a target block. The prediction signal may be a signal indicating a prediction block. The residual signal may be a signal indicating a residual block.
In the following embodiments, particular information, data, flags, indices, elements, and attributes may have their respective values. A value of "0" corresponding to each of the information, data, flags, indices, elements, and attributes may indicate a logical false or first predefined value. In other words, the values "0", false, logically false, and the first predefined value may be used interchangeably with each other. A value of "1" corresponding to each of the information, data, flags, indices, elements, and attributes may indicate a logical true or a second predefined value. In other words, the values "1", true, logically true, and second predefined values may be used interchangeably with each other.
When a variable such as i or j is used to indicate a row, column, or index, the value i may be an integer 0 or greater than 0, or may be an integer 1 or greater than 1. In other words, in embodiments, each of the rows, columns, and indices may count from 0, or may count from 1.
In embodiments, the term "one or more" or the term "at least one" may mean the term "a plurality". The term "one or more" or the term "at least one" may be used interchangeably with "plurality".
Hereinafter, terms to be used in the embodiments will be described.
An encoder: the encoder represents an apparatus for performing encoding. That is, the encoder may represent an encoding apparatus.
A decoder: the decoder represents means for performing decoding. That is, the decoder may represent a decoding apparatus.
A unit: the "unit" may represent a unit of image encoding and decoding. The terms "unit" and "block" may be used to have the same meaning and may be used interchangeably with each other.
The cell may be an M × N array of samples. Each of M and N may be a positive integer. The cells may generally represent a two-dimensional form of an array of samples.
During the encoding and decoding of an image, a "unit" may be a region produced by partitioning an image. In other words, a "cell" may be a region designated in one image. A single image may be partitioned into multiple cells. Alternatively, one image may be partitioned into subsections, and a unit may represent each partitioned subsection when encoding or decoding is performed on the partitioned subsection.
During the encoding and decoding of the image, a predefined processing can be performed on each unit according to the type of unit.
Unit types may be classified into macro-units, Coding Units (CUs), Prediction Units (PUs), residual units, Transform Units (TUs), etc., according to function. Alternatively, the unit may represent a block, a macroblock, a coding tree unit, a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, a transform block, and the like according to functions. For example, a target unit, which is a target of encoding and/or decoding, may be at least one of a CU, a PU, a residual unit, and a TU.
The term "unit" may denote information including a luminance (luma) component block, a chrominance (chroma) component block corresponding to the luminance component block, and syntax elements for the respective blocks, such that the unit is designated to be distinguished from the blocks.
The size and shape of the cells can be implemented differently. Further, the cells may have any of a variety of sizes and shapes. Specifically, the shape of the cell may include not only a square but also a geometric shape (such as a rectangle, a trapezoid, a triangle, and a pentagon) that can be represented in two dimensions (2D).
Further, the unit information may include one or more of a type of the unit, a size of the unit, a depth of the unit, an encoding order of the unit, a decoding order of the unit, and the like. For example, the type of the unit may indicate one of a CU, a PU, a residual unit, and a TU.
A unit may be partitioned into sub-units, each having a size smaller than the size of the associated unit.
Depth: depth may represent the degree to which a cell is partitioned. Further, the depth of a cell may indicate the level at which the corresponding cell exists when represented by a tree structure.
The unit partition information may comprise a depth indicating a depth of the unit. The depth may indicate the number of times a cell is partitioned and/or the extent to which the cell is partitioned.
In the tree structure, the depth of the root node can be considered to be the smallest and the depth of the leaf nodes the largest. The root node may be the highest (top) node. The leaf node may be the lowest node.
A single unit may be hierarchically partitioned into a plurality of sub-units, while the single unit has tree structure based depth information. In other words, a unit and a child unit generated by partitioning the unit may correspond to a node and a child node of the node, respectively. Each partitioned sub-cell may have a cell depth. Since the depth indicates the number of times the unit is partitioned and/or the degree to which the unit is partitioned, the partition information of the sub-unit may include information on the size of the sub-unit.
In the tree structure, the top node may correspond to the initial node before partitioning. The top node may be referred to as the "root node". Further, the root node may have a minimum depth value. Here, the depth of the top node may be level "0".
A node with a depth of level "1" may represent a unit generated when an initial unit is partitioned once. A node with a depth of level "2" may represent a cell that is generated when an initial cell is partitioned twice.
A leaf node with a depth of level "n" may represent a unit that is generated when an initial unit is partitioned n times.
A leaf node may be the bottom node that cannot be partitioned further. The depth of a leaf node may be a maximum level. For example, the predefined value for the maximum level may be 3.
QT depth may represent the depth of a quad (quaternary) partition. BT depth may represent the depth of a binary partition. TT depth may represent the depth of a ternary partition.
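As a simple illustration of how depth relates to unit size, the following sketch computes the side length of a unit at a given QT depth, assuming a square CTU and pure quadtree splitting; the CTU size of 128 and the maximum depth of 3 are example values only.

```python
# Minimal sketch: unit size as a function of quadtree (QT) depth, assuming a
# square CTU and pure quadtree splitting (each split halves width and height).
def qt_unit_size(ctu_size: int, depth: int) -> int:
    """Return the side length of a unit at the given QT depth."""
    return ctu_size >> depth

for d in range(4):
    size = qt_unit_size(128, d)
    print(f"QT depth {d}: {size}x{size}")
# QT depth 0: 128x128, depth 1: 64x64, depth 2: 32x32, depth 3: 16x16
```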
Sample: A sample may be an elementary unit constituting a block. According to the bit depth (Bd), a sample may be represented by a value ranging from 0 to 2^Bd - 1.
The samples may be pixels or pixel values.
In the following, the terms "pixel" and "sample" may be used to have the same meaning and may be used interchangeably with each other.
Coding Tree Unit (CTU): a CTU may be composed of a single luma component (Y) coding tree block and two chroma component (Cb, Cr) coding tree blocks associated with the luma component coding tree block. Further, the CTU may represent information including the above-described blocks and syntax elements for each block.
Each Coding Tree Unit (CTU) may be partitioned using one or more partitioning methods, such as a quad tree (QT), a binary tree (BT), and a ternary tree (TT), in order to configure sub-units, such as coding units, prediction units, and transform units. Here, the quad tree may mean a quaternary tree. Further, each coding tree unit may be partitioned using a multi-type tree (MTT) that uses one or more partitioning methods.
"CTU" may be used as a term designating a pixel block as a processing unit in an image decoding and encoding process, such as in the case of partitioning an input image.
Coded Tree Block (CTB): "CTB" may be used as a term designating any one of a Y coding tree block, a Cb coding tree block, and a Cr coding tree block.
Adjacent blocks: the neighboring blocks (or neighboring blocks) may represent blocks adjacent to the target block. The neighboring blocks may represent reconstructed neighboring blocks.
Hereinafter, the terms "adjacent block" and "adjacent block" may be used to have the same meaning and may be used interchangeably with each other.
The neighboring blocks may represent reconstructed neighboring blocks.
Spatially adjacent blocks: the spatially neighboring block may be a block spatially adjacent to the target block. The neighboring blocks may include spatially neighboring blocks.
The target block and the spatially neighboring blocks may be comprised in the target picture.
Spatially neighboring blocks may represent blocks whose boundaries are in contact with the target block or blocks which are located within a predetermined distance from the target block.
The spatially neighboring blocks may represent blocks adjacent to the vertex of the target block. Here, the blocks adjacent to the vertex of the target block may represent blocks vertically adjacent to an adjacent block horizontally adjacent to the target block or blocks horizontally adjacent to an adjacent block vertically adjacent to the target block.
Temporal neighboring blocks: the temporally adjacent blocks may be blocks temporally adjacent to the target block. The neighboring blocks may include temporally neighboring blocks.
The temporally adjacent blocks may comprise co-located blocks (col blocks).
A col block may be a block in a previously reconstructed co-located picture (col picture). The location of the col block in the col picture may correspond to the location of the target block in the target picture. Alternatively, the location of the col block in the col picture may be equal to the location of the target block in the target picture. The col picture may be a picture included in the reference picture list.
The temporal neighboring blocks may be blocks temporally adjacent to spatially neighboring blocks of the target block.
Prediction mode: the prediction mode may be information indicating a mode in which encoding and/or decoding is performed for intra prediction or a mode in which encoding and/or decoding is performed for inter prediction.
A prediction unit: the prediction unit may be a basic unit for prediction such as inter prediction, intra prediction, inter compensation, intra compensation, and motion compensation.
A single prediction unit may be divided into multiple partitions or sub-prediction units of smaller size. The plurality of partitions may also be basic units in performing prediction or compensation. The partition generated by dividing the prediction unit may also be the prediction unit.
Prediction unit partitioning: the prediction unit partition may be a shape into which the prediction unit is divided.
Reconstructed neighboring unit: The reconstructed neighboring unit may be a unit that is adjacent to the target unit and has already been decoded and reconstructed.
The reconstructed neighboring cells may be cells that are spatially adjacent to the target cell or temporally adjacent to the target cell.
The reconstructed spatially neighboring units may be units comprised in the target picture that have been reconstructed by encoding and/or decoding.
The reconstructed temporal neighboring cells may be cells comprised in the reference image that have been reconstructed by encoding and/or decoding. The position of the reconstructed temporally neighboring unit in the reference image may be the same as the position of the target unit in the target picture or may correspond to the position of the target unit in the target picture. Further, the reconstructed temporal neighboring cell may be a block neighboring the corresponding block in the reference image. Here, the position of the corresponding block in the reference image may correspond to the position of the target block in the target image. Here, the fact that the positions of the blocks correspond to each other may mean that the positions of the blocks are identical to each other, may mean that one block is included in another block, or may mean that one block occupies a specific position in another block.
Sub-picture: A picture may be divided into one or more sub-pictures. A sub-picture may be composed of one or more parallel block rows and one or more parallel block columns.
A sub-picture may be a region in a picture that has a square or rectangular (i.e., non-square rectangular) shape. Further, a sub-picture may include one or more CTUs.
A single sub-picture may include one or more parallel blocks, one or more partitions (bricks), and/or one or more stripes.
Parallel block: a parallel block may be a region in a picture having a square or rectangular (i.e., non-square, rectangular) shape.
A parallel block may comprise one or more CTUs.
A parallel block may be partitioned into one or more partitions.
Partition (brick): A partition may represent one or more CTU rows within a parallel block.
A parallel block may be partitioned into one or more partitions. Each partition may include one or more CTU rows.
A parallel block that is not partitioned into two or more partitions may also represent a partition.
Stripe: A stripe may include one or more parallel blocks in a picture. Alternatively, a stripe may include one or more partitions within a parallel block.
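To summarize the containment hierarchy described above (picture, sub-picture, parallel block, partition), the sketch below models it with hypothetical Python data classes; all class and field names are illustrative only and do not appear in the embodiments or in any standard.

```python
# Minimal sketch of the containment hierarchy described above: a picture may be
# divided into sub-pictures, a sub-picture may include parallel blocks (tiles),
# a parallel block may be partitioned into partitions (bricks), and a partition
# consists of one or more CTU rows. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Partition:                       # "brick": one or more CTU rows of a parallel block
    ctu_rows: List[List[int]]          # each row is a list of CTU addresses

@dataclass
class ParallelBlock:                   # "tile": rectangular region of CTUs
    partitions: List[Partition] = field(default_factory=list)

@dataclass
class SubPicture:                      # rectangular region of one or more parallel blocks
    parallel_blocks: List[ParallelBlock] = field(default_factory=list)

@dataclass
class Picture:
    sub_pictures: List[SubPicture] = field(default_factory=list)

# A stripe (slice) would then reference either whole parallel blocks or a
# consecutive set of partitions within a single parallel block.
```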
Parameter set: the parameter set may correspond to header information in an internal structure of the bitstream.
The parameter set may include at least one of a Video Parameter Set (VPS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), an Adaptive Parameter Set (APS), a Decoding Parameter Set (DPS), and the like.
The information signaled by each parameter set may be applied to pictures that reference the corresponding parameter set. For example, information in the VPS may be applied to pictures that reference the VPS. Information in the SPS may be applied to pictures that reference the SPS. Information in the PPS may be applied to pictures that reference the PPS.
Each parameter set may refer to a higher parameter set. For example, a PPS may reference an SPS. SPS may refer to VPS.
Furthermore, a parameter set may include a parallel block group, stripe header information, and parallel block header information. The parallel block group may be a group including a plurality of parallel blocks. Further, the meaning of "parallel block group" may be the same as that of "stripe".
Rate-distortion optimization: The encoding device may use rate-distortion optimization in order to provide high encoding efficiency by utilizing a combination of the size of a Coding Unit (CU), a prediction mode, the size of a Prediction Unit (PU), motion information, and the size of a Transform Unit (TU).
The rate-distortion optimization scheme may calculate rate-distortion costs for the respective combinations to select an optimal combination from the combinations. The rate distortion cost may be calculated using "D + λ R". In general, the combination that minimizes the rate-distortion cost may be selected as the optimal combination under the rate-distortion optimization scheme.
D may represent distortion. D may be the average of the squares of the differences (i.e., the mean squared error) between the original transform coefficients and the reconstructed transform coefficients in a transform unit.
R may represent the rate, which may represent a bit rate using related context information.
λ may represent a Lagrangian multiplier. R may include not only coding parameter information, such as a prediction mode, motion information, and a coded block flag, but also bits generated as a result of coding the transform coefficients.
The encoding device may perform processes such as inter prediction and/or intra prediction, transform, quantization, entropy encoding, inverse quantization (dequantization), and/or inverse transform in order to calculate precise values of D and R. These processes may greatly increase the complexity of the encoding device.
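As a rough illustration of how the cost D + λ·R can be used to choose among candidate combinations, the sketch below evaluates hypothetical candidates and selects the one with the minimum rate-distortion cost; the candidate structure and the numeric values are illustrative only.

```python
# Minimal sketch of rate-distortion-optimized selection. Each candidate is
# assumed to have already been evaluated to obtain its distortion D (e.g.,
# mean squared error) and its rate R (in bits); the values are hypothetical.
def rd_cost(distortion: float, rate_bits: float, lmbda: float) -> float:
    """J = D + lambda * R."""
    return distortion + lmbda * rate_bits

def select_best(candidates, lmbda: float):
    """Return the candidate with the minimum rate-distortion cost."""
    return min(candidates, key=lambda c: rd_cost(c["D"], c["R"], lmbda))

candidates = [
    {"mode": "intra_DC",      "D": 1200.0, "R": 40.0},  # J = 1200 + 10*40 = 1600
    {"mode": "intra_angular", "D":  950.0, "R": 55.0},  # J =  950 + 10*55 = 1500
    {"mode": "inter_merge",   "D":  600.0, "R": 80.0},  # J =  600 + 10*80 = 1400
]
print(select_best(candidates, lmbda=10.0))  # inter_merge has the minimum cost
```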
Bit stream: the bitstream may represent a stream of bits including encoded image information.
Parameter set: the parameter set may correspond to header information in an internal structure of the bitstream.
Parsing: Parsing may be the decision on the value of a syntax element made by performing entropy decoding on a bitstream. Alternatively, the term "parsing" may denote such entropy decoding itself.
Symbol: the symbol may be at least one of a syntax element, a coding parameter, and a transform coefficient of the encoding target unit and/or the decoding target unit. Further, the symbol may be a target of entropy encoding or a result of entropy decoding.
Reference picture: the reference picture may be an image that is unit-referenced in order to perform inter prediction or motion compensation. Alternatively, the reference picture may be an image including a reference unit that is referred to by the target unit in order to perform inter prediction or motion compensation.
Hereinafter, the terms "reference picture" and "reference image" may be used to have the same meaning and may be used interchangeably with each other.
List of reference pictures: the reference picture list may be a list including one or more reference pictures used for inter prediction or motion compensation.
The types of reference picture lists may include a combined list (LC), list 0 (L0), list 1 (L1), list 2 (L2), list 3 (L3), and the like.
For inter prediction, one or more reference picture lists may be used.
Inter prediction indicator: the inter prediction indicator may indicate an inter prediction direction for the target unit. The inter prediction may be one of unidirectional prediction and bidirectional prediction. Alternatively, the inter prediction indicator may represent the number of reference pictures used to generate the prediction unit of the target unit. Alternatively, the inter prediction indicator may represent the number of prediction blocks used for inter prediction or motion compensation of the target unit.
Prediction list utilization flag: the prediction list utilization flag may indicate whether at least one reference picture in a particular reference picture list is used to generate a prediction unit.
The inter prediction indicator may be derived using the prediction list utilization flag. Conversely, the prediction list utilization flag may be derived using the inter prediction indicator. For example, the case where the prediction list utilization flag indicates "0" (a first value) may indicate that, for the target unit, a prediction block is not generated using a reference picture in the corresponding reference picture list. The case where the prediction list utilization flag indicates "1" (a second value) may indicate that, for the target unit, a prediction unit is generated using the corresponding reference picture list.
Reference picture index: the reference picture index may be an index indicating a specific reference picture in the reference picture list.
Picture Order Count (POC): the POC value of a picture may represent an order in which the corresponding pictures are displayed.
Motion Vector (MV): the motion vector may be a 2D vector for inter prediction or motion compensation. The motion vector may represent an offset between the target image and the reference image.
For example, an MV may be represented in a form such as (mv_x, mv_y). mv_x may indicate the horizontal component, and mv_y may indicate the vertical component.
Search range: The search range may be a 2D region in which a search for an MV is performed during inter prediction. For example, the size of the search range may be M×N. Each of M and N may be a positive integer.
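As a simplified illustration of motion compensation with an MV, the sketch below fetches a prediction block from a reference picture at an integer-pel offset; fractional-pel interpolation, reference padding, and the motion search over the search range are omitted, and all sizes are hypothetical.

```python
import numpy as np

# Minimal sketch: fetch a prediction block from a reference picture displaced
# by an integer-pel motion vector (mv_x, mv_y). Interpolation for fractional
# MVs, picture-boundary padding, and the actual search are omitted.
def motion_compensate(ref: np.ndarray, x: int, y: int, w: int, h: int,
                      mv_x: int, mv_y: int) -> np.ndarray:
    """Return the w x h block of `ref` at position (x + mv_x, y + mv_y)."""
    return ref[y + mv_y: y + mv_y + h, x + mv_x: x + mv_x + w]

ref = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)   # toy reference picture
pred = motion_compensate(ref, x=16, y=16, w=8, h=8, mv_x=3, mv_y=-2)
print(pred.shape)  # (8, 8)
```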
Motion vector candidates: the motion vector candidate may be a block that is a prediction candidate when the motion vector is predicted or a motion vector of a block that is a prediction candidate.
The motion vector candidate may be comprised in a motion vector candidate list.
Motion vector candidate list: the motion vector candidate list may be a list configured using one or more motion vector candidates.
Motion vector candidate index: the motion vector candidate index may be an indicator for indicating a motion vector candidate in the motion vector candidate list. Alternatively, the motion vector candidate index may be an index of a motion vector predictor.
Motion information: the motion information may be information including at least one of a reference picture list, a reference picture, a motion vector candidate index, a merge candidate, and a merge index, and a motion vector, a reference picture index, and an inter prediction indicator.
Merge candidate list: The merge candidate list may be a list configured using one or more merge candidates.
Merge candidate: A merge candidate may be a spatial merge candidate, a temporal merge candidate, a combined bi-predictive merge candidate, a history-based candidate, a candidate based on the average of two candidates, a zero merge candidate, or the like. A merge candidate may include motion information such as prediction type information, a reference picture index for each list, a motion vector, a prediction list utilization flag, and an inter prediction indicator.
Merge index: The merge index may be an indicator for indicating a merge candidate in the merge candidate list.
The merging index may indicate a reconstruction unit used for deriving the merging candidate among reconstruction units spatially neighboring the target unit and reconstruction units temporally neighboring the target unit.
The merge index may indicate at least one of pieces of motion information of the merge candidates.
Transform unit: The transform unit may be a basic unit of residual signal encoding and/or residual signal decoding, such as transform, inverse transform, quantization, inverse quantization, transform coefficient encoding, and transform coefficient decoding. A single transform unit may be partitioned into a plurality of sub-transform units having smaller sizes. Here, the transform may include one or more of a primary transform and a secondary transform, and the inverse transform may include one or more of a primary inverse transform and a secondary inverse transform.
Scaling: Scaling may refer to the process of multiplying a transform coefficient level by a factor.
As a result of scaling the transform coefficient level, a transform coefficient may be generated. Scaling may also be referred to as "inverse quantization".
Quantization Parameter (QP): the quantization parameter may be a value used to generate a transform coefficient level for a transform coefficient in quantization. Alternatively, the quantization parameter may also be a value used to generate a transform coefficient by scaling the transform coefficient level in inverse quantization. Alternatively, the quantization parameter may be a value mapped to a quantization step.
Delta quantization parameter (Delta): the delta quantization parameter may represent a difference between the quantization parameter of the target unit and the predicted quantization parameter.
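To make the relationship between quantization, the quantized level, and scaling (inverse quantization) concrete, here is a toy scalar-quantization sketch; the quantization step is given directly, and the QP-to-step mapping of an actual codec is not reproduced.

```python
# Minimal sketch of scalar quantization and scaling (inverse quantization).
# A real codec derives the quantization step from the quantization parameter
# (QP) via a predefined mapping; here the step value is simply assumed.
def quantize(coeff: float, qstep: float) -> int:
    """Transform coefficient -> quantized level."""
    return int(round(coeff / qstep))

def scale(level: int, qstep: float) -> float:
    """Quantized level -> reconstructed transform coefficient (scaling)."""
    return level * qstep

qstep = 8.0                                   # illustrative quantization step
for c in (3.0, 37.0, -90.5):
    level = quantize(c, qstep)
    print(c, "->", level, "->", scale(level, qstep))
# 3.0 -> 0 -> 0.0;  37.0 -> 5 -> 40.0;  -90.5 -> -11 -> -88.0
```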
Scanning: scanning may represent a method of arranging the order of coefficients in a cell, block, or matrix. For example, a method for arranging a 2D array in the form of a one-dimensional (1D) array may be referred to as "scanning". Alternatively, the method for arranging the 1D array in the form of a 2D array may also be referred to as "scanning" or "inverse scanning".
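As an illustration of arranging 2D coefficients into a 1D order, the sketch below performs a simple up-right diagonal scan of a square block; it is only one of several possible scan orders and is not asserted to match any particular codec's scan tables.

```python
# Minimal sketch: up-right diagonal scan of an N x N coefficient block into a
# 1D list. Each anti-diagonal is traversed from bottom-left to top-right.
def diagonal_scan(block):
    n = len(block)
    order = []
    for s in range(2 * n - 1):                    # anti-diagonal index
        for y in range(min(s, n - 1), -1, -1):    # bottom-left to top-right
            x = s - y
            if x < n:
                order.append(block[y][x])
    return order

blk = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(diagonal_scan(blk))  # [1, 4, 2, 7, 5, 3, 8, 6, 9]
```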
Transform coefficients: the transform coefficient may be a coefficient value generated when the encoding apparatus performs the transform. Alternatively, the transform coefficient may be a coefficient value generated when the decoding apparatus performs at least one of entropy decoding and inverse quantization.
The quantized levels or quantized transform coefficient levels generated by applying quantization to the transform coefficients or the residual signal may also be included in the meaning of the term "transform coefficients".
Quantized level: The quantized level may be a value generated when the encoding apparatus performs quantization on a transform coefficient or the residual signal. Alternatively, the quantized level may be a value that is the target of inverse quantization when the decoding apparatus performs inverse quantization.
The quantized transform coefficient levels as a result of the transform and quantization may also be included in the meaning of quantized levels.
Non-zero transform coefficients: the non-zero transform coefficient may be a transform coefficient having a value other than 0 or may be a transform coefficient level having a value other than 0. Alternatively, the non-zero transform coefficient may be a transform coefficient whose value is not 0 in magnitude, or may be a transform coefficient level whose value is not 0 in magnitude.
Quantization matrix: the quantization matrix may be a matrix used in a quantization process or an inverse quantization process in order to improve subjective image quality or objective image quality of an image. The quantization matrix may also be referred to as a "scaling list".
Quantization matrix coefficients: the quantization matrix coefficient may be each element of the quantization matrix. The quantized matrix coefficients may also be referred to as "matrix coefficients".
A default matrix: the default matrix may be a quantization matrix predefined by the encoding device and the decoding device.
Non-default matrix: the non-default matrix may be a quantization matrix that is not predefined by the encoding device and the decoding device. The non-default matrix may represent a quantization matrix signaled by a user from an encoding device to a decoding device.
Most Probable Mode (MPM): An MPM may represent an intra prediction mode that has a high probability of being used for intra prediction for the target block.
The encoding apparatus and the decoding apparatus may determine one or more MPMs based on the encoding parameters related to the target block and the attributes of the entity related to the target block.
The encoding device and the decoding device may determine the one or more MPMs based on an intra prediction mode of the reference block. The reference block may include a plurality of reference blocks. The plurality of reference blocks may include a spatially adjacent block adjacent to a left side of the target block and a spatially adjacent block adjacent to an upper side of the target block. In other words, one or more different MPMs may be determined according to which intra prediction modes have been used for the reference block.
One or more MPMs may be determined in the same way in both the encoding device and the decoding device. That is, the encoding apparatus and the decoding apparatus may share the same MPM list including one or more MPMs.
MPM List: the MPM list may be a list including one or more MPMs. The number of one or more MPMs in the MPM list may be predefined.
MPM indicator: the MPM indicator may indicate an MPM to be used for intra prediction for the target block among one or more MPMs in the MPM list. For example, the MPM indicator may be an index for an MPM list.
Since the MPM list is determined in the same manner in both the encoding device and the decoding device, it may not be necessary to transmit the MPM list itself from the encoding device to the decoding device.
The MPM indicator may be signaled from the encoding device to the decoding device. Since the MPM indicator is signaled, the decoding apparatus may determine an MPM to be used for intra prediction for the target block among MPMs in the MPM list.
MPM usage indicator: the MPM usage indicator may indicate whether an MPM usage mode is to be used for prediction for the target block. The MPM use mode may be a mode that determines an MPM to be used for intra prediction for the target block using the MPM list.
The MPM usage indicator may be signaled from the encoding device to the decoding device.
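A rough sketch of how an MPM list might be derived from the intra prediction modes of the left and above reference blocks is given below. The construction, the mode numbering, and the fallback modes are generic illustrations under simplified assumptions, not the exact derivation used by the embodiments.

```python
# Minimal sketch: build an MPM list from the intra modes of the left and above
# neighboring (reference) blocks. The mode numbers and the fallback modes
# (planar = 0, DC = 1, a vertical-like angular mode = 50) are illustrative.
PLANAR, DC, VERTICAL = 0, 1, 50

def build_mpm_list(left_mode: int, above_mode: int, list_size: int = 3):
    mpm = []
    for mode in (left_mode, above_mode, PLANAR, DC, VERTICAL):
        if mode not in mpm:                # avoid duplicate entries
            mpm.append(mode)
        if len(mpm) == list_size:
            break
    return mpm

# The same construction is run by both the encoder and the decoder, so only an
# MPM indicator (an index into this list) needs to be signaled.
print(build_mpm_list(left_mode=18, above_mode=18))  # [18, 0, 1]
print(build_mpm_list(left_mode=2,  above_mode=34))  # [2, 34, 0]
```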
Signal transmission: "signaling" may mean that information is sent from an encoding device to a decoding device. Alternatively, "signaling" may mean that the information is included in a bitstream or a recording medium. The information signaled by the encoding device may be used by the decoding device.
The encoding device may generate the encoded information by performing an encoding of the information to be signaled. The encoded information may be transmitted from the encoding device to the decoding device. The decoding apparatus may obtain the information by decoding the transmitted encoded information. Here, the encoding may be entropy encoding, and the decoding may be entropy decoding.
Statistical value: Variables, coding parameters, constants, and the like may have values that can be computed. A statistical value may be a value generated by performing a calculation (operation) on the value of a specified target. For example, a statistical value may indicate one or more of an average, a weighted sum, a minimum, a maximum, a mode, a median, and an interpolated value of the values of a specific variable, a specific coding parameter, a specific constant, or the like.
Fig. 1 is a block diagram showing a configuration of an embodiment of an encoding apparatus to which the present disclosure is applied.
The encoding device 100 may be an encoder, a video encoding device, or an image encoding device. A video may comprise one or more images (pictures). The encoding apparatus 100 may sequentially encode one or more images of a video.
Referring to fig. 1, the encoding apparatus 100 includes an inter prediction unit 110, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization (dequantization) unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
The encoding apparatus 100 may perform encoding on the target image using an intra mode and/or an inter mode. In other words, the prediction mode of the target block may be one of an intra mode and an inter mode.
Hereinafter, the terms "intra mode", "intra prediction mode", "intra mode", and "intra prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "inter mode", "inter prediction mode", "inter mode", and "inter prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the term "image" may indicate only a partial image, or may indicate a block. Further, the processing of an "image" may indicate sequential processing of a plurality of blocks.
Further, the encoding apparatus 100 may generate a bitstream including encoded information by encoding the target image, and may output and store the generated bitstream. The generated bitstream may be stored in a computer-readable storage medium and may be streamed via a wired and/or wireless transmission medium.
When the intra mode is used as the prediction mode, the switch 115 may switch to the intra mode. When the inter mode is used as the prediction mode, the switch 115 may switch to the inter mode.
The encoding apparatus 100 may generate a prediction block of a target block. Also, after the prediction block has been generated, the encoding apparatus 100 may encode a residual block for the target block using a residual between the target block and the prediction block.
When the prediction mode is the intra mode, the intra prediction unit 120 may use pixels of a previously encoded/decoded neighboring block adjacent to the target block as reference samples. The intra prediction unit 120 may perform spatial prediction on the target block using the reference sample points, and may generate prediction sample points for the target block through the spatial prediction. The prediction samples may represent samples in a prediction block.
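As an illustration of spatial prediction from reference samples, the sketch below fills a prediction block with the average of the reconstructed samples above and to the left of the target block (a DC-style prediction); it is a simplified example, not the embodiments' actual prediction process.

```python
import numpy as np

# Minimal sketch of DC-style intra prediction: every prediction sample is the
# average of the reconstructed reference samples above and to the left of the
# target block. Reference-sample availability checks and filtering are omitted.
def intra_dc_predict(above: np.ndarray, left: np.ndarray, size: int) -> np.ndarray:
    dc = int(round((above[:size].sum() + left[:size].sum()) / (2 * size)))
    return np.full((size, size), dc, dtype=np.int32)

above = np.array([100, 102, 101, 99], dtype=np.int32)   # toy reference row
left = np.array([98, 97, 103, 100], dtype=np.int32)     # toy reference column
print(intra_dc_predict(above, left, size=4))             # 4x4 block of 100s
```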
The inter prediction unit 110 may include a motion prediction unit and a motion compensation unit.
When the prediction mode is the inter mode, the motion prediction unit may search a reference image for the region that best matches the target block during motion prediction, and may derive a motion vector between the target block and the found region. Here, the motion prediction unit may use the search range as the target region of the search.
The reference image may be stored in the reference picture buffer 190. More specifically, when encoding and/or decoding of a reference image has been processed, the encoded and/or decoded reference image may be stored in the reference picture buffer 190.
The reference picture buffer 190 may be a Decoded Picture Buffer (DPB) since decoded pictures are stored.
The motion compensation unit may generate the prediction block for the target block by performing motion compensation using the motion vector. Here, the motion vector may be a two-dimensional (2D) vector for inter prediction. Further, the motion vector may indicate an offset between the target image and the reference image.
When the motion vector has a value other than an integer, the motion prediction unit and the motion compensation unit may generate the prediction block by applying an interpolation filter to a partial region of the reference picture. In order to perform inter prediction or motion compensation, it may be determined which one of a skip mode, a merge mode, an Advanced Motion Vector Prediction (AMVP) mode, and a current picture reference mode corresponds to a method for predicting and compensating for motion of a PU included in a CU based on the CU, and the inter prediction or motion compensation may be performed according to the mode.
The subtractor 125 may generate a residual block, wherein the residual block is a difference between the target block and the prediction block. The residual block may also be referred to as a "residual signal".
The residual signal may be the difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming or quantizing the difference between the original signal and the prediction signal or a signal generated by transforming and quantizing the difference. The residual block may be a residual signal for a block unit.
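Purely for illustration (this is not part of the described apparatus), the residual computation can be sketched as follows in Python; the block size and sample values are hypothetical.

```python
import numpy as np

def residual_block(target: np.ndarray, prediction: np.ndarray) -> np.ndarray:
    # Residual block = target (original) block minus prediction block,
    # computed in a wider integer type so negative differences are preserved.
    return target.astype(np.int32) - prediction.astype(np.int32)

# Hypothetical 4x4 target block and its prediction block.
target = np.full((4, 4), 130, dtype=np.uint8)
pred = np.full((4, 4), 128, dtype=np.uint8)
res = residual_block(target, pred)   # every residual sample equals 2
```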
The transform unit 130 may generate transform coefficients by transforming the residual block, and may output the generated transform coefficients. Here, the transform coefficient may be a coefficient value generated by transforming the residual block.
The transform unit 130 may use one of a plurality of predefined transform methods when performing the transform.
The plurality of predefined transform methods may include Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve transform (KLT), and the like.
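As a conceptual sketch, the example below applies a floating-point two-dimensional DCT-II (one of the transform methods listed above) to a residual block using SciPy; actual codecs use integer approximations of such transforms, so this is an illustration rather than the transform specified by any standard.

```python
import numpy as np
from scipy.fft import dctn, idctn  # separable 2D DCT-II and its inverse

def forward_transform(residual: np.ndarray) -> np.ndarray:
    # Orthonormal 2D DCT-II of the residual block.
    return dctn(residual.astype(np.float64), type=2, norm='ortho')

def inverse_transform(coefficients: np.ndarray) -> np.ndarray:
    return idctn(coefficients, type=2, norm='ortho')

residual = np.random.randint(-16, 16, (8, 8))
coefficients = forward_transform(residual)
reconstructed = inverse_transform(coefficients)
# np.allclose(reconstructed, residual) holds up to floating-point error.
```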
The transform method for transforming the residual block may be determined according to at least one of the encoding parameters for the target block and/or the neighboring blocks. For example, the transform method may be determined based on at least one of an inter prediction mode for the PU, an intra prediction mode for the PU, a size of the TU, and a shape of the TU. Alternatively, transform information indicating a transform method may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
When the transform skip mode is used, the transform unit 130 may omit an operation of transforming the residual block.
By performing quantization on the transform coefficients, quantized transform coefficient levels or quantized levels may be generated. Hereinafter, in the embodiment, each of the quantized transform coefficient level and the quantized level may also be referred to as a "transform coefficient".
The quantization unit 140 may generate a quantized transform coefficient level (i.e., a quantized level or a quantized coefficient) by quantizing the transform coefficient according to a quantization parameter. The quantization unit 140 may output the generated quantized transform coefficient levels. In this case, the quantization unit 140 may quantize the transform coefficient using a quantization matrix.
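For illustration, scalar quantization can be sketched as below, assuming the HEVC-style relation in which the quantization step size approximately doubles for every increase of 6 in the quantization parameter; real implementations use integer scaling tables and may additionally apply a quantization matrix as noted above.

```python
import numpy as np

def qstep(qp: int) -> float:
    # Approximate relation between the quantization parameter and step size.
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coefficients: np.ndarray, qp: int) -> np.ndarray:
    return np.round(coefficients / qstep(qp)).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    return levels * qstep(qp)

coefficients = np.array([[100.0, -20.0], [8.0, 0.5]])
levels = quantize(coefficients, qp=22)        # [[12, -2], [1, 0]]
reconstructed = dequantize(levels, qp=22)     # [[96., -16.], [8., 0.]]
```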
The entropy encoding unit 150 may generate a bitstream by performing probability distribution-based entropy encoding based on the values calculated by the quantization unit 140 and/or encoding parameter values calculated in the encoding process. The entropy encoding unit 150 may output the generated bitstream.
The entropy encoding unit 150 may perform entropy encoding on information about pixels of an image and information required for decoding the image. For example, information required for decoding an image may include syntax elements and the like.
When entropy coding is applied, fewer bits may be allocated to more frequently occurring symbols and more bits may be allocated to less frequently occurring symbols. Since the symbol is represented by this allocation, the size of the bit string for the target symbol to be encoded can be reduced. Accordingly, the compression performance of video encoding can be improved by entropy encoding.
Also, in order to perform entropy encoding, the entropy encoding unit 150 may use an encoding method such as exponential Golomb, Context Adaptive Variable Length Coding (CAVLC), or Context Adaptive Binary Arithmetic Coding (CABAC). For example, the entropy encoding unit 150 may perform entropy encoding using a variable length coding/code (VLC) table. For example, the entropy encoding unit 150 may derive a binarization method for the target symbol. Furthermore, the entropy encoding unit 150 may derive a probability model for the target symbol/bin. The entropy encoding unit 150 may perform arithmetic encoding using the derived binarization method, probability model, and context model.
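To illustrate how such codes allocate fewer bits to smaller (typically more frequent) symbol values, the sketch below implements zero-order exponential-Golomb encoding; it is a conceptual example only and not the entropy coder of the encoding apparatus 100.

```python
def exp_golomb_encode(value: int) -> str:
    # Zero-order exponential-Golomb code for an unsigned integer:
    # a prefix of leading zeros followed by the binary form of (value + 1).
    code = value + 1
    prefix_len = code.bit_length() - 1
    return '0' * prefix_len + format(code, 'b')

# Smaller values receive shorter codewords:
# 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100', 4 -> '00101', ...
assert exp_golomb_encode(0) == '1'
assert exp_golomb_encode(3) == '00100'
```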
The entropy encoding unit 150 may transform the coefficients in the form of 2D blocks into the form of 1D vectors by a transform coefficient scanning method so as to encode the quantized transform coefficient levels.
The encoding parameter may be information required for encoding and/or decoding. The encoding parameter may include information encoded by the encoding apparatus 100 and transmitted from the encoding apparatus 100 to the decoding apparatus, and may also include information that may be derived in an encoding or decoding process. For example, the information sent to the decoding device may include syntax elements.
The encoding parameters may include not only information (or flags or indexes) such as syntax elements encoded by the encoding apparatus and signaled by the encoding apparatus to the decoding apparatus, but also information derived in the encoding or decoding process. In addition, the encoding parameters may include information required to encode or decode the image. For example, the encoding parameters may include at least one of the following, a combination of the following, or statistics thereof: a size of the unit/block, a shape/form of the unit/block, a depth of the unit/block, partition information of the unit/block, a partition structure of the unit/block, information indicating whether the unit/block is partitioned in a quad-tree structure, information indicating whether the unit/block is partitioned in a binary-tree structure, a partition direction of the binary-tree structure (horizontal direction or vertical direction), a partition form of the binary-tree structure (symmetric partition or asymmetric partition), information indicating whether the unit/block is partitioned in a ternary-tree structure, a partition direction of the ternary-tree structure (horizontal direction or vertical direction), a partition form of the ternary-tree structure (symmetric partition or asymmetric partition, etc.), information indicating whether the unit/block is partitioned in a multi-type tree structure, a combination and direction of partitions of the multi-type tree structure (horizontal direction or vertical direction, etc.), a partition form of partitions of the multi-type tree structure (symmetric partition or asymmetric partition, etc.), a partition tree of the multi-type tree form (binary tree or ternary tree), a prediction type (intra prediction or inter prediction), an intra prediction mode/direction, an intra luma prediction mode/direction, an intra chroma prediction mode/direction, intra partition information, inter partition information, a coding block partition flag, a prediction block partition flag, a transform block partition flag, a reference sample point filtering method, a reference sample point filter tap, a reference sample point filter coefficient, a prediction block filtering method, a prediction block filter tap, a prediction block filter coefficient, a prediction block boundary filtering method, a prediction block boundary filter tap, a prediction block boundary filter coefficient, an inter prediction mode, motion information, a motion vector, a motion vector difference, a reference picture index, an inter prediction direction, an inter prediction indicator, a prediction list utilization flag, a reference picture list, a reference picture, a POC, a motion vector predictor, a motion vector prediction index, a motion vector prediction candidate, a motion vector candidate list, information indicating whether a merge mode is used, a merge index, a merge candidate list, information indicating whether a skip mode is used, a type of an interpolation filter, a tap of an interpolation filter, a filter coefficient of an interpolation filter, a size of a motion vector, an accuracy of motion vector representation, a transform type, a transform size, information indicating whether a first transform is used, information indicating whether an additional (second) transform is used, first transform selection information (or a first transform index), second transform selection information (or a second transform index), information indicating the presence or absence of a residual signal, a coding block pattern, a coding block flag, a quantization parameter, a residual quantization parameter, a quantization matrix, information on an in-loop filter, information indicating whether an in-loop filter is applied, a coefficient of an in-loop filter, a tap of an in-loop filter, a shape/form of an in-loop filter, information indicating whether a deblocking filter is applied, a coefficient of a deblocking filter, a tap of a deblocking filter, a deblocking filter strength, a shape/form of a deblocking filter, information indicating whether an adaptive sample offset is applied, a value of an adaptive sample offset, a class of an adaptive sample offset, a type of an adaptive sample offset, information indicating whether an adaptive loop filter is applied, a coefficient of an adaptive loop filter, a tap of an adaptive loop filter, a shape/form of an adaptive loop filter, a binarization/inverse binarization method, a context model, a context model deciding method, a context model updating method, information indicating whether a normal mode is executed, information indicating whether a bypass mode is executed, a significant coefficient flag, a last significant coefficient flag, a coding flag of a coefficient group, a position of a last significant coefficient, information indicating whether the value of a coefficient is greater than 1, information indicating whether the value of a coefficient is greater than 2, information indicating whether the value of a coefficient is greater than 3, residual coefficient value information, sign information, a reconstructed luma sample, a reconstructed chroma sample, a context binary, a bypass binary, a residual luma sample, a residual chroma sample, a transform coefficient, a luma transform coefficient, a chroma transform coefficient, a quantized level, a luma quantized level, a chroma quantized level, a transform coefficient level scanning method, a size of a motion vector search region on the decoding apparatus side, a shape/form of a motion vector search region on the decoding apparatus side, the number of motion vector searches on the decoding apparatus side, a size of a CTU, a minimum block size, a maximum block depth, a minimum block depth, an image display/output order, slice identification information, a slice type, slice partition information, parallel block group identification information, a parallel block group type, parallel block group partition information, parallel block identification information, a parallel block type, parallel block partition information, a picture type, a bit depth, an input sample bit depth, a reconstructed sample bit depth, a residual sample bit depth, a transform coefficient bit depth, a quantized level bit depth, information on a luminance signal, information on a chrominance signal, a color space of a target block, and a color space of a residual block. In addition, information related to the above-described encoding parameters may also be included in the encoding parameters. Information for calculating and/or deriving the above-described encoding parameters may also be included in the encoding parameters. Information calculated or derived using the above-described encoding parameters may also be included in the encoding parameters.
The prediction scheme may represent one of an intra prediction mode and an inter prediction mode.
The first transform selection information may indicate a first transform applied to the target block.
The second transform selection information may indicate a second transform applied to the target block.
The residual signal may represent the difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming a difference between the original signal and the prediction signal. Alternatively, the residual signal may be a signal resulting from transforming and quantizing the difference between the original signal and the prediction signal. The residual block may be a residual signal for the block.
Here, signaling the information may indicate that the encoding apparatus 100 includes entropy-encoded information generated by performing entropy encoding on the flag or the index in the bitstream, and may indicate that the decoding apparatus 200 acquires the information by performing entropy decoding on the entropy-encoded information extracted from the bitstream. Here, the information may include a flag, an index, and the like.
The bitstream may include information based on a specific syntax. The encoding apparatus 100 may generate a bitstream including information according to a specific syntax. The decoding apparatus 200 may acquire information from the bitstream according to a specific syntax.
Since the encoding apparatus 100 performs encoding via inter prediction, the encoded target image can be used as a reference image for another image to be subsequently processed. Accordingly, the encoding apparatus 100 may reconstruct or decode the encoded target image and store the reconstructed or decoded image as a reference image in the reference picture buffer 190. For decoding, inverse quantization and inverse transformation of the encoded target image may be performed.
The quantized levels may be inverse quantized by the inverse quantization unit 160 and inverse transformed by the inverse transformation unit 170. The inverse quantization unit 160 may generate inverse quantized coefficients by performing inverse quantization on the quantized levels. The inverse transform unit 170 may generate the inverse quantized and inverse transformed coefficients by performing an inverse transform on the inverse quantized coefficients.
The inverse quantized and inverse transformed coefficients may be added to the prediction block by adder 175. The inverse quantized and inverse transformed coefficients and the prediction block are added, and then a reconstructed block may be generated. Here, the inverse quantized and/or inverse transformed coefficients may represent coefficients on which one or more of inverse quantization and inverse transformation are performed, and may also represent a reconstructed residual block. Here, the reconstructed block may represent a restored block or a decoded block.
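For illustration only, the reconstruction described above may be sketched as follows; clipping to the sample range implied by the bit depth is an assumption of this sketch.

```python
import numpy as np

def reconstruct(prediction: np.ndarray, reconstructed_residual: np.ndarray,
                bit_depth: int = 8) -> np.ndarray:
    # Reconstructed block = prediction block + reconstructed residual,
    # clipped to the valid sample range for the given bit depth.
    max_val = (1 << bit_depth) - 1
    summed = prediction.astype(np.int32) + reconstructed_residual.astype(np.int32)
    return np.clip(summed, 0, max_val).astype(np.uint8)
```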
The reconstructed block may be filtered by the filter unit 180. Filter unit 180 may apply one or more of a deblocking filter, a Sample Adaptive Offset (SAO) filter, an Adaptive Loop Filter (ALF), and a non-local filter (NLF) to the reconstructed samples, reconstructed blocks, or reconstructed pictures. The filter unit 180 may also be referred to as a "loop filter".
The deblocking filter may remove block distortion occurring at the boundary between blocks. In order to determine whether to apply the deblocking filter, whether to apply the deblocking filter to the target block may be determined based on the pixels included in several columns or rows of the block.
When the deblocking filter is applied to the target block, the applied filter may be different according to the strength of the deblocking filtering required. In other words, among different filters, a filter decided in consideration of the strength of the deblocking filtering may be applied to the target block. When the deblocking filter is applied to the target block, a filter corresponding to any one of the strong filter and the weak filter may be applied to the target block according to a required strength of the deblocking filter.
Further, when vertical filtering and horizontal filtering are performed on the target block, the horizontal filtering and the vertical filtering may be performed in parallel.
The SAO may add the appropriate offset to the pixel values to compensate for the coding error. The SAO may perform a correction on the image to which the deblocking is applied on a pixel basis, wherein the correction uses an offset of a difference between the original image and the image to which the deblocking is applied. In order to perform offset correction for an image, a method for dividing pixels included in the image into a certain number of regions, determining a region to which an offset is to be applied among the divided regions, and applying the offset to the determined region may be used, and a method for applying the offset in consideration of edge information of each pixel may also be used.
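As one illustration of such offset correction, the sketch below applies a band-offset style correction in which samples are classified into 32 equal intensity bands and a per-band offset is added; the 32-band classification follows the HEVC-style convention, and the selected bands and offset values here are hypothetical stand-ins for signaled values.

```python
import numpy as np

def sao_band_offset(samples: np.ndarray, band_offsets: dict,
                    bit_depth: int = 8) -> np.ndarray:
    # Classify each sample into one of 32 bands by intensity and add the
    # offset associated with its band (no offset for unlisted bands).
    shift = bit_depth - 5                  # band index = sample >> shift
    out = samples.astype(np.int32)
    bands = out >> shift
    for band, offset in band_offsets.items():
        out[bands == band] += offset
    max_val = (1 << bit_depth) - 1
    return np.clip(out, 0, max_val).astype(samples.dtype)

block = np.array([[100, 130], [160, 200]], dtype=np.uint8)
corrected = sao_band_offset(block, {12: 2, 16: -1, 20: 1, 25: 3})
```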
ALF may perform filtering based on values obtained by comparing a reconstructed image with an original image. After pixels included in an image have been divided into a predetermined number of groups, a filter to be applied to each group may be determined, and filtering may be performed differently for the respective groups. Information about whether to apply the adaptive loop filter may be signaled for each CU. Such information may be signaled for a luminance signal. The shape and filter coefficients of the ALF to be applied to each block may be different for each block. Alternatively, ALF having a fixed form may be applied to a block regardless of the characteristics of the block.
The non-local filter may perform filtering based on a reconstructed block similar to the target block. A region similar to the target block may be selected from the reconstructed picture, and filtering of the target block may be performed using statistical properties of the selected similar region. Information about whether to apply a non-local filter may be signaled for a Coding Unit (CU). Further, the shape and filter coefficients of the non-local filter to be applied to a block may be different according to the block.
The reconstructed block or the reconstructed image filtered by the filter unit 180 may be stored as a reference picture in the reference picture buffer 190. The reconstructed block filtered by the filter unit 180 may be a portion of a reference picture. In other words, the reference picture may be a reconstructed picture composed of the reconstructed block filtered by the filter unit 180. The stored reference pictures can then be used for inter prediction or motion compensation.
Fig. 2 is a block diagram showing a configuration of an embodiment of a decoding apparatus to which the present disclosure is applied.
The decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus.
Referring to fig. 2, the decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization (inverse quantization) unit 220, an inverse transformation unit 230, an intra prediction unit 240, an inter prediction unit 250, a switch 245, an adder 255, a filter unit 260, and a reference picture buffer 270.
The decoding apparatus 200 may receive the bitstream output from the encoding apparatus 100. The decoding apparatus 200 may receive a bitstream stored in a computer-readable storage medium and may receive a bitstream transmitted through a wired/wireless transmission medium stream.
The decoding apparatus 200 may perform decoding on the bitstream in an intra mode and/or an inter mode. Further, the decoding apparatus 200 may generate a reconstructed image or a decoded image via decoding, and may output the reconstructed image or the decoded image.
For example, an operation of switching to an intra mode or an inter mode based on a prediction mode for decoding may be performed by the switch 245. When the prediction mode for decoding is intra mode, switch 245 may be operated to switch to intra mode. When the prediction mode for decoding is an inter mode, switch 245 may be operated to switch to the inter mode.
The decoding apparatus 200 may acquire a reconstructed residual block by decoding an input bitstream and may generate a prediction block. When the reconstructed residual block and the prediction block are acquired, the decoding apparatus 200 may generate a reconstructed block, which is a target to be decoded, by adding the reconstructed residual block to the prediction block.
The entropy decoding unit 210 may generate symbols by performing entropy decoding on the bitstream based on a probability distribution of the bitstream. The generated symbols may include symbols in the form of quantized transform coefficient levels (i.e., quantized levels or quantized coefficients). Here, the entropy decoding method may be similar to the entropy encoding method described above. That is, the entropy decoding method may be the inverse process of the entropy encoding method described above.
The entropy decoding unit 210 may change coefficients having a one-dimensional (1D) vector form into a 2D block shape by a transform coefficient scanning method in order to decode quantized transform coefficient levels.
For example, the coefficients of a block may be changed to a 2D block shape by scanning the block coefficients using an upper right diagonal scan. Alternatively, which one of the upper right diagonal scan, the vertical scan, and the horizontal scan is to be used may be determined according to the size of the corresponding block and/or the intra prediction mode.
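As an illustration, one common definition of the up-right diagonal scan, together with the placement of a decoded one-dimensional coefficient array back into a two-dimensional block, can be sketched as follows; as noted above, the scan actually used may depend on the block size and the intra prediction mode.

```python
import numpy as np

def up_right_diagonal_scan_order(size: int):
    # Visit anti-diagonals starting from the top-left corner; within each
    # anti-diagonal, move from the lower-left position toward the upper-right.
    order = []
    for d in range(2 * size - 1):
        y = min(d, size - 1)
        x = d - y
        while y >= 0 and x < size:
            order.append((y, x))
            y -= 1
            x += 1
    return order

def inverse_scan(levels_1d: np.ndarray, size: int) -> np.ndarray:
    # Rebuild the 2D block of quantized levels from the 1D scanned array.
    block = np.zeros((size, size), dtype=levels_1d.dtype)
    for level, (y, x) in zip(levels_1d, up_right_diagonal_scan_order(size)):
        block[y, x] = level
    return block
```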
The quantized coefficients may be inverse quantized by the inverse quantization unit 220. The inverse quantization unit 220 may generate inverse quantized coefficients by performing inverse quantization on the quantized coefficients. Also, the inverse quantized coefficients may be inverse transformed by the inverse transform unit 230. The inverse transform unit 230 may generate a reconstructed residual block by performing inverse transform on the inverse quantized coefficients. As a result of inverse quantization and inverse transformation performed on the quantized coefficients, a reconstructed residual block may be generated. Here, when generating the reconstructed residual block, the inverse quantization unit 220 may apply a quantization matrix to the quantized coefficients.
When using the intra mode, the intra prediction unit 240 may generate a prediction block by performing spatial prediction on a target block, wherein the spatial prediction uses pixel values of previously decoded neighboring blocks adjacent to the target block.
The inter prediction unit 250 may include a motion compensation unit. Alternatively, the inter prediction unit 250 may be designated as a "motion compensation unit".
When the inter mode is used, the motion compensation unit 250 may generate a prediction block by performing motion compensation for the target block, wherein the motion compensation uses the motion vector and the reference image stored in the reference picture buffer 270.
The motion compensation unit may apply an interpolation filter to a partial region of the reference image when the motion vector has a value other than an integer, and may generate the prediction block using the reference image to which the interpolation filter is applied. To perform motion compensation, the motion compensation unit may determine which one of a skip mode, a merge mode, an Advanced Motion Vector Prediction (AMVP) mode, and a current picture reference mode corresponds to a motion compensation method for a PU included in the CU based on the CU, and may perform motion compensation according to the determined mode.
The reconstructed residual block and the prediction block may be added to each other by an adder 255. The adder 255 may generate a reconstructed block by adding the reconstructed residual block and the prediction block.
The reconstructed block may be filtered by the filter unit 260. Filter unit 260 may apply at least one of a deblocking filter, SAO filter, ALF, and NLF to the reconstructed block or the reconstructed image. The reconstructed image may be a picture that includes the reconstructed block.
The filter unit may output a reconstructed image.
The reconstructed image and/or reconstructed block filtered by the filter unit 260 may be stored as a reference picture in the reference picture buffer 270. The reconstructed block filtered by the filter unit 260 may be a portion of a reference picture. In other words, the reference picture may be an image composed of the reconstructed block filtered by the filter unit 260. The stored reference pictures can then be used for inter prediction or motion compensation.
Fig. 3 is a diagram schematically showing a partition structure of an image when the image is encoded and decoded.
Fig. 3 may schematically illustrate an example in which a single cell is partitioned into a plurality of sub-cells.
In order to efficiently partition an image, a Coding Unit (CU) may be used in encoding and decoding. The term "unit" may be used to collectively specify 1) a block comprising image samples and 2) syntax elements. For example, "partition of a unit" may represent "partition of a block corresponding to the unit".
A CU can be used as a basic unit for image encoding/decoding. A CU can be used as a unit to which one mode selected from an intra mode and an inter mode is applied in image encoding/decoding. In other words, in image encoding/decoding, it may be determined which one of an intra mode and an inter mode is to be applied to each CU.
Also, a CU may be a basic unit that predicts, transforms, quantizes, inversely transforms, inversely quantizes, and encodes/decodes transform coefficients.
Referring to fig. 3, the image 300 may be sequentially partitioned into units corresponding to maximum coding units (LCUs), and a partition structure may be determined for each LCU. Here, the LCU may be used to have the same meaning as a Coding Tree Unit (CTU).
Partitioning a unit may mean partitioning a block corresponding to the unit. The block partition information may include depth information regarding a depth of the unit. The depth information may indicate a number of times the unit is partitioned and/or a degree to which the unit is partitioned. A single unit may be hierarchically partitioned into a plurality of sub-units while the single unit has depth information based on a tree structure.
Each partitioned sub-unit may have depth information. The depth information may be information indicating a size of the CU. Depth information may be stored for each CU.
Each CU may have depth information. When a CU is partitioned, the depth of the CU resulting from the partitioning may be increased by 1 from the depth of the partitioned CU.
The partition structure may represent the distribution of Coding Units (CUs) in the LCU 310 for efficient encoding of the image. Such a distribution may be determined according to whether a single CU is to be partitioned into multiple CUs. The number of CUs generated by partitioning may be a positive integer of 2 or more, including 2, 3, 4, 8, 16, etc.
According to the number of CUs generated by performing partitioning, the horizontal size and the vertical size of each CU generated by performing partitioning may be smaller than those of the CUs before being partitioned. For example, the horizontal and vertical sizes of each CU generated by partitioning may be half of the horizontal and vertical sizes of the CU before partitioning.
Each partitioned CU may be recursively partitioned into four CUs in the same manner. At least one of a horizontal size and a vertical size of each partitioned CU may be reduced via recursive partitioning compared to at least one of a horizontal size and a vertical size of a CU before being partitioned.
Partitioning of CUs may be performed recursively until a predefined depth or a predefined size.
For example, the depth of a CU may have a value ranging from 0 to 3. The size of a CU may range from a size of 64 × 64 to a size of 8 × 8, depending on the depth of the CU.
For example, the depth of the LCU 310 may be 0, and the depth of the minimum coding unit (SCU) may be a predefined maximum depth. Here, as described above, the LCU may be a CU having a maximum coding unit size, and the SCU may be a CU having a minimum coding unit size.
Partitioning may begin at LCU 310, and the depth of a CU may increase by 1 each time the horizontal and/or vertical dimensions of the CU are reduced by partitioning.
For example, for each depth, a CU that is not partitioned may have a size of 2N × 2N. Further, in the case where CUs are partitioned, CUs of a size of 2N × 2N may be partitioned into four CUs each of a size of N × N. The value of N may be halved each time the depth is increased by 1.
Referring to fig. 3, an LCU having a depth of 0 may have 64 × 64 pixels or 64 × 64 blocks. 0 may be a minimum depth. An SCU of depth 3 may have 8 × 8 pixels or 8 × 8 blocks. 3 may be the maximum depth. Here, a CU having 64 × 64 blocks as an LCU may be represented by depth 0. A CU with 32 x 32 blocks may be represented with depth 1. A CU with 16 x 16 blocks may be represented with depth 2. A CU with 8 x 8 blocks as SCU can be represented by depth 3.
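For illustration only, the relation between CU depth and CU size described above can be sketched as a simple shift, assuming a 64 × 64 LCU and a maximum depth of 3.

```python
LCU_SIZE = 64    # depth 0
MAX_DEPTH = 3    # the SCU is 8x8 at depth 3

def cu_size_at_depth(depth: int) -> int:
    # Each additional depth level halves the horizontal and vertical CU size.
    return LCU_SIZE >> depth

assert [cu_size_at_depth(d) for d in range(MAX_DEPTH + 1)] == [64, 32, 16, 8]
```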
The information on whether the corresponding CU is partitioned may be represented by partition information of the CU. The partition information may be 1-bit information. All CUs except the SCU may include partition information. For example, the value of the partition information of the CU that is not partitioned may be the first value. The value of the partition information of the partitioned CU may be the second value. When the partition information indicates whether the CU is partitioned, the first value may be "0" and the second value may be "1".
For example, when a single CU is partitioned into four CUs, the horizontal size and the vertical size of each of the four CUs generated by the partitioning may be half of the horizontal size and the vertical size of the CU before being partitioned. When a CU having a size of 32 × 32 is partitioned into four CUs, the size of each of the partitioned four CUs may be 16 × 16. When a single CU is partitioned into four CUs, the CUs may be considered to have been partitioned in a quadtree structure. In other words, the quadtree partition may be considered to have been applied to the CU.
For example, when a single CU is partitioned into two CUs, the horizontal size or the vertical size of each of the two CUs generated by the partitioning may be half the horizontal size or the vertical size of the CU before being partitioned. When a CU having a size of 32 × 32 is vertically partitioned into two CUs, the size of each of the partitioned two CUs may be 16 × 32. When a CU having a size of 32 × 32 is horizontally partitioned into two CUs, the size of each of the partitioned two CUs may be 32 × 16. When a single CU is partitioned into two CUs, the CUs may be considered to have been partitioned in a binary tree structure. In other words, the binary tree partition may be considered to have been applied to the CU.
For example, when a single CU is partitioned (or divided) into three CUs, the horizontal or vertical size of the original CU before being partitioned is divided at a ratio of 1:2:1, thus generating three sub-CUs. For example, when a CU of size 16 × 32 is horizontally partitioned into three sub-CUs, the three sub-CUs generated by the partitioning may have sizes of 16 × 8, 16 × 16, and 16 × 8, respectively, in the direction from top to bottom. For example, when a CU having a size of 32 × 32 is vertically partitioned into three sub-CUs, the three sub-CUs generated by the partitioning may have sizes of 8 × 32, 16 × 32, and 8 × 32, respectively, in a direction from left to right. When a single CU is partitioned into three CUs, the CUs may be considered to have been partitioned in a ternary tree form. In other words, a ternary tree partition may be considered to have been applied to the CU.
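For illustration only, the child-block sizes produced by the quad-tree, binary-tree, and ternary-tree partitions described above can be sketched as follows; the assertions reproduce the ternary-tree examples given in the text, and the split names are illustrative.

```python
def child_sizes(width: int, height: int, split: str):
    # Return the (width, height) of each sub-CU produced by the given split.
    if split == 'quad':
        return [(width // 2, height // 2)] * 4
    if split == 'binary_vertical':
        return [(width // 2, height)] * 2
    if split == 'binary_horizontal':
        return [(width, height // 2)] * 2
    if split == 'ternary_vertical':        # width divided at a 1:2:1 ratio
        return [(width // 4, height), (width // 2, height), (width // 4, height)]
    if split == 'ternary_horizontal':      # height divided at a 1:2:1 ratio
        return [(width, height // 4), (width, height // 2), (width, height // 4)]
    raise ValueError(split)

assert child_sizes(32, 32, 'ternary_vertical') == [(8, 32), (16, 32), (8, 32)]
assert child_sizes(16, 32, 'ternary_horizontal') == [(16, 8), (16, 16), (16, 8)]
```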
Both quad tree and binary tree partitioning are applied to LCU 310 of fig. 3.
In the encoding apparatus 100, a Coding Tree Unit (CTU) having a size of 64 × 64 may be partitioned into a plurality of smaller CUs by a recursive quadtree structure. A single CU may be partitioned into four CUs having the same size. Each CU may be recursively partitioned and may have a quadtree structure.
By recursive partitioning of CUs, the optimal partitioning method that incurs the smallest rate-distortion cost can be selected.
The Coding Tree Unit (CTU)320 in fig. 3 is an example of a CTU to which a quad tree partition, a binary tree partition, and a ternary tree partition are all applied.
As described above, in order to partition the CTU, at least one of the quadtree partition, the binary tree partition, and the ternary tree partition may be applied to the CTU. Partitions may be applied based on a particular priority.
For example, quadtree partitioning may be preferentially applied to CTUs. CUs that cannot be further partitioned in a quadtree fashion may correspond to leaf nodes of the quadtree. CUs corresponding to leaf nodes of a quadtree may be root nodes of the binary tree and/or the ternary tree. That is, CUs corresponding to leaf nodes of a quadtree may be partitioned in a binary tree form or a ternary tree form, or may not be further partitioned. In this case, each CU generated by applying binary tree partitioning or ternary tree partitioning to a CU corresponding to a leaf node of the quadtree is prevented from being partitioned by the quadtree again, which allows the partitioning of the block and/or the signaling of the block partition information to be performed efficiently.
The partition of the CU corresponding to each node of the quadtree may be signaled using the four-partition information. The four-partition information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a quadtree form. The four-partition information having a second value (e.g., "0") may indicate that the corresponding CU is not partitioned in the quadtree form. The quad-partition information may be a flag having a specific length (e.g., 1 bit).
There may be no priority between the binary tree partition and the ternary tree partition. That is, CUs corresponding to leaf nodes of a quadtree may be partitioned in a binary tree form or a ternary tree form. Furthermore, CUs generated by binary tree partitioning or ternary tree partitioning may or may not be further partitioned in binary tree form or ternary tree form.
Partitions that are executed when there is no priority between a binary tree partition and a ternary tree partition may be referred to as "multi-type tree partitions". That is, a CU corresponding to a leaf node of a quadtree may be a root node of a multi-type tree. The partition of the CU corresponding to each node of the multi-type tree may be signaled using at least one of information indicating whether the CU is partitioned by the multi-type tree, partition direction information, and partition tree information. For the partition of the CU corresponding to each node of the multi-type tree, information indicating whether or not the partitioning by the multi-type tree is performed, partition direction information, and partition tree information may be sequentially signaled.
For example, the information indicating whether a CU is partitioned in a multi-type tree and has a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a multi-type tree form. The information indicating whether the CU is partitioned in a multi-type tree and has a second value (e.g., "0") may indicate that the corresponding CU is not partitioned in the multi-type tree form.
When a CU corresponding to each node of the multi-type tree is partitioned in the multi-type tree form, the corresponding CU may further include partition direction information.
The partition direction information may indicate a partition direction of the multi-type tree partition. The partition direction information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in the vertical direction. The partition direction information having the second value (e.g., "0") may indicate that the corresponding CU is partitioned in the horizontal direction.
When a CU corresponding to each node of the multi-type tree is partitioned in the multi-type tree form, the corresponding CU may further include partition tree information. The partition tree information may indicate a tree that is used for multi-type tree partitioning.
For example, partition tree information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a binary tree form. The partition tree information having the second value (e.g., "0") may indicate that the corresponding CU is partitioned in a ternary tree form.
Here, each of the above-described information indicating whether partitioning by the multi-type tree is performed, the partition tree information, and the partition direction information may be a flag having a specific length (e.g., 1 bit).
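For illustration only, the signaling order described above (first whether the multi-type tree split is performed, then the partition direction, then the partition tree) can be sketched as a decoder-side parse; the flag names and the read_flag callback are illustrative assumptions and not actual syntax element names.

```python
def parse_mtt_split(read_flag):
    # read_flag(name) stands in for entropy-decoding one 1-bit flag.
    if not read_flag('mtt_split_flag'):
        return 'no_split'
    direction = 'vertical' if read_flag('mtt_split_direction_flag') else 'horizontal'
    tree = 'binary' if read_flag('mtt_split_tree_flag') else 'ternary'
    return f'{tree}_{direction}'

# Example with a canned sequence of decoded flag values: split, vertical, ternary.
bits = iter([1, 1, 0])
assert parse_mtt_split(lambda name: next(bits)) == 'ternary_vertical'
```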
At least one of the above-described four partition information, information indicating whether or not partitioning is performed per multi-type tree, partition direction information, and partition tree information may be entropy-encoded and/or entropy-decoded. To perform entropy encoding/decoding of such information, information of neighboring CUs adjacent to the target CU may be used.
For example, it may be considered that the partition form (i.e., partition/non-partition, partition tree, and/or partition direction) of the left CU and/or the upper CU and the partition form of the target CU may be similar to each other with a high probability. Thus, based on the information of the neighboring CUs, context information for entropy encoding and/or entropy decoding of the information of the target CU may be derived. Here, the information of the neighboring CU may include at least one of: 1) four partition information of a neighboring CU, 2) information indicating whether the neighboring CU is partitioned by a multi-type tree, 3) partition direction information of the neighboring CU, and 4) partition tree information of the neighboring CU.
In another embodiment of binary tree partitioning and ternary tree partitioning, binary tree partitioning may be performed preferentially. That is, binary tree partitioning may be applied first, and then CUs corresponding to leaf nodes of the binary tree may be set as root nodes of the ternary tree. In this case, the quadtree partitioning or the binary tree partitioning may not be performed on CUs corresponding to nodes of the ternary tree.
CUs that are not further partitioned by quadtree partitioning, binary tree partitioning, and/or ternary tree partitioning may be units of coding, prediction, and/or transformation. That is, a CU may not be further partitioned for prediction and/or transform. Accordingly, a partition structure for partitioning a CU into Prediction Units (PUs) and/or Transform Units (TUs), partition information thereof, and the like may not exist in a bitstream.
However, when the size of a CU, which is a unit of partitioning, is larger than the size of the maximum transform block, the CU may be recursively partitioned until the size of the CU becomes smaller than or equal to the size of the maximum transform block. For example, when the size of a CU is 64 × 64 and the size of the largest transform block is 32 × 32, the CU may be partitioned into four 32 × 32 blocks in order to perform the transform. For example, when the size of a CU is 32 × 64 and the size of the largest transform block is 32 × 32, the CU may be partitioned into two 32 × 32 blocks.
In this case, the information indicating whether a CU is partitioned for transformation may not be separately signaled. Without signaling, it may be determined whether a CU is partitioned via a comparison between the horizontal size (and/or vertical size) of the CU and the horizontal size (and/or vertical size) of the largest transform block. For example, a CU may be vertically halved when the horizontal size of the CU is larger than the horizontal size of the largest transform block. Furthermore, when the vertical size of a CU is larger than the vertical size of the largest transform block, the CU may be horizontally halved.
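For illustration only, the implicit partitioning described above can be sketched as a recursive halving until neither dimension of the block exceeds the maximum transform block size; the assertions reproduce the 64 × 64 and 32 × 64 examples, and no syntax would be signaled for these splits.

```python
def implicit_transform_split(width: int, height: int, max_tb: int = 64):
    # Return the block sizes obtained by halving until both dimensions fit.
    if width <= max_tb and height <= max_tb:
        return [(width, height)]
    if width > max_tb:                                   # vertical halving
        half = implicit_transform_split(width // 2, height, max_tb)
        return half + half
    half = implicit_transform_split(width, height // 2, max_tb)  # horizontal halving
    return half + half

assert implicit_transform_split(64, 64, max_tb=32) == [(32, 32)] * 4
assert implicit_transform_split(32, 64, max_tb=32) == [(32, 32)] * 2
```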
The information on the maximum size and/or the minimum size of the CU and the information on the maximum size and/or the minimum size of the transform block may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a parallel block group level, or a slice level. For example, the minimum size of a CU may be set to 4 × 4. For example, the maximum size of the transform block may be set to 64 × 64. For example, the minimum size of the transform block may be set to 4 × 4.
Information about a minimum size of a CU corresponding to a leaf node of the quadtree (i.e., a minimum size of the quadtree) and/or information about a maximum depth of a path from a root node of the multi-type tree to the leaf node (i.e., a maximum depth of the multi-type tree) may be signaled or determined at a level higher than that of the CU. For example, the higher level may be a sequence level, a picture level, a slice level, a parallel block group level, or a parallel block level. Information regarding the minimum size of the quadtree and/or information regarding the maximum depth of the multi-type tree may be separately signaled or determined for each of intra slices and inter slices.
Information about the difference between the size of the CTU and the maximum size of the transform block may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a slice level, a parallel block group level, or a parallel block level. Information on the maximum size of the CU corresponding to each node of the binary tree (i.e., the maximum size of the binary tree) may be determined based on the size of the CTU and the difference information. The maximum size of the CU corresponding to each node of the ternary tree (i.e., the maximum size of the ternary tree) may have different values according to the type of the slice. For example, the maximum size of the ternary tree in an intra slice may be 32 × 32. For example, the maximum size of the ternary tree in an inter slice may be 128 × 128. For example, the minimum size of the CU corresponding to each node of the binary tree (i.e., the minimum size of the binary tree) and/or the minimum size of the CU corresponding to each node of the ternary tree (i.e., the minimum size of the ternary tree) may be set to the minimum size of the CU.
In another example, the maximum size of the binary tree and/or the maximum size of the ternary tree may be signaled or determined at the slice level. Further, a minimum size of the binary tree and/or a minimum size of the ternary tree may be signaled or determined at the slice level.
Based on the various block sizes and depths, the four-partition information, the information indicating whether partitioning by the multi-type tree is performed, the partition tree information, and/or the partition direction information described above may or may not be present in the bitstream.
For example, when the size of the CU is not greater than the minimum size of the quadtree, the CU may not include the four-partition information, and the four-partition information of the CU may be inferred to a second value.
For example, when the size (horizontal size and vertical size) of a CU corresponding to each node of the multi-type tree is larger than the maximum size (horizontal size and vertical size) of the binary tree and/or the maximum size (horizontal size and vertical size) of the ternary tree, the CU may not be partitioned in the binary tree form and/or the ternary tree form. By this determination, information indicating whether partitioning is performed per multi-type tree may not be signaled, but may be inferred as a second value.
Alternatively, a CU may not be partitioned in the binary tree form and/or the ternary tree form when the size (horizontal size and vertical size) of the CU corresponding to each node of the multi-type tree is equal to the minimum size (horizontal size and vertical size) of the binary tree, or when the size (horizontal size and vertical size) of the CU is equal to twice the minimum size (horizontal size and vertical size) of the ternary tree. By this determination, information indicating whether partitioning is performed per multi-type tree may not be signaled, but may be inferred as a second value. The reason for this is that, if the CU were partitioned in binary tree form and/or ternary tree form, a CU smaller than the minimum size of the binary tree and/or the minimum size of the ternary tree would be generated.
Alternatively, binary or ternary tree partitioning may be restricted based on the size of the virtual pipeline data unit (i.e., the size of the pipeline buffer). For example, binary or ternary tree partitioning may be limited when a CU is partitioned into sub-CUs that do not fit the size of the pipeline buffer by binary or ternary tree partitioning. The size of the pipeline buffer may be equal to the maximum size of the transform block (e.g., 64 x 64).
For example, when the size of the pipeline buffer is 64 × 64, the following partitions may be restricted.
Ternary tree partitioning for an N × M CU (where N and/or M is 128)
Horizontal binary tree partitioning for a 128 × N CU (where N ≤ 64)
Vertical binary tree partitioning for an N × 128 CU (where N ≤ 64)
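For illustration only, the three restrictions listed above can be sketched as a feasibility check, assuming a 64 × 64 pipeline buffer; the split names are illustrative.

```python
PIPELINE_BUFFER = 64   # assumed equal to the maximum transform block size

def split_allowed(width: int, height: int, split: str) -> bool:
    # Ternary splits of any CU with a 128-sample dimension are restricted.
    if split.startswith('ternary') and (width == 128 or height == 128):
        return False
    # Horizontal binary split of a 128xN CU (N <= 64) is restricted.
    if split == 'binary_horizontal' and width == 128 and height <= 64:
        return False
    # Vertical binary split of an Nx128 CU (N <= 64) is restricted.
    if split == 'binary_vertical' and height == 128 and width <= 64:
        return False
    return True

assert not split_allowed(128, 128, 'ternary_vertical')
assert not split_allowed(128, 64, 'binary_horizontal')
assert split_allowed(128, 64, 'binary_vertical')   # yields two 64x64 sub-CUs
```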
Alternatively, a CU may not be partitioned in binary tree form and/or ternary tree form when a depth of the CU corresponding to each node of the multi-type tree is equal to a maximum depth of the multi-type tree. By this determination, information indicating whether partitioning is performed per multi-type tree may not be signaled, but may be inferred as a second value.
Alternatively, the information indicating whether partitioning per the multi-type tree is performed may be signaled only when at least one of the vertical binary tree partition, the horizontal binary tree partition, the vertical ternary tree partition, and the horizontal ternary tree partition is possible for the CU corresponding to each node of the multi-type tree. Otherwise, the CU may not be partitioned in binary tree form and/or ternary tree form. By this determination, the information indicating whether partitioning is performed per multi-type tree may not be signaled, but may be inferred as a second value.
Alternatively, for a CU corresponding to each node of the multi-type tree, partition direction information may be signaled only when both vertical and horizontal binary tree partitions are feasible or only when both vertical and horizontal ternary tree partitions are feasible. Otherwise, partition direction information may not be signaled, but may be inferred as a value indicating the direction in which the CU can be partitioned.
Alternatively, for a CU corresponding to each node of the multi-type tree, the partition tree information may be signaled only when both vertical binary tree partitioning and vertical ternary tree partitioning are feasible or only when both horizontal binary tree partitioning and horizontal ternary tree partitioning are feasible. Otherwise, partition tree information may not be signaled, but may be inferred as a value indicating a tree applicable to the partitions of the CU.
Fig. 4 is a diagram illustrating a form of a prediction unit that a coding unit can include.
Among CUs partitioned from the LCU, CUs that are no longer partitioned may be divided into one or more Prediction Units (PUs). This division is also referred to as "partitioning".
A PU may be the basic unit for prediction. The PU may be encoded and decoded in any one of a skip mode, an inter mode, and an intra mode. The PU may be partitioned into various shapes according to various modes. For example, the target block described above with reference to fig. 1 and the target block described above with reference to fig. 2 may both be PUs.
A CU may not be partitioned into PUs. When a CU is not divided into PUs, the size of the CU and the size of the PU may be equal to each other.
In skip mode, there may be no partitions in a CU. In the skip mode, the 2N × 2N mode 410 may be supported without partitioning, wherein the size of the PU and the size of the CU are the same as each other in the 2N × 2N mode 410.
In inter mode, there may be 8 types of partition shapes in a CU. For example, in the inter mode, a 2N × 2N mode 410, a 2N × N mode 415, an N × 2N mode 420, an N × N mode 425, a 2N × nU mode 430, a 2N × nD mode 435, an nL × 2N mode 440, and an nR × 2N mode 445 may be supported.
In intra mode, a 2N × 2N mode 410 and an N × N mode 425 may be supported.
In the 2N × 2N mode 410, PUs of size 2N × 2N may be encoded. A PU of size 2N × 2N may represent a PU of the same size as the CU. For example, a PU of size 2N × 2N may have a size 64 × 64, 32 × 32, 16 × 16, or 8 × 8.
In the nxn mode 425, PUs of size nxn may be encoded.
For example, in intra prediction, when the size of a PU is 8 × 8, four partitioned PUs may be encoded. The size of each partitioned PU may be 4 x 4.
When a PU is encoded in intra mode, the PU may be encoded using any one of a plurality of intra prediction modes. For example, High Efficiency Video Coding (HEVC) techniques may provide 35 intra prediction modes, and a PU may be encoded in any one of the 35 intra prediction modes.
Which of the 2N × 2N mode 410 and the N × N mode 425 is to be used to encode the PU may be determined based on the rate-distortion cost.
The encoding apparatus 100 may perform an encoding operation on PUs having a size of 2N × 2N. Here, the encoding operation may be an operation of encoding the PU in each of a plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the optimal intra prediction mode for a PU of size 2N × 2N may be derived. The optimal intra prediction mode may be an intra prediction mode in which a minimum rate-distortion cost occurs when a PU having a size of 2N × 2N is encoded, among a plurality of intra prediction modes that can be used by the encoding apparatus 100.
Further, the encoding apparatus 100 may sequentially perform an encoding operation on the respective PUs obtained by performing the N × N partitioning. Here, the encoding operation may be an operation of encoding the PU in each of a plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the optimal intra prediction mode for a PU of size N × N may be derived. The optimal intra prediction mode may be an intra prediction mode in which a minimum rate-distortion cost occurs when a PU having a size of N × N is encoded, among a plurality of intra prediction modes that can be used by the encoding apparatus 100.
The encoding apparatus 100 may determine which one of a PU of size 2N × 2N and a PU of size N × N is to be encoded based on a comparison between a rate distortion cost of the PU of size 2N × 2N and a rate distortion cost of the PU of size N × N.
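For illustration only, such a comparison is commonly expressed as a Lagrangian rate-distortion cost J = D + λ·R; the distortion, rate, and λ values below are hypothetical and serve only to show the selection rule.

```python
def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
    # Lagrangian rate-distortion cost: J = D + lambda * R.
    return distortion + lam * rate_bits

cost_2Nx2N = rd_cost(distortion=1500.0, rate_bits=96.0, lam=10.0)   # 2460.0
cost_NxN = rd_cost(distortion=1100.0, rate_bits=160.0, lam=10.0)    # 2700.0
best_mode = '2Nx2N' if cost_2Nx2N <= cost_NxN else 'NxN'            # '2Nx2N'
```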
A single CU may be partitioned into one or more PUs, and a PU may be partitioned into multiple PUs.
For example, when a single PU is partitioned into four PUs, the horizontal and vertical dimensions of each of the four PUs produced by the partitioning may be half the horizontal and vertical dimensions of the PU prior to being partitioned. When a PU of size 32 x 32 is partitioned into four PUs, the size of each of the four partitioned PUs may be 16 x 16. When a single PU is partitioned into four PUs, the PUs may be considered to have been partitioned in a quad-tree structure.
For example, when a single PU is partitioned into two PUs, the horizontal or vertical size of each of the two PUs produced by the partitioning may be half the horizontal or vertical size of the PU before being partitioned. When a PU of size 32 x 32 is vertically partitioned into two PUs, the size of each of the two partitioned PUs may be 16 x 32. When a PU of size 32 x 32 is horizontally partitioned into two PUs, the size of each of the two partitioned PUs may be 32 x 16. When a single PU is partitioned into two PUs, the PUs may be considered to have been partitioned in a binary tree structure.
Fig. 5 is a diagram illustrating a form of a transform unit that can be included in a coding unit.
A Transform Unit (TU) may be a basic unit used in a CU for processes such as transform, quantization, inverse transform, inverse quantization, entropy coding, and entropy decoding.
The TU may have a square shape or a rectangular shape. The shape of a TU may be determined based on the size and/or shape of the CU.
Among CUs partitioned from the LCU, CUs that are no longer partitioned into CUs may be partitioned into one or more TUs. Here, the partition structure of the TU may be a quad tree structure. For example, as shown in fig. 5, a single CU 510 may be partitioned one or more times according to a quadtree structure. With such partitioning, a single CU 510 may be composed of TUs having various sizes.
A CU may be considered to be recursively divided when a single CU is divided two or more times. By the division, a single CU may be composed of Transform Units (TUs) having various sizes.
Alternatively, a single CU may be divided into one or more TUs based on the number of vertical and/or horizontal lines dividing the CU.
A CU may be divided into symmetric TUs or asymmetric TUs. For the division into the asymmetric TUs, information regarding the size and/or shape of each TU may be signaled from the encoding apparatus 100 to the decoding apparatus 200. Alternatively, the size and/or shape of each TU may be derived from information on the size and/or shape of the CU.
A CU may not be divided into TUs. When a CU is not divided into TUs, the size of the CU and the size of the TU may be equal to each other.
A single CU may be partitioned into one or more TUs, and a TU may be partitioned into multiple TUs.
For example, when a single TU is partitioned into four TUs, the horizontal and vertical sizes of each of the four TUs generated by the partitioning may be half of those of the TU before being partitioned. When a TU having a size of 32 × 32 is partitioned into four TUs, the size of each of the four partitioned TUs may be 16 × 16. When a single TU is partitioned into four TUs, the TUs may be considered to have been partitioned in a quadtree structure.
For example, when a single TU is partitioned into two TUs, the horizontal size or the vertical size of each of the two TUs generated by the partitioning may be half of the horizontal size or the vertical size of the TU before being partitioned. When a TU of a size of 32 × 32 is vertically partitioned into two TUs, each of the two partitioned TUs may be of a size of 16 × 32. When a TU having a size of 32 × 32 is horizontally partitioned into two TUs, the size of each of the two partitioned TUs may be 32 × 16. When a single TU is partitioned into two TUs, the TUs may be considered to have been partitioned in a binary tree structure.
A CU may be partitioned in a different manner than shown in fig. 5.
For example, a single CU may be divided into three CUs. The horizontal or vertical sizes of the three CUs generated by the division may be 1/4, 1/2, and 1/4 of the horizontal or vertical size of the original CU before being divided, respectively.
For example, when a CU having a size of 32 × 32 is vertically divided into three CUs, the sizes of the three CUs generated by the division may be 8 × 32, 16 × 32, and 8 × 32, respectively. In this way, when a single CU is divided into three CUs, the CU can be considered to be divided in the form of a ternary tree.
One of exemplary division forms (i.e., quadtree division, binary tree division, and ternary tree division) may be applied to the division of the CU, and a variety of division schemes may be combined and used together for the division of the CU. Here, a case where a plurality of division schemes are combined and used together may be referred to as "composite tree-like division".
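For illustration only, the following sketch (a hypothetical helper, not part of the described embodiments) computes the child-block sizes produced by the quad-tree, binary-tree, and ternary-tree division forms described above.

def split_sizes(width, height, split_type, direction=None):
    """Return the (width, height) of each child block for a given division form.

    split_type: 'quad', 'binary', or 'ternary'
    direction:  'vertical' or 'horizontal' (for binary and ternary division)
    """
    if split_type == 'quad':
        # Quad-tree division: four children, each half the width and half the height.
        return [(width // 2, height // 2)] * 4
    if split_type == 'binary':
        # Binary-tree division: two children, halved along the division direction.
        if direction == 'vertical':
            return [(width // 2, height)] * 2
        return [(width, height // 2)] * 2
    if split_type == 'ternary':
        # Ternary-tree division: children of 1/4, 1/2, and 1/4 of the original size.
        if direction == 'vertical':
            return [(width // 4, height), (width // 2, height), (width // 4, height)]
        return [(width, height // 4), (width, height // 2), (width, height // 4)]
    raise ValueError('unknown split type')

# Examples matching the sizes given above:
assert split_sizes(32, 32, 'quad') == [(16, 16)] * 4
assert split_sizes(32, 32, 'binary', 'vertical') == [(16, 32)] * 2
assert split_sizes(32, 32, 'ternary', 'vertical') == [(8, 32), (16, 32), (8, 32)]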
Fig. 6 illustrates partitioning of a block according to an example.
In the video encoding and/or decoding process, as shown in fig. 6, the target block may be divided. For example, the target block may be a CU.
For the division of the target block, an indicator indicating division information may be signaled from the encoding apparatus 100 to the decoding apparatus 200. The partition information may be information indicating how the target block is partitioned.
The partition information may be one or more of a partition flag (hereinafter, referred to as "split_flag"), a quad-binary flag (hereinafter, referred to as "QB_flag"), a quad-tree flag (hereinafter, referred to as "quadtree_flag"), a binary tree flag (hereinafter, referred to as "binarytree_flag"), and a binary type flag (hereinafter, referred to as "Btype_flag").
The "split _ flag" may be a flag indicating whether the block is divided. For example, a split _ flag value of 1 may indicate that the corresponding block is divided. A split _ flag value of 0 may indicate that the corresponding block is not divided.
"QB _ flag" may be a flag indicating which of the quad tree form and the binary tree form corresponds to the shape in which the block is divided. For example, a QB _ flag value of 0 may indicate that the block is divided in a quad tree form. A QB _ flag value of 1 may indicate that the block is divided in a binary tree form. Alternatively, the QB _ flag value of 0 may indicate that the block is divided in a binary tree form. A QB _ flag value of 1 may indicate that the block is divided in a quad tree form.
"quadtree _ flag" may be a flag indicating whether a block is divided in a quad-tree form. For example, a value of quadtree _ flag of 1 may indicate that the block is divided in a quad-tree form. A quadtree _ flag value of 0 may indicate that the block is not divided in a quadtree form.
"binarytree _ flag" may be a flag indicating whether a block is divided in a binary tree form. For example, a binarytree _ flag value of 1 may indicate that the block is divided in a binary tree form. A binarytree _ flag value of 0 may indicate that the block is not divided in a binary tree form.
"Btype _ flag" may be a flag indicating which one of the vertical division and the horizontal division corresponds to the division direction when the block is divided in the binary tree form. For example, a Btype _ flag value of 0 may indicate that the block is divided in the horizontal direction. A Btype _ flag value of 1 may indicate that the block is divided in the vertical direction. Alternatively, a Btype _ flag value of 0 may indicate that the block is divided in the vertical direction. A Btype _ flag value of 1 may indicate that the block is divided in the horizontal direction.
For example, the partition information of the block in fig. 6 may be derived by signaling at least one of quadtree_flag, binarytree_flag, and Btype_flag, as shown in table 1 below.
TABLE 1
For example, the partition information of the block in fig. 6 may be derived by signaling at least one of split_flag, QB_flag, and Btype_flag, as shown in table 2 below.
TABLE 2
The partitioning method may be limited to only a quad tree or a binary tree depending on the size and/or shape of the block. When this restriction is applied, the split_flag may be a flag indicating whether the block is divided in a quad tree form or a flag indicating whether the block is divided in a binary tree form. The size and shape of the block may be deduced from the depth information of the block, and the depth information may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
When the size of the block falls within a certain range, it is possible to perform division only in the form of a quad tree. For example, the specific range may be defined by at least one of a maximum block size and a minimum block size that can be divided only in a quad-tree form.
Information indicating the maximum block size and the minimum block size that can be divided only in the form of a quadtree may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. Further, this information may be signaled for at least one of units such as a video, a sequence, a picture, a parameter set, a tile group, and a slice.
Alternatively, the maximum block size and/or the minimum block size may be a fixed size predefined by the encoding apparatus 100 and the decoding apparatus 200. For example, when the size of the block is larger than 64 × 64 and smaller than 256 × 256, only the division in the form of a quad tree is possible. In this case, split_flag may be a flag indicating whether to perform partitioning in the form of a quad tree.
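The following sketch illustrates such a size-based restriction under the example thresholds above; comparing the larger of the width and the height against the thresholds is an assumption made only for illustration.

def quadtree_only(width, height, min_size=64, max_size=256):
    """Return True when only quad-tree division is allowed for this block size."""
    return min_size < max(width, height) < max_size

print(quadtree_only(128, 128))  # True  -> split_flag indicates quad-tree division
print(quadtree_only(32, 32))    # False -> other division forms may be considered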
When the size of the block is larger than the maximum size of the transform block, only partitioning in the form of a quadtree is possible. Here, the sub-block generated by the partition may be at least one of a CU and a TU.
In this case, the split_flag may be a flag indicating whether the CU is partitioned in a quadtree form.
When the size of the block falls within a specific range, division in only a binary tree form or a ternary tree form is possible. For example, the specific range may be defined by at least one of a maximum block size and a minimum block size that can be divided only in a binary tree form or a ternary tree form.
Information indicating the maximum block size and/or the minimum block size that can be divided only in a binary tree form or a ternary tree form may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. Further, this information may be signaled for at least one of units such as a sequence, a picture, and a slice.
Alternatively, the maximum block size and/or the minimum block size may be a fixed size predefined by the encoding apparatus 100 and the decoding apparatus 200. For example, when the size of the block is larger than 8 × 8 and smaller than 16 × 16, only division in a binary tree form is possible. In this case, split_flag may be a flag indicating whether to perform partitioning in a binary tree form or a ternary tree form.
The above description of partitioning in a quadtree format can be equally applied to a binary tree format and/or a ternary tree format.
The partitioning of a block may be limited by previous partitions. For example, when a block is partitioned in a specific binary tree form and a plurality of sub-blocks are generated from the partition, each sub-block may be additionally partitioned only in a specific tree form. Here, the specific tree form may be at least one of a binary tree form, a ternary tree form, and a quaternary tree form.
The indicator may not be signaled when the horizontal size or the vertical size of the partition block is a size that cannot be further divided.
The arrows extending radially from the center of the graph in fig. 7 indicate the prediction directions of the directional intra prediction modes. Further, numbers appearing near the arrows indicate examples of mode values assigned to the intra prediction mode or the prediction direction of the intra prediction mode.
In fig. 7, the number "0" may represent a planar mode as a non-directional intra prediction mode. The number "1" may represent a DC mode as a non-directional intra prediction mode.
Intra-coding and/or decoding may be performed using reference samples of blocks neighboring the target block. The neighboring blocks may be reconstructed neighboring blocks. The reference samples may represent neighboring samples.
For example, intra-coding and/or decoding may be performed using values of reference samples included in the reconstructed neighboring blocks or encoding parameters of the reconstructed neighboring blocks.
When intra prediction is performed, the encoding apparatus 100 and/or the decoding apparatus 200 may generate a prediction block for the target block by performing the intra prediction based on information on samples in the target image. Also, when the intra prediction is performed, the encoding apparatus 100 and/or the decoding apparatus 200 may perform directional prediction and/or non-directional prediction based on at least one reconstructed reference sample.
The prediction block may be a block generated as a result of performing intra prediction. The prediction block may correspond to at least one of a CU, PU, and TU.
The units of the prediction block may have a size corresponding to at least one of the CU, the PU, and the TU. The prediction block may have a square shape with a size of 2N × 2N or N × N. The size N × N may include sizes 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64, and so on.
Alternatively, the prediction block may be a square block having a size of 2 × 2, 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64, or the like, or a rectangular block having a size of 2 × 8, 4 × 8, 2 × 16, 4 × 16, 8 × 16, or the like.
The intra prediction may be performed in consideration of an intra prediction mode for the target block. The number of intra prediction modes that the target block may have may be a predefined fixed value, or may be a value differently determined according to the properties of the prediction block. For example, the properties of the prediction block may include the size of the prediction block, the type of the prediction block, and the like. Furthermore, the properties of the prediction block may indicate the coding parameters used for the prediction block.
For example, the number of intra prediction modes may be fixed to N regardless of the size of the prediction block. Alternatively, the number of intra prediction modes may be, for example, 3, 5, 9, 17, 34, 35, 36, 65, 67, or 95.
The intra prediction mode may be a non-directional mode or a directional mode.
For example, the intra prediction modes may include two non-directional modes and 65 directional modes corresponding to numbers 0 to 66 shown in fig. 7.
For example, in the case of using a specific intra prediction method, the intra prediction modes may include two non-directional modes and 93 directional modes corresponding to the numbers -14 to 80 shown in fig. 7.
The two non-directional modes may include a DC mode and a planar mode.
The directional mode may be a prediction mode having a specific direction or a specific angle. The directional mode may also be referred to as an "angular mode".
The intra prediction mode may be represented by at least one of a mode number, a mode value, a mode angle, and a mode direction. In other words, the terms "a (mode) number of an intra prediction mode", "a (mode) value of an intra prediction mode", "a (mode) angle of an intra prediction mode", and "a (mode) direction of an intra prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
The number of intra prediction modes may be M. The value of M may be 1 or greater. In other words, the number of intra prediction modes may be M, where M includes the number of non-directional modes and the number of directional modes.
The number of intra prediction modes may be fixed to M regardless of the size and/or color components of the block. For example, the number of intra prediction modes may be fixed to any one of 35 and 67 regardless of the size of the block.
Alternatively, the number of intra prediction modes may be different according to the shape, size, and/or type of color component of the block.
For example, in fig. 7, the directional prediction mode as shown by the dotted line may be applied only to prediction for non-square blocks.
For example, the larger the size of the block, the larger the number of intra prediction modes. Alternatively, the larger the size of the block, the smaller the number of intra prediction modes. When the size of the block is 4 × 4 or 8 × 8, the number of intra prediction modes may be 67. When the size of the block is 16 × 16, the number of intra prediction modes may be 35. When the size of the block is 32 × 32, the number of intra prediction modes may be 19. When the size of the block is 64 × 64, the number of intra prediction modes may be 7.
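The example mapping above from block size to the number of intra prediction modes can be sketched as follows; the values are the illustrative ones from the text, not a normative definition.

INTRA_MODE_COUNT_BY_SIZE = {4: 67, 8: 67, 16: 35, 32: 19, 64: 7}

def num_intra_modes(block_size):
    """Return the example number of intra prediction modes for a square block of the given size."""
    return INTRA_MODE_COUNT_BY_SIZE.get(block_size)

print(num_intra_modes(16))  # 35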
For example, the number of intra prediction modes may be different according to whether a color component is a luminance signal or a chrominance signal. Alternatively, the number of intra prediction modes corresponding to the luminance component block may be greater than the number of intra prediction modes corresponding to the chrominance component block.
For example, in the vertical mode with a mode value of 50, prediction may be performed in the vertical direction based on the pixel values of the reference sampling points. For example, in the horizontal mode with the mode value of 18, prediction may be performed in the horizontal direction based on the pixel values of the reference sampling points.
Even in a directional mode other than the above-described modes, the encoding apparatus 100 and the decoding apparatus 200 may perform intra prediction on a target unit using reference samples according to an angle corresponding to the directional mode.
The intra prediction mode located on the right side with respect to the vertical mode may be referred to as a "vertical-right mode". The intra prediction mode located below the horizontal mode may be referred to as a "horizontal-below mode". For example, in fig. 7, the intra prediction mode having one of the mode values 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, and 66 may be a vertical-right mode. The intra prediction mode having a mode value of one of 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17 may be a horizontal-lower mode.
The non-directional mode may include a DC mode and a planar mode. For example, the value of the DC mode may be 1. The value of the planar mode may be 0.
The directional modes may include angular modes. Among the plurality of intra prediction modes, the remaining modes other than the DC mode and the planar mode may be directional modes.
When the intra prediction mode is the DC mode, the prediction block may be generated based on an average value of pixel values of a plurality of reference pixels. For example, the values of the pixels of the prediction block may be determined based on an average of pixel values of a plurality of reference pixels.
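A minimal sketch of DC-mode prediction as described above, assuming the reference samples are given as arrays (names and shapes are illustrative):

import numpy as np

def dc_prediction(upper_refs, left_refs, width, height):
    """Fill a prediction block with the average of the upper and left reference samples."""
    dc_value = int(round(np.concatenate([upper_refs, left_refs]).mean()))
    return np.full((height, width), dc_value, dtype=np.int32)

upper = np.array([100, 102, 98, 101])
left = np.array([99, 97, 103, 100])
print(dc_prediction(upper, left, 4, 4))  # 4 x 4 block filled with 100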
The number of intra prediction modes and the mode values of the respective intra prediction modes described above are merely exemplary. The number of intra prediction modes described above and the mode values of the respective intra prediction modes may be defined differently according to embodiments, implementations, and/or requirements.
In order to perform intra prediction on the target block, a step of checking whether samples included in the reconstructed neighboring blocks can be used as reference samples of the target block may be performed. When there are samples that cannot be used as reference samples of the target block among samples in the neighboring block, a value generated via interpolation and/or duplication using at least one sample value among samples included in the reconstructed neighboring block may replace sample values of samples that cannot be used as reference samples. When a value generated via replication and/or interpolation replaces a sample value of an existing sample, the sample may be used as a reference sample for the target block.
When intra prediction is used, a filter may be applied to at least one of the reference sampling point and the prediction sampling point based on at least one of an intra prediction mode and a size of the target block.
The type of the filter to be applied to at least one of the reference samples and the prediction samples may be different according to at least one of an intra prediction mode of the target block, a size of the target block, and a shape of the target block. The type of filter may be classified according to one or more of the length of the filter tap, the value of the filter coefficient, and the filter strength. The length of the filter taps may represent the number of filter taps. Further, the number of filter taps may represent the length of the filter.
When the intra prediction mode is the planar mode and the prediction block of the target block is generated, the sample value of each prediction target sample may be generated using a weighted sum of an upper reference sample of the target block, a left reference sample of the target block, an upper-right reference sample of the target block, and a lower-left reference sample of the target block, depending on the position of the prediction target sample in the prediction block.
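A sketch of such a position-dependent weighted sum is shown below. An HEVC-like planar formulation is assumed here only for illustration; the text above states only that the upper, left, upper-right, and lower-left reference samples are combined with weights that depend on the sample position.

import numpy as np

def planar_prediction(upper, left, upper_right, lower_left):
    """upper, left: length-N reference arrays; returns an N x N prediction block."""
    n = len(upper)
    shift = int(np.log2(n)) + 1
    pred = np.empty((n, n), dtype=np.int64)
    for y in range(n):
        for x in range(n):
            # Horizontal component: weighted mix of the left reference and the upper-right reference.
            horizontal = (n - 1 - x) * left[y] + (x + 1) * upper_right
            # Vertical component: weighted mix of the upper reference and the lower-left reference.
            vertical = (n - 1 - y) * upper[x] + (y + 1) * lower_left
            pred[y, x] = (horizontal + vertical + n) >> shift
    return pred

upper = np.array([100, 110, 120, 130])
left = np.array([100, 90, 80, 70])
print(planar_prediction(upper, left, upper_right=140, lower_left=60))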
When the intra prediction mode is the DC mode, an average value of reference samples above the target block and reference samples to the left of the target block may be used in generating the prediction block of the target block. Further, filtering using the value of the reference sampling point may be performed on a specific row or a specific column in the target block. The particular row may be one or more upper rows adjacent to the reference sample point. The particular column may be one or more left-hand columns adjacent to the reference sample point.
When the intra prediction mode is a directional mode, the prediction block may be generated using the upper reference sample, the left reference sample, the upper right reference sample, and/or the lower left reference sample of the target block.
To generate the predicted samples described above, real-number based interpolation may be performed.
The intra prediction mode of the target block may be predicted from intra prediction modes of neighboring blocks adjacent to the target block, and information for prediction may be entropy-encoded/entropy-decoded.
For example, when the intra prediction modes of the target block and the neighboring block are identical to each other, the intra prediction modes of the target block and the neighboring block may be signaled to be identical using a predefined flag.
For example, an indicator indicating the same intra prediction mode as that of the target block among intra prediction modes of a plurality of neighboring blocks may be signaled.
When the intra prediction modes of the target block and the neighboring block are different from each other, information regarding the intra prediction mode of the target block may be encoded and/or decoded using entropy encoding and/or entropy decoding.
Fig. 8 is a diagram illustrating reference samples used in an intra prediction process.
The reconstructed reference samples for intra prediction of the target block may include a lower-left reference sample, a left reference sample, an upper-left reference sample, an upper reference sample, and an upper-right reference sample.
For example, the left reference sample point may represent a reconstructed reference pixel adjacent to the left side of the target block. The upper reference sample point may represent a reconstructed reference pixel adjacent to the top of the target block. The upper left reference sample point may represent a reconstructed reference pixel located at the upper left corner of the target block. The lower-left reference sampling point may represent a reference sampling point located below a left side sampling point line composed of the left reference sampling points among sampling points located on the same line as the left side sampling point line. The upper right reference sampling point may represent a reference sampling point located on the right side of an upper sampling point line composed of upper reference sampling points among sampling points located on the same line as the upper sampling point line.
When the size of the target block is N × N, the numbers of the lower-left reference samples, the left reference samples, the upper reference samples, and the upper-right reference samples may all be N.
By performing intra prediction on the target block, a prediction block may be generated. The process of generating the prediction block may include determining values of pixels in the prediction block. The sizes of the target block and the prediction block may be the same.
The reference sampling point used for intra prediction of the target block may be changed according to the intra prediction mode of the target block. The direction of the intra prediction mode may represent a dependency between the reference samples and the pixels of the prediction block. For example, a value specifying a reference sample may be used as a value of one or more specified pixels in the prediction block. In this case, the specified reference samples and the one or more specified pixels in the prediction block may be samples and pixels located on a straight line along a direction of an intra prediction mode. In other words, the value of the specified reference sample point may be copied as the value of the pixel located in the direction opposite to the direction of the intra prediction mode. Alternatively, the value of a pixel in the prediction block may be a value of a reference sample located in the direction of the intra prediction mode with respect to the position of the pixel.
In an example, when the intra prediction mode of the target block is a vertical mode, the upper reference sampling point may be used for intra prediction. When the intra prediction mode is a vertical mode, the value of a pixel in the prediction block may be a value of a reference sample point vertically above the position of the pixel. Therefore, the upper reference samples adjacent to the top of the target block may be used for intra prediction. In addition, the values of pixels in a row of the prediction block may be the same as those of the pixels of the upper reference sample point.
In an example, when the intra prediction mode of the target block is a horizontal mode, the left reference sample may be used for intra prediction. When the intra prediction mode is a horizontal mode, the value of a pixel in the prediction block may be a value of a reference sample horizontally located to the left of the position of the pixel. Therefore, the left reference samples adjacent to the left side of the target block may be used for intra prediction. Furthermore, the values of pixels in a column of the prediction block may be the same as the values of pixels of the left reference sample point.
In an example, when the mode value of the intra prediction mode of the current block is 34, at least some of the left reference samples, the upper-left corner reference sample, and at least some of the upper reference samples may be used for intra prediction. In this case, the value of a pixel in the prediction block may be the value of a reference sample point located diagonally at the upper left of the pixel.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 52 to 66, at least a portion of the upper-right reference samples may be used for intra prediction.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 2 to 17, at least a portion of the lower left reference samples may be used for intra prediction.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 19 to 49, the upper left reference sample may be used for intra prediction.
The number of reference samples used to determine the pixel value of one pixel in the prediction block may be 1 or 2 or more.
As described above, the pixel values of the pixels in the prediction block may be determined according to the positions of the pixels and the positions of the reference samples indicated by the direction of the intra prediction mode. When the position of the pixel and the position of the reference sample point indicated by the direction of the intra prediction mode are integer positions, the value of one reference sample point indicated by the integer position may be used to determine the pixel value of the pixel in the prediction block.
When the position of the pixel and the position of the reference sample point indicated by the direction of the intra prediction mode are not integer positions, an interpolated reference sample point may be generated based on the two reference sample points closest to the position of the reference sample point. The value of the interpolated reference sample point may be used to determine the pixel value of the pixel in the prediction block. In other words, when the position of the pixel in the prediction block and the position of the reference sample point indicated by the direction of the intra prediction mode indicate a position between two reference sample points, an interpolated value based on the values of the two sample points may be generated and used.
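A minimal sketch of this interpolation, assuming a simple two-tap linear filter (the actual interpolation filter may differ):

import math

def interpolate_reference(ref_samples, position):
    """Linearly interpolate a reference value at a (possibly fractional) position."""
    lower = math.floor(position)
    frac = position - lower
    if frac == 0.0:
        return float(ref_samples[lower])
    return (1.0 - frac) * ref_samples[lower] + frac * ref_samples[lower + 1]

refs = [100, 104, 108, 112]
print(interpolate_reference(refs, 1.25))  # 105.0, between refs[1] and refs[2]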
The prediction block generated through prediction may be different from the original target block. In other words, there may be a prediction error, which is a difference between the target block and the prediction block, and there may also be a prediction error between pixels of the target block and pixels of the prediction block.
Hereinafter, the terms "difference", "error" and "residual" may be used to have the same meaning and may be used interchangeably with each other.
For example, in the case of directional intra prediction, the longer the distance between the pixels of the predicted block and the reference sample, the larger the prediction error that may occur. Such a prediction error may cause discontinuity between the generated prediction block and the neighboring block.
To reduce the prediction error, a filtering operation for the prediction block may be used. The filtering operation may be configured to adaptively apply a filter to a region in the prediction block that is considered to have a large prediction error. For example, a region considered to have a large prediction error may be a boundary of a prediction block. In addition, regions that are considered to have a large prediction error in a prediction block may be different according to an intra prediction mode, and characteristics of a filter may also be different according to the intra prediction mode.
As shown in fig. 8, for intra prediction of a target block, at least one of reference line 0 to reference line 3 may be used. Each reference line may indicate a reference sample line. A smaller reference line number may indicate a reference sample line closer to the target block.
The samples in segment a and segment F may be obtained by padding using the samples in segment B and segment E that are closest to the target block, rather than from reconstructed neighboring blocks.
Index information indicating a reference sample line to be used for intra prediction of a target block may be signaled. The index information may indicate a reference sample line of the plurality of reference sample lines to be used for intra prediction of the target block. For example, the index information may have a value corresponding to any one of 0 to 3.
When the upper boundary of the target block is the boundary of the CTU, only the reference sample line 0 may be available. Therefore, in this case, the index information may not be signaled. When an additional reference sample line other than the reference sample line 0 is used, filtering of a prediction block, which will be described later, may not be performed.
In the case of inter-color intra prediction, a prediction block of a target block of a second color component may be generated based on a corresponding reconstructed block of a first color component.
For example, the first color component may be a luminance component and the second color component may be a chrominance component.
To perform inter-color intra prediction, parameters of a linear model between the first color component and the second color component may be derived based on the template.
The template may include reference samples (upper reference samples) above the target block and/or reference samples (left reference samples) to the left of the target block, and may include upper reference samples and/or left reference samples of a reconstructed block of the first color component corresponding to the reference samples.
For example, the following values may be used to derive the parameters of the linear model: 1) a value of a sample point of a first color component having a maximum value among sample points in the template, 2) a value of a sample point of a second color component corresponding to a sample point of the first color component, 3) a value of a sample point of a first color component having a minimum value among sample points in the template, and 4) a value of a sample point of a second color component corresponding to a sample point of the first color component.
When the parameters of the linear model are derived, the prediction block of the target block may be generated by applying the corresponding reconstructed block to the linear model.
According to the image format, sub-sampling may be performed on the samples adjacent to the reconstructed block of the first color component and on the corresponding reconstructed block of the first color component. For example, when one sample of the second color component corresponds to four samples of the first color component, one corresponding sample may be calculated by performing sub-sampling on the four samples of the first color component. When sub-sampling is performed, the derivation of the parameters of the linear model and the inter-color intra prediction may be performed based on the sub-sampled corresponding samples.
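A minimal sketch of deriving the linear-model parameters from the template as described above: the first-color-component template samples with the maximum and minimum values, together with their corresponding second-color-component samples, define a line pred_c2 = a * rec_c1 + b. Function and variable names are illustrative assumptions.

def derive_linear_model(template_c1, template_c2):
    """template_c1, template_c2: paired template samples of the first and second color components."""
    i_max = max(range(len(template_c1)), key=lambda i: template_c1[i])
    i_min = min(range(len(template_c1)), key=lambda i: template_c1[i])
    denom = template_c1[i_max] - template_c1[i_min]
    a = 0.0 if denom == 0 else (template_c2[i_max] - template_c2[i_min]) / denom
    b = template_c2[i_min] - a * template_c1[i_min]
    return a, b

def predict_second_component(reconstructed_c1_block, a, b):
    """Apply the linear model to the corresponding reconstructed block of the first color component."""
    return [[a * sample + b for sample in row] for row in reconstructed_c1_block]

a, b = derive_linear_model([80, 120, 100, 60], [40, 60, 50, 30])
print(a, b)                                              # 0.5 0.0
print(predict_second_component([[100, 90], [110, 80]], a, b))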
Information regarding whether to perform inter-color intra prediction and/or the range of templates may be signaled in the intra prediction mode.
The target block may be partitioned into two or four sub-blocks in the horizontal direction and/or the vertical direction.
The sub-blocks generated by the partitioning may be sequentially reconstructed. That is, when intra prediction is performed on each sub-block, a sub-prediction block for the corresponding sub-block may be generated. Further, when inverse quantization and/or inverse transformation is performed on each sub-block, a sub-residual block for the corresponding sub-block may be generated. A reconstructed sub-block may be generated by adding the sub-prediction block to the sub-residual block. The reconstructed sub-block may be used as reference samples for intra prediction of the sub-block having the next priority.
A sub-block may be a block that includes a certain number (e.g., 16) or more samples. For example, when the target block is an 8 × 4 block or a 4 × 8 block, the target block may be partitioned into two sub-blocks. Further, when the target block is a 4 × 4 block, the target block cannot be partitioned into sub-blocks. When the target block has another size, the target block may be partitioned into four sub-blocks.
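The sub-block counting rule stated above can be sketched as follows (an illustrative helper, not a normative definition):

def num_intra_sub_blocks(width, height):
    """Return the number of sub-blocks into which the target block is partitioned."""
    if (width, height) == (4, 4):
        return 1  # a 4 x 4 block is not partitioned
    if (width, height) in ((8, 4), (4, 8)):
        return 2
    return 4

print(num_intra_sub_blocks(4, 4))   # 1
print(num_intra_sub_blocks(8, 4))   # 2
print(num_intra_sub_blocks(16, 8))  # 4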
Information on whether to perform intra prediction based on the sub-blocks and/or information on a partition direction (horizontal direction or vertical direction) may be signaled.
Such sub-block based intra prediction may be limited such that it is only performed when the reference sample line 0 is used. When the sub-block-based intra prediction is performed, filtering of a prediction block, which will be described below, may not be performed.
The final prediction block may be generated by performing filtering on the prediction block generated through intra prediction.
The filtering may be performed by applying specific weights to the filtering target sample, which is the target to be filtered, and to the left reference sample, the upper reference sample, and/or the upper-left reference sample.
The weight for filtering and/or the reference samples (e.g., the range of the reference samples, the location of the reference samples, etc.) may be determined based on at least one of the block size, the intra prediction mode, and the location of the filtering target samples in the prediction block.
For example, the filtering may be performed only in a specific intra prediction mode (e.g., DC mode, planar mode, vertical mode, horizontal mode, diagonal mode, and/or adjacent diagonal mode).
The adjacent diagonal mode may be a mode having a number obtained by adding k to the number of the diagonal mode, or may be a mode having a number obtained by subtracting k from the number of the diagonal mode. In other words, the number of the adjacent diagonal mode may be the sum of the number of the diagonal mode and k, or may be the difference between the number of the diagonal mode and k. For example, k may be a positive integer of 8 or less.
The intra prediction mode of the target block may be derived using intra prediction modes of neighboring blocks existing near the target block, and such derived intra prediction modes may be entropy-encoded and/or entropy-decoded.
For example, when the intra prediction mode of the target block is the same as the intra prediction modes of the neighboring blocks, information indicating that the intra prediction mode of the target block is the same as the intra prediction modes of the neighboring blocks may be signaled using the specific flag information.
Also, for example, indicator information of neighboring blocks of which intra prediction modes are the same as the intra prediction mode of the target block among the intra prediction modes of the plurality of neighboring blocks may be signaled.
For example, when the intra prediction mode of the target block is different from the intra prediction modes of the neighboring blocks, entropy encoding and/or entropy decoding may be performed on information regarding the intra prediction mode of the target block by performing entropy encoding and/or entropy decoding based on the intra prediction modes of the neighboring blocks.
Fig. 9 is a diagram for explaining an embodiment of an inter prediction process.
The rectangles shown in fig. 9 may represent images (or pictures). Further, in fig. 9, the arrows may indicate prediction directions. An arrow pointing from a first picture to a second picture indicates that the second picture refers to the first picture. That is, each image may be encoded and/or decoded according to the prediction direction.
Images can be classified into an intra picture (I picture), a mono-predictive picture or a predictive coded picture (P picture), and a bi-predictive picture or a bi-predictive coded picture (B picture) according to coding types. Each picture may be encoded and/or decoded according to its coding type.
When the target image that is the target to be encoded is an I picture, the target image can be encoded using data contained in the image itself without performing inter prediction with reference to other images. For example, an I picture may be encoded via intra prediction only.
When the target image is a P picture, the target image may be encoded via inter prediction using a reference picture existing in one direction. Here, the one direction may be a forward direction or a backward direction.
When the target image is a B picture, the image may be encoded via inter prediction using reference pictures existing in both directions, or may be encoded via inter prediction using reference pictures existing in one of a forward direction and a backward direction. Here, the two directions may be a forward direction and a backward direction.
P-pictures and B-pictures encoded and/or decoded using reference pictures may be regarded as images using inter prediction.
Hereinafter, inter prediction in the inter mode according to the embodiment will be described in detail.
Inter prediction or motion compensation may be performed using the reference picture and the motion information.
In the inter mode, the encoding apparatus 100 may perform inter prediction and/or motion compensation on the target block. The decoding apparatus 200 may perform inter prediction and/or motion compensation corresponding to the inter prediction and/or motion compensation performed by the encoding apparatus 100 on the target block.
The motion information of the target block may be separately derived by the encoding apparatus 100 and the decoding apparatus 200 during inter prediction. The motion information may be derived using motion information of reconstructed neighboring blocks, motion information of a col block, and/or motion information of blocks adjacent to the col block.
For example, the encoding apparatus 100 or the decoding apparatus 200 may perform prediction and/or motion compensation by using motion information of a spatial candidate and/or a temporal candidate as motion information of a target block. The target blocks may represent PUs and/or PU partitions.
The spatial candidate may be a reconstructed block spatially adjacent to the target block.
The temporal candidate may be a reconstructed block corresponding to the target block in a previously reconstructed co-located picture (col picture).
In the inter prediction, the encoding apparatus 100 and the decoding apparatus 200 may improve encoding efficiency and decoding efficiency by using motion information of spatial candidates and/or temporal candidates. The motion information of the spatial candidates may be referred to as "spatial motion information". The motion information of the temporal candidates may be referred to as "temporal motion information".
Hereinafter, the motion information of the spatial candidate may be the motion information of the PU including the spatial candidate. The motion information of the temporal candidate may be the motion information of the PU including the temporal candidate. The motion information of a candidate block may be the motion information of the PU that includes the candidate block.
Inter prediction may be performed using a reference picture.
The reference picture may be at least one of a picture preceding the target picture and a picture following the target picture. The reference picture may be an image used for prediction of the target block.
In inter prediction, a region in a reference picture may be specified using a reference picture index (or refIdx) indicating the reference picture, a motion vector to be described later, and the like. Here, the area specified in the reference picture may indicate a reference block.
Inter prediction may select a reference picture, and may also select a reference block corresponding to the target block from the reference picture. Furthermore, inter prediction may use the selected reference block to generate a prediction block for the target block.
The motion information may be derived by each of the encoding apparatus 100 and the decoding apparatus 200 during inter prediction.
The spatial candidates may be 1) blocks that exist in the target picture, 2) that have been previously reconstructed via encoding and/or decoding, and 3) that are adjacent to the target block or located at corners of the target block. Here, a "block located at a corner of the target block" may be a block vertically adjacent to a neighboring block that is horizontally adjacent to the target block, or a block horizontally adjacent to a neighboring block that is vertically adjacent to the target block. Further, "a block located at a corner of the target block" may have the same meaning as "a block adjacent to a corner of the target block". The meaning of "a block located at a corner of the target block" may be included in the meaning of "a block adjacent to the target block".
For example, the spatial candidate may be a reconstructed block located to the left of the target block, a reconstructed block located above the target block, a reconstructed block located in the lower left corner of the target block, a reconstructed block located in the upper right corner of the target block, or a reconstructed block located in the upper left corner of the target block.
Each of the encoding apparatus 100 and the decoding apparatus 200 can identify a block existing in a position spatially corresponding to a target block in a col picture. The position of the target block in the target picture and the position of the identified block in the col picture may correspond to each other.
Each of the encoding apparatus 100 and the decoding apparatus 200 may determine, as a temporal candidate, a col block existing at a predefined relative position with respect to the identified block. The predefined relative position may be a position that exists inside and/or outside the identified block.
For example, the col blocks may include a first col block and a second col block. When the coordinates of the identified block are (xP, yP) and the size of the identified block is represented by (nPSW, nPSH), the first col block may be a block located at coordinates (xP + nPSW, yP + nPSH). The second col block may be a block located at coordinates (xP + (nPSW >> 1), yP + (nPSH >> 1)). When the first col block is not available, the second col block may be selectively used.
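The two col block positions described above can be sketched as follows, assuming (xP, yP) is the top-left coordinate of the identified block and (nPSW, nPSH) is its size:

def col_block_positions(xP, yP, nPSW, nPSH):
    """Return the coordinates of the first and second col blocks."""
    first_col = (xP + nPSW, yP + nPSH)                 # outside the lower-right corner
    second_col = (xP + (nPSW >> 1), yP + (nPSH >> 1))  # center of the identified block
    return first_col, second_col

print(col_block_positions(64, 32, 16, 16))  # ((80, 48), (72, 40))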
The motion vector of the target block may be determined based on the motion vector of the col block. Each of the encoding apparatus 100 and the decoding apparatus 200 may scale the motion vector of the col block. The scaled motion vector of the col block can be used as the motion vector of the target block. Further, the motion vector of the motion information of the temporal candidate stored in the list may be a scaled motion vector.
The ratio of the motion vector of the target block to the motion vector of the col block may be the same as the ratio of the first temporal distance to the second temporal distance. The first temporal distance may be the distance between the target picture of the target block and the reference picture of the target block. The second temporal distance may be the distance between the col picture and the reference picture of the col block.
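A minimal sketch of this scaling is given below; picture order count (POC) based temporal distances are assumed here only for illustration.

def scale_col_motion_vector(col_mv, target_poc, target_ref_poc, col_poc, col_ref_poc):
    """Scale the col block's motion vector by the ratio of the two temporal distances."""
    first_distance = target_poc - target_ref_poc   # target picture vs. its reference picture
    second_distance = col_poc - col_ref_poc        # col picture vs. its reference picture
    if second_distance == 0:
        return col_mv
    scale = first_distance / second_distance
    return (round(col_mv[0] * scale), round(col_mv[1] * scale))

print(scale_col_motion_vector((8, -4), target_poc=10, target_ref_poc=8,
                              col_poc=12, col_ref_poc=8))  # (4, -2)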
The scheme for deriving the motion information may vary according to the inter prediction mode of the target block. For example, as an inter prediction mode applied to inter prediction, there may be an Advanced Motion Vector Predictor (AMVP) mode, a merge mode, a skip mode, a merge mode with a motion vector difference, a sub-block merge mode, a triangle partition mode, an inter-intra combined prediction mode, an affine inter mode, a current picture reference mode, and the like. The merge mode may also be referred to as a "motion merge mode". Each mode will be described in detail below.
1) AMVP mode
When using the AMVP mode, the encoding apparatus 100 may search for similar blocks in the neighborhood of the target block. The encoding apparatus 100 may acquire a prediction block by performing prediction on a target block using motion information of the found similar block. The encoding apparatus 100 may encode a residual block that is a difference between the target block and the prediction block.
1-1) creating a list of predicted motion vector candidates
When the AMVP mode is used as the prediction mode, each of the encoding apparatus 100 and the decoding apparatus 200 may create a list of predicted motion vector candidates using a motion vector of a spatial candidate, a motion vector of a temporal candidate, and a zero vector. The predicted motion vector candidate list may include one or more predicted motion vector candidates. At least one of a motion vector of the spatial candidate, a motion vector of the temporal candidate, and a zero vector may be determined and used as the prediction motion vector candidate.
Hereinafter, the terms "prediction motion vector (candidate)" and "motion vector (candidate)" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "prediction motion vector candidate" and "AMVP candidate" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "predicted motion vector candidate list" and "AMVP candidate list" may be used to have the same meaning and may be used interchangeably with each other.
The spatial candidates may comprise reconstructed spatially neighboring blocks. In other words, the motion vectors of the reconstructed neighboring blocks may be referred to as "spatial prediction motion vector candidates".
The temporal candidates may include a col block and blocks adjacent to the col block. In other words, a motion vector of a col block or a motion vector of a block adjacent to the col block may be referred to as a "temporal prediction motion vector candidate".
The zero vector may be a (0,0) motion vector.
The predicted motion vector candidate may be a motion vector predictor for predicting a motion vector. Further, in the encoding apparatus 100, each of the predicted motion vector candidates may be an initial search position for a motion vector.
1-2) searching for motion vector using list of predicted motion vector candidates
The encoding apparatus 100 may determine a motion vector to be used for encoding the target block within the search range using the list of predicted motion vector candidates. Further, the encoding apparatus 100 may determine a predicted motion vector candidate to be used as the predicted motion vector of the target block among the predicted motion vector candidates existing in the predicted motion vector candidate list.
The motion vector to be used for encoding the target block may be a motion vector that can be encoded at a minimum cost.
In addition, the encoding apparatus 100 may determine whether to encode the target block using the AMVP mode.
1-3) Transmission of Interframe prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether the AMVP mode is used, 2) a prediction motion vector index, 3) a Motion Vector Difference (MVD), 4) a reference direction, and 5) a reference picture index.
Hereinafter, the terms "prediction motion vector index" and "AMVP index" may be used to have the same meaning and may be used interchangeably with each other.
In addition, the inter prediction information may include a residual signal.
When the mode information indicates that the AMVP mode is used, the decoding apparatus 200 may acquire a prediction motion vector index, an MVD, a reference direction, and a reference picture index from the bitstream through entropy decoding.
The prediction motion vector index may indicate a prediction motion vector candidate to be used for predicting the target block among prediction motion vector candidates included in the prediction motion vector candidate list.
1-4) inter prediction in AMVP mode using inter prediction information
The decoding apparatus 200 may derive the prediction motion vector candidate using the prediction motion vector candidate list, and may determine motion information of the target block based on the derived prediction motion vector candidate.
The decoding apparatus 200 may determine a motion vector candidate for the target block among the predicted motion vector candidates included in the predicted motion vector candidate list using the predicted motion vector index. The decoding apparatus 200 may select a predicted motion vector candidate indicated by the predicted motion vector index as the predicted motion vector of the target block from among the predicted motion vector candidates included in the predicted motion vector candidate list.
The encoding apparatus 100 may generate an entropy-encoded prediction motion vector index by applying entropy encoding to the prediction motion vector index, and may generate a bitstream including the entropy-encoded prediction motion vector index. The entropy-encoded prediction motion vector index may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract an entropy-encoded prediction motion vector index from a bitstream, and may acquire the prediction motion vector index by applying entropy decoding to the entropy-encoded prediction motion vector index.
The motion vector that is actually to be used for inter prediction of the target block may not match the predicted motion vector. To indicate the difference between the motion vector that will actually be used for inter-predicting the target block and the predicted motion vector, MVD may be used. The encoding apparatus 100 may derive a prediction motion vector similar to a motion vector that will actually be used for inter-prediction of the target block in order to use an MVD as small as possible.
The MVD may be a difference between the motion vector of the target block and the prediction motion vector. The encoding apparatus 100 may calculate an MVD and may generate an entropy-encoded MVD by applying entropy encoding to the MVD. The encoding apparatus 100 may generate a bitstream including the entropy-encoded MVDs.
The MVD may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded MVDs from the bitstream and may acquire the MVDs by applying entropy decoding to the entropy-encoded MVDs.
The decoding apparatus 200 may derive a motion vector of the target block by summing the MVD and the prediction motion vector. In other words, the motion vector of the target block derived by the decoding apparatus 200 may be the sum of the MVD and the motion vector candidate.
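A minimal sketch of this reconstruction (names are illustrative): the decoder selects the predictor indicated by the prediction motion vector index and adds the MVD to it.

def reconstruct_motion_vector(mvp_candidates, mvp_index, mvd):
    """mvp_candidates: list of (x, y) predicted motion vector candidates; mvd: (dx, dy)."""
    mvp = mvp_candidates[mvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

candidates = [(5, -2), (0, 0)]  # e.g. one spatial candidate and the zero vector
print(reconstruct_motion_vector(candidates, 0, (1, 3)))  # (6, 1)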
Also, the encoding apparatus 100 may generate entropy-encoded MVD resolution information by applying entropy encoding to the calculated MVD resolution information, and may generate a bitstream including the entropy-encoded MVD resolution information. The decoding apparatus 200 may extract entropy-encoded MVD resolution information from a bitstream, and may acquire the MVD resolution information by applying entropy decoding to the entropy-encoded MVD resolution information. The decoding apparatus 200 may adjust the resolution of the MVD using the MVD resolution information.
In addition, the encoding apparatus 100 may calculate the MVD based on an affine model. The decoding apparatus 200 may derive an affine control motion vector of the target block through the sum of the MVD and the affine control motion vector candidate, and may derive a motion vector of the sub-block using the affine control motion vector.
The reference direction may indicate a list of reference pictures to be used for predicting the target block. For example, the reference direction may indicate one of the reference picture list L0 and the reference picture list L1.
The reference direction indicates only a reference picture list to be used for prediction of the target block, and may not mean that the direction of the reference picture is limited to a forward direction or a backward direction. In other words, each of the reference picture list L0 and the reference picture list L1 may include pictures in the forward direction and/or the backward direction.
The reference direction being unidirectional may mean that a single reference picture list is used. The reference direction being bi-directional may mean that two reference picture lists are used. In other words, the reference direction may indicate one of the following: the case of using only the reference picture list L0, the case of using only the reference picture list L1, and the case of using two reference picture lists.
The reference picture index may indicate a reference picture for the prediction target block among reference pictures existing in the reference picture list. The encoding apparatus 100 may generate an entropy-encoded reference picture index by applying entropy encoding to the reference picture index, and may generate a bitstream including the entropy-encoded reference picture index. The entropy-encoded reference picture index may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract an entropy-encoded reference picture index from a bitstream, and may acquire the reference picture index by applying entropy decoding to the entropy-encoded reference picture index.
When two reference picture lists are used for prediction of a target block, a single reference picture index and a single motion vector may be used for each of the reference picture lists. Further, when two reference picture lists are used for predicting the target block, two prediction blocks may be specified for the target block. For example, an average or a weighted sum of two prediction blocks for a target block may be used to generate a (final) prediction block for the target block.
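A sketch of combining the two prediction blocks, showing a simple average and a weighted sum (the weights are illustrative assumptions):

import numpy as np

def combine_bi_prediction(pred_l0, pred_l1, w0=0.5, w1=0.5):
    """Return the (final) prediction block as a weighted sum of the L0 and L1 prediction blocks."""
    return np.round(w0 * pred_l0 + w1 * pred_l1).astype(np.int32)

p0 = np.full((4, 4), 100)
p1 = np.full((4, 4), 110)
print(combine_bi_prediction(p0, p1))              # simple average: every sample is 105
print(combine_bi_prediction(p0, p1, 0.75, 0.25))  # weighted sum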
The motion vector of the target block may be derived using the prediction motion vector index, the MVD, the reference direction, and the reference picture index.
The decoding apparatus 200 may generate a prediction block for the target block based on the derived motion vector and the reference picture index. For example, the prediction block may be a reference block indicated by a derived motion vector in a reference picture indicated by a reference picture index.
Since the prediction motion vector index and the MVD are encoded while the motion vector itself of the target block is not encoded, the number of bits transmitted from the encoding apparatus 100 to the decoding apparatus 200 can be reduced and the encoding efficiency can be improved.
For the target block, motion information of the reconstructed neighboring blocks may be used. In a specific inter prediction mode, the encoding apparatus 100 may not encode actual motion information of the target block alone. The motion information of the target block is not encoded, but additional information that enables the motion information of the target block to be derived using the reconstructed motion information of the neighboring blocks may be encoded. Since the additional information is encoded, the number of bits transmitted to the decoding apparatus 200 may be reduced and the encoding efficiency may be improved.
For example, as an inter prediction mode in which motion information of a target block is not directly encoded, a skip mode and/or a merge mode may exist. Here, each of the encoding apparatus 100 and the decoding apparatus 200 may use an identifier and/or an index indicating a unit of which motion information is to be used as motion information of the target unit among the reconstructed neighboring units.
2) Merge mode
As a scheme for deriving motion information of a target block, there is merging. The term "merging" may mean merging motion of multiple blocks. "merging" may mean that motion information of one block is also applied to other blocks. In other words, the merge mode may be a mode in which motion information of the target block is derived from motion information of neighboring blocks.
When the merge mode is used, the encoding apparatus 100 may predict motion information of the target block using motion information of the spatial candidate and/or motion information of the temporal candidate. The spatial candidates may include reconstructed spatially neighboring blocks that are spatially adjacent to the target block. The spatially neighboring blocks may include a left neighboring block and an upper neighboring block. The temporal candidates may include col blocks. The terms "spatial candidate" and "spatial merge candidate" may be used to have the same meaning and may be used interchangeably with each other. The terms "time candidate" and "time merge candidate" may be used to have the same meaning and may be used interchangeably with each other.
The encoding apparatus 100 may acquire a prediction block via prediction. The encoding apparatus 100 may encode a residual block that is a difference between the target block and the prediction block.
2-1) creating a merge candidate list
When the merge mode is used, each of the encoding apparatus 100 and the decoding apparatus 200 may create a merge candidate list using motion information of spatial candidates and/or motion information of temporal candidates. The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may be unidirectional or bidirectional. The reference direction may represent an inter prediction indicator.
The merge candidate list may include merge candidates. The merge candidate may be motion information. In other words, the merge candidate list may be a list storing a plurality of pieces of motion information.
The merge candidate may be motion information of a plurality of temporal candidates and/or spatial candidates. In other words, the merge candidate list may include motion information of temporal candidates and/or spatial candidates, and the like.
Further, the merge candidate list may include a new merge candidate generated by combining merge candidates already existing in the merge candidate list. In other words, the merge candidate list may include new motion information generated by combining a plurality of pieces of motion information previously existing in the merge candidate list.
Further, the merge candidate list may include history-based merge candidates. The history-based merge candidate may be motion information of a block that is encoded and/or decoded before the target block.
Further, the merge candidate list may include a merge candidate based on an average of the two merge candidates.
The merging candidate may be a specific mode of deriving inter prediction information. The merge candidate may be information indicating a specific mode of deriving inter prediction information. Inter prediction information for the target block may be derived from a particular mode indicated by the merge candidate. Further, the particular mode may include a process of deriving a series of inter prediction information. This particular mode may be an inter prediction information derivation mode or a motion information derivation mode.
The inter prediction information of the target block may be derived according to a mode indicated by a merge candidate selected among merge candidates in the merge candidate list by a merge index.
For example, the motion information derivation mode in the merge candidate list may be at least one of the following modes: 1) a motion information derivation mode for sub-block units and 2) an affine motion information derivation mode.
In addition, the merge candidate list may include motion information of a zero vector. The zero vector may also be referred to as a "zero merge candidate".
In other words, the pieces of motion information in the merge candidate list may be at least one of: 1) motion information of a spatial candidate, 2) motion information of a temporal candidate, 3) motion information generated by combining pieces of motion information previously existing in a merge candidate list, and 4) a zero vector.
The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may also be referred to as an "inter prediction indicator". The reference direction may be unidirectional or bidirectional. The unidirectional reference direction may indicate L0 prediction or L1 prediction.
The merge candidate list may be created before performing prediction in merge mode.
The number of merge candidates in the merge candidate list may be predefined. Each of the encoding apparatus 100 and the decoding apparatus 200 may add the merge candidates to the merge candidate list according to a predefined scheme and predefined priorities such that the merge candidate list has a predefined number of merge candidates. The merge candidate list of the encoding apparatus 100 and the merge candidate list of the decoding apparatus 200 may be made identical to each other using a predefined scheme and a predefined priority.
Merging may be applied on a CU or PU basis. When the merging is performed on a CU or PU basis, the encoding apparatus 100 may transmit a bitstream including predefined information to the decoding apparatus 200. For example, the predefined information may include 1) information indicating whether to perform merging for each block partition, and 2) information on a block on which merging is to be performed among blocks that are spatial candidates and/or temporal candidates for a target block.
2-2) searching for motion vector using merge candidate list
The encoding apparatus 100 may determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 may perform prediction on the target block using the merge candidate in the merge candidate list, and may generate a residual block for the merge candidate. The encoding apparatus 100 may encode the target block using a merging candidate that yields the smallest cost in the encoding of the prediction and residual blocks.
In addition, the encoding apparatus 100 may determine whether to encode the target block using the merge mode.
2-3) Transmission of Interframe prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The encoding apparatus 100 may generate entropy-encoded inter prediction information by performing entropy encoding on the inter prediction information, and may transmit a bitstream including the entropy-encoded inter prediction information to the decoding apparatus 200. The entropy-encoded inter prediction information may be signaled by the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded inter prediction information from a bitstream, and may acquire the inter prediction information by applying entropy decoding to the entropy-encoded inter prediction information.
The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether a merge mode is used, 2) a merge index, and 3) correction information.
In addition, the inter prediction information may include a residual signal.
The decoding apparatus 200 may acquire the merge index from the bitstream only when the mode information indicates that the merge mode is used.
The mode information may be a merge flag. The unit of the mode information may be a block. The information on the block may include mode information, and the mode information may indicate whether a merge mode is applied to the block.
The merge index may indicate a merge candidate to be used for prediction of the target block among merge candidates included in the merge candidate list. Alternatively, the merge index may indicate a block to be merged with the target block among neighboring blocks spatially or temporally adjacent to the target block.
The encoding apparatus 100 may select a merge candidate having the highest encoding performance among merge candidates included in the merge candidate list, and set a value of the merge index to indicate the selected merge candidate.
The correction information may be information for correcting a motion vector. The encoding apparatus 100 may generate correction information. The decoding apparatus 200 may correct the motion vector of the merge candidate selected by the merge index based on the correction information.
The correction information may include at least one of information indicating whether correction is to be performed, correction direction information, and correction size information. The prediction mode for correcting the motion vector based on the signaled correction information may be referred to as a "merge mode with motion vector difference".
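The following sketch illustrates how the decoding apparatus 200 might apply the signaled correction information to the motion vector of the selected merge candidate; the direction/size encoding and the variable names below are assumptions made for illustration, not the normative derivation.

```python
# Illustrative correction of the motion vector of the merge candidate selected by the
# merge index, using correction direction information and correction size information.
# The direction table below is an assumption made for this example.

DIRECTIONS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}  # +x, -x, +y, -y

def correct_motion_vector(merge_list, merge_index, apply_correction, direction_idx, size):
    mv_x, mv_y = merge_list[merge_index]["mv"]
    if apply_correction:
        dx, dy = DIRECTIONS[direction_idx]
        mv_x += dx * size
        mv_y += dy * size
    return (mv_x, mv_y)

merge_list = [{"mv": (10, 4)}, {"mv": (-3, 7)}]
print(correct_motion_vector(merge_list, merge_index=1,
                            apply_correction=True, direction_idx=2, size=4))  # (-3, 11)
```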
2-4) Inter prediction in merge mode using inter prediction information
The decoding apparatus 200 may perform prediction on the target block using a merge candidate indicated by the merge index among merge candidates included in the merge candidate list.
The motion vector of the target block may be specified by the motion vector of the merging candidate indicated by the merging index, the reference picture index, and the reference direction.
3) Skip mode
The skip mode may be a mode in which motion information of a spatial candidate or motion information of a temporal candidate is applied to the target block without change. Also, the skip mode may be a mode that does not use a residual signal. In other words, when the skip mode is used, the reconstructed block may be the same as the predicted block.
The difference between the merge mode and the skip mode is whether a residual signal is sent or used. That is, the skip mode may be similar to the merge mode except that no residual signal is sent or used.
When the skip mode is used, the encoding apparatus 100 may transmit information on a block whose motion information is to be used as motion information of a target block among blocks that are spatial candidates or temporal candidates to the decoding apparatus 200 through a bitstream. The encoding apparatus 100 may generate entropy-encoded information by performing entropy encoding on the information, and may signal the entropy-encoded information to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded information from a bitstream and may acquire the information by applying entropy decoding to the entropy-encoded information.
Also, when the skip mode is used, the encoding apparatus 100 may not send other syntax information (such as MVD) to the decoding apparatus 200. For example, when the skip mode is used, the encoding apparatus 100 may not signal syntax elements related to at least one of an MVD, a coded block flag, and a transform coefficient level to the decoding apparatus 200.
3-1) creating a merge candidate list
The skip mode may also use a merge candidate list. In other words, the merge candidate list may be used in both the merge mode and the skip mode. In this regard, the merge candidate list may also be referred to as a "skip candidate list" or a "merge/skip candidate list".
Alternatively, the skip mode may use an additional candidate list different from the candidate list of the merge mode. In this case, in the following description, the merge candidate list and the merge candidate may be replaced with the skip candidate list and the skip candidate, respectively.
The merge candidate list may be created before performing prediction in skip mode.
3-2) searching for motion vector using merge candidate list
The encoding apparatus 100 may determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 may perform prediction on the target block using the merge candidate in the merge candidate list. The encoding apparatus 100 may encode the target block using the merge candidate that yields the smallest cost in prediction.
In addition, the encoding apparatus 100 may determine whether to encode the target block using the skip mode.
3-3) Transmission of inter-frame prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether a skip mode is used and 2) a skip index.
The skip index may be the same as the merge index described above.
When the skip mode is used, the target block may be encoded without using a residual signal. The inter prediction information may not include a residual signal. Alternatively, the bitstream may not include a residual signal.
The decoding apparatus 200 may acquire the skip index from the bitstream only when the mode information indicates that the skip mode is used. As described above, the merge index and the skip index may be identical to each other. The decoding apparatus 200 may acquire the skip index from the bitstream only when the mode information indicates that the merge mode or the skip mode is used.
The skip index may indicate a merge candidate to be used for predicting the target block among merge candidates included in the merge candidate list.
3-4) inter prediction in skip mode using inter prediction information
The decoding apparatus 200 may perform prediction on the target block using a merge candidate indicated by the skip index among merge candidates included in the merge candidate list.
The motion vector of the target block may be specified by the motion vector of the merging candidate indicated by the skip index, the reference picture index, and the reference direction.
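As a sketch, the decoder-side difference between the merge mode and the skip mode can be reduced to whether a decoded residual is added to the prediction block; the array handling below is illustrative.

```python
import numpy as np

# Illustrative reconstruction: in merge mode the residual is added to the prediction
# block, while in skip mode the reconstructed block equals the prediction block.

def reconstruct_block(prediction, residual, skip_mode):
    if skip_mode:
        return prediction.copy()  # no residual signal is used
    return prediction + residual  # merge (or other) mode with a residual signal

pred = np.full((4, 4), 128, dtype=np.int32)
res = np.arange(16, dtype=np.int32).reshape(4, 4)
assert np.array_equal(reconstruct_block(pred, res, skip_mode=True), pred)
```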
4) Current picture reference mode
The current picture reference mode may represent a prediction mode that uses a previously reconstructed region in the target picture to which the target block belongs.
A motion vector specifying a previously reconstructed region may be used. The reference picture index of the target block may be used to determine whether the target block has been encoded in the current picture reference mode.
A flag or index indicating whether the target block is a block encoded in the current picture reference mode may be signaled by the encoding apparatus 100 to the decoding apparatus 200. Alternatively, whether the target block is a block encoded in the current picture reference mode may be inferred by the reference picture index of the target block.
When the target block is encoded in the current picture reference mode, the current picture may exist at a fixed position or an arbitrary position in the reference picture list for the target block.
For example, the fixed position may be a position where the value of the reference picture index is 0 or the last position.
When the target picture exists at an arbitrary position in the reference picture list, an additional reference picture index indicating such an arbitrary position may be signaled by the encoding apparatus 100 to the decoding apparatus 200.
5) Subblock merging mode
The sub-block merging mode may be a mode in which motion information is derived from sub-blocks of the CU.
When the subblock merge mode is applied, a subblock merge candidate list may be generated using motion information of a co-located subblock (col-sub-block) of a target subblock (i.e., a subblock-based temporal merge candidate) in a reference image and/or an affine control point motion vector merge candidate.
6) Triangular partition mode
In the triangle partition mode, the target block may be partitioned in a diagonal direction, and sub-target blocks may be generated by the partitioning. For each sub-target block, motion information of the corresponding sub-target block may be derived, and the derived motion information may be used to derive prediction samples for the corresponding sub-target block. The prediction samples of the target block may be derived as a weighted sum of the prediction samples of the sub-target blocks generated via the partitioning.
7) Combining inter-frame intra prediction modes
The combined inter-intra prediction mode may be a mode in which the prediction samples of the target block are derived using a weighted sum of the prediction samples generated via inter prediction and the prediction samples generated via intra prediction.
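A weighted-sum sketch of the combined inter-intra prediction follows; the equal weights and rounding used here are assumptions made for illustration, since the actual weights may be determined differently (for example, depending on neighboring coding modes).

```python
import numpy as np

# Illustrative combined inter-intra prediction: the final prediction samples are a
# weighted sum of the inter-predicted and intra-predicted samples. Equal weights and
# the rounding used here are assumptions of this sketch.

def combine_inter_intra(pred_inter, pred_intra, w_inter=2, w_intra=2):
    total = w_inter + w_intra
    return (w_inter * pred_inter + w_intra * pred_intra + total // 2) // total

inter = np.full((4, 4), 100, dtype=np.int64)
intra = np.full((4, 4), 140, dtype=np.int64)
print(combine_inter_intra(inter, intra)[0, 0])  # 120
```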
In the above-described mode, the decoding apparatus 200 may autonomously correct the derived motion information. For example, the decoding apparatus 200 may search for motion information having a minimum Sum of Absolute Differences (SAD) in a specific region based on a reference block indicated by the derived motion information, and may derive the found motion information as corrected motion information.
In the above-described mode, the decoding apparatus 200 may compensate for prediction samples derived via inter prediction using optical flow.
In the AMVP mode, the merge mode, the skip mode, and the like described above, the index information of the list may be used to specify motion information to be used for prediction of the target block among pieces of motion information in the list.
In order to improve encoding efficiency, the encoding apparatus 100 may signal only an index of an element that generates the smallest cost in inter prediction of a target block among elements in a list. The encoding apparatus 100 may encode the index and may signal the encoded index.
Therefore, the encoding apparatus 100 and the decoding apparatus 200 must be able to derive the above-described lists (i.e., the prediction motion vector candidate list and the merge candidate list) from the same data using the same scheme. Here, the same data may include a reconstructed picture and a reconstructed block. Further, in order to specify an element using an index, the order of the elements in the list must be fixed.
Fig. 10 illustrates spatial candidates according to an embodiment.
In fig. 10, the positions of the spatial candidates are shown.
The large block at the center of the figure may represent the target block. Five small blocks may represent spatial candidates.
The coordinates of the target block may be (xP, yP), and the size of the target block may be represented by (nPSW, nPSH).
Spatial candidate A0 may be a block adjacent to the lower left corner of the target block. A0 may be a block occupying a pixel located at the coordinates (xP - 1, yP + nPSH + 1).
Spatial candidate A1 may be a block adjacent to the left side of the target block. A1 may be the lowermost block among the blocks adjacent to the left side of the target block. Alternatively, A1 may be the block adjacent to the top of A0. A1 may be a block occupying a pixel located at the coordinates (xP - 1, yP + nPSH).
Spatial candidate B0 may be a block adjacent to the upper right corner of the target block. B0 may be a block occupying a pixel located at the coordinates (xP + nPSW + 1, yP - 1).
Spatial candidate B1 may be a block adjacent to the top of the target block. B1 may be the rightmost block among the blocks adjacent to the top of the target block. Alternatively, B1 may be the block adjacent to the left of B0. B1 may be a block occupying a pixel located at the coordinates (xP + nPSW, yP - 1).
Spatial candidate B2 may be a block adjacent to the upper left corner of the target block. B2 may be a block occupying a pixel located at the coordinates (xP - 1, yP - 1).
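The candidate positions described above can be summarized by the pixel coordinates each candidate block occupies; the sketch below merely evaluates those coordinates for a target block located at (xP, yP) with size (nPSW, nPSH).

```python
# Sketch: pixel coordinates occupied by the five spatial candidates, following the
# positions described above for a target block at (xP, yP) with size (nPSW, nPSH).

def spatial_candidate_positions(xP, yP, nPSW, nPSH):
    return {
        "A0": (xP - 1, yP + nPSH + 1),  # below the lower-left corner
        "A1": (xP - 1, yP + nPSH),      # lowermost block on the left side
        "B0": (xP + nPSW + 1, yP - 1),  # right of the upper-right corner
        "B1": (xP + nPSW, yP - 1),      # rightmost block on the top side
        "B2": (xP - 1, yP - 1),         # upper-left corner
    }

print(spatial_candidate_positions(64, 32, 16, 16))
```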
Determination of availability of spatial and temporal candidates
In order to include motion information of a spatial candidate or motion information of a temporal candidate in a list, it must be determined whether motion information of a spatial candidate or motion information of a temporal candidate is available.
Hereinafter, the candidate block may include a spatial candidate and a temporal candidate.
For example, the determination may be performed by sequentially applying the following steps 1) to 4).
Step 1) when a PU including a candidate block is located outside the boundary of the picture, the availability of the candidate block may be set to "false". The expression "availability is set to false" may have the same meaning as "set to unavailable".
Step 2) when a PU including a candidate block is located outside the boundary of a slice, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different slices, the availability of the candidate block may be set to "false".
Step 3) when the PU including the candidate block is outside the boundary of the parallel block, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different parallel blocks, the availability of the candidate block may be set to "false".
Step 4) when the prediction mode of the PU including the candidate block is an intra prediction mode, the availability of the candidate block may be set to "false". The availability of a candidate block may be set to "false" when a PU that includes the candidate block does not use inter prediction.
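The determination can be sketched as a sequence of checks; the dictionaries describing picture, slice, and parallel-block membership and the prediction mode are hypothetical data structures used only for illustration.

```python
# Sketch of steps 1) to 4): a candidate block is unavailable ("false") if it lies outside
# the picture, belongs to a different slice or parallel block than the target block, or
# was coded with intra prediction (no inter motion information).

def candidate_available(candidate, target, picture_width, picture_height):
    x, y = candidate["pos"]
    if not (0 <= x < picture_width and 0 <= y < picture_height):
        return False                                  # step 1: outside the picture boundary
    if candidate["slice_id"] != target["slice_id"]:
        return False                                  # step 2: different slice
    if candidate["tile_id"] != target["tile_id"]:
        return False                                  # step 3: different parallel block
    if candidate["pred_mode"] == "intra":
        return False                                  # step 4: no inter motion information
    return True

target = {"slice_id": 0, "tile_id": 0}
cand = {"pos": (63, 31), "slice_id": 0, "tile_id": 0, "pred_mode": "inter"}
print(candidate_available(cand, target, 1920, 1080))  # True
```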
Fig. 11 illustrates an order of adding motion information of spatial candidates to a merge list according to an embodiment.
As shown in fig. 11, when pieces of motion information of spatial candidates are added to the merge list, the order of A1, B1, B0, A0, and B2 may be used. That is, pieces of motion information of available spatial candidates may be added to the merge list in the order of A1, B1, B0, A0, and B2.
Method for deriving merge lists in merge mode and skip mode
As described above, the maximum number of merging candidates in the merge list may be set. The set maximum number may be indicated by "N". The set number may be transmitted from the encoding apparatus 100 to the decoding apparatus 200. The slice header may include N. In other words, the maximum number of merging candidates in the merge list for the target block of the slice may be set by the slice header. For example, the value of N may be 5.
Pieces of motion information (i.e., merging candidates) may be added to the merge list in the order of the following steps 1) to 4).
Step 1) Among the spatial candidates, the available spatial candidates may be added to the merge list. The pieces of motion information of the available spatial candidates may be added to the merge list in the order shown in fig. 11. Here, when the motion information of the available spatial candidate overlaps with other motion information already existing in the merge list, the motion information of the available spatial candidate may not be added to the merge list. The operation of checking whether the corresponding motion information overlaps with other motion information present in the list may be simply referred to as "overlap check".
The maximum number of pieces of motion information to be added may be N.
Step 2) When the number of pieces of motion information in the merge list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the merge list. Here, when the motion information of the available temporal candidate overlaps with other motion information already existing in the merge list, the motion information of the available temporal candidate may not be added to the merge list.
Step 3) When the number of pieces of motion information in the merge list is less than N and the type of the target slice is "B", combined motion information generated by combined bi-prediction may be added to the merge list.
The target stripe may be a stripe that includes the target block.
The combined motion information may be a combination of the L0 motion information and the L1 motion information. The L0 motion information may be motion information referring only to the reference picture list L0. The L1 motion information may be motion information referring only to the reference picture list L1.
In the merge list, there may be one or more pieces of L0 motion information. Further, in the merge list, there may be one or more pieces of L1 motion information.
The combined motion information may include one or more pieces of combined motion information. When generating the combined motion information, L0 motion information and L1 motion information, which will be used for the step of generating the combined motion information, among the one or more pieces of L0 motion information and the one or more pieces of L1 motion information may be previously defined. One or more pieces of combined motion information may be generated in a predefined order via combined bi-prediction using a pair of different pieces of motion information in the merge list. One piece of motion information of the pair of different motion information may be L0 motion information, and the other piece of motion information of the pair of different motion information may be L1 motion information.
For example, the combined motion information added with the highest priority may be a combination of L0 motion information having a merge index of 0 and L1 motion information having a merge index of 1. When the motion information having the merge index 0 is not the L0 motion information or when the motion information having the merge index 1 is not the L1 motion information, the combined motion information may be neither generated nor added. Next, the combined motion information to which the next priority is added may be a combination of L0 motion information having a merge index of 1 and L1 motion information having a merge index of 0. The detailed combinations that follow may conform to other combinations in the video encoding/decoding field.
Here, when the combined motion information overlaps with other motion information already existing in the merge list, the combined motion information may not be added to the merge list.
Step 4) When the number of pieces of motion information in the merge list is less than N, zero vector motion information may be added to the merge list.
The zero vector motion information may be motion information in which the motion vector is a zero vector.
The number of pieces of zero vector motion information may be one or more. The reference picture indices of one or more pieces of zero vector motion information may be different from each other. For example, the value of the reference picture index of the first zero vector motion information may be 0. The reference picture index of the second zero vector motion information may have a value of 1.
The number of pieces of zero vector motion information may be the same as the number of reference pictures in the reference picture list.
The reference direction of the zero vector motion information may be bi-directional. Both motion vectors may be zero vectors. The number of pieces of zero vector motion information may be the smaller one of the number of reference pictures in the reference picture list L0 and the number of reference pictures in the reference picture list L1. Alternatively, when the number of reference pictures in the reference picture list L0 differs from the number of reference pictures in the reference picture list L1, a unidirectional reference direction may be used for a reference picture index that can be applied to only one of the two reference picture lists.
The encoding apparatus 100 and/or the decoding apparatus 200 may sequentially add zero vector motion information to the merge list while changing the reference picture index.
Zero vector motion information may not be added to the merge list when it overlaps with other motion information already present in the merge list.
The order of the above-described steps 1) to 4) is merely exemplary, and may be changed. Furthermore, some of the above steps may be omitted according to predefined conditions.
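Steps 1) to 4) above can be sketched as follows; the representation of motion information as plain tuples, the simplified overlap check, and the simplified pairing used for combined bi-prediction are assumptions made for illustration only.

```python
# Simplified sketch of merge candidate list derivation (steps 1 to 4). Real candidates
# also carry reference picture indices and reference directions, and the combined
# bi-prediction step is reduced to pairing existing entries.

def build_merge_list(spatial, temporal, slice_type, N=5):
    merge_list = []

    def add(mi):                                       # overlap check + size limit
        if mi is not None and mi not in merge_list and len(merge_list) < N:
            merge_list.append(mi)

    for mi in spatial:                                 # step 1: available spatial candidates
        add(mi)
    if temporal is not None:
        add(temporal)                                  # step 2: temporal candidate
    if slice_type == "B":                              # step 3: combined bi-prediction
        base = list(merge_list)
        for i, l0 in enumerate(base):
            for j, l1 in enumerate(base):
                if i != j:
                    add(("BI", l0, l1))
    ref_idx = 0
    while len(merge_list) < N:                         # step 4: zero vector motion information
        merge_list.append(("ZERO", (0, 0), ref_idx))   # different reference picture indices
        ref_idx += 1
    return merge_list

print(build_merge_list([(4, 0), (4, 0), (-2, 3)], temporal=(1, 1), slice_type="B"))
```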
Method for deriving predicted motion vector candidate list in AMVP mode
The maximum number of predicted motion vector candidates in the predicted motion vector candidate list may be predefined. A predefined maximum number may be indicated by N. For example, the predefined maximum number may be 2.
The pieces of motion information (i.e., predicted motion vector candidates) may be added to the predicted motion vector candidate list in the order of step 1) to step 3) below.
Step 1) An available spatial candidate among the spatial candidates may be added to the predicted motion vector candidate list. The spatial candidates may include a first spatial candidate and a second spatial candidate.
The first spatial candidate may be one of A0, A1, scaled A0, and scaled A1. The second spatial candidate may be one of B0, B1, B2, scaled B0, scaled B1, and scaled B2.
The plurality of pieces of motion information of the available spatial candidates may be added to the prediction motion vector candidate list in the order of the first spatial candidate and the second spatial candidate. In this case, when the motion information of the available spatial candidate overlaps with other motion information already existing in the predicted motion vector candidate list, the motion information of the available spatial candidate may not be added to the predicted motion vector candidate list. In other words, when the value of N is 2, if the motion information of the second spatial candidate is the same as the motion information of the first spatial candidate, the motion information of the second spatial candidate may not be added to the predicted motion vector candidate list.
The maximum number of pieces of motion information to be added may be N.
Step 2) When the number of pieces of motion information in the predicted motion vector candidate list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the predicted motion vector candidate list. In this case, when the motion information of the available temporal candidate overlaps with other motion information already existing in the predicted motion vector candidate list, the motion information of the available temporal candidate may not be added to the predicted motion vector candidate list.
Step 3) When the number of pieces of motion information in the predicted motion vector candidate list is less than N, zero vector motion information may be added to the predicted motion vector candidate list.
The zero vector motion information may include one or more pieces of zero vector motion information. The reference picture indices of the one or more pieces of zero vector motion information may be different from each other.
The encoding apparatus 100 and/or the decoding apparatus 200 may sequentially add pieces of zero vector motion information to the predicted motion vector candidate list while changing the reference picture index.
When the zero vector motion information overlaps with other motion information already existing in the predicted motion vector candidate list, the zero vector motion information may not be added to the predicted motion vector candidate list.
The description of zero vector motion information made above in connection with the merge list also applies to the zero vector motion information here. A repeated description thereof will be omitted.
The order of step 1) to step 3) described above is merely exemplary and may be changed. Furthermore, some of the steps may be omitted according to predefined conditions.
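A corresponding sketch for the predicted motion vector candidate list follows; the candidates are again plain motion vectors, and the scaling of spatial candidates and the handling of reference picture indices are omitted.

```python
# Simplified sketch of predicted motion vector candidate list derivation (steps 1 to 3)
# with N = 2. Scaled spatial candidates and reference picture indices are omitted.

def build_mvp_list(first_spatial, second_spatial, temporal, N=2):
    mvp_list = []

    def add(mv):                         # overlap check + size limit
        if mv is not None and mv not in mvp_list and len(mvp_list) < N:
            mvp_list.append(mv)

    add(first_spatial)                   # step 1: first spatial candidate (A0/A1, possibly scaled)
    add(second_spatial)                  #         second spatial candidate (B0/B1/B2, possibly scaled)
    add(temporal)                        # step 2: temporal candidate
    while len(mvp_list) < N:             # step 3: zero vector motion information
        mvp_list.append((0, 0))          # real entries would use different reference picture indices
    return mvp_list

print(build_mvp_list((5, -1), (5, -1), None))  # [(5, -1), (0, 0)]
```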
Fig. 12 illustrates a transform and quantization process according to an example.
As shown in fig. 12, the quantized level may be generated by performing a transform and/or quantization process on the residual signal.
The residual signal may be generated as a difference between the original block and the prediction block. Here, the prediction block may be a block generated via intra prediction or inter prediction.
The residual signal may be transformed into a signal in the frequency domain by a transformation process as part of a quantization process.
The transform kernels used for the transform may include various DCT kernels, such as Discrete Cosine Transform (DCT) type 2(DCT-II) and Discrete Sine Transform (DST) kernels.
These transform kernels may perform separable transforms or two-dimensional (2D) non-separable transforms on the residual signal. The separable transform may be a transform indicating that a one-dimensional (1D) transform is performed on the residual signal in each of a horizontal direction and a vertical direction.
The DCT type and the DST type adaptively used for the 1D transform may include DCT-V, DCT-VIII, DST-I, and DST-VII in addition to DCT-II, as shown in each of Table 3 and Table 4 below.
TABLE 3

Transform set | Transform candidates
0 | DST-VII, DCT-VIII
1 | DST-VII, DST-I
2 | DST-VII, DCT-V

TABLE 4

Transform set | Transform candidates
0 | DST-VII, DCT-VIII, DST-I
1 | DST-VII, DST-I, DCT-VIII
2 | DST-VII, DCT-V, DST-I
As shown in tables 3 and 4, when a DCT type or a DST type to be used for transformation is derived, a transformation set may be used. Each transform set may include a plurality of transform candidates. Each transform candidate may be of DCT type or DST type.
Table 5 below shows an example of a transform set to be applied to the horizontal direction and a transform set to be applied to the vertical direction according to the intra prediction mode.
TABLE 5
(Table 5 is presented as an image in the original publication. It lists, for each intra prediction mode, the number of the vertical transform set and the number of the horizontal transform set to be applied.)
In Table 5, the numbers of the vertical transform set and the horizontal transform set to be applied to the vertical direction and the horizontal direction of the residual signal, respectively, according to the intra prediction mode of the target block are shown.
As illustrated in table 5, a transform set to be applied to the horizontal direction and the vertical direction may be predefined according to the intra prediction mode of the target block. The encoding apparatus 100 may perform transformation and inverse transformation on the residual signal using the transformation included in the transformation set corresponding to the intra prediction mode of the target block. Also, the decoding apparatus 200 may perform inverse transformation on the residual signal using a transform included in a transform set corresponding to an intra prediction mode of the target block.
In the transform and inverse transform, as illustrated in table 3, table 4, and table 5, a transform set to be applied to a residual signal may be determined and may not be signaled. The transformation indication information may be signaled from the encoding apparatus 100 to the decoding apparatus 200. The transformation indication information may be information indicating which one of a plurality of transformation candidates included in a transformation set to be applied to the residual signal is used.
For example, when the size of the target block is 64 × 64 or less, transform sets each having three transforms may be configured according to the intra prediction mode. The optimal transformation method may be selected from a total of nine multi-transformation methods resulting from a combination of three transformations in the horizontal direction and three transformations in the vertical direction. By such an optimal transformation method, a residual signal may be encoded and/or decoded, and thus encoding efficiency may be improved.
Here, the information indicating which one of a plurality of transforms belonging to each transform set has been used for at least one of a vertical transform and a horizontal transform may be entropy-encoded and/or entropy-decoded. Here, truncated unary binarization may be used to encode and/or decode such information.
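A selection sketch follows, using the two-candidate transform sets of Table 3; since Table 5 is provided only as an image in the original publication, the mapping from intra prediction mode to set numbers used below is a hypothetical stand-in for illustration.

```python
# Sketch of selecting the horizontal and vertical 1D transforms: the intra prediction
# mode selects a transform set per direction, and the signaled transform indication
# information selects one candidate from each set. The mode-to-set mapping below is a
# hypothetical stand-in for Table 5, which is not reproduced here.

TRANSFORM_SETS = {
    0: ["DST-VII", "DCT-VIII"],
    1: ["DST-VII", "DST-I"],
    2: ["DST-VII", "DCT-V"],
}

HYPOTHETICAL_MODE_TO_SETS = {0: (2, 2), 1: (1, 1), 18: (0, 1), 34: (0, 0)}  # mode: (ver, hor)

def select_transforms(intra_mode, hor_candidate_idx, ver_candidate_idx):
    ver_set, hor_set = HYPOTHETICAL_MODE_TO_SETS[intra_mode]
    return (TRANSFORM_SETS[hor_set][hor_candidate_idx],   # horizontal 1D transform
            TRANSFORM_SETS[ver_set][ver_candidate_idx])   # vertical 1D transform

print(select_transforms(18, hor_candidate_idx=1, ver_candidate_idx=0))  # ('DST-I', 'DST-VII')
```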
As described above, the method using various transforms may be applied to a residual signal generated via intra prediction or inter prediction.
The transform may include at least one of a first transform and a secondary transform. The transform coefficient may be generated by performing a first transform on the residual signal, and the secondary transform coefficient may be generated by performing a secondary transform on the transform coefficient.
The first transformation may be referred to as a "primary transformation". Further, the first transformation may also be referred to as an "adaptive multi-transformation (AMT) scheme". As described above, the AMT may represent applying different transforms to respective 1D directions (i.e., vertical and horizontal directions).
The secondary transform may be a transform for increasing the energy concentration of transform coefficients generated by the first transform. Similar to the first transform, the secondary transform may be a separable transform or a non-separable transform. Such an inseparable transform may be an inseparable secondary transform (NSST).
The first transformation may be performed using at least one of a predefined plurality of transformation methods. For example, the predefined multiple transform methods may include Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve transform (KLT), and the like.
Further, the first transform may be a transform having various types according to a kernel function defining a Discrete Cosine Transform (DCT) or a Discrete Sine Transform (DST).
For example, the first transform may include transforms such as DCT-2, DCT-5, DCT-8, DST-1, and DST-7 according to the transform kernels presented in Table 6 below. In Table 6 below, various transform types and transform kernels for Multiple Transform Selection (MTS) are illustrated.
MTS may refer to the selection of a combination of one or more DCT and/or DST kernels for transforming the residual signal in the horizontal and/or vertical directions.
TABLE 6
(Table 6 is presented as an image in the original publication. It lists the transform types and the corresponding transform kernels (basis functions) for Multiple Transform Selection (MTS).)
In Table 6, i and j may be integer values equal to or greater than 0 and less than or equal to N-1.
A secondary transform may be performed on transform coefficients generated by performing the first transform.
As in the first transformation, a set of transformations may also be defined in the secondary transformation. The method for deriving and/or determining the above-described set of transforms may be applied not only to the first transform but also to the secondary transform.
The first transformation and the secondary transformation may be determined for a particular target.
For example, the first transform and the secondary transform may be applied to signal components corresponding to one or more of a luminance (luma) component and a chrominance (chroma) component. Whether to apply the first transform and/or the secondary transform may be determined according to at least one of encoding parameters for the target block and/or the neighboring blocks. For example, whether to apply the first transform and/or the secondary transform may be determined according to the size and/or shape of the target block.
In the encoding apparatus 100 and the decoding apparatus 200, transformation information indicating a transformation method to be used for a target may be derived by using the designation information.
For example, the transformation information may include transformation indices to be used for the primary transformation and/or the secondary transformation. Optionally, the transformation information may indicate that the primary transformation and/or the secondary transformation is not used.
For example, when a target of the primary transform and the secondary transform is a target block, a transform method to be applied to the primary transform and/or the secondary transform, which is indicated by the transform information, may be determined according to at least one of encoding parameters for the target block and/or blocks adjacent to the target block.
Alternatively, transformation information indicating a transformation method for a specific object may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
For example, whether to use the primary transform, the index indicating the primary transform, whether to use the secondary transform, and the index indicating the secondary transform may be derived as transform information by the decoding apparatus 200 for a single CU. Optionally, for a single CU, transform information may be signaled, wherein the transform information indicates whether to use a primary transform, an index indicating a primary transform, whether to use a secondary transform, and an index indicating a secondary transform.
The quantized transform coefficients (i.e., quantized levels) may be generated by performing quantization on a result generated by performing the first transform and/or the secondary transform or performing quantization on the residual signal.
Fig. 13 illustrates a diagonal scan according to an example.
Fig. 14 shows a horizontal scan according to an example.
Fig. 15 illustrates a vertical scan according to an example.
The quantized transform coefficients may be scanned via at least one of a (top right) diagonal scan, a vertical scan, and a horizontal scan according to at least one of an intra prediction mode, a block size, and a block shape. The block may be a Transform Unit (TU).
Each scan may be initiated at a particular starting point and may be terminated at a particular ending point.
For example, the quantized transform coefficients may be changed into a 1D vector form by scanning the coefficients of the block using the diagonal scan of fig. 13. Alternatively, the horizontal scan of fig. 14 or the vertical scan of fig. 15 may be used according to the size of the block and/or the intra prediction mode, instead of using the diagonal scan.
The vertical scanning may be an operation of scanning the 2D block type coefficients in the column direction. The horizontal scanning may be an operation of scanning the 2D block type coefficients in a row direction.
In other words, which one of the diagonal scan, the vertical scan, and the horizontal scan is to be used may be determined according to the size of the block and/or the intra prediction mode.
As shown in fig. 13, 14, and 15, the quantized transform coefficients may be scanned in a diagonal direction, a horizontal direction, or a vertical direction.
The quantized transform coefficients may be represented by block shapes. Each block may include a plurality of sub-blocks. Each sub-block may be defined according to a minimum block size or a minimum block shape.
In the scanning, a scanning order according to the type or direction of the scanning may be first applied to the subblocks. Further, a scanning order according to the direction of scanning may be applied to the quantized transform coefficients in each sub-block.
For example, as shown in fig. 13, 14, and 15, when the size of the target block is 8 × 8, the quantized transform coefficient may be generated by the first transform, the secondary transform, and the quantization of the residual signal of the target block. Thus, one of three types of scanning orders may be applied to four 4 × 4 sub-blocks, and the quantized transform coefficients may also be scanned for each 4 × 4 sub-block according to the scanning order.
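The three scan patterns can be sketched as follows for an 8x8 block processed in 4x4 sub-blocks; the exact up-right diagonal ordering used here is a common convention and is an assumption of this sketch.

```python
import numpy as np

# Sketch of diagonal, horizontal, and vertical scans for an 8x8 block of quantized
# transform coefficients: the 4x4 sub-blocks are traversed in the chosen scan order,
# and the coefficients inside each sub-block are read with the same scan type.

def scan_4x4(scan_type):
    if scan_type == "horizontal":
        return [(r, c) for r in range(4) for c in range(4)]
    if scan_type == "vertical":
        return [(r, c) for c in range(4) for r in range(4)]
    order = []                                    # up-right diagonal scan
    for d in range(7):
        for r in range(min(d, 3), -1, -1):
            c = d - r
            if c < 4:
                order.append((r, c))
    return order

def scan_8x8(block, scan_type):
    coeffs = []
    for sb_r, sb_c in scan_4x4(scan_type):        # sub-block traversal order
        for r, c in scan_4x4(scan_type):          # coefficient order inside each sub-block
            coeffs.append(int(block[4 * sb_r + r, 4 * sb_c + c]))
    return coeffs

block = np.arange(64).reshape(8, 8)
print(scan_8x8(block, "diagonal")[:5])  # [0, 8, 1, 16, 9]
```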
The encoding apparatus 100 may generate entropy-encoded quantized transform coefficients by performing entropy encoding on the scanned quantized transform coefficients, and may generate a bitstream including the entropy-encoded quantized transform coefficients.
The decoding apparatus 200 may extract entropy-encoded quantized transform coefficients from a bitstream, and may generate the quantized transform coefficients by performing entropy decoding on the entropy-encoded quantized transform coefficients. The quantized transform coefficients may be arranged in the form of 2D blocks via inverse scanning. Here, as a method of the inverse scanning, at least one of the upper right diagonal scanning, the vertical scanning, and the horizontal scanning may be performed.
In the decoding apparatus 200, inverse quantization may be performed on the quantized transform coefficients. The secondary inverse transform may be performed on a result generated by performing inverse quantization according to whether the secondary inverse transform is performed. Further, the first inverse transform may be performed on a result generated by performing the secondary inverse transform according to whether the first inverse transform is to be performed. The reconstructed residual signal may be generated by performing a first inverse transform on a result generated by performing the secondary inverse transform.
For the luminance component reconstructed via intra prediction or inter prediction, inverse mapping of the dynamic range may be performed before loop filtering.
The dynamic range may be divided into 16 equal segments and the mapping function of the respective segments may be signaled. Such mapping functions may be signaled at the stripe level or at the parallel block group level.
An inverse mapping function for performing inverse mapping may be derived based on the mapping function.
Loop filtering, storage of reference pictures and motion compensation may be performed in the inverse mapped region.
The prediction block generated via inter prediction may be transformed to a mapping region by mapping using a mapping function, and the transformed prediction block may be used to generate a reconstructed block. However, since the intra prediction is performed in the mapping region, the prediction block generated via the intra prediction may be used to generate the reconstructed block without the need for mapping and/or inverse mapping.
For example, when the target block is a residual block of the chrominance component, the residual block may be transformed to the inverse mapping region by scaling the chrominance component of the mapping region.
Whether scaling is available may be signaled at the stripe level or the parallel block group level.
For example, scaling may be applied only when the mapping for the luma component is available and the partitioning of the luma component and the partitioning of the chroma component follow the same tree structure.
Scaling may be performed based on an average of values of samples in a luma prediction block corresponding to a chroma prediction block. Here, when the target block uses inter prediction, the luma prediction block may represent a mapped luma prediction block.
The values required for scaling may be derived by referring to a lookup table using the index of the segment to which the average of the sample values of the luma prediction block belongs.
The residual block may be transformed to the inverse mapping region by scaling the residual block using the finally derived value. Thereafter, for the block of the chrominance component, reconstruction, intra prediction, inter prediction, loop filtering, and storage of a reference picture may be performed in the inverse mapping region.
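A sketch of this scaling step follows; the 16-entry lookup table values, the fixed-point precision, and the way the segment index is derived from the average luma value are assumptions made only for illustration.

```python
import numpy as np

# Illustrative chroma residual scaling: the average of the corresponding luma prediction
# samples selects a scaling factor from a lookup table, and the chroma residual block is
# scaled by that factor. Table values and the 11-bit fixed point are assumptions.

SCALE_LUT = [2048 + 64 * i for i in range(16)]   # hypothetical per-segment scaling factors

def scale_chroma_residual(chroma_residual, luma_pred, bit_depth=10):
    avg = int(np.mean(luma_pred))                # average of the luma prediction samples
    segment = min(avg >> (bit_depth - 4), 15)    # which of the 16 equal segments avg falls in
    factor = SCALE_LUT[segment]
    return (chroma_residual * factor) >> 11      # fixed-point multiplication

res = np.array([[4, -8], [2, 0]], dtype=np.int64)
luma = np.full((4, 4), 512, dtype=np.int64)
print(scale_chroma_residual(res, luma))
```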
For example, information indicating whether mapping and/or inverse mapping of the luminance component and the chrominance component is available may be signaled by the sequence parameter set.
A prediction block of the target block may be generated based on the block vector. The block vector may indicate a displacement between the target block and the reference block. The reference block may be a block in the target image.
In this way, a prediction mode in which a prediction block is generated by referring to a target image may be referred to as an "Intra Block Copy (IBC) mode".
The IBC mode may be applied to a CU having a specific size. For example, the IBC mode may be applied to an mxn CU. Here, M and N may be less than or equal to 64.
The IBC mode may include a skip mode, a merge mode, an AMVP mode, and the like. In the case of the skip mode or the merge mode, the merge candidate list may be configured and the merge index may be signaled, and thus a single merge candidate may be specified among merge candidates existing in the merge candidate list. The block vector of the specified merging candidate may be used as the block vector of the target block.
In the case of AMVP mode, a differential block vector may be signaled. Furthermore, the prediction block vector may be derived from a left neighboring block and an upper neighboring block of the target block. Further, an index indicating which neighboring block will be used may be signaled.
The prediction block in the IBC mode may be included in the target CTU or the left CTU, and may be limited to a block within the previously reconstructed region. For example, the values of the block vector may be restricted such that the prediction block of the target block is located in a specific region. The specific region may be a region defined by three 64 × 64 blocks that are encoded and/or decoded before the 64 × 64 block including the target block. By limiting the value of the block vector in this manner, memory consumption and device complexity caused by the implementation of the IBC mode can be reduced.
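The block vector restriction can be sketched as a simple containment check; the rectangular allowed region used below is a simplification of the region described above and is an assumption of this sketch, not the normative constraint.

```python
# Sketch of restricting an IBC block vector so that the referenced prediction block lies
# entirely inside an allowed, previously reconstructed region (simplified here to one
# rectangle). Coordinates are in luma samples.

def ibc_block_vector_valid(target_x, target_y, width, height, bv,
                           region_x0, region_y0, region_x1, region_y1):
    ref_x0 = target_x + bv[0]
    ref_y0 = target_y + bv[1]
    ref_x1 = ref_x0 + width - 1
    ref_y1 = ref_y0 + height - 1
    return (region_x0 <= ref_x0 and ref_x1 <= region_x1 and
            region_y0 <= ref_y0 and ref_y1 <= region_y1)

# Target 16x16 block at (128, 64); allowed region covers x in [0, 127], y in [0, 127].
print(ibc_block_vector_valid(128, 64, 16, 16, bv=(-64, 0),
                             region_x0=0, region_y0=0, region_x1=127, region_y1=127))  # True
```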
Fig. 16 is a configuration diagram of an encoding device according to an embodiment.
The encoding apparatus 1600 may correspond to the encoding apparatus 100 described above.
The encoding apparatus 1600 may include a processing unit 1610, a memory 1630, a User Interface (UI) input device 1650, a UI output device 1660, and a storage 1640 that communicate with each other over a bus 1690. The encoding device 1600 may also include a communication unit 1620 connected to the network 1699.
The processing unit 1610 may be a Central Processing Unit (CPU) or semiconductor device for executing processing instructions stored in the memory 1630 or the storage 1640. The processing unit 1610 may be at least one hardware processor.
The processing unit 1610 may generate and process signals, data, or information input to the encoding apparatus 1600, output from the encoding apparatus 1600, or used in the encoding apparatus 1600, and may perform checking, comparison, determination, or the like related to the signals, data, or information. In other words, in embodiments, the generation and processing of data or information, as well as the examination, comparison, and determination of data or information related thereto, may be performed by the processing unit 1610.
The processing unit 1610 may include an inter prediction unit 110, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
At least some of the inter prediction unit 110, the intra prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 may be program modules and may communicate with an external device or system. The program modules may be included in the encoding device 1600 in the form of an operating system, application program modules, or other program modules.
The program modules may be physically stored in various types of well-known storage devices. Additionally, at least some of the program modules may also be stored in remote memory storage devices that are capable of communicating with the encoding apparatus 1600.
Program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations in accordance with the embodiments or for implementing abstract data types in accordance with the embodiments.
The program modules may be implemented using instructions or code executed by at least one processor of the encoding apparatus 1600.
The processing unit 1610 may execute instructions or code in the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190.
The storage unit may represent the memory 1630 and/or the storage 1640. Each of the memory 1630 and the storage 1640 may be any of various types of volatile or non-volatile storage media. For example, the memory 1630 may include at least one of Read Only Memory (ROM) 1631 and Random Access Memory (RAM) 1632.
The storage unit may store data or information used for the operation of the encoding apparatus 1600. In an embodiment, the data or information of the encoding apparatus 1600 may be stored in the storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bitstreams, and the like.
Encoding device 1600 may be implemented in a computer system that includes a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the encoding apparatus 1600. Memory 1630 may store at least one module and may be configured such that the at least one module is executed by processing unit 1610.
Functions related to communication of data or information of the encoding apparatus 1600 may be performed by the communication unit 1620.
For example, the communication unit 1620 may transmit the bitstream to the decoding apparatus 1700 to be described later.
Fig. 17 is a configuration diagram of a decoding apparatus according to an embodiment.
The decoding apparatus 1700 may correspond to the decoding apparatus 200 described above.
The decoding apparatus 1700 may include a processing unit 1710, a memory 1730, a User Interface (UI) input device 1750, a UI output device 1760, and a storage 1740 that communicate with each other through a bus 1790. The decoding apparatus 1700 may further include a communication unit 1720 connected to a network 1799.
The processing unit 1710 may be a Central Processing Unit (CPU) or a semiconductor device for executing processing instructions stored in the memory 1730 or the storage 1740. The processing unit 1710 can be at least one hardware processor.
The processing unit 1710 may generate and process a signal, data, or information input to the decoding apparatus 1700, output from the decoding apparatus 1700, or used in the decoding apparatus 1700, and may perform checking, comparison, determination, or the like, with respect to the signal, data, or information. In other words, in embodiments, the generation and processing of data or information, as well as the examination, comparison, and determination related to the data or information, may be performed by the processing unit 1710.
The processing unit 1710 may include the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the switch 245, the adder 255, the filter unit 260, and the reference picture buffer 270.
At least some of the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the adder 255, the switch 245, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 200 may be program modules and may communicate with an external device or system. The program modules may be included in the decoding apparatus 1700 in the form of an operating system, application program modules, or other program modules.
Program modules may be physically stored in various types of well-known memory devices. Furthermore, at least some of the program modules may also be stored in a remote memory storage device that is capable of communicating with the decoding apparatus 1700.
Program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations in accordance with the embodiments or for implementing abstract data types in accordance with the embodiments.
The program modules may be implemented using instructions or code executed by at least one processor of the decoding apparatus 1700.
Processing unit 1710 may execute instructions or code in entropy decoding unit 210, inverse quantization unit 220, inverse transform unit 230, intra prediction unit 240, inter prediction unit 250, switch 245, adder 255, filter unit 260, and reference picture buffer 270.
The storage unit may represent the memory 1730 and/or the storage 1740. Each of the memory 1730 and the storage 1740 may be any of various types of volatile or non-volatile storage media. For example, the memory 1730 may include at least one of ROM 1731 and RAM 1732.
The storage unit may store data or information for the operation of the decoding apparatus 1700. In an embodiment, data or information of the decoding apparatus 1700 may be stored in a storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bitstreams, and the like.
The decoding apparatus 1700 may be implemented in a computer system including a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the decoding apparatus 1700. The memory 1730 may store at least one module and may be configured to cause the at least one module to be executed by the processing unit 1710.
Functions related to communication of data or information of the decoding apparatus 1700 can be performed by the communication unit 1720.
For example, the communication unit 1720 may receive a bitstream from the encoding apparatus 1600.
Region segmentation
In embodiments, methods and apparatuses may be described in which, instead of encoding/decoding a single entire picture as a whole, the picture is divided (segmented) into a plurality of regions and independent encoding/decoding is then performed on the respective regions.
In general, in picture encoding/decoding, prediction and reconstruction may be performed in units of blocks. When a picture is divided into a plurality of regions and independent (i.e., parallel) encoding/decoding operations are performed on the plurality of regions resulting from the division, more efficient decoding can be provided to applications that provide output in a particular viewport, such as 360 ° video.
Furthermore, in an embodiment, such an efficient signaling method may be provided: for dividing a picture into a plurality of regions and performing independent encoding/decoding operations on the respective regions, instead of integrally encoding a single entire picture, in order to perform encoding/decoding on a region of interest (ROI), such as 360 ° video.
In picture encoding/decoding, a virtual boundary in a picture may be used. The image quality of a (reconstructed) picture can be improved by determining whether an in-loop filter is applied to a discontinuous boundary in the picture. In an embodiment, when a single picture is generated by combining the regions generated by division, whether an in-loop filter is applied at this generating step may be determined. Further, information about whether the in-loop filter is applied may be signaled.
The virtual boundary in the screen may be specified based on predetermined virtual boundary information. The virtual boundary information may include at least one of information on the number of virtual boundaries and information on the position of the virtual boundaries. The virtual boundary information may be defined for each of the horizontal direction and the vertical direction.
For example, the virtual boundary number information may include information on the number of horizontal virtual boundaries and the number of vertical virtual boundaries.
The virtual boundary position information may include information about the position of the horizontal virtual boundary (e.g., the y-coordinate of the horizontal virtual boundary) and information about the position of the vertical virtual boundary (e.g., the x-coordinate of the vertical virtual boundary). The position information may be obtained by encoding in units of predefined n samples or encoding in units of a specific segment region. Here, n may be 4, 8, 16 or more.
Such a segment region may be a sprite, a slice, a parallel block, a coding tree block (CTB), or a maximum/minimum coding block (CB).
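The following is a minimal sketch, not the actual standard-defined parsing, of how virtual boundary positions could be recovered when they are coded in units of n samples as described above. The function name, the argument layout, and the fixed grid size are illustrative assumptions.

```python
GRID_UNIT = 8  # n: positions assumed to be signaled in units of 8 luma samples

def derive_virtual_boundaries(num_ver, ver_pos_units, num_hor, hor_pos_units):
    """Return x-coordinates of vertical boundaries and y-coordinates of
    horizontal boundaries, in luma samples."""
    ver_x = [u * GRID_UNIT for u in ver_pos_units[:num_ver]]
    hor_y = [u * GRID_UNIT for u in hor_pos_units[:num_hor]]
    return ver_x, hor_y

# Example: two vertical boundaries signaled as 40 and 80 grid units -> x = 320, 640
print(derive_virtual_boundaries(2, [40, 80], 1, [60]))
```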
The virtual boundary information may be adaptively acquired based on a first flag indicating whether the virtual boundary information is to be signaled. The first flag may be defined as a single flag regardless of horizontal/vertical directions, or may be separately defined for horizontal/vertical directions, similar to virtual boundary information.
The first flag and the virtual boundary information may be information encoded/decoded at a first high level or information encoded/decoded at a second high level. For example, the first high level may represent a video parameter set or a Sequence Parameter Set (SPS). The second high level may be a level lower than the first high level and may represent a picture parameter set, a Picture Header (PH), or a slice header.
For convenience of description, it may be assumed that the first high level and the second high level are a sequence parameter set and a picture header, respectively. The first flag and the virtual boundary information encoded/decoded in the SPS may be designated as SPS information, and the first flag and the virtual boundary information encoded/decoded in the PH may be designated as PH information.
SPS information and PH information may be adaptively signaled based on a flag (hereinafter, referred to as a second flag) indicating whether there is a restriction on applying an in-loop filter to a virtual boundary in a picture.
For example, a case where the second flag has a first value (e.g., "0" or false) may indicate that the in-loop filter is not disabled across virtual boundaries. A case where the second flag has a second value (e.g., "1" or true) may indicate that the in-loop filter may be disabled across virtual boundaries. Accordingly, in the case where the second flag has the first value, the SPS information and the PH information may not be signaled. At least one of the SPS information and the PH information may be signaled only if the second flag has the second value.
In addition, in order to specify a virtual boundary in the target picture, either the SPS information or the PH information may be selectively used. For the selective use, a separate third flag indicating which of the SPS information and the PH information is to be used may be defined, or the first flag included in the SPS information may be used. For example, the third flag may indicate the high level whose first flag and virtual boundary information are to be used to specify a virtual boundary in the target picture.
For example, when virtual boundary information of the SPS level exists according to the first flag of the SPS information, a virtual boundary in the target picture may be specified based on the virtual boundary information of the SPS information. In contrast, when virtual boundary information of the SPS level does not exist according to the first flag of the SPS information, a virtual boundary in the target picture may be specified based on the virtual boundary information of the PH information. However, this specification may be performed on the condition that virtual boundary information of the PH level exists according to the first flag of the PH information.
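A hedged sketch of the selection logic described above: the second flag gates all virtual-boundary signaling, and SPS-level information, when present, takes precedence over picture-header-level information. The argument names are illustrative, not actual syntax element names.

```python
def select_virtual_boundary_info(loop_filter_restricted,      # "second flag"
                                 sps_present, sps_boundaries,  # SPS "first flag" + info
                                 ph_present, ph_boundaries):   # PH "first flag" + info
    if not loop_filter_restricted:
        return None  # no restriction: nothing is signaled and nothing is applied
    if sps_present:
        return sps_boundaries  # SPS-level boundaries apply to the whole sequence
    if ph_present:
        return ph_boundaries   # otherwise each picture may carry its own boundaries
    return None
```

If the SPS carries the boundaries, every picture of the sequence uses them; otherwise each picture header may carry picture-specific boundaries.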
A match of the specified boundary of the target block with the preset virtual boundary may indicate that the specified boundary of the target block is a discontinuous boundary. Therefore, in the case where the specified boundary of the target block matches the preset virtual boundary, there may be a restriction on applying the in-loop filter to the specified boundary of the target block, so that the in-loop filter is not applied to that boundary.
The first flag, the second flag, and the third flag are merely exemplary names. The first flag may be replaced with virtual boundary signaling information. The second flag may be replaced with in-loop filter disable information. The third flag may be replaced with virtual boundary level information.
Fig. 18 illustrates division of a picture in a raster scan stripe mode according to an example.
One picture may be divided into one or more regions.
For example, each region may have a rectangular shape (which need not be square). In other words, one or more rectangular regions may constitute one picture.
Alternatively, each region may have a shape in which a plurality of rectangles are combined with each other. For example, the plurality of mutually combined rectangles may be continuous units following a raster scan order.
In an embodiment, the rectangle may be limited to a square.
The division of the unit may be any one of a stripe, a parallel block, a partition, and a Coding Tree Unit (CTU).
In fig. 18, each band is indicated by a thick solid line. Each parallel block is indicated by a thin solid line. Each CTU is indicated by a dashed line.
A picture can be divided into slices, parallel blocks, and partitions. A partition can be a segment (fragment) that is smaller than a parallel block.
A picture may be divided into one or more stripes. Alternatively, a frame may include one or more stripes.
Information for partitioning into one or more stripes may be signaled through a Network Abstraction Layer (NAL) unit. A NAL unit may include a slice header and slice data.
In fig. 18, a picture can be divided into three slices.
Each picture may be divided into one or more parallel blocks. In dividing a picture into one or more parallel blocks, the number of rows and the number of columns corresponding to the one or more parallel blocks may be used. In other words, the picture may be divided into one or more parallel blocks based on n1 or m1. Here, n1 represents the number of one or more rows corresponding to the one or more parallel blocks, and m1 represents the number of one or more columns corresponding to the one or more parallel blocks.
In fig. 18, a picture is divided into 12 parallel blocks. The first stripe of the picture comprises two parallel blocks. The second stripe of the picture comprises five parallel blocks. The third stripe of the picture comprises five parallel blocks.
Each picture may be divided into one or more CTUs. Each parallel block may include one or more coding tree units (CTUs). Each parallel block may be formed of one or more CTUs corresponding to n2 rows and m2 columns. n2 may be an integer of 1 or more, and m2 may be an integer of 1 or more. Further, a picture may be composed of sets of one or more CTUs, where each set is included in the rectangular region of a parallel block.
In fig. 18, each parallel block includes 18 CTUs. 18 CTUs may be configured using three rows and six columns.
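The following is an illustrative sketch of the counting relations described above: a parallel block spans a number of CTU rows and columns. The values reproduce the Fig. 18 example of 3 rows x 6 columns = 18 CTUs per parallel block; the CTU size and the function name are assumptions.

```python
CTU_SIZE = 128  # assumed CTU size in luma samples

def ctus_in_tile(tile_width, tile_height, ctu_size=CTU_SIZE):
    cols = (tile_width + ctu_size - 1) // ctu_size   # ceiling division
    rows = (tile_height + ctu_size - 1) // ctu_size
    return rows, cols, rows * cols

rows, cols, total = ctus_in_tile(6 * CTU_SIZE, 3 * CTU_SIZE)
print(rows, cols, total)  # 3 6 18
```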
At least part of the encoding of a stripe and/or a parallel block and at least part of the decoding of a stripe and/or a parallel block can be performed independently of other stripes and/or other parallel blocks. For example, encoding may include prediction, transformation, quantization, entropy coding, inverse quantization, inverse transformation, and reconstruction. Decoding may include inverse quantization, inverse transformation, prediction, and reconstruction.
By utilizing these features, stripes and parallel blocks can be used for parallel processing, which is required due to the complexity of the encoding apparatus 1600 and the decoding apparatus 1700, and can also be used to provide a region of interest (ROI) in a picture.
Each parallel block may be divided into one or more partitions. In a parallel block, each partition may be defined as a number of rows of one or more CTUs. In other words, a partition may correspond to a certain number of CTU rows in a parallel block. Thus, a parallel block that is not divided into several partitions may itself be considered a partition. In contrast, a partition corresponding to only a portion of a parallel block is not considered a parallel block.
A stripe may be represented by the number of one or more parallel blocks in a picture. Further, a stripe may be represented by the number of one or more partitions.
Each slice in a picture can be defined by two schemes. The first scheme may be a scheme based on a raster scan stripe pattern. The second scheme may be a scheme based on a rectangular stripe pattern.
In raster scan stripe mode, each stripe can be defined as a set of one or more parallel blocks in the picture that follow a raster scan order. Alternatively, each stripe may be defined as the number of one or more parallel blocks in the screen that follow a raster scan order.
In fig. 18, an example in which a screen is divided into one or more bands in the raster scan band mode is shown.
Fig. 19 illustrates division of a picture in a rectangular stripe mode according to an example.
In fig. 19, each band is indicated by a thick solid line. Each parallel block is indicated by a thin solid line. Each CTU is indicated by a dashed line.
In the rectangular stripe mode, each stripe may be defined by one or more partitions that form a rectangular area in the screen. In other words, one or more tiles in a strip may form a rectangular area. A rectangular stripe may be a collection of one or more tiles that follow a raster scan order.
In fig. 19, an example in which a picture is divided into one or more slices in the rectangular slice mode is shown.
In fig. 19, a picture is divided into 24 parallel blocks and nine rectangular stripes. The 24 parallel blocks comprise six rows and four columns.
As shown in fig. 18 and 19, each stripe may include a plurality of parallel blocks.
FIG. 20 shows parallel blocks, partitions, and rectangular stripes in a picture according to an example.
In fig. 20, each band is indicated by a thick solid line. Each of the parallel blocks is indicated by a thin solid line. Each block is indicated by a dashed line.
In fig. 20, a picture is divided into four parallel blocks. Further, the picture is divided into 11 blocks and four rectangular stripes.
Each parallel block may include one or more tiles.
In fig. 20, the upper left parallel block includes one block. The top right parallel block includes five partitions. The lower left parallel block includes two partitions. The bottom right parallel block includes three partitions.
A picture can be coded using three different color planes. The color plane identifier for each band may indicate the color plane of the respective band. Here, the slice may include only CTUs of a color corresponding to the color plane identifier of the slice. Each color array may be composed of stripes having the same color plane identifier.
As described above with reference to fig. 18 to 20, each stripe may be composed of one or more parallel blocks, or may be composed of one or more partitions within a parallel block. However, one or more complete parallel blocks and one or more partitions that are parts of parallel blocks may not be allowed to be used together to configure a single stripe.
Fig. 21 illustrates a picture coding method according to an embodiment.
In step 2110, the processing unit 1610 of the encoding apparatus 1600 may determine a target sub-picture by dividing the target picture.
The processing unit 1610 may determine a target sprite that is a part of the target picture.
There may be a plurality of target sprites. The plurality of target sprites may constitute the target picture.
In step 2120, the processing unit 1610 may generate encoded target sub-picture information by performing encoding on the target sub-picture.
Here, the encoding may include prediction, transformation, and quantization of the target sub-picture. Prediction, transformation, and quantization may be performed on each of the blocks in the target sprite.
In step 2130, the processing unit 1610 can generate a bitstream comprising the target sprite information.
The processing unit 1610 may generate entropy-encoded target sub-picture information by performing entropy encoding on the target sub-picture information, and may generate a bitstream including the entropy-encoded target sub-picture information.
In addition, the bitstream may include information related to the sub-picture, which will be described later. The sprite-related information may include information required to divide the target picture into target sprites and may include information for generating a reconstructed target picture using the reconstructed target sprite.
The encoding apparatus 1600 may store or transmit the bitstream at step 2140.
For example, the memory 1640 may store a bitstream. Alternatively, the communication unit 1620 may transmit the bit stream to the decoding apparatus 1700.
In step 2150, the processing unit 1610 may generate a reconstructed target sprite by performing decoding using the target sprite information.
The processing unit 1610 can determine a target sprite before a reconstructed target sprite is generated. Here, the target sprite may correspond to sprite related information.
There may be a plurality of reconstructed target sprites.
Here, the decoding may include inverse quantization, inverse transformation, and prediction of the target sub-picture. Inverse quantization, inverse transformation, and prediction may be performed on each block in the target sub-picture. Each reconstructed target sprite may be composed of reconstructed blocks.
In step 2160, the processing unit 1610 may use the reconstructed target sprite to generate a reconstructed target picture.
The processing unit 1610 may generate a reconstructed target picture by applying merging and filtering to the reconstructed target sprite.
The processing unit 1610 may apply filtering to the boundary lines between each reconstructed target sprite and other reconstructed sprites.
The plurality of reconstructed target sprites may constitute a reconstructed target picture.
The processing unit 1610 may generate a reconstructed target picture by applying merging and filtering to the plurality of reconstructed target sprites.
The reconstructed target picture may then be used as a reference picture for encoding/decoding another picture.
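The following is a schematic, non-normative sketch of the encoding flow of Fig. 21 (steps 2110 to 2160). The per-sub-picture "encode"/"decode" steps are trivial stand-ins; real prediction, transform, quantization and entropy coding are out of scope here, and the division into horizontal bands is only an assumption for illustration.

```python
def split_into_subpictures(picture, n):
    """Step 2110: divide a picture (a flat list of sample rows) into n horizontal bands."""
    h = len(picture) // n
    return [picture[i * h:(i + 1) * h] for i in range(n)]

def encode_picture(picture, n_subpics):
    subpics = split_into_subpictures(picture, n_subpics)
    coded, reconstructed = [], []
    for sp in subpics:
        data = [row[:] for row in sp]   # step 2120: "encode" (identity stand-in)
        coded.append(data)              # steps 2130/2140: place into the bitstream
        reconstructed.append(data)      # step 2150: decode again for reference use
    recon_picture = [row for sp in reconstructed for row in sp]  # step 2160: merge
    return coded, recon_picture         # the reconstruction may serve as a reference

pic = [[0] * 8 for _ in range(8)]
bitstream, recon = encode_picture(pic, n_subpics=2)
```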
Fig. 22 illustrates a picture decoding method according to an embodiment.
In step 2210, the decoding apparatus 1700 may obtain a bitstream.
For example, the communication unit 1720 may receive a bitstream from the encoding device 1600. Alternatively, the memory 1740 may provide a bitstream stored in the decoding apparatus 1700 to the processing unit 1710.
The bitstream may include target sprite information. Alternatively, the bitstream may include entropy-encoded target sub-picture information.
The bitstream may include information related to the sub-picture. The sub-picture related information may include division (segmentation) information.
In step 2220, the processing unit 1710 may determine a target sub-picture by dividing the target picture using the division information.
The processing unit 1710 can use the division information to determine a target sub-picture that is part of the target picture.
In step 2230, the processing unit 1710 can acquire the target sub-picture information from the bitstream.
The processing unit 1710 may generate target sub-picture information by performing entropy decoding on entropy-encoded target sub-picture information of the bitstream.
In step 2230, the processing unit 1710 may generate a reconstructed target sub-picture for the target sub-picture by performing decoding using the target sub-picture information.
There may be a plurality of reconstructed target sprites.
Here, the decoding may include inverse quantization, inverse transformation, and prediction of the target sub-picture. Inverse quantization, inverse transformation, and prediction may be performed on each of the blocks in the target sub-picture. Each reconstructed target sprite may be composed of reconstructed blocks.
In step 2240, the processing unit 1710 may generate a reconstructed target picture using the reconstructed target sprite.
The processing unit 1710 may generate a reconstructed target picture by applying merging and filtering to the reconstructed target sprite.
The processing unit 1710 may apply filtering to the boundary lines between each reconstructed target sprite and other sprites.
The plurality of reconstructed target sprites may constitute a reconstructed target picture.
The processing unit 1710 may generate a reconstructed target picture by applying merging and filtering to the plurality of reconstructed target sprites.
The reconstructed target picture may then be used as a reference picture for encoding/decoding another picture.
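The following is a schematic sketch of the decoding flow of Fig. 22 (steps 2230 and 2240), mirroring the encoder sketch above. Sub-picture decoding is again a stand-in, and the boundary "filter" below is a simple averaging across each horizontal join, used only as a placeholder; a real codec would apply a deblocking/SAO/ALF filter instead.

```python
def decode_picture(subpic_payloads, apply_boundary_filter=True):
    # step 2230: "decode" each sub-picture payload into reconstructed sample rows
    reconstructed = [[row[:] for row in payload] for payload in subpic_payloads]
    # step 2240: merge the reconstructed sub-pictures into one picture
    picture = [row for sp in reconstructed for row in sp]
    if apply_boundary_filter:
        y = 0
        for sp in reconstructed[:-1]:
            y += len(sp)
            for x in range(len(picture[y])):
                avg = (picture[y - 1][x] + picture[y][x]) // 2
                picture[y - 1][x] = picture[y][x] = avg
    return picture

print(decode_picture([[[10] * 4, [10] * 4], [[20] * 4, [20] * 4]]))
```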
Hereinafter, certain operations may be performed by the encoding apparatus 1600 and the decoding apparatus 1700 together. The processing unit may represent the processing unit 1610 of the encoding apparatus 1600 and the processing unit 1710 of the decoding apparatus 1700.
Hereinafter, the target screen may be simply designated as a screen. The target sprite may simply be designated as the sprite.
Fig. 23 illustrates division of a picture according to an example.
Dividing an image into a plurality of regions and performing independent encoding/decoding on each of the plurality of regions may be more efficient than performing encoding/decoding on a single entire image. For example, parallel encoding/decoding operations for multiple regions may be performed more efficiently.
In the embodiment, "image", "picture", "slice", and "frame" as the division target may have the same meaning. For example, each "region" resulting from the partitioning may be a "sprite", "sub-slice", or "sub-frame". Alternatively, a "region" may be a parallel block or a group of parallel blocks. The set of parallel blocks may represent one or more parallel blocks.
In other words, the term "image" may be a unit composed of one or more specific areas. The one or more specific regions may be one or more sub-pictures, one or more sub-stripes, one or more sub-frames, one or more parallel blocks, or one or more groups of parallel blocks.
For example, as shown in fig. 23, an image may be a picture, and the picture may include a plurality of sub-pictures.
As shown in fig. 23, the encoding/decoding operations for a plurality of sub-pictures may be independently performed.
The processing unit may divide the picture into one or more sub-pictures using the division information of the image.
The division information of the picture may include the number of one or more sub-pictures. Further, the division information may include 1) a start position, 2) a width, and 3) a height of each of the one or more sub-pictures.
The processing unit may use the partition information to configure one or more sub-pictures of the picture such that the one or more sub-pictures have a particular size.
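A small sketch, under assumed field names, of how the division information listed above (start position, width, and height per sub-picture) could be turned into concrete rectangles inside the picture. Nothing here reflects actual syntax element names.

```python
from typing import List, NamedTuple

class SubPicture(NamedTuple):
    x: int       # start position (left)
    y: int       # start position (top)
    width: int
    height: int

def configure_subpictures(division_info: List[dict]) -> List[SubPicture]:
    return [SubPicture(d["start_x"], d["start_y"], d["width"], d["height"])
            for d in division_info]

# Example: a 1280x640 picture split into two side-by-side sub-pictures.
subpics = configure_subpictures([
    {"start_x": 0,   "start_y": 0, "width": 640, "height": 640},
    {"start_x": 640, "start_y": 0, "width": 640, "height": 640},
])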
Each sprite may include one or more stripes, one or more parallel blocks, or one or more tiles.
The division information of the picture may include division information of each of one or more sub-pictures of the picture.
The division information of each sub-picture may include: 1) slice information for each sprite, 2) parallel block information for each sprite, and/or 3) blocking information for each sprite. The processing unit may configure one or more sub-pictures in the image using the division information of each sub-picture.
The slice information of each sprite may specify one or more slices constituting the corresponding sprite. The slice information of each sprite may include: 1) a number of one or more slices constituting the corresponding sprite, 2) a position of one or more slices constituting the corresponding sprite, and/or 3) an index of one or more slices constituting the corresponding sprite.
The parallel block information of each sprite may specify one or more parallel blocks constituting the corresponding sprite. For example, the parallel block information of each sub-picture may include 1) the number of one or more parallel blocks constituting the corresponding sub-picture, 2) the position of one or more parallel blocks constituting the corresponding sub-picture, 3) the size (or width and height) of one or more parallel blocks constituting the corresponding sub-picture, and/or 4) the index of one or more parallel blocks constituting the corresponding sub-picture.
The partition information of each sprite may specify one or more partitions constituting the corresponding sprite. For example, the blocking information of each sub-picture may include 1) the number of one or more blocks constituting the corresponding sub-picture, 2) the size (or width and height) of one or more blocks constituting the corresponding sub-picture, and/or 3) an index of one or more blocks constituting the corresponding sub-picture.
The processing unit 1610 of the encoding apparatus 1600 may divide a picture into one or more sub-pictures. The processing unit 1610 may generate division information required in order to divide a picture into one or more sub-pictures. The partition information may be signaled from the encoding apparatus 1600 to the decoding apparatus 1700 by a bitstream.
Fig. 24 shows a sprite to which padding is applied according to an example.
As described above, independent encoding/decoding may be performed for each of one or more sub-pictures in a picture. This independence can mean: when a specific sub-picture is encoded/decoded, other sub-pictures (e.g., adjacent sub-pictures) are not referred to. In other words, encoding/decoding of sub-pictures can be performed without interaction between the sub-pictures.
The processing unit may apply padding to the boundary line of each of the one or more sub-pictures because adjacent sub-pictures cannot be referenced.
When each sprite occupies a certain area in the picture, the filling can be performed by considering (virtual) pixels that exist even outside this area of the sprite. Thus, by padding, samples outside the area of the sub-picture can be used to perform encoding/decoding on the sub-picture.
When the size of the sprite is m × n and the coordinates of the upper left part of the sprite in the picture are (p, q), the size of the sprite to which padding is applied may be (m + 2a) × (n + 2a), and the coordinates of the upper left part of the sprite to which padding is applied may be (p - a, q - a). m may be an integer of 1 or more. n may be an integer of 1 or more. p may be an integer of 0 or more. q may be an integer of 0 or more. "a" may be an integer of 1 or more.
In other words, the filling of the boundary lines of the sub-picture may be configured to expand the sub-picture by "a" pixels 1) upward from the top boundary of the sub-picture, 2) leftward from the left boundary of the sub-picture, 3) rightward from the right boundary of the sub-picture, and 4) downward from the bottom boundary of the sub-picture.
A specific method may be used to determine the value of the pixel added to the sprite by the padding. For example, the value of the pixel added to the sprite by the padding may be set to the value of the pixel in the sprite that is closest thereto.
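The following is a minimal sketch of boundary padding by nearest-sample replication, following the size relation above: an m × n sub-picture padded by "a" samples on every side becomes (m + 2a) × (n + 2a), and each added sample copies the closest sample inside the sub-picture. The representation of a sub-picture as a list of sample rows is an assumption.

```python
def pad_subpicture(subpic, a):
    """subpic: list of rows (each a list of samples); a: padding width per side."""
    padded_rows = []
    for row in subpic:
        padded_rows.append([row[0]] * a + row + [row[-1]] * a)   # left/right replication
    top = [padded_rows[0][:] for _ in range(a)]                  # top replication
    bottom = [padded_rows[-1][:] for _ in range(a)]              # bottom replication
    return top + padded_rows + bottom

sp = [[1, 2], [3, 4]]          # 2 x 2 sub-picture
padded = pad_subpicture(sp, 1)
print(len(padded), len(padded[0]))  # 4 4, i.e. (2 + 2*1) x (2 + 2*1)
```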
After padding has been applied to the boundary line of the sub-picture, the processing unit may perform specific encoding/decoding on the sub-picture to which padding has been applied. For example, the particular encoding may include prediction, transform, quantization, entropy encoding, inverse quantization, and inverse transform. The particular decoding may include inverse quantization, inverse transformation, prediction, and reconstruction.
The pixels added by padding may be used only for encoding/decoding the sub-picture. In other words, it can be considered that the pixels added to the sub-picture by padding are usable only for encoding/decoding the sub-picture. Further, when the sprite is reconstructed, the pixels added by the padding may be discarded. Alternatively, when merging, pixels added by filling may be excluded, which will be described later.
For example, the processing unit may perform motion estimation and motion compensation on the sprite to which the padding is applied.
The filling of the boundary line for the sprite may be selectively performed. The processing unit may use the padding information of the sprite to determine whether to apply padding to the boundary line of the sprite.
The padding information may indicate whether padding is to be applied to the boundary line of the sprite. For example, the padding information may include a flag indicating whether padding is to be applied to the boundary line of the sprite. Alternatively, the padding information may include an index indicating a method for padding the boundary line of the sprite.
The padding information may be commonly applied to one or more sub-pictures of a picture. Alternatively, the padding information may be applied to the respective sprites individually.
The padding information may indicate a width (or height) "a" of a pixel to be padded to one boundary line of each sprite.
The processing unit 1610 of the encoding apparatus 1600 may determine whether padding is to be applied to each boundary line of the picture. The processing unit 1610 may generate padding information indicating whether padding is to be applied to the boundary line of the picture. The padding information may be signaled from the encoding apparatus 1600 to the decoding apparatus 1700 through a bitstream.
Sharing of reference pictures
One or more of the sub-pictures may share a reference picture. Alternatively, one or more sub-pictures may use separate reference pictures.
The processing unit may determine whether one or more sub-pictures will share a reference picture based on the reference picture sharing information. Here, the shared reference picture may represent a shared reference picture list.
For example, the reference picture sharing information may be a reference picture sharing flag, a reference picture sharing list, or a reference picture sharing index.
For example, when the value of the reference picture sharing flag of the sub-picture has a first value (e.g., "0"), the sub-picture may have a separate reference picture. When the value of the reference picture sharing flag of the sub-picture is a second value (e.g., "1"), the sub-picture may share the reference picture of the additional sub-picture with the additional sub-picture. Here, the additional sprite may be a sprite preceding the corresponding sprite. That is, a single reference picture list may be shared between adjacent sub-pictures.
For example, when the value of the reference picture sharing flag of the first sub-picture is a first value, the first sub-picture may have a reference picture list, and the reference picture list for the first sub-picture may be configured. When each of the values of the reference picture sharing flags of the second, third, and fourth sub-pictures is a second value, the reference picture list of the first sub-picture can be shared by the second, third, and fourth sub-pictures.
For example, the reference picture sharing list may indicate sub-pictures that share a reference picture. The reference picture sharing list may include identifiers of sub-pictures that share the reference picture. The identifier of the reference picture may be an index of the reference picture.
For example, the reference picture sharing list may have one or more sharing indicators. The one or more sharing indicators may correspond to one or more sub-pictures, respectively. The order of the sharing indicators in the reference picture sharing list may correspond to the index of the sub-picture. When the value of the sharing indicator is a first value (e.g., "0"), the sub-picture corresponding to the sharing indicator may not share the reference picture of the first sub-picture. When the value of the sharing indicator is a second value (e.g., "1"), the sub-picture corresponding to the sharing indicator may share the reference picture of the first sub-picture.
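The following is a hedged sketch of the reference-picture-sharing flag described above: a sub-picture whose flag is 0 builds its own reference picture list, while a sub-picture whose flag is 1 reuses the list of a preceding sub-picture. Function and variable names are illustrative.

```python
def assign_reference_lists(sharing_flags, build_list):
    """sharing_flags[i] is 0 or 1 for sub-picture i; build_list(i) constructs a
    separate reference picture list for sub-picture i."""
    lists, last_own = [], None
    for i, flag in enumerate(sharing_flags):
        if flag == 0 or last_own is None:
            last_own = build_list(i)   # this sub-picture keeps its own list
        lists.append(last_own)         # flag == 1: reuse the preceding list
    return lists

# Sub-pictures 1-3 share the list built for sub-picture 0:
ref_lists = assign_reference_lists([0, 1, 1, 1], build_list=lambda i: [f"ref_pic_of_{i}"])
```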
Fig. 25 illustrates filtering across boundary lines of sprites in accordance with merging of sprites according to an example.
The processing unit may merge one or more sub-pictures into a single picture. For example, when one or more sub-pictures are reconstructed, the processing unit may generate a reconstructed picture by combining the one or more reconstructed sub-pictures.
In embodiments, the merging may be conceptual. In other words, a reconstructed sub-picture can be generated for an area occupied by the sub-picture by performing encoding/decoding on the sub-picture in the picture. A reconstructed picture may be generated by generating one or more reconstructed sub-pictures (sequentially or in parallel) for one or more sub-pictures.
In an embodiment, merging and filtering the sub-pictures may mean merging and filtering the reconstructed sub-pictures.
When merging one or more sub-pictures into a single picture, the processing unit may perform filtering across a boundary line between the sub-pictures. The filtering may improve the image quality of the picture by reducing a difference occurring at a boundary line between the sub-pictures.
Hereinafter, filtering each sub-picture may mean filtering at least one of one or more boundary lines crossing the sub-picture.
The filter used to perform the filtering may be a deblocking filter, a Sample Adaptive Offset (SAO) filter, an Adaptive Loop Filter (ALF), or a non-local filter (NLF). Furthermore, one or more filters may be used for filtering.
The filtering of each sprite may be selectively performed. The processing unit may use the filtering information of the sub-picture to determine whether filtering is to be applied to the sub-picture.
The processing unit may determine whether filtering is to be applied to a boundary line between the sub-pictures using the filtering information.
The filtering information may indicate whether filtering is to be applied to a boundary line between the sub-pictures. For example, the filtering information may include a flag indicating whether filtering is to be applied to a boundary line between sub-pictures. Alternatively, the filtering information may include an index indicating a method for filtering the boundary line between the sub-pictures.
The filtering information may be commonly applied to a boundary line between one or more sub-pictures in the picture. Alternatively, the filtering information may be applied to a boundary line between specific sprites. Alternatively, the filtering information may be applied to the boundary line of a specific sprite. Alternatively, the filter information may be applied to a specific boundary line.
The filtering information may indicate a boundary line to which filtering is to be applied. For example, the filter information may indicate coordinates of a boundary line to which the filter is to be applied. Alternatively, the coordinates indicated by the filter information may indicate the boundary line.
The filtering information may indicate each sub-picture to which filtering is to be applied. For example, one or more pieces of filtering information may be used for one or more sub-pictures in a picture, respectively.
The filtering information may indicate adaptive decisions regarding filtering. The processing unit may adaptively decide whether filtering is to be applied to the boundary line indicated by the filtering information. Here, the adaptation decision may aim at determining whether or not filtering is to be applied based on coding parameters related to the sub-picture and/or the boundary line, or may aim at setting a default value for filtering information using coding parameters related to the sub-picture and/or the boundary line.
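A sketch of how per-boundary filtering information could gate the filtering step described above. The field names are illustrative only; the actually signaled syntax elements are listed in the high-level-syntax sections that follow.

```python
def boundaries_to_filter(subpic_boundaries, filtering_info):
    """subpic_boundaries: list of boundary identifiers (e.g. y-coordinates of
    horizontal joins); filtering_info: dict with an overall enable flag and an
    optional per-boundary map."""
    if not filtering_info.get("enabled", False):
        return []
    per_boundary = filtering_info.get("per_boundary", {})
    return [b for b in subpic_boundaries if per_boundary.get(b, True)]

# Filtering enabled globally, but disabled for the boundary at y = 640:
print(boundaries_to_filter([320, 640], {"enabled": True, "per_boundary": {640: False}}))
```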
Fig. 26 illustrates a picture divided into six sub-pictures according to an example.
When a 360 ° image (video) is configured by Cube Map Projection (CMP), each face of the 360 ° image can be defined and processed as a sprite.
For example, when the picture is divided into the first to sixth sub-pictures, the first sub-picture may be a right sub-picture. The second sprite may be a front-side sprite. The third sprite may be a left sprite. The fourth sprite may be a bottom sprite. The fifth sprite may be a rear-side sprite. The sixth sprite may be the top sprite.
The number of faces may vary depending on the projection method. Therefore, the number of sprites may vary according to the projection method. Further, a sprite may correspond to a specific area other than a face. That is, an area other than each face may also be defined as a sprite.
Fig. 27 illustrates padding of six sprites according to an example.
The padding operation may be applied to the six sprites described above with reference to fig. 26. After the padding operation has been applied, the encoding/decoding operations for the six sub-pictures may be performed (independently or in parallel). In other words, the encoding/decoding operations may be performed on six planes (independently or in parallel).
According to the setting, the padding operation may not be applied to the six sub-pictures, and the encoding/decoding operation may be performed on the six sub-pictures (independently or in parallel) without padding.
Fig. 28 illustrates filtering performed after combining six sub-pictures according to an example.
After the processing of the six sprites described above with reference to fig. 27 has been performed, the six sprites may be merged with each other, and filtering may be applied to the six sprites.
Six sprites may be merged into a single picture, and filtering may be performed across a boundary line between the sprites.
According to the setting, the filtering may not be applied to the boundary line between the sprites.
High level syntax related to picture division
Fig. 29 illustrates a first syntax for providing division information of a picture according to an example.
The syntax elements may be parsed according to the syntax shown in fig. 29.
The above-described division information may include the following syntax elements 1) to 6).
1) single_pic_flag: Information indicating whether a picture is to be divided into one or more sub-pictures. (or a flag indicating whether a picture is to be divided into one or more sub-pictures)
2) num_sub_pic_col_minus1: Information indicating the number of columns corresponding to one or more sub-pictures generated by the division. (or the number of columns corresponding to one or more sub-pictures resulting from the division, minus 1)
3) num_sub_pic_row_minus1: Information indicating the number of rows corresponding to one or more sub-pictures generated by the division. (or the number of rows corresponding to one or more sub-pictures resulting from the division, minus 1)
4) uniform_sub_pic_spacing_flag: Information indicating whether the one or more sub-pictures generated by the division have a uniform size. (or a flag indicating whether the one or more sub-pictures generated by the division have a uniform size)
5) sub_pic_width_minus1[i]: Information indicating the width of the corresponding sub-picture. (or the width of the sub-picture with index i)
6) sub_pic_height_minus1[i]: Information indicating the height of the corresponding sub-picture. (or the height of the sub-picture with index i)
The index of the sub-picture may be the number of the corresponding sub-picture. The index may start at 0.
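Below is a hedged reconstruction, in plain Python, of the parsing order implied by the first syntax (Fig. 29). The bit-level reader is abstracted as a "read" callable, "_minus1" elements are stored with the +1 already applied, and the conditional structure (nothing more parsed when the picture is not divided; explicit widths per column and heights per row only when the spacing is not uniform) is an assumption rather than a statement of the actual syntax.

```python
def parse_division_info_first_syntax(read):
    info = {"single_pic_flag": read("single_pic_flag")}
    if info["single_pic_flag"]:
        return info  # picture not divided: nothing more to parse (assumed behavior)
    info["num_cols"] = read("num_sub_pic_col_minus1") + 1
    info["num_rows"] = read("num_sub_pic_row_minus1") + 1
    info["uniform_spacing"] = read("uniform_sub_pic_spacing_flag")
    if not info["uniform_spacing"]:
        info["widths"] = [read("sub_pic_width_minus1") + 1 for _ in range(info["num_cols"])]
        info["heights"] = [read("sub_pic_height_minus1") + 1 for _ in range(info["num_rows"])]
    return info

# Example: a 2x1 grid with uniform spacing.
values = iter([0, 1, 0, 1])  # single_pic_flag=0, cols-1=1, rows-1=0, uniform=1
print(parse_division_info_first_syntax(lambda name: next(values)))
```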
Fig. 30 illustrates a second syntax for providing division information of a picture according to an example.
The syntax elements may be parsed according to the syntax shown in fig. 30.
The above-described division information may include the following syntax elements 1) to 4).
1) single_pic_flag: Information indicating whether a picture is to be divided into one or more sub-pictures. (or a flag indicating whether a picture is to be divided into one or more sub-pictures)
2) num_sub_pic_minus1: Information indicating the number of one or more sub-pictures generated by the division. (or the number of one or more sub-pictures resulting from the division, minus 1)
3) sub_pic_boundary_pos_x[i]: Information indicating the x-coordinate of the corresponding sub-picture. (or the x-coordinate of the sub-picture with index i)
4) sub_pic_boundary_pos_y[i]: Information indicating the y-coordinate of the corresponding sub-picture. (or the y-coordinate of the sub-picture with index i)
Here, the x-coordinate and the y-coordinate of the sprite may be coordinates of an upper left portion of the sprite.
The index of the corresponding sprite may be the number of the sprite. The index may start at 0.
Fig. 31 illustrates a third syntax for providing division information of a picture according to an example.
The syntax elements may be parsed according to the syntax shown in fig. 31.
The above-described division information may include the following syntax elements 1) to 5).
1) single_pic_flag: Information indicating whether a picture is to be divided into one or more sub-pictures. (or a flag indicating whether a picture is to be divided into one or more sub-pictures)
2) num_ver_sub_pic_minus1: Information indicating the number of vertical columns corresponding to one or more sub-pictures. (or the number of vertical columns corresponding to one or more sub-pictures, minus 1)
3) sub_pic_boundary_pos_x[i]: Information indicating the x-coordinate of the vertical column. (or the x-coordinate of the sub-picture in the vertical column with index i)
4) num_hor_sub_pic_minus1: Information indicating the number of horizontal lines corresponding to one or more sub-pictures. (or the number of horizontal lines corresponding to one or more sub-pictures, minus 1)
5) sub_pic_boundary_pos_y[i]: Information indicating the y-coordinate of the horizontal line. (or the y-coordinate of the sub-picture in the horizontal line with index i)
Here, the x-coordinate and the y-coordinate of the sprite may be coordinates of an upper left portion of the sprite.
The index of the vertical column may be a number of the vertical column corresponding to one or more sub-pictures. The index may start at 0.
The index of the horizontal line may be a number of the horizontal line corresponding to one or more sub-pictures. The index may start at 0.
High level syntax of sprite header
The sprite header may include sprite definition information related to the definition of the sprite. The sprite definition information may include the following syntax elements 1) to 6).
1) sub_pic_id: An identifier of the sub-picture.
2) single_tile_in_sub_pic_flag: Information (flag) indicating whether a single parallel block is to be applied to the sub-picture (or information (flag) indicating whether a parallel block is to be used in the sub-picture)
3) sub_pic_width: The width of the sub-picture.
4) sub_pic_qp_offset: A quantization parameter (QP) offset for the luma component of the sub-picture.
5) sub_pic_cb_qp_offset: A QP offset for the Cb chroma component of the sub-picture.
6) sub_pic_cr_qp_offset: A QP offset for the Cr chroma component of the sub-picture.
Further, the sprite definition information may include coordinates of a boundary of the sprite. The sprite definition information may include information on parallel blocks in the sprite.
High level syntax for padding and filtering
Syntax for padding and filtering, which will be described later, may be used together with the aforementioned syntax related to the division of the picture. Alternatively, the syntax for padding and filtering may be used independently.
The padding information may include the following syntax elements.
sub_pic_padding_flag: Information for determining whether padding is to be performed.
The filtering information may include the following syntax elements.
sub_pic_loop_filtering_disable_flag: Information (flag) indicating whether filtering is to be applied to the boundary lines of the sub-picture (or information (flag) indicating whether filtering is disabled for the sub-picture)
High level syntax for sharing information between sprites
Information required for encoding/decoding a sub-picture can be shared between sub-pictures.
The sharing information may indicate whether a sub-picture is to share information of another sub-picture. The sharing information may include the above-described reference picture sharing information.
The shared information may include the following syntax elements 1) and 2).
1) sub_pic_refer_flag: Information indicating whether an additional sub-picture is to be referred to. (or a flag indicating whether an additional sub-picture is to be referenced)
2) sub_pic_refer_id: An identifier of the sub-picture to be referred to. (or the index of the sub-picture that is the reference target)
Alternatively, the shared information may include information on additional sub-pictures referring to the corresponding sub-picture. The shared information may include information indicating a reference relationship between the sub-pictures.
Signalling of information
The aforementioned information related to the sprite can be signaled through a specific structure in the bitstream. Furthermore, the aforementioned information related to the sprite can be (collectively) applied to an object that refers to or uses a specific structure.
For example, the plurality of pieces of information related to the sprite may include division information, padding information, sharing information, filtering information, and sprite definition information.
For example, the specific structure may include a Video Parameter Set (VPS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), a Decoding Parameter Set (DPS), an Adaptive Parameter Set (APS), a slice header, and a sub-picture header.
The information signaled through a specific structure can be applied to pictures that reference that structure. For example, the sub-picture related information in the VPS can be applied to a picture that refers to the VPS. The sub-picture related information in the SPS may be applied to pictures that reference the SPS. The sub-picture related information in the PPS may be applied to a picture that refers to the PPS.
Based on the standard principle of encoding/decoding, the sub-picture related information in the structure may have a higher priority as the range to which the structure is applied is narrowed. For example, when the PPS referred to by the picture includes the first information and the SPS referred to by the picture includes the second information, the first information may be used with higher priority.
Two or more structures may be used for signal transmission. For example, some of the plurality of pieces of sub-picture related information may be signaled by a first structure, and the rest of the sub-picture related information may be signaled by a second structure.
Some of the pieces of sub-picture related information may be selectively signaled. For example, when the division information in the specific structure indicates that the picture is to be divided into one or more sub-pictures, the specific structure may include padding information, sharing information, filtering information, and sub-picture definition information. When the division information in the specific structure indicates that the picture is not divided into one or more sub-pictures, the specific structure may not include the padding information, the sharing information, the filtering information, and the sub-picture definition information.
Some of the plurality of pieces of sub-picture related information may be adaptively decided. For example, rather than explicitly signaling the sub-picture related information through a bitstream, the sub-picture related information may be determined based on coding parameters related to the sub-picture. Alternatively, when the sub-picture related information is not explicitly signaled through a bitstream, a default value for the sub-picture related information may be set.
For example, the coding parameters related to the sub-pictures may include 1) the type of each sub-picture, 2) the absolute position of each sub-picture, 3) the relative position of each sub-picture, 4) the size of each sub-picture, 5) the width of each sub-picture, 6) the height of each sub-picture, 7) the components of each sub-picture, 8) the number of sub-pictures, 9) the number of slices in each sub-picture, and so on.
The above-described embodiments may be performed by the encoding apparatus 1600 and the decoding apparatus 1700 using the same and/or corresponding methods as each other. Further, for encoding and/or decoding of images, combinations of one or more of the above embodiments may be used.
The order in which the embodiments are applied in the encoding apparatus 1600 and the decoding apparatus 1700 may be different from each other. Alternatively, the order in which the embodiments are applied in the encoding apparatus 1600 and the decoding apparatus 1700 may be (at least partially) the same as each other.
The embodiment may be performed on each of the luminance signal and the chrominance signal. Embodiments may be equally performed on luminance signals and chrominance signals.
The form of the block to which embodiments of the present disclosure are applied may have a square or non-square shape.
The embodiments of the present disclosure may be applied according to the size of at least one of a target block, an encoding block, a prediction block, a transform block, a current block, an encoding unit, a prediction unit, a transform unit, a unit, and a current unit. Here, the size may be defined as a minimum size and/or a maximum size such that the embodiment is applied, and may be defined as a fixed size to which the embodiment is applied. Further, in the embodiments, the first embodiment may be applied to the first size, and the second embodiment may be applied to the second size. That is, the embodiments can be compositely applied according to the size. Further, the embodiments of the present disclosure may be applied only to the case where the size is equal to or greater than the minimum size and less than or equal to the maximum size. That is, embodiments may only be applied to cases where block sizes fall within a particular range.
Further, the embodiments of the present disclosure may be applied only to the case where the condition that the size is equal to or greater than the minimum size and the condition that the size is less than or equal to the maximum size are satisfied, wherein each of the minimum size and the maximum size may be the size of one of the blocks described in the above embodiments and the units described in the above embodiments. That is, a block that is a target of the minimum size may be different from a block that is a target of the maximum size. For example, embodiments of the present disclosure may be applied only to the case where the size of the target block is equal to or greater than the minimum size of the block and less than or equal to the maximum size of the block.
For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 8 × 8. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 16 × 16. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 32 × 32. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 64 × 64. For example, the embodiment can be applied only to the case where the size of the target block is equal to or larger than 128 × 128. For example, the embodiment can be applied only to the case where the size of the target block is 4 × 4. For example, the embodiments may be applied only to the case where the size of the target block is less than or equal to 8 × 8. For example, the embodiments may be applied only to the case where the size of the target block is less than or equal to 16 × 16. For example, the embodiments can be applied only to the case where the size of the target block is equal to or larger than 8 × 8 and smaller than or equal to 16 × 16. For example, the embodiments can be applied only to the case where the size of the target block is equal to or larger than 16 × 16 and smaller than or equal to 64 × 64.
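A small sketch of the size-range condition described above: an embodiment is applied only when the target block size lies between a minimum and a maximum. Treating the smaller of the width and height as "the size" is only one possible interpretation, used here for illustration.

```python
def embodiment_applicable(width, height, min_size=8, max_size=64):
    size = min(width, height)  # one possible notion of "block size" (assumption)
    return min_size <= size <= max_size

print(embodiment_applicable(16, 16))   # True for the 8x8..64x64 range
print(embodiment_applicable(4, 4))     # False
```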
Embodiments of the present disclosure may be applied according to temporal layers. To identify the temporal layer to which an embodiment is applicable, a separate identifier may be signaled, and an embodiment may be applied to a temporal layer specified by the corresponding identifier. Here, the identifier may be defined as the lowest (bottom) layer and/or the highest (top) layer to which the embodiment is applicable, and may be defined to indicate a specific layer to which the embodiment is applied. In addition, fixed time layers for application embodiments may also be defined.
For example, the embodiment can be applied only to a case where the temporal layer of the target image is the lowermost layer. For example, the embodiments may be applied only to the case where the temporal layer identifier of the target image is equal to or greater than 1. For example, the embodiment may be applied only to a case where the temporal layer of the target image is the highest layer.
A stripe type or a parallel block group type to which the embodiments of the present disclosure are applied may be defined, and the embodiments of the present disclosure may be applied according to the corresponding stripe type or parallel block group type.
In the above-described embodiments, when it has been described that a specific process is applied to a specific target on the condition that a specific condition is satisfied, or that the specific process is performed under a specific determination, and that whether the specific condition is satisfied or whether the specific determination is made is decided based on a specific encoding parameter, the specific encoding parameter may be replaced with an additional encoding parameter. In other words, the encoding parameter that affects the specific condition or the specific determination may be considered merely exemplary, and it may be understood that a combination of one or more other encoding parameters, in addition to the specific encoding parameter, is used as the specific encoding parameter.
In the above-described embodiments, although the method has been described based on the flowchart as a series of steps or units, the present disclosure is not limited to the order of the steps, and some steps may be performed in an order different from that of the described steps or simultaneously with other steps. Furthermore, those skilled in the art will understand that: the steps shown in the flowcharts are not exclusive and may also include other steps, or one or more steps in the flowcharts may be deleted without departing from the scope of the present disclosure.
The above-described embodiments include examples of various aspects. Although not all possible combinations for indicating the various aspects may be described, a person skilled in the art will appreciate that other combinations are possible than those explicitly described. Accordingly, it is to be understood that the present disclosure includes other substitutions, alterations, and modifications as fall within the scope of the appended claims.
The above-described embodiments according to the present disclosure may be implemented as programs that can be executed by various computer devices, and may be recorded on a computer-readable storage medium. Computer readable storage media may include program instructions, data files, and data structures, alone or in combination. The program instructions recorded on the storage medium may be specially designed and configured for the present disclosure, or may be known or available to those having ordinary skill in the computer software art.
Computer-readable storage media may include information used in embodiments of the present disclosure. For example, a computer-readable storage medium may include a bitstream, and the bitstream may include information described above in embodiments of the present disclosure.
The computer-readable storage medium may include a non-transitory computer-readable medium.
Examples of the computer-readable storage medium may include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media (e.g., hard disks, floppy disks, and magnetic tapes), optical media (e.g., compact disc (CD)-ROMs and digital versatile discs (DVDs)), magneto-optical media, ROM, RAM, and flash memory. Examples of program instructions include both machine code, such as that created by a compiler, and high-level language code that may be executed by the computer using an interpreter. The hardware devices may be configured to operate as one or more software modules to perform the operations of the present disclosure, and vice versa.
As described above, although the present disclosure has been described based on specific details (such as detailed components and a limited number of embodiments and drawings), which are provided only for easy understanding of the entire disclosure, the present disclosure is not limited to these embodiments, and those skilled in the art will practice various changes and modifications according to the above description.
Therefore, it is to be understood that the spirit of the present embodiments is not limited to the above-described embodiments, and that the appended claims and their equivalents and modifications fall within the scope of the present disclosure.

Claims (20)

1. A decoding method, comprising:
determining a target sprite that is a portion of the target picture;
generating a reconstructed target sprite for the target sprite; and
generating a reconstructed target picture using the reconstructed target sprite.
2. The decoding method according to claim 1, wherein in generating the reconstructed target picture, filtering is applied to the reconstructed target sub-picture.
3. The decoding method according to claim 2, wherein the filtering is applied to a boundary line between the reconstructed target sub-picture and a further sub-picture.
4. The decoding method of claim 2, wherein the filter in the filtering is a loop filter.
5. The decoding method of claim 2, wherein:
for the filtering, filtering information is used, and
the filtering information includes information indicating whether filtering of the target sprite is to be disabled.
6. The decoding method of claim 2, wherein:
the filtering information is included in a sequence parameter set SPS, and
the filtering information is applied to pictures that reference the SPS.
7. The decoding method of claim 6, wherein a default value for the filtering information is set when the filtering information is not explicitly signaled by a bitstream.
8. An encoding method, comprising:
determining a target sub-picture that is a portion of a target picture;
generating a reconstructed target sub-picture for the target sub-picture; and
generating a reconstructed target picture using the reconstructed target sub-picture.
9. The encoding method according to claim 8, wherein in generating the reconstructed target picture, filtering is applied to the reconstructed target sub-picture.
10. The encoding method according to claim 9, wherein said filtering is applied to a boundary line between said reconstructed target sub-picture and a further sub-picture.
11. The encoding method of claim 9, wherein the filter in the filtering is a loop filter.
12. The encoding method of claim 9, wherein:
filtering information for the filtering is generated, and
the filtering information includes information indicating whether filtering of the target sub-picture is to be disabled.
13. The encoding method of claim 9, wherein:
the filtering information is included in a sequence parameter set (SPS), and
the filtering information is applied to pictures that reference the SPS.
14. A storage medium storing a bitstream generated by the encoding method of claim 13.
15. A computer-readable storage medium storing a bitstream for decoding a target picture, wherein:
the bitstream includes information on a sub-picture,
a target sub-picture that is a portion of the target picture is determined,
a reconstructed target sub-picture for the target sub-picture is generated, and
a reconstructed target picture is generated using the reconstructed target sub-picture.
16. The computer-readable storage medium of claim 15, wherein filtering is applied to the reconstructed target sub-picture in generating the reconstructed target picture.
17. The computer-readable storage medium of claim 16, wherein the filtering is applied to a boundary line between the reconstructed target sub-picture and a further sub-picture.
18. The computer-readable storage medium of claim 16, wherein the filter in the filtering is a loop filter.
19. The computer-readable storage medium of claim 16, wherein:
for the filtering, filtering information is used, and
the filtering information includes information indicating whether filtering of the target sub-picture is to be disabled.
20. The computer-readable storage medium of claim 16, wherein:
the filtering information is included in a sequence parameter set (SPS), and
the filtering information is applied to pictures that reference the SPS.
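The following is an illustrative, non-normative sketch of the sub-picture decoding flow recited in claims 1 to 7 above, written in Python for readability. The data layout, the decode_subpicture callable, and the simple boundary-smoothing step are assumptions made solely for this example; they are not part of the claimed method, and an actual decoder would obtain the sub-picture layout and the filtering information from the bitstream (for example, from the SPS) and would apply its own in-loop filters.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SubPictureInfo:
    x: int        # top-left column of the sub-picture within the picture
    y: int        # top-left row of the sub-picture within the picture
    width: int
    height: int

def decode_picture(subpic_layout: List[SubPictureInfo],
                   decode_subpicture: Callable[[SubPictureInfo], List[List[int]]],
                   pic_width: int,
                   pic_height: int,
                   loop_filter_disabled_flag=None) -> List[List[int]]:
    # Analogue of claim 7: when the filtering information is not explicitly
    # signaled (None here), a default value is assumed (filtering enabled).
    loop_filter_disabled = bool(loop_filter_disabled_flag) if loop_filter_disabled_flag is not None else False

    # Analogue of claim 1: reconstruct each target sub-picture (a portion of
    # the target picture) and compose the reconstructed target picture.
    picture = [[0] * pic_width for _ in range(pic_height)]
    for info in subpic_layout:
        subpic = decode_subpicture(info)  # reconstructed target sub-picture
        for r in range(info.height):
            for c in range(info.width):
                picture[info.y + r][info.x + c] = subpic[r][c]

    # Analogue of claims 2 to 6: unless disabled by the SPS-level filtering
    # information, smooth samples across the vertical boundary line between a
    # sub-picture and its right-hand neighbour (a stand-in for a loop filter).
    if not loop_filter_disabled:
        for info in subpic_layout:
            bx = info.x + info.width  # first column to the right of the boundary
            if 0 < bx < pic_width:
                for r in range(info.y, info.y + info.height):
                    left, right = picture[r][bx - 1], picture[r][bx]
                    picture[r][bx - 1] = (3 * left + right) // 4
                    picture[r][bx] = (left + 3 * right) // 4

    return picture
```

For example, with two side-by-side 8x8 sub-pictures and a decode_subpicture stub that returns constant sample blocks, decode_picture(layout, stub, 16, 8) yields a 16x8 reconstructed picture whose shared boundary columns have been smoothed. The encoding method of claims 8 to 13 mirrors this flow, except that the encoder also generates the filtering information and writes it into the SPS of the bitstream.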
CN202080059045.XA 2019-06-20 2020-06-22 Method and apparatus for image encoding and image decoding using region segmentation Pending CN114342388A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR20190073694 2019-06-20
KR10-2019-0073694 2019-06-20
PCT/KR2020/008081 WO2020256522A1 (en) 2019-06-20 2020-06-22 Method and apparatus for image encoding and image decoding using area segmentation

Publications (1)

Publication Number Publication Date
CN114342388A 2022-04-12

Family

ID=74088312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080059045.XA Pending CN114342388A (en) 2019-06-20 2020-06-22 Method and apparatus for image encoding and image decoding using region segmentation

Country Status (2)

Country Link
KR (1) KR20200145779A (en)
CN (1) CN114342388A (en)

Also Published As

Publication number Publication date
KR20200145779A (en) 2020-12-30

Similar Documents

Publication Publication Date Title
CN110463201B (en) Prediction method and apparatus using reference block
CN111567045A (en) Method and apparatus for using inter prediction information
CN110476425B (en) Prediction method and device based on block form
CN111699682A (en) Method and apparatus for encoding and decoding using selective information sharing between channels
US20220321890A1 (en) Method, apparatus, and recording medium for encoding/decoding image by using geometric partitioning
CN113574875A (en) Encoding/decoding method and apparatus based on intra block copy and bit stream storage medium
CN113924779A (en) Video encoding/decoding method and apparatus, and bit stream storage medium
CN113228651A (en) Quantization matrix encoding/decoding method and apparatus, and recording medium storing bit stream
CN112740694A (en) Method and apparatus for encoding/decoding image and recording medium for storing bitstream
KR20200026758A (en) Method and apparatus for encoding/decoding image, recording medium for stroing bitstream
CN113940077A (en) Virtual boundary signaling method and apparatus for video encoding/decoding
CN114450946A (en) Method, apparatus and recording medium for encoding/decoding image by using geometric partition
CN113906743A (en) Quantization matrix encoding/decoding method and apparatus, and recording medium storing bit stream
US20220201295A1 (en) Method, apparatus and storage medium for image encoding/decoding using prediction
CN113170104A (en) Encoding/decoding method and apparatus using region-based inter/intra prediction
CN111919448A (en) Method and apparatus for image encoding and image decoding using temporal motion information
CN111684801A (en) Bidirectional intra prediction method and apparatus
US20220312009A1 (en) Method and apparatus for image encoding and image decoding using area segmentation
CN116325730A (en) Method, apparatus and recording medium for encoding/decoding image by using geometric partition
US20220272321A1 (en) Method, device, and recording medium for encoding/decoding image using reference picture
CN114270865A (en) Method, apparatus and recording medium for encoding/decoding image
CN113841404A (en) Video encoding/decoding method and apparatus, and recording medium storing bitstream
US20220295059A1 (en) Method, apparatus, and recording medium for encoding/decoding image by using partitioning
CN114270828A (en) Method and apparatus for image encoding and image decoding using block type-based prediction
KR20210063276A (en) Method, apparatus and recoding medium for video processing using motion prediction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination