CN116325730A - Method, apparatus and recording medium for encoding/decoding image by using geometric partition - Google Patents

Method, apparatus and recording medium for encoding/decoding image by using geometric partition

Info

Publication number
CN116325730A
Authority
CN
China
Prior art keywords
block
target block
prediction
information
mode
Prior art date
Legal status
Pending
Application number
CN202180064175.7A
Other languages
Chinese (zh)
Inventor
林雄
方健
沈东圭
吴承俊
朴俊泽
李旻勳
Current Assignee
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute ETRI filed Critical Electronics and Telecommunications Research Institute ETRI
Priority claimed from PCT/KR2021/009340 (published as WO2022019613A1)
Publication of CN116325730A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/124: Quantisation
    • H04N 19/129: Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N 19/513: Motion estimation or motion compensation; processing of motion vectors
    • H04N 19/593: Predictive coding involving spatial prediction techniques
    • H04N 19/597: Predictive coding specially adapted for multi-view video sequence encoding

Abstract

Disclosed herein are a method, an apparatus, and a storage medium for image encoding/decoding using geometric partitioning. The directionality in a block may be used to determine the partitioning of the block. Straight lines are detected in the block, some of the partitioning methods available for the block are excluded depending on the types of the detected straight lines, and only the remaining partitioning methods are used as selection targets for encoding and/or decoding the target block. Further, the directionality in a block may be used to determine the mode of the geometric partitioning mode (GPM). Straight lines are detected in the block, some of the available GPM modes are excluded depending on the types of the detected straight lines, and only the remaining modes are used as selection targets for encoding and/or decoding the target block.

Description

Method, apparatus and recording medium for encoding/decoding image by using geometric partition
Technical Field
The present disclosure relates generally to a method, apparatus, and storage medium for image encoding/decoding. More particularly, the present disclosure relates to a method, apparatus, and storage medium for performing image encoding/decoding using geometric partitioning.
The present disclosure claims the benefit of Korean Patent Application No. 10-2020-0089813, filed on July 20, 2020, Korean Patent Application No. 10-2020-0111151, filed on September 15, 2020, and Korean Patent Application No. 10-2021-0094899, filed on July 20, 2021, which are incorporated herein by reference in their entireties.
Background
With the continued development of the information and communication industry, broadcast services supporting High-Definition (HD) resolution have spread throughout the world. Through this spread, a large number of users have become accustomed to high-resolution, high-definition images and/or videos.
In order to satisfy the demand of users for high definition, a large number of institutions have accelerated the development of next-generation imaging devices. In addition to High-Definition TV (HDTV) and Full-HD (FHD) TV, interest in UHD TV, the resolution of which is more than four times that of FHD TV, has increased. With this increasing interest, image encoding/decoding techniques for images having higher resolution and higher definition are required.
As image compression techniques, there are various techniques, such as an inter-prediction technique, an intra-prediction technique, a transform and quantization technique, and an entropy coding technique.
The inter-prediction technique is a technique for predicting the values of pixels included in the current picture using a picture before and/or a picture after the current picture. The intra-prediction technique is a technique for predicting the values of pixels included in the current picture using information about pixels in the current picture. The transform and quantization technique may be a technique for compressing the energy of a residual signal. The entropy coding technique is a technique for assigning short codewords to frequently occurring values and long codewords to less frequently occurring values.
By using these image compression techniques, data about an image can be efficiently compressed, transmitted, and stored.
Disclosure of Invention
Technical problem
Embodiments are directed to an apparatus and method for performing a fast determination of geometric partition modes using directionality in a block.
Embodiments are directed to an apparatus and method for performing a fast determination of a block partition using directionality in the block.
Technical Solution
In one aspect, there is provided an encoding method comprising: determining a prediction mode required for prediction of the target block; and generating information about the encoded target block by encoding the target block using the prediction mode.
The target block may be encoded using a geometric partitioning mode (GPM).
The directionality in the target block may be used to determine the mode of the GPM.
The directionality in the target block may be used to determine a partition mode of the target block.
The directionality may be derived using a Hough transform.
The Hough transform may be applied to an edge map of the target block.
Edges in the edge map may be detected using one or more of a Sobel operator, a Laplacian operator, and a Canny operator.
When the magnitude of the edge values is less than or equal to a threshold, partitioning of the target block may not be applied.
An accumulated frequency threshold for the Hough transform may be determined based on the size of the target block.
The Hough transform may be used to detect straight lines in the horizontal direction, the vertical direction, the 45° diagonal direction, and the 135° diagonal direction.
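The line-detection step described above can be sketched as follows. This Python sketch is illustrative only; the Sobel kernel, the edge threshold, and the vote-threshold scaling are assumptions of this sketch, not values specified by the present disclosure. It builds an edge map and accumulates Hough votes restricted to the four directions mentioned above, with the accumulated-frequency threshold scaled by the block size:

```python
import numpy as np

def detect_block_lines(block, edge_thresh=128.0, freq_scale=0.5):
    # Edge map via the Sobel operator (one of the operators named above).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = block.shape
    pad = np.pad(block.astype(np.float64), 1, mode="edge")
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    edges = mag > edge_thresh
    if not edges.any():
        return []  # edge magnitude below the threshold: no partitioning

    # Hough accumulator restricted to the four directions named above.
    # With rho = x*cos(theta) + y*sin(theta), theta = 0 gathers votes along
    # vertical lines and theta = 90 along horizontal lines.
    angles = {"vertical": 0.0, "diag45": 45.0, "horizontal": 90.0, "diag135": 135.0}
    diag = int(np.ceil(np.hypot(h, w)))
    vote_thresh = freq_scale * min(h, w)  # threshold grows with the block size
    ys, xs = np.nonzero(edges)
    found = []
    for name, deg in angles.items():
        t = np.deg2rad(deg)
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        votes = np.bincount(rhos, minlength=2 * diag + 1)
        if votes.max() >= vote_thresh:
            found.append(name)
    return found

# A block with a sharp vertical boundary should report a vertical line.
blk = np.zeros((16, 16), dtype=np.uint8)
blk[:, 8:] = 200
print(detect_block_lines(blk))  # ['vertical']
```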
The directionality may be determined by a straight line detected in the target block.
Based on the straight lines detected in the target block, some of the available partition modes may be excluded from the encoding of the target block.
When no straight line is detected in the target block, the partition mode may not be used.
When neither a horizontal straight line nor a vertical straight line is detected in the target block, neither the horizontal partitioning method nor the vertical partitioning method may be applied to the target block.
When a vertical straight line is not detected in the target block, the vertical partitioning methods may not be applied to the target block.
The vertical partitioning methods may include a vertical Binary Tree (BT) partitioning method and a vertical Ternary Tree (TT) partitioning method.
The texture attributes of the target block may be used to determine a partition mode for the target block.
Based on the texture attributes of the target block, some of the available partition modes may be excluded from the encoding of the target block.
The texture attributes may include one or more of edges, a variance, and an average.
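How the detected straight lines and the texture attributes might gate the candidate partitioning methods is sketched below; the mode names, the variance threshold, and the exact pruning rules are illustrative assumptions that paraphrase the statements above:

```python
import numpy as np

# Candidate partitioning methods named in this disclosure (quad tree,
# horizontal/vertical binary tree, horizontal/vertical ternary tree).
ALL_SPLITS = {"QT", "BT_HOR", "BT_VER", "TT_HOR", "TT_VER"}

def prune_splits(block, detected_lines, var_thresh=25.0):
    # Texture attribute gate: a nearly flat block (low variance) is
    # left unpartitioned in this sketch.
    if float(np.var(block)) <= var_thresh:
        return set()
    # No straight line detected: the partition modes are not used.
    if not detected_lines:
        return set()
    allowed = set(ALL_SPLITS)
    # No vertical line: exclude the vertical BT/TT partitioning methods,
    # and likewise for the horizontal direction.
    if "vertical" not in detected_lines:
        allowed -= {"BT_VER", "TT_VER"}
    if "horizontal" not in detected_lines:
        allowed -= {"BT_HOR", "TT_HOR"}
    return allowed

blk = np.zeros((16, 16), dtype=np.uint8)
blk[:, 8:] = 200
print(prune_splits(blk, ["vertical"]))  # vertical splits remain candidates
```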
In another aspect, there is provided a decoding method comprising: determining a prediction mode required for prediction of the target block using the bitstream; and performing decoding on the target block using the prediction mode and information on the encoded target block.
In another aspect, there is provided a computer-readable storage medium storing a bitstream for image decoding, the bitstream including information about an encoded target block, wherein a prediction mode required for prediction of the target block is determined using the bitstream, and decoding is performed on the target block using the prediction mode and the information about the encoded target block.
Advantageous effects
An apparatus and method for performing a fast determination of a geometric partition mode using directionality in a block are provided.
An apparatus and method for performing a fast determination of a block partition using directionality in a block are provided.
Coding efficiency may be improved by performing the fast determination of the geometric partitioning mode and/or the fast determination of partitioning using the directionality in a block.
Encoding of video may be performed with low complexity by performing the fast determination of the geometric partitioning mode and/or the fast determination of partitioning using the directionality in a block.
Drawings
Fig. 1 is a block diagram showing a configuration of an embodiment of an encoding apparatus to which the present disclosure is applied;
fig. 2 is a block diagram showing a configuration of an embodiment of a decoding apparatus to which the present disclosure is applied;
fig. 3 is a diagram schematically showing a partition structure of an image when the image is encoded and decoded;
fig. 4 is a diagram showing a form of a Prediction Unit (PU) that an encoding unit (CU) can include;
fig. 5 is a diagram illustrating a form of a Transform Unit (TU) that can be included in a CU;
FIG. 6 illustrates partitioning of blocks according to an example;
fig. 7 is a diagram for explaining an embodiment of an intra prediction process;
Fig. 8 is a diagram illustrating reference samples used in an intra prediction process;
fig. 9 is a diagram for explaining an embodiment of an inter prediction process;
FIG. 10 illustrates spatial candidates according to an embodiment;
fig. 11 illustrates an order of adding motion information of spatial candidates to a merge list according to an embodiment;
FIG. 12 illustrates a transform and quantization process according to an example;
FIG. 13 illustrates a diagonal scan according to an example;
FIG. 14 illustrates a horizontal scan according to an example;
FIG. 15 illustrates a vertical scan according to an example;
fig. 16 is a configuration diagram of an encoding apparatus according to an embodiment;
fig. 17 is a configuration diagram of a decoding apparatus according to an embodiment;
FIG. 18 illustrates a partitioning method for a block according to an example;
FIG. 19 illustrates an angular distribution in a geometric partitioning mode according to an example;
FIG. 20 illustrates a first example of partitioning in a geometric partitioning mode;
FIG. 21 illustrates a second example of partitioning in a geometric partitioning mode;
FIG. 22 illustrates boundaries of geometric partition modes that may be selected by a ρ value that depends on one θ in the target block;
FIG. 23 illustrates boundaries of geometric partition modes that may be selected by a ρ value that depends on another θ in the target block;
FIG. 24 is a flowchart of a process to perform determination of partition mode and prediction mode for a target block according to an embodiment;
FIG. 25 is a flowchart of a process for determining a partition mode and a prediction mode of a target block using texture attributes of the target block, according to an embodiment;
FIG. 26 is a flowchart of a process for determining a GPM mode for a target block using texture attributes of the target block, according to an embodiment;
FIG. 27 is a flowchart of a process for determining a GPM mode for a target block using texture attributes of the target block and prediction modes of neighboring blocks, according to an embodiment;
FIG. 28 is a flow chart illustrating a method for determining GPM patterns to perform a rate distortion cost search on a target block by comparing GPM patterns of neighboring blocks, according to an example;
FIG. 29 illustrates a first category of angles for a pattern of GPMs according to an example;
FIG. 30 illustrates a second category of angles for a pattern of GPMs according to an example;
FIG. 31 illustrates a third category of angles for a pattern of GPMs according to an example;
FIG. 32 illustrates a fourth category of angles for a pattern of GPMs according to an example;
FIG. 33 is a flow chart illustrating a method for determining a mode of a GPM that is to perform a rate-distortion cost search on a target block based on prediction mode information for GPMs of neighboring blocks, according to an example;
FIG. 34 is a flow chart illustrating a method for determining a pattern of GPMs that will perform rate-distortion cost searches on target blocks using information of straight lines detected in neighboring blocks, according to an example;
FIG. 35 illustrates a list of motion vectors according to an example;
FIG. 36 is a flowchart of a method for searching for an optimal motion vector for each sub-block in a pattern of GPMs according to an example;
fig. 37 illustrates a method of configuring the MV list of fig. 35 using Motion Vectors (MVs) of neighboring blocks according to an embodiment;
fig. 38 is a flowchart of a method for adding MVs of neighboring blocks to an MV list according to an embodiment;
fig. 39 illustrates a method for adding MVs of neighboring blocks to an MV list according to an example;
fig. 40 illustrates a transformation of a coordinate system according to a Hough transformation according to an example;
FIG. 41 is a flow chart of an encoding method according to an embodiment; and
fig. 42 is a flowchart of a decoding method according to an embodiment.
Detailed Description
The present invention is susceptible to various modifications and alternative embodiments, and specific embodiments thereof are described in detail below with reference to the accompanying drawings. It should be understood, however, that the examples are not intended to limit the invention to the particular forms disclosed, but to include all changes, equivalents, or modifications falling within the spirit and scope of the invention.
The following exemplary embodiments will be described in detail with reference to the accompanying drawings showing specific embodiments. These embodiments are described so that those of ordinary skill in the art to which the present disclosure pertains will be readily able to practice them. It should be noted that the various embodiments are different from each other, but need not be mutually exclusive. For example, the specific shapes, structures and characteristics described herein may be implemented as other embodiments related to one embodiment without departing from the spirit and scope of the other embodiments. Further, it is to be understood that the location or arrangement of individual components within each disclosed embodiment may be changed without departing from the spirit and scope of the embodiments. Accordingly, the following detailed description is not intended to limit the scope of the disclosure, and the scope of the exemplary embodiments is defined only by the appended claims and their equivalents (as long as they are properly described).
In the drawings, like reference numerals are used to designate the same or similar functions in all respects. The shapes, sizes, etc. of components in the drawings may be exaggerated to make the description clear.
Terms such as "first" and "second" may be used to describe various components, but the components are not limited by the terms. The terms are used only to distinguish one component from another. For example, a first component may be referred to as a second component without departing from the scope of the present description. Similarly, the second component may be referred to as a first component. The term "and/or" may include a combination of a plurality of related descriptive items or any of a plurality of related descriptive items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, the two elements can be directly connected or coupled to each other or intervening elements may be present between the two elements. On the other hand, it will be understood that when components are referred to as being "directly connected or joined," there are no intervening components between the two components.
The components described in the embodiments in this specification are independently shown to indicate different feature functions, but this does not mean that each component is formed of a separate piece of hardware or software. That is, for convenience of description, a plurality of components are individually arranged and included. For example, at least two of the plurality of components may be integrated into a single component. Instead, one component may be divided into a plurality of components. Embodiments in which multiple components are integrated or embodiments in which some components are separated are included in the scope of the present specification as long as they do not depart from the essence of the present specification.
Furthermore, in the example embodiments, the description indicating "including" a specific element means that elements other than the specific element are not excluded, and additional elements may be included within the scope of the implementation of the example embodiments or the technical spirit of the example embodiments.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In this specification, it should be understood that terms such as "comprises" or "comprising" are only intended to indicate the presence of features, numbers, steps, operations, components, parts, or combinations thereof, but are not intended to exclude the possibility that one or more other features, numbers, steps, operations, components, parts, or combinations thereof will be present or added. That is, in the present invention, the expression that describes a component "including" a specific component means that another component may be included within the scope of the practice of the present invention or the technical spirit of the present invention, but does not exclude the presence of components other than the specific component.
In this specification, the term "at least one" may mean one of one or more quantities (such as 1, 2, 3, and 4). In this specification, the term "plurality" may mean one of two or more numbers (such as 2, 3, and 4).
Some components of the present invention are not necessary components for performing necessary functions, but may be optional components for improving performance only. The invention may be implemented using only the necessary components for achieving the essence of the invention. For example, structures that include only the necessary components (excluding optional components for improving performance only) are also included within the scope of the present invention.
The embodiments will be described in detail below with reference to the drawings so that those skilled in the art to which the embodiments pertain can easily implement the embodiments. In the following description of the embodiments, a detailed description of known functions or configurations that are considered to obscure the gist of the present specification will be omitted. In addition, the same reference numerals are used to designate the same components throughout the drawings, and repeated descriptions of the same components will be omitted.
Hereinafter, "image" may represent a single picture constituting a video, or may represent the video itself. For example, "encoding and/or decoding an image" may mean "encoding and/or decoding a video" and may also mean "encoding and/or decoding any one of a plurality of images constituting a video".
Hereinafter, the terms "video" and "moving picture" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the target image may be an encoding target image that is a target to be encoded and/or a decoding target image that is a target to be decoded. Further, the target image may be an input image input to the encoding apparatus or an input image input to the decoding apparatus. And, the target image may be a current image, i.e., a target to be currently encoded and/or decoded. For example, the terms "target image" and "current image" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "image", "picture", "frame" and "screen" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the target block may be an encoding target block (i.e., a target to be encoded) and/or a decoding target block (i.e., a target to be decoded). Furthermore, the target block may be a current block, i.e., a target that is currently to be encoded and/or decoded. Here, the terms "target block" and "current block" may be used to have the same meaning and may be used interchangeably with each other. The current block may represent an encoding target block that is an encoding target during encoding and/or a decoding target block that is a decoding target during decoding. Further, the current block may be at least one of a coded block, a predicted block, a residual block, and a transformed block.
Hereinafter, the terms "block" and "unit" may be used to have the same meaning and may be used interchangeably with each other. Alternatively, "block" may represent a particular unit.
Hereinafter, the terms "region" and "segment" may be used interchangeably.
In the following embodiments, specific information, data, flags, indexes, elements, and attributes may have their respective values. A value of "0" corresponding to each of the information, data, flags, indexes, elements, and attributes may indicate false, logical false, or a first predefined value. In other words, the values "0", false, logical false and first predefined value may be used interchangeably with each other. The value "1" corresponding to each of the information, data, flags, indexes, elements, and attributes may indicate true, logical true, or a second predefined value. In other words, the values "1", true, logical true and second predefined values may be used interchangeably with each other.
When a variable such as i or j is used to indicate a row, column, or index, the value i may be an integer of 0 or an integer greater than 0, or may be an integer of 1 or an integer greater than 1. In other words, in an embodiment, each of the rows, columns, and indexes may be counted starting from 0, or may be counted starting from 1.
In an embodiment, the term "one or more" or the term "at least one" may mean the term "plurality". The term "one or more" or the term "at least one" may be used interchangeably with the term "plurality".
Next, terms to be used in the embodiments will be described.
An encoder: the encoder represents means for performing encoding. That is, the encoder may represent an encoding device.
A decoder: the decoder represents means for performing decoding. That is, the decoder may represent a decoding device.
A unit: the cells may represent cells of image encoding and decoding. The terms "unit" and "block" may be used with the same meaning and are used interchangeably with each other.
The cell may be an array of M x N samples. Each of M and N may be a positive integer. The cells may generally represent an array of samples in two dimensions.
During the encoding and decoding of the images, a "unit" may be a region generated by partitioning one image. In other words, a "cell" may be an area specified in one image. A single image may be partitioned into multiple units. Alternatively, one image may be partitioned into sub-portions, and a unit may represent each partitioned sub-portion when encoding or decoding is performed on the partitioned sub-portion.
During the encoding and decoding of the image, a predefined process may be performed on each unit according to the type of unit.
Unit types can be classified into macro units, coding Units (CUs), prediction Units (PUs), residual units, transform Units (TUs), etc. according to functions. Alternatively, the units may represent blocks, macro blocks, coding tree units, coding tree blocks, coding units, coding blocks, prediction units, prediction blocks, residual units, residual blocks, transform units, transform blocks, etc., according to the function. For example, the target unit that is the target of encoding and/or decoding may be at least one of a CU, a PU, a residual unit, and a TU.
The term "unit" may denote information including a luminance (luma) component block, a chrominance (chroma) component block corresponding to the luminance component block, and syntax elements for the respective blocks, such that the unit is designated as being distinguished from the block.
The size and shape of the cells may be implemented differently. Further, the cells may have any of a variety of sizes and shapes. In particular, the shape of the cell may include not only square, but also geometric shapes (such as rectangle, trapezoid, triangle, and pentagon) that may be represented in two dimensions (2D).
Further, the unit information may include one or more of a type of a unit, a size of a unit, a depth of a unit, an encoding order of a unit, a decoding order of a unit, and the like. For example, the type of unit may indicate one of a CU, a PU, a residual unit, and a TU.
One unit may be partitioned into sub-units, each sub-unit having a smaller size than the size of the associated unit.
Depth: depth may represent the degree to which a cell is partitioned. Further, the depth of a cell may indicate the level at which the corresponding cell exists when the cell is represented by a tree structure.
The cell partition information may comprise a depth indicating the depth of the cell. The depth may indicate the number of times a unit is partitioned and/or the degree to which the unit is partitioned.
In a tree structure, the depth of the root node may be considered to be minimum and the depth of the leaf node to be maximum. The root node may be the highest (top) node. The leaf node may be the lowest node.
A single unit may be hierarchically partitioned into a plurality of sub-units, while the single unit has tree-structure based depth information. In other words, a unit and a sub-unit generated by partitioning the unit may correspond to a node and a sub-node of the node, respectively. Each partitioned sub-unit may have a unit depth. Since the depth indicates the number of times a unit is partitioned and/or the degree to which a unit is partitioned, the partition information of a sub-unit may include information about the size of the sub-unit.
In a tree structure, the top node may correspond to the initial node before partitioning. The top node may be referred to as the "root node". Further, the root node may have a minimum depth value. Here, the depth of the top node may be the level "0".
A node of depth level "1" may represent a unit generated when the initial unit is partitioned once. A node of depth level "2" may represent a cell that is generated when an initial cell is partitioned twice.
A leaf node of depth level "n" may represent a unit that is generated when the initial unit is partitioned n times.
A leaf node may be a bottom node that cannot be partitioned further. The depth of the leaf node may be the maximum level. For example, the predefined value for the maximum level may be 3.
QT depth may represent the depth of a quad partition. BT depth may represent the depth of a binary partition. TT depth may represent the depth of a ternary partition.
Sampling point: the samples may be the basic units constituting the block. From 0 to 2 according to bit depth (Bd) Bd A value of-1 to represent the sample point.
The samples may be pixels or pixel values.
In the following, the terms "pixel" and "sample" may be used with the same meaning and are used interchangeably with each other.
Coding Tree Unit (CTU): the CTU may be composed of a single luma component (Y) coding tree block and two chroma component (i.e., cb, cr) coding tree blocks associated with the luma component coding tree block. Further, the CTU may represent information including the above-described blocks and syntax elements for each block.
Each Coding Tree Unit (CTU) may be partitioned using one or more partitioning methods, such as Quad Tree (QT), Binary Tree (BT), and Ternary Tree (TT) partitioning, in order to configure sub-units, such as a coding unit, a prediction unit, and a transform unit. "Quad tree" may mean a quaternary tree. Further, each coding tree unit may be partitioned using a Multi-Type Tree (MTT) that uses one or more partitioning methods.
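A minimal sketch of quadtree partitioning of a CTU follows; the BT/TT splits of the multi-type tree are omitted, and the split_decision callback, which stands in for the encoder's rate-distortion-based choice, is an assumption of this sketch:

```python
def quadtree_partition(x, y, size, depth, max_depth, split_decision):
    # Leaf CU: the maximum depth is reached or the encoder chooses not to split.
    if depth == max_depth or not split_decision(x, y, size, depth):
        return [(x, y, size)]
    half = size // 2
    cus = []
    for dy in (0, half):
        for dx in (0, half):
            cus += quadtree_partition(x + dx, y + dy, half,
                                      depth + 1, max_depth, split_decision)
    return cus

# Split a 64x64 CTU once: four 32x32 coding units.
print(quadtree_partition(0, 0, 64, 0, 3, lambda x, y, s, d: d < 1))
```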
"CTU" may be used as a term designating a block of pixels as a processing unit in image decoding and encoding processes (as in the case of partitioning an input image).
Coding Tree Block (CTB): "CTB" may be used as a term specifying any one of a Y code tree block, a Cb code tree block, and a Cr code tree block.
Adjacent blocks: the neighboring block (or neighboring block) may represent a block adjacent to the target block. The neighboring blocks may represent reconstructed neighboring blocks.
Hereinafter, the terms "adjacent block" and "neighboring block" may be used to have the same meaning and may be used interchangeably with each other.
Spatial neighboring blocks: the spatially adjacent block may be a block spatially adjacent to the target block. The neighboring blocks may include spatially neighboring blocks.
The target block and the spatial neighboring block may be included in the target picture.
Spatially adjacent blocks may represent blocks whose boundaries are in contact with the target block or blocks located within a predetermined distance from the target block.
The spatially neighboring blocks may represent blocks adjacent to the vertices of the target block. Here, a block adjacent to a vertex of the target block may represent a block vertically adjacent to an adjacent block horizontally adjacent to the target block or a block horizontally adjacent to an adjacent block vertically adjacent to the target block.
Temporal neighboring block: A temporal neighboring block may be a block temporally adjacent to the target block. A neighboring block may include a temporal neighboring block.
The temporal neighboring blocks may comprise co-located blocks (col blocks).
The col block may be a block in a co-located picture (col picture) reconstructed previously. The location of the col block in the col picture may correspond to the location of the target block in the target picture. Alternatively, the location of the col block in the col picture may be equal to the location of the target block in the target picture. The col picture may be a picture included in the reference picture list.
The temporal neighboring blocks may be blocks temporally adjacent to the spatial neighboring blocks of the target block.
Prediction mode: the prediction mode may be information indicating a mode for intra prediction or a mode for inter prediction.
Prediction unit: the prediction unit may be a basic unit for prediction such as inter prediction, intra prediction, inter compensation, intra compensation and motion compensation.
A single prediction unit may be divided into a plurality of partitions or sub-prediction units having smaller sizes. The plurality of partitions may also be basic units in performing prediction or compensation. The partition generated by dividing the prediction unit may also be the prediction unit.
Prediction unit partitioning: the prediction unit partition may be a shape into which the prediction unit is divided.
Reconstructed neighboring unit: A reconstructed neighboring unit may be a unit that is adjacent to the target unit and has already been decoded and reconstructed.
The reconstructed neighboring cells may be cells spatially or temporally adjacent to the target cell.
The reconstructed spatial neighboring units may be units comprised in the target picture that have been reconstructed by encoding and/or decoding.
The reconstructed temporal neighboring units may be units comprised in the reference image that have been reconstructed by encoding and/or decoding. The position of the reconstructed temporal neighboring unit in the reference image may be the same as the position of the target unit in the target picture or may correspond to the position of the target unit in the target picture. Furthermore, the reconstructed temporal neighboring unit may be a block neighboring a corresponding block in the reference image. Here, the position of the corresponding block in the reference image may correspond to the position of the target block in the target image. Here, the fact that the positions of the blocks correspond to each other may mean that the positions of the blocks are identical to each other, that one block is included in another block, or that one block occupies a specific position in another block.
Sub-picture: A picture may be divided into one or more sub-pictures. A sub-picture may be composed of one or more tile rows and one or more tile columns.
The sub-picture may be an area in the picture having a square or rectangular (i.e., non-square rectangular) shape. Further, the sub-picture may include one or more CTUs.
A sub-picture may be a rectangular area of one or more slices in the picture.
A sub-picture may include one or more tiles, one or more bricks, and/or one or more slices.
Tile: A tile may be an area in the picture having a square or rectangular (i.e., non-square rectangular) shape.
A tile may include one or more CTUs.
A tile may be partitioned into one or more bricks.
Brick: A brick may represent one or more CTU rows in a tile.
A tile may be partitioned into one or more bricks. Each brick may include one or more CTU rows.
A tile that is not partitioned into two or more bricks may also represent a brick.
Slice: A slice may include one or more tiles in a picture. Alternatively, a slice may include one or more bricks in a tile.
A sub-picture may contain one or more slices that collectively cover a rectangular area of the picture. Thus, each sub-picture boundary is always also a slice boundary, and each vertical sub-picture boundary is always also a vertical tile boundary.
Parameter set: the parameter set may correspond to header information in an internal structure of the bitstream.
The parameter set may include at least one of a Video Parameter Set (VPS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), an Adaptive Parameter Set (APS), a Decoding Parameter Set (DPS), and the like.
The information signaled by each parameter set can be applied to the picture referencing the corresponding parameter set. For example, information in a VPS may be applied to a picture referencing the VPS. The information in the SPS may be applied to pictures referencing the SPS. The information in the PPS may be applied to a picture referencing the PPS.
Each parameter set may refer to a higher parameter set. For example, PPS may refer to SPS. SPS may refer to VPS.
Furthermore, the parameter set may include tile group information, slice header information, and tile header information. A tile group may be a group including a plurality of tiles. The meaning of "tile group" may be the same as that of "slice".
Rate-distortion optimization: the encoding device may use rate distortion optimization to provide high encoding efficiency by utilizing a combination of: the size of the Coding Unit (CU), the prediction mode, the size of the Prediction Unit (PU), the motion information, and the size of the Transform Unit (TU).
The rate-distortion optimization scheme may calculate the rate-distortion cost of each combination in order to select the optimal combination from among the combinations. The rate-distortion cost may be calculated using the equation "D + λ * R". Generally, the combination that minimizes the rate-distortion cost may be selected as the optimal combination under the rate-distortion optimization scheme.
D may represent distortion. D may be the mean of the squares of the differences between the original transform coefficients and the reconstructed transform coefficients in a transform unit (i.e., the mean squared error).
R may represent the rate, which may indicate the bit rate calculated using the relevant context information.
λ represents a Lagrangian multiplier. R may include not only coding parameter information, such as the prediction mode, motion information, and a coded block flag, but also bits generated due to the encoding of transform coefficients.
The encoding apparatus may perform processes such as inter prediction and/or intra prediction, transform, quantization, entropy encoding, inverse quantization (dequantization), and/or inverse transform in order to calculate accurate D and R. These processes may greatly increase the complexity of the encoding apparatus.
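The cost computation above can be sketched as follows; measuring D as the sum of squared sample differences and the example λ value are assumptions of this sketch:

```python
import numpy as np

def rd_cost(orig, recon, bits, lam):
    # D: distortion, here the sum of squared differences between the
    # original and reconstructed samples; R: the bits spent (`bits`).
    d = float(np.sum((orig.astype(np.int64) - recon.astype(np.int64)) ** 2))
    return d + lam * bits

# The combination with the smallest cost would be selected.
o = np.array([[10, 20], [30, 40]])
print(rd_cost(o, o + 1, bits=12, lam=8.0))  # 4 + 8 * 12 = 100.0
```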
Bitstream: A bitstream may represent a stream of bits including encoded image information.
Analysis: parsing may be a determination of values of syntax elements made by performing entropy decoding on a bitstream. Alternatively, the term "parsing" may refer to such entropy decoding itself.
The symbols: the symbol may be at least one of a syntax element, an encoding parameter, and a transform coefficient of the encoding target unit and/or the decoding target unit. Furthermore, the symbol may be a target of entropy encoding or a result of entropy decoding.
Reference picture: the reference picture may be an image that is referenced by a unit in order to perform inter prediction or motion compensation. Alternatively, the reference picture may be an image including a reference unit that is referenced by the target unit in order to perform inter prediction or motion compensation.
Hereinafter, the terms "reference picture" and "reference image" may be used to have the same meaning and may be used interchangeably with each other.
Reference picture list: the reference picture list may be a list including one or more reference pictures used for inter prediction or motion compensation.
Types of reference picture lists may include combined List (LC), list 0 (L0), list 1 (L1), list 2 (L2), list 3 (L3), etc.
For inter prediction, one or more reference picture lists may be used.
Inter prediction indicator: the inter prediction indicator may indicate an inter prediction direction for the target unit. Inter prediction may be one of unidirectional prediction and bidirectional prediction. Alternatively, the inter prediction indicator may represent the number of reference pictures used to generate the prediction unit of the target unit. Alternatively, the inter prediction indicator may represent the number of prediction blocks used for inter prediction or motion compensation of the target unit.
The prediction list utilization flag: the prediction list utilization flag may indicate whether to generate a prediction unit using at least one reference picture in a particular reference picture list.
The inter prediction indicator may be derived using the prediction list utilization flag. Conversely, the prediction list utilization flag may be derived using the inter prediction indicator. For example, the case where the prediction list utilization flag indicates "0", as a first value, may indicate that, for the target unit, no prediction block is generated using a reference picture in the corresponding reference picture list. The case where the prediction list utilization flag indicates "1", as a second value, may indicate that, for the target unit, a prediction unit is generated using the corresponding reference picture list.
Reference picture index: the reference picture index may be an index indicating a specific reference picture in the reference picture list.
Picture Order Count (POC): the POC value of a picture may represent the order in which the corresponding pictures are displayed.
Motion Vector (MV): the motion vector may be a 2D vector for inter prediction or motion compensation. The motion vector may represent an offset between the target image and the reference image.
For example, an MV may be expressed in the form (mv_x, mv_y). mv_x may indicate the horizontal component, and mv_y may indicate the vertical component.
Search range: the search range may be a 2D region in which a search for MVs is performed during inter prediction. For example, the size of the search range may be mxn. M and N may each be positive integers.
Motion vector candidates: the motion vector candidate may be a block as a prediction candidate when the motion vector is predicted or a motion vector of a block as a prediction candidate.
The motion vector candidates may be included in a motion vector candidate list.
Motion vector candidate list: the motion vector candidate list may be a list configured using one or more motion vector candidates.
Motion vector candidate index: the motion vector candidate index may be an indicator for indicating motion vector candidates in the motion vector candidate list. Alternatively, the motion vector candidate index may be an index of a motion vector predictor.
Motion information: Motion information may be information including at least one of a motion vector, a reference picture index, an inter prediction indicator, a reference picture list, a reference picture, a motion vector candidate, a motion vector candidate index, a merge candidate, and a merge index.
Merge candidate list: A merge candidate list may be a list configured using one or more merge candidates.
Merge candidate: A merge candidate may be a spatial merge candidate, a temporal merge candidate, a combined bi-predictive merge candidate, a history-based candidate, a candidate based on the average of two candidates, a zero-merge candidate, etc. A merge candidate may include motion information such as prediction type information, a reference picture index for each list, a motion vector, a prediction list utilization flag, and an inter prediction indicator.
Merging index: the merge index may be an indicator for indicating a merge candidate in the merge candidate list.
The merging index may indicate a reconstruction unit for deriving a merging candidate among a reconstruction unit spatially adjacent to the target unit and a reconstruction unit temporally adjacent to the target unit.
The merge index may indicate at least one of a plurality of pieces of motion information of the merge candidate.
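A minimal sketch of filling a merge candidate list follows; the list size, the candidate order, and the zero-MV padding are assumptions modeled on the candidate types listed above:

```python
def build_merge_list(spatial_mvs, temporal_mvs, max_size=6):
    merge_list = []
    # Spatial candidates first, then temporal candidates, skipping
    # duplicates, as the candidate types listed above suggest.
    for mv in spatial_mvs + temporal_mvs:
        if mv not in merge_list:
            merge_list.append(mv)
        if len(merge_list) == max_size:
            return merge_list
    # Pad with zero merge candidates up to the fixed list size.
    while len(merge_list) < max_size:
        merge_list.append((0, 0))
    return merge_list

# A merge index then selects a single entry of this list.
print(build_merge_list([(1, 0), (1, 0), (0, 2)], [(3, -1)]))
```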
Transform unit: A transform unit may be a basic unit of residual signal encoding and/or residual signal decoding, such as for transform, inverse transform, quantization, inverse quantization, transform coefficient encoding, and transform coefficient decoding. A single transform unit may be partitioned into multiple sub-transform units having smaller sizes. Here, the transform may include one or more of a primary transform and a secondary transform, and the inverse transform may include one or more of a primary inverse transform and a secondary inverse transform.
Scaling: scaling may represent the process of multiplying a factor by the transform coefficient level.
As a result of scaling the transform coefficient levels, transform coefficients may be generated. Scaling may also be referred to as "dequantizing".
Quantization Parameter (QP): the quantization parameter may be a value for generating a transform coefficient level for the transform coefficient in quantization. Alternatively, the quantization parameter may also be a value for generating the transform coefficient by scaling the transform coefficient level in inverse quantization. Alternatively, the quantization parameter may be a value mapped to a quantization step size.
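The mapping between a quantization parameter and a quantization step size can be sketched as follows; the HEVC/VVC-style approximation used here (the step size roughly doubles every 6 QP) is an assumption of this sketch:

```python
def quant_step(qp):
    # The step size roughly doubles every 6 QP in HEVC/VVC-style codecs;
    # the 0.625 base makes the step equal to 1.0 at QP 4.
    return 0.625 * (2.0 ** (qp / 6.0))

def quantize(coef, qp):
    return int(round(coef / quant_step(qp)))  # transform coefficient level

def dequantize(level, qp):
    return level * quant_step(qp)  # scaling ("dequantization")

# Quantize a transform coefficient at QP 22, then rescale it.
lvl = quantize(150.0, 22)
print(lvl, round(dequantize(lvl, 22), 2))
```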
Delta (Delta) quantization parameter: the delta quantization parameter may represent a difference between the quantization parameter of the target unit and the predicted quantization parameter.
Scanning: scanning may represent a method of sequentially arranging coefficients in units, blocks or matrices. For example, a method for arranging a 2D array in the form of a one-dimensional (1D) array may be referred to as "scanning". Alternatively, a method for arranging the 1D array in the form of a 2D array may also be referred to as "scanning" or "inverse scanning".
Transform coefficients: the transform coefficients may be coefficient values generated when the encoding device performs the transform. Alternatively, the transform coefficient may be a coefficient value generated when the decoding apparatus performs at least one of entropy decoding and dequantization.
The quantized level or quantized transform coefficient level generated by applying quantization to the transform coefficients or residual signal may also be included in the meaning of the term "transform coefficient".
Quantized grade: the level of quantization may be a value generated when the encoding apparatus performs quantization on the transform coefficient or the residual signal. Alternatively, the level of quantization may be a value that is a target of inverse quantization when the decoding apparatus performs inverse quantization.
Quantized transform coefficient levels as a result of the transform and quantization may also be included in the meaning of quantized levels.
Non-zero transform coefficients: the non-zero transform coefficients may be transform coefficients having values other than 0, or may be transform coefficient levels having values other than 0. Alternatively, the non-zero transform coefficient may be a transform coefficient whose magnitude is not 0, or may be a transform coefficient level whose magnitude is not 0.
Quantization matrix: the quantization matrix may be a matrix used in a quantization process or an inverse quantization process in order to improve subjective image quality or objective image quality of an image. The quantization matrix may also be referred to as a "scaling list".
Quantization matrix coefficients: the quantization matrix coefficient may be each element in the quantization matrix. The quantized matrix coefficients may also be referred to as "matrix coefficients".
Default matrix: the default matrix may be a quantization matrix predefined by the encoding device and decoding device.
Non-default matrix: the non-default matrix may be a quantization matrix that is not predefined by the encoding device and decoding device. The non-default matrix may represent a quantization matrix signaled by a user from the encoding device to the decoding device.
Most Probable Mode (MPM): the MPM may represent an intra-prediction mode with a high probability of being used for intra-prediction for the target block.
The encoding device and the decoding device may determine one or more MPMs based on the encoding parameters associated with the target block and the attributes of the entity associated with the target block.
The encoding device and the decoding device may determine one or more MPMs based on the intra-prediction mode of the reference block. The reference block may include a plurality of reference blocks. The plurality of reference blocks may include a spatially neighboring block adjacent to the left side of the target block and a spatially neighboring block adjacent to the upper side of the target block. In other words, one or more different MPMs may be determined depending on which intra prediction modes have been used for the reference block.
One or more MPMs may be determined in the same way in both the encoding device and the decoding device. That is, the encoding device and the decoding device may share the same MPM list including one or more MPMs.
MPM list: the MPM list may be a list including one or more MPMs. The number of one or more MPMs in the MPM list may be predefined.
MPM indicator: the MPM indicator may indicate an MPM to be used for intra-prediction for the target block among one or more MPMs in the MPM list. For example, the MPM indicator may be an index for the MPM list.
Since the MPM list is determined in the same way in both the encoding device and the decoding device, there may be no need to send the MPM list itself from the encoding device to the decoding device.
The MPM indicator may be signaled from the encoding device to the decoding device. Since the MPM indicator is signaled, the decoding apparatus may determine an MPM to be used for intra prediction for the target block among MPMs in the MPM list.
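A sketch of deriving an MPM list from the intra prediction modes of the left and above reference blocks follows; the list size of 3 and the HEVC-style fallback ordering are assumptions of this sketch:

```python
PLANAR, DC, VER = 0, 1, 26  # HEVC-style intra mode indices

def build_mpm_list(left_mode, above_mode):
    if left_mode == above_mode:
        if left_mode in (PLANAR, DC):
            return [PLANAR, DC, VER]
        # An angular mode and its two adjacent angular modes.
        return [left_mode,
                2 + ((left_mode - 2 - 1) % 33),
                2 + ((left_mode - 2 + 1) % 33)]
    mpms = [left_mode, above_mode]
    for fallback in (PLANAR, DC, VER):
        if fallback not in mpms:
            mpms.append(fallback)
            break
    return mpms

# Encoder and decoder run the same derivation, so only the MPM
# indicator (an index into this list) has to be signaled.
print(build_mpm_list(10, 10), build_mpm_list(PLANAR, DC))
```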
MPM usage indicator: the MPM use indicator may indicate whether an MPM use mode is to be used for prediction for the target block. The MPM usage pattern may be a pattern in which an MPM list is used to determine MPMs to be used for intra prediction for a target block.
The MPM use indicator may be signaled from the encoding device to the decoding device.
Signaling: "signaling" may mean that information is sent from the encoding device to the decoding device. Alternatively, "signaling" may mean that the information is included in a bitstream or a recording medium by an encoding apparatus. The information signaled by the encoding device may be used by the decoding device.
The encoding device may generate encoded information by performing encoding of the information to be signaled. The encoded information may be transmitted from the encoding device to the decoding device. The decoding device may obtain the information by decoding the transmitted encoded information. Here, the encoding may be entropy encoding, and the decoding may be entropy decoding.
Selective signaling: the information may optionally be signaled. Selective signaling for information may mean that the encoding device selectively includes information in a bitstream or a recording medium (according to specific conditions). Selective signaling for information may mean that the decoding device selectively extracts information from the bitstream (according to certain conditions).
Omission of signaling: Signaling of information may be omitted. Omission of signaling for information may mean that the encoding apparatus does not include the information in the bitstream or the recording medium (according to specific conditions). Omission of signaling for information may mean that the decoding apparatus does not extract the information from the bitstream (according to specific conditions).
Statistics: Variables, encoding parameters, constants, etc. may have values on which calculations can be performed. A statistical value may be a value generated by performing a calculation (operation) on the values of such specified targets. For example, the statistics may indicate one or more of the average, the weighted sum, the minimum, the maximum, the mode, the median, and an interpolated value of the values of a specific variable, a specific encoding parameter, a specific constant, and the like.
Fig. 1 is a block diagram showing a configuration of an embodiment of an encoding apparatus to which the present disclosure is applied.
The encoding apparatus 100 may be an encoder, a video encoding apparatus, or an image encoding apparatus. The video may include one or more images (pictures). The encoding apparatus 100 may sequentially encode one or more images of the video.
Referring to fig. 1, the encoding apparatus 100 includes an inter prediction unit 110, an intra prediction unit 120, a switcher 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization (dequantization) unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
The encoding apparatus 100 may perform encoding on the target image using intra mode and/or inter mode. In other words, the prediction mode of the target block may be one of an intra mode and an inter mode.
Hereinafter, the terms "intra mode" and "intra prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "inter mode" and "inter prediction mode" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the term "image" may indicate only a partial image, or may indicate a block. Further, the processing of the "image" may indicate sequential processing of a plurality of blocks.
Further, the encoding apparatus 100 may generate a bitstream including encoded information by encoding a target image, and may output and store the generated bitstream. The generated bit stream may be stored in a computer readable storage medium and may be streamed over a wired and/or wireless transmission medium.
When the intra mode is used as the prediction mode, the switcher 115 can switch to the intra mode. When the inter mode is used as the prediction mode, the switcher 115 may switch to the inter mode.
The encoding apparatus 100 may generate a prediction block of the target block. Further, after the prediction block has been generated, the encoding apparatus 100 may encode a residual block for the target block using a residual between the target block and the prediction block.
When the prediction mode is an intra mode, the intra prediction unit 120 may use pixels of previously encoded/decoded neighboring blocks adjacent to the target block as reference samples. The intra prediction unit 120 may perform spatial prediction on the target block using the reference samples, and may generate prediction samples for the target block via the spatial prediction. The prediction samples may represent samples in a prediction block.
The inter prediction unit 110 may include a motion prediction unit and a motion compensation unit.
When the prediction mode is an inter mode, the motion prediction unit may search a reference image for the region that best matches the target block in a motion prediction procedure, and may derive a motion vector between the target block and the found region. Here, the motion prediction unit may use a search range as the target region of the search.
The reference picture may be stored in a reference picture buffer 190. More specifically, when encoding and/or decoding of a reference picture has been processed, the encoded and/or decoded reference picture may be stored in the reference picture buffer 190.
Since decoded pictures are stored, the reference picture buffer 190 may be a Decoded Picture Buffer (DPB).
The motion compensation unit may generate a prediction block for the target block by performing motion compensation using the motion vector. Here, the motion vector may be a two-dimensional (2D) vector for inter prediction. Further, the motion vector may indicate an offset between the target image and the reference image.
When the motion vector has a fractional (non-integer) value, the motion prediction unit and the motion compensation unit may generate the prediction block by applying an interpolation filter to a partial region of the reference image. In order to perform inter prediction or motion compensation, it may be determined, on a CU basis, which of a skip mode, a merge mode, an Advanced Motion Vector Prediction (AMVP) mode, and a current picture reference mode is used as the method for predicting and compensating for the motion of a PU included in the CU, and inter prediction or motion compensation may be performed according to the determined mode.
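The following sketch illustrates motion compensation with a quarter-pel motion vector. Bilinear interpolation stands in for the longer-tap interpolation filters an actual codec would apply, the function and argument names are assumptions, and the referenced window is assumed to lie inside the reference image.

```python
import numpy as np

def motion_compensate(ref: np.ndarray, x: int, y: int,
                      mv_x_q4: int, mv_y_q4: int,
                      w: int, h: int) -> np.ndarray:
    """Predict a w*h block at (x, y) using a motion vector in
    quarter-pel units (a fractional MV triggers interpolation)."""
    ix, iy = x + (mv_x_q4 >> 2), y + (mv_y_q4 >> 2)    # integer part
    fx, fy = (mv_x_q4 & 3) / 4.0, (mv_y_q4 & 3) / 4.0  # fractional part
    patch = ref[iy:iy + h + 1, ix:ix + w + 1].astype(np.float64)
    # Bilinear interpolation: horizontal, then vertical blending.
    top = (1 - fx) * patch[:h, :w] + fx * patch[:h, 1:w + 1]
    bot = (1 - fx) * patch[1:h + 1, :w] + fx * patch[1:h + 1, 1:w + 1]
    return np.round((1 - fy) * top + fy * bot).astype(ref.dtype)
```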
The subtractor 125 may generate a residual block, where the residual block is the difference between the target block and the prediction block. The residual block may also be referred to as a "residual signal".
The residual signal may be the difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming or quantizing a difference between the original signal and the predicted signal or a signal generated by transforming and quantizing the difference. The residual block may be a residual signal for a block unit.
The transform unit 130 may generate transform coefficients by transforming the residual block, and may output the generated transform coefficients. Here, the transform coefficient may be a coefficient value generated by transforming the residual block.
The transform unit 130 may use one of multiple predefined transform methods when performing the transform.
The multiple predefined transform methods may include a Discrete Cosine Transform (DCT), a Discrete Sine Transform (DST), a Karhunen-Loeve Transform (KLT), and the like.
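As an illustration, the following sketch applies a separable 2D DCT-II to a residual block using SciPy; practical codecs use scaled integer approximations of such transforms, so this floating-point version is an approximation only.

```python
import numpy as np
from scipy.fftpack import dct, idct

def forward_transform(residual: np.ndarray) -> np.ndarray:
    """Separable 2D DCT-II of a residual block (rows, then columns)."""
    return dct(dct(residual, type=2, norm='ortho', axis=0),
               type=2, norm='ortho', axis=1)

def inverse_transform(coeffs: np.ndarray) -> np.ndarray:
    """Inverse of forward_transform (orthonormal DCT-III)."""
    return idct(idct(coeffs, type=2, norm='ortho', axis=0),
                type=2, norm='ortho', axis=1)
```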
A transform method for transforming the residual block may be determined according to at least one of the encoding parameters for the target block and/or the neighboring block. For example, the transform method may be determined based on at least one of an inter prediction mode for the PU, an intra prediction mode for the PU, a size of the TU, and a shape of the TU. Alternatively, the transform information indicating the transform method may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
When the transform skip mode is used, the transform unit 130 may omit an operation of transforming the residual block.
By quantizing the transform coefficients, quantized transform coefficient levels or quantized levels may be generated. Hereinafter, in the embodiments, each of the quantized transform coefficient level and the quantized level may also be referred to as a "transform coefficient".
The quantization unit 140 may generate quantized transform coefficient levels (i.e., quantized levels or quantized coefficients) by quantizing the transform coefficients according to quantization parameters. The quantization unit 140 may output the generated quantized transform coefficient level. In this case, the quantization unit 140 may quantize the transform coefficient using a quantization matrix.
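The following sketch illustrates uniform scalar quantization with a step size that doubles every 6 QP values, following the HEVC/VVC convention; dead-zone handling and quantization matrices are omitted, so this is an approximation rather than the exact procedure of the quantization unit 140.

```python
import numpy as np

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """Uniform scalar quantization of transform coefficients.
    Qstep = 2**((QP - 4) / 6) doubles every 6 QP values (assumed)."""
    qstep = 2.0 ** ((qp - 4) / 6.0)
    return np.round(coeffs / qstep).astype(np.int32)  # quantized levels

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Reconstruct approximate coefficients from quantized levels."""
    qstep = 2.0 ** ((qp - 4) / 6.0)
    return levels * qstep
```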
The entropy encoding unit 150 may generate a bitstream by performing probability-distribution-based entropy encoding on the values calculated by the quantization unit 140 and/or on the encoding parameter values calculated in the encoding procedure. The entropy encoding unit 150 may output the generated bitstream.
The entropy encoding unit 150 may perform entropy encoding on information about pixels of an image and information required to decode the image. For example, information required for decoding an image may include syntax elements and the like.
When entropy coding is applied, fewer bits may be allocated to symbols that occur more frequently, and more bits may be allocated to symbols that occur less frequently. Since symbols are represented through such allocation, the size of the bit string for the symbols to be encoded may be reduced. Therefore, the compression performance of video encoding may be improved through entropy coding.
Further, for entropy encoding, the entropy encoding unit 150 may use an encoding method such as exponential-Golomb coding, Context-Adaptive Variable-Length Coding (CAVLC), or Context-Adaptive Binary Arithmetic Coding (CABAC). For example, the entropy encoding unit 150 may perform entropy encoding using a Variable-Length Coding (VLC) table. For example, the entropy encoding unit 150 may derive a binarization method for a target symbol. Furthermore, the entropy encoding unit 150 may derive a probability model for a target symbol/bin. The entropy encoding unit 150 may perform arithmetic encoding using the derived binarization method, probability model, and context model.
The entropy encoding unit 150 may transform coefficients in the form of 2D blocks into the form of 1D vectors through a transform coefficient scanning method in order to encode quantized transform coefficient levels.
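The following sketch flattens a 2D coefficient block into a 1D list along up-right diagonals, one possible realization of such a scanning method; other scan orders (vertical, horizontal) would differ only in the traversal.

```python
import numpy as np

def up_right_diagonal_scan(block: np.ndarray) -> list:
    """Flatten a 2D coefficient block into 1D along up-right
    diagonals, starting each diagonal at its bottom-left element."""
    h, w = block.shape
    out = []
    for s in range(h + w - 1):      # one pass per anti-diagonal
        y = min(s, h - 1)
        x = s - y
        while y >= 0 and x < w:     # walk up and to the right
            out.append(int(block[y, x]))
            y -= 1
            x += 1
    return out

# For [[1, 2], [3, 4]] the scan order is [1, 3, 2, 4].
assert up_right_diagonal_scan(np.array([[1, 2], [3, 4]])) == [1, 3, 2, 4]
```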
The encoding parameters may be information required for encoding and/or decoding. The encoding parameters may include information encoded by the encoding device 100 and transmitted from the encoding device 100 to the decoding device, and may also include information that may be derived during encoding or decoding. For example, the information transmitted to the decoding device may include a syntax element.
The encoding parameters may include not only information (or flags or indexes), such as syntax elements, that is encoded by the encoding device and signaled by the encoding device to the decoding device, but also information derived in the encoding or decoding process. Furthermore, the encoding parameters may include information required to encode or decode the image. For example, the encoding parameters may include at least one of the following values, combinations of the following values, or statistics of the following values: the size of the unit/block, the shape/form of the unit/block, the depth of the unit/block, the partition information of the unit/block, the partition structure of the unit/block, information indicating whether the unit/block is partitioned in a quadtree structure, information indicating whether the unit/block is partitioned in a binary tree structure, the partition direction of the binary tree structure (horizontal direction or vertical direction), the partition form of the binary tree structure (symmetric partition or asymmetric partition), information indicating whether the unit/block is partitioned in a ternary tree structure, the partition direction of the ternary tree structure (horizontal direction or vertical direction), the partition form of the ternary tree structure (symmetric partition or asymmetric partition, etc.), information indicating whether the unit/block is partitioned in a multi-type tree structure, the combination and direction of the partitions of the multi-type tree structure (horizontal direction or vertical direction, etc.), the partition form of the partitions of the multi-type tree structure (symmetric partition or asymmetric partition, etc.), the partition tree of the multi-type tree form (binary tree or ternary tree), the prediction type (intra prediction or inter prediction), the intra prediction mode/direction, the intra luma prediction mode/direction, the intra chroma prediction mode/direction, intra partition information, inter partition information, the coding block partition flag, the prediction block partition flag, the transform block partition flag, the reference sample filtering method, the reference sample filter taps, the reference sample filter coefficients, the prediction block filtering method, the prediction block filter taps, the prediction block filter coefficients, the prediction block boundary filtering method, the prediction block boundary filter taps, the prediction block boundary filter coefficients, the inter prediction mode, motion information, a motion vector, a motion vector difference, a reference picture index, the inter prediction direction, an inter prediction indicator, a prediction list utilization flag, a reference picture list, a reference picture, a POC, a motion vector predictor, a motion vector prediction index, a motion vector prediction candidate, a motion vector candidate list, information indicating whether a merge mode is used, a merge index, a merge candidate, a merge candidate list, information indicating whether a skip mode is used, the type of an interpolation filter, the taps of an interpolation filter, the filter coefficients of an interpolation filter, the magnitude of a motion vector, the accuracy of motion vector representation, the transform type, the transform size, information indicating whether a first transform is used, information indicating whether an additional (second) transform is used, first transform selection information (or a first transform index), second transform selection information (or a second transform index), information indicating the presence or absence of a residual signal, a coded block pattern, a coded block flag, a quantization parameter, a residual quantization parameter, a quantization matrix, information about an in-loop filter, information indicating whether an in-loop filter is applied, the coefficients of an in-loop filter, the taps of an in-loop filter, the shape/form of an in-loop filter, information indicating whether a deblocking filter is applied, the coefficients of a deblocking filter, the taps of a deblocking filter, the deblocking filter strength, the shape/form of a deblocking filter, information indicating whether an adaptive sample offset is applied, the value of an adaptive sample offset, the category of an adaptive sample offset, the type of an adaptive sample offset, information indicating whether an adaptive loop filter is applied, the coefficients of an adaptive loop filter, the taps of an adaptive loop filter, the shape/form of an adaptive loop filter, a binarization/inverse binarization method, a context model decision method, a context model update method, information indicating whether a normal mode is performed, information indicating whether a bypass mode is performed, a significant coefficient flag, a last significant coefficient flag, a coded flag for a coefficient group, the position of the last significant coefficient, information indicating whether the value of a coefficient is greater than 1, information indicating whether the value of a coefficient is greater than 2, information indicating whether the value of a coefficient is greater than 3, residual coefficient value information, sign information, a reconstructed luma sample, a reconstructed chroma sample, a context bin, a bypass bin, a residual luma sample, a residual chroma sample, a transform coefficient, a luma transform coefficient, a chroma transform coefficient, a quantized level, a luma quantized level, a chroma quantized level, a transform coefficient level scanning method, the size of a motion vector search region on the decoding device side, the shape/form of a motion vector search region on the decoding device side, the number of motion vector searches on the decoding device side, the size of the CTU, the minimum block size, the maximum block depth, the minimum block depth, the image display/output order, slice identification information, the slice type, slice partition information, tile group identification information, the tile group type, tile group partition information, tile identification information, the tile type, tile partition information, the picture type, the bit depth, the input sample bit depth, the reconstructed sample bit depth, the residual sample bit depth, the transform coefficient bit depth, the quantized level bit depth, information about the luma signal, information about the chroma signal, the color space of the target block, and the color space of the residual block. In addition, information related to the above-described encoding parameters may also be included in the encoding parameters. Information used to calculate and/or derive the above-described encoding parameters may also be included in the encoding parameters. Information calculated or derived using the above-described encoding parameters may also be included in the encoding parameters.
The first transform selection information may indicate a first transform applied to the target block.
The second transform selection information may indicate a second transform applied to the target block.
The residual signal may represent a difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming a difference between the original signal and the predicted signal. Alternatively, the residual signal may be a signal generated by transforming and quantizing a difference between the original signal and the predicted signal. The residual block may be a residual signal for the block.
Here, signaling information may mean that the encoding apparatus 100 includes entropy-encoded information generated by performing entropy encoding on a flag or an index in a bitstream, and may mean that the decoding apparatus 200 acquires information by performing entropy decoding on the entropy-encoded information extracted from the bitstream. Here, the information may include a flag, an index, and the like.
A signal may mean information to be signaled. Hereinafter, information for the image and the block may be referred to as a "signal". Further, in the following, the terms "information" and "signal" may be used to have the same meaning and may be used interchangeably with each other. For example, the specific signal may be a signal representing a specific block. The original signal may be a signal representing the target block. The prediction signal may be a signal representing a prediction block. The residual signal may be a signal representing a residual block.
The bitstream may include information based on a specific syntax. The encoding apparatus 100 may generate a bitstream including information according to a specific syntax. The decoding apparatus 200 may acquire information from the bitstream according to a specific syntax.
Since the encoding apparatus 100 performs encoding via inter prediction, the encoded target image may be used as a reference image for another image to be subsequently processed. Accordingly, the encoding apparatus 100 may reconstruct or decode the encoded target image and store the reconstructed or decoded image as a reference image in the reference picture buffer 190. For decoding, inverse quantization and inverse transformation of the encoded target image may be performed.
The quantized level may be inversely quantized by the inverse quantization unit 160, and may be inversely transformed by the inverse transform unit 170. The inverse quantization unit 160 may generate inversely quantized coefficients by performing inverse quantization on the quantized level. The inverse transform unit 170 may generate inversely quantized and inversely transformed coefficients by performing the inverse transform on the inversely quantized coefficients.
The inverse quantized and inverse transformed coefficients may be added to the prediction block by adder 175. The inverse quantized and inverse transformed coefficients and the prediction block are added, and then a reconstructed block may be generated. Here, the inversely quantized and/or inverse transformed coefficients may represent coefficients on which one or more of inverse quantization and inverse transformation are performed, and may also represent a reconstructed residual block. Here, the reconstructed block may represent a restored block or a decoded block.
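Tying the above together, the following sketch reconstructs a block by adding the inversely quantized and inversely transformed residual to the prediction block and clipping to the sample range; it reuses the dequantize and inverse_transform sketches given earlier, which are assumptions of this illustration rather than the exact modules of the apparatus.

```python
import numpy as np

def reconstruct_block(pred: np.ndarray, levels: np.ndarray, qp: int,
                      bit_depth: int = 8) -> np.ndarray:
    """Reconstruction = prediction + inverse-quantized,
    inverse-transformed residual, clipped to the sample range."""
    residual = inverse_transform(dequantize(levels, qp))
    recon = pred.astype(np.float64) + residual
    return np.clip(np.round(recon), 0, (1 << bit_depth) - 1).astype(np.uint8)
```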
The reconstructed block may be filtered by a filter unit 180. The filter unit 180 may apply one or more of a deblocking filter, a Sample Adaptive Offset (SAO) filter, an Adaptive Loop Filter (ALF), and a non-local filter (NLF) to a reconstructed sample, a reconstructed block, or a reconstructed picture. The filter unit 180 may also be referred to as an "in-loop filter".
The deblocking filter may remove block distortion occurring at the boundaries between blocks in the reconstructed picture. In order to determine whether to apply the deblocking filter, whether to apply the deblocking filter to the target block may be determined based on the pixels included in several columns or rows in the block.
When a deblocking filter is applied to a target block, the applied filter may be different depending on the strength of the deblocking filter required. In other words, among different filters, a filter determined in consideration of the intensity of deblocking filtering may be applied to the target block. When the deblocking filter is applied to the target block, one or more of a long tap filter, a strong filter, a weak filter, and a gaussian filter may be applied to the target block according to the required intensity of the deblocking filter.
Further, when vertical filtering and horizontal filtering are performed on the target block, horizontal filtering and vertical filtering may be performed in parallel.
SAO may add an appropriate offset to a pixel value in order to compensate for a coding error. SAO may perform, on the image to which deblocking has been applied, a per-pixel correction that uses an offset corresponding to the difference between the original image and the deblocked image. In order to perform offset correction for an image, a method of dividing the pixels included in the image into a certain number of regions, determining the region to which an offset is to be applied among the divided regions, and applying the offset to the determined region may be used, and a method of applying an offset in consideration of edge information of each pixel may also be used.
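The following sketch illustrates the band-offset variant of SAO; the division of the sample range into 32 equal bands and the four signaled offsets follow the HEVC convention and are assumptions of this illustration.

```python
import numpy as np

def sao_band_offset(samples: np.ndarray, start_band: int,
                    offsets: list, bit_depth: int = 8) -> np.ndarray:
    """Classify samples into 32 equal intensity bands and add the
    signaled offset to samples falling in the selected bands."""
    shift = bit_depth - 5                 # 32 bands -> 5 band-index bits
    band = samples.astype(np.int32) >> shift
    out = samples.astype(np.int32)
    for i, off in enumerate(offsets):     # typically 4 offsets
        out[band == start_band + i] += off
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(samples.dtype)
```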
The ALF may perform filtering based on a value obtained by comparing the reconstructed image with the original image. After pixels included in an image have been divided into a predetermined number of groups, filters to be applied to each group may be determined, and filtering may be differently performed for the respective groups. Information regarding whether to apply the adaptive loop filter may be signaled for each CU. Such information may be signaled for the luminance signal. The shape and filter coefficients of the ALF to be applied to each block may be different for each block. Alternatively, ALF having a fixed form may be applied to a block regardless of the characteristics of the block.
The non-local filter may perform filtering based on a reconstructed block similar to the target block. Regions similar to the target block may be selected from the reconstructed picture and filtering of the target block may be performed using statistical properties of the selected similar regions. Information about whether to apply the non-local filter may be signaled for the Coding Unit (CU). Furthermore, the shape and filter coefficients of the non-local filter to be applied to a block may be different depending on the block.
The reconstructed block or the reconstructed image filtered by the filter unit 180 may be stored as a reference picture in a reference picture buffer 190. The reconstructed block filtered by the filter unit 180 may be part of a reference picture. In other words, the reference picture may be a reconstructed picture composed of the reconstructed blocks filtered by the filter unit 180. The stored reference pictures may then be used for inter prediction or motion compensation.
Fig. 2 is a block diagram showing a configuration of an embodiment of a decoding apparatus to which the present disclosure is applied.
The decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus.
Referring to fig. 2, the decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization (dequantization) unit 220, an inverse transform unit 230, an intra prediction unit 240, an inter prediction unit 250, a switch 245, an adder 255, a filter unit 260, and a reference picture buffer 270.
The decoding apparatus 200 may receive the bit stream output from the encoding apparatus 100. The decoding apparatus 200 may receive a bit stream stored in a computer-readable storage medium and may receive a bit stream transmitted through a wired/wireless transmission medium stream.
The decoding apparatus 200 may perform decoding on the bit stream in an intra mode and/or an inter mode. Further, the decoding apparatus 200 may generate a reconstructed image or a decoded image via decoding, and may output the reconstructed image or the decoded image.
For example, an operation of switching to the intra mode or the inter mode based on the prediction mode for decoding may be performed by the switch 245. When the prediction mode for decoding is an intra mode, the switch 245 may be operated to switch to the intra mode. When the prediction mode for decoding is an inter mode, the switch 245 may be operated to switch to the inter mode.
The decoding apparatus 200 may acquire a reconstructed residual block by decoding an input bitstream, and may generate a prediction block. When the reconstructed residual block and the prediction block are acquired, the decoding apparatus 200 may generate a reconstructed block as a target to be decoded by adding the reconstructed residual block and the prediction block.
The entropy decoding unit 210 may generate symbols by performing entropy decoding on the bitstream based on probability distribution of the bitstream. The generated symbols may include symbols in the form of quantized transform coefficient levels (i.e., quantized levels or quantized coefficients). Here, the entropy decoding method may be similar to the entropy encoding method described above. That is, the entropy decoding method may be an inverse of the entropy encoding method described above.
The entropy decoding unit 210 may change coefficients having a one-dimensional (1D) vector form into a 2D block shape through a transform coefficient scanning method in order to decode quantized transform coefficient levels.
For example, the coefficients of a block may be changed to a 2D block shape by scanning the block coefficients using an upper right diagonal scan. Alternatively, which of the upper right diagonal scan, the vertical scan, and the horizontal scan is to be used may be determined according to the size of the corresponding block and/or the intra prediction mode.
The quantized coefficients may be dequantized by dequantization unit 220. The dequantization unit 220 may generate dequantized coefficients by performing dequantization on the quantized coefficients. Further, the inversely quantized coefficients may be inversely transformed by the inverse transformation unit 230. The inverse transform unit 230 may generate a reconstructed residual block by performing inverse transform on the inversely quantized coefficients. As a result of performing inverse quantization and inverse transformation on the quantized coefficients, a reconstructed residual block may be generated. Here, when generating the reconstructed residual block, the inverse quantization unit 220 may apply a quantization matrix to the quantized coefficients.
When the intra mode is used, the intra prediction unit 240 may generate a prediction block by performing spatial prediction for a target block, wherein the spatial prediction uses pixel values of previously decoded neighboring blocks adjacent to the target block.
The inter prediction unit 250 may include a motion compensation unit. Alternatively, the inter prediction unit 250 may be designated as a "motion compensation unit".
When the inter mode is used, the motion compensation unit may generate a prediction block by performing motion compensation for the target block, wherein the motion compensation uses a motion vector and a reference image stored in the reference picture buffer 270.
The motion compensation unit may apply an interpolation filter to a partial region of the reference image when the motion vector has a value other than an integer, and may generate the prediction block using the reference image to which the interpolation filter is applied. In order to perform motion compensation, the motion compensation unit may determine which mode of a skip mode, a merge mode, an Advanced Motion Vector Prediction (AMVP) mode, and a current picture reference mode corresponds to a motion compensation method for a PU included in the CU, based on the CU, and may perform motion compensation according to the determined mode.
The reconstructed residual block and the prediction block may be added to each other by an adder 255. The adder 255 may generate a reconstructed block by adding the reconstructed residual block and the prediction block.
The reconstructed block may be filtered by a filter unit 260. The filter unit 260 may apply at least one of a deblocking filter, an SAO filter, an ALF, and an NLF to the reconstructed block or the reconstructed image. The reconstructed image may be a picture comprising reconstructed blocks.
The filter unit may output the reconstructed image.
The reconstructed image and/or the reconstructed block filtered by the filter unit 260 may be stored as a reference picture in a reference picture buffer 270. The reconstructed block filtered by the filter unit 260 may be part of a reference picture. In other words, the reference picture may be an image composed of the reconstructed block filtered by the filter unit 260. The stored reference pictures may then be used for inter prediction or motion compensation.
Fig. 3 is a diagram schematically showing a partition structure of an image when the image is encoded and decoded.
Fig. 3 may schematically illustrate an example in which a single unit is partitioned into a plurality of sub-units.
In order to partition an image efficiently, a Coding Unit (CU) may be used in encoding and decoding. The term "unit" may be used to collectively designate 1) a block including image samples and 2) a syntax element. For example, "partition of a unit" may represent "partition of a block corresponding to the unit".
A CU may be used as a basic unit for image encoding/decoding. A CU may be used as a unit to which one mode selected from an intra mode and an inter mode is applied in image encoding/decoding. In other words, in image encoding/decoding, it may be determined which one of the intra mode and the inter mode is to be applied to each CU.
Furthermore, a CU may be a basic unit that predicts, transforms, quantizes, inverse transforms, inverse quantizes, and encodes/decodes transform coefficients.
Referring to fig. 3, an image 300 may be sequentially partitioned into units corresponding to a Largest Coding Unit (LCU), and a partition structure may be determined for each LCU. Here, the LCU may be used to have the same meaning as a Coding Tree Unit (CTU).
Partitioning a unit may mean partitioning a block corresponding to the unit. The block partition information may include depth information regarding the depth of the unit. The depth information may indicate the number of times a unit is partitioned and/or the degree to which the unit is partitioned. A single unit may be hierarchically partitioned into multiple sub-units while having tree-structure-based depth information. Each partitioned subunit may have depth information. The depth information may be information indicating the size of the CU. The depth information may be stored for each CU.
Each CU may have depth information. When a CU is partitioned, the depth of the CU generated from the partition may be increased by 1 from the depth of the partitioned CU.
The partition structure may represent a distribution of Coding Units (CUs) in LCU 310 for efficiently encoding an image. Such a distribution may be determined according to whether a single CU is to be partitioned into multiple CUs. The number of CUs generated by partitioning may be a positive integer of 2 or more, including 2, 3, 4, 8, 16, and the like.
Depending on the number of CUs generated by performing partitioning, the horizontal and vertical sizes of each CU generated by performing partitioning may be smaller than those of the CU before being partitioned. For example, the horizontal and vertical sizes of each CU generated by partitioning may be half the horizontal and vertical sizes of the CU before partitioning.
Each partitioned CU may be recursively partitioned into four CUs in the same manner. Via the recursive partitioning, at least one of the horizontal size and the vertical size of each partitioned CU may be reduced compared to at least one of the horizontal size and the vertical size of the CU before being partitioned.
Partitioning of the CU may be performed recursively until a predefined depth or a predefined size.
For example, the depth of a CU may have a value ranging from 0 to 3. The size of a CU may range from a 64 x 64 size to an 8 x 8 size depending on the depth of the CU.
For example, the depth of LCU 310 may be 0 and the depth of the Smallest Coding Unit (SCU) may be a predefined maximum depth. Here, as described above, the LCU may be a CU having a maximum coding unit size, and the SCU may be a CU having a minimum coding unit size.
Partitioning may begin at LCU 310 and the depth of a CU may increase by 1 each time the horizontal and/or vertical size of the CU is reduced by partitioning.
For example, for each depth, a CU that is not partitioned may have a size of 2N×2N. Further, in the case where a CU is partitioned, a CU having a size of 2N×2N may be partitioned into four CUs each having a size of N×N. The value of N may be halved each time the depth increases by 1.
Referring to fig. 3, an LCU of depth 0 may have 64×64 pixels or a 64×64 block. 0 may be the minimum depth. An SCU of depth 3 may have 8×8 pixels or an 8×8 block. 3 may be the maximum depth. Here, a CU having a 64×64 block, as the LCU, may be represented by a depth of 0. A CU having a 32×32 block may be represented by a depth of 1. A CU having a 16×16 block may be represented by a depth of 2. A CU having an 8×8 block, as the SCU, may be represented by a depth of 3.
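Because each quadtree split halves both dimensions, the CU size follows directly from the depth, as the following sketch shows for a 64×64 LCU.

```python
def cu_size_at_depth(lcu_size: int, depth: int) -> int:
    """Each quadtree split halves both dimensions, so the CU size is
    the LCU size divided by 2**depth (64 -> 32 -> 16 -> 8 here)."""
    return lcu_size >> depth

assert [cu_size_at_depth(64, d) for d in range(4)] == [64, 32, 16, 8]
```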
The information on whether the corresponding CU is partitioned may be represented by partition information of the CU. The partition information may be 1-bit information. All CUs except SCU may include partition information. For example, the value of partition information of a CU that is not partitioned may be a first value. The value of partition information of the partitioned CU may be a second value. When the partition information indicates whether the CU is partitioned, the first value may be "0" and the second value may be "1".
For example, when a single CU is partitioned into four CUs, the horizontal size and the vertical size of each of the four CUs generated by performing the partitioning may be half of the horizontal size and the vertical size of the CU before being partitioned. When a CU of size 32×32 is partitioned into four CUs, the size of each of the partitioned four CUs may be 16×16. When a single CU is partitioned into four CUs, the CUs may be considered to have been partitioned in a quadtree structure. In other words, quadtree partitioning may be considered as already applied to CUs.
For example, when a single CU is partitioned into two CUs, the horizontal size or the vertical size of each of the two CUs generated by performing the partitioning may be half of the horizontal size or the vertical size of the CU before being partitioned. When a CU of size 32×32 is vertically partitioned into two CUs, the size of each of the partitioned two CUs may be 16×32. When a CU of size 32×32 is horizontally partitioned into two CUs, the size of each of the partitioned two CUs may be 32×16. When a single CU is partitioned into two CUs, the CUs may be considered to have been partitioned in a binary tree structure. In other words, the binary tree partition may be considered to have been applied to the CU.
For example, when a single CU is partitioned (or divided) into three CUs, the horizontal size or the vertical size of the original CU before being partitioned is divided at a ratio of 1:2:1, thus generating three sub-CUs. For example, when a CU of size 16×32 is horizontally partitioned into three sub-CUs, the three sub-CUs generated by the partitioning may have sizes of 16×8, 16×16, and 16×8, respectively, in the top-to-bottom direction. For example, when a CU of size 32×32 is vertically partitioned into three sub-CUs, the three sub-CUs generated by the partitioning may have sizes of 8×32, 16×32, and 8×32, respectively, in the left-to-right direction. When a single CU is partitioned into three CUs, the CU may be considered to have been partitioned in a ternary tree form. In other words, the ternary tree partition may be considered to have been applied to the CU.
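The following sketch computes the sub-block sizes produced by the quadtree, binary tree, and ternary tree splits described above; the split-type labels are illustrative.

```python
def split_sizes(w: int, h: int, split: str) -> list:
    """Sub-block sizes for the split types described above."""
    if split == 'quad':            # four quadrants, both sizes halved
        return [(w // 2, h // 2)] * 4
    if split == 'binary_ver':      # vertical split: width halved
        return [(w // 2, h)] * 2
    if split == 'binary_hor':      # horizontal split: height halved
        return [(w, h // 2)] * 2
    if split == 'ternary_ver':     # 1:2:1 along the width
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    if split == 'ternary_hor':     # 1:2:1 along the height
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    raise ValueError(split)

# The examples from the paragraph above:
assert split_sizes(32, 32, 'ternary_ver') == [(8, 32), (16, 32), (8, 32)]
assert split_sizes(16, 32, 'ternary_hor') == [(16, 8), (16, 16), (16, 8)]
```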
Both the quadtree partition and the binary tree partition are applied to LCU 310 of fig. 3.
In the encoding apparatus 100, a Coding Tree Unit (CTU) having a size of 64×64 may be partitioned into a plurality of smaller CUs by a recursive quadtree structure. A single CU may be partitioned into four CUs having the same size. Each CU may be recursively partitioned and may have a quadtree structure.
By recursive partitioning of the CU, an optimal partitioning method that incurs the minimum rate-distortion cost may be selected.
The Coding Tree Unit (CTU) 320 in fig. 3 is an example of a CTU to which a quadtree partition, a binary tree partition, and a ternary tree partition are all applied.
As described above, in order to partition the CTU, at least one of the quadtree partition, the binary tree partition, and the ternary tree partition may be applied to the CTU. The partitions may be applied based on a specific priority.
For example, the quadtree partition may be preferentially applied to the CTU. A CU that can no longer be partitioned in a quadtree form may correspond to a leaf node of the quadtree. A CU corresponding to a leaf node of the quadtree may be a root node of a binary tree and/or a ternary tree. That is, the CU corresponding to a leaf node of the quadtree may be partitioned in a binary tree form or a ternary tree form, or may not be further partitioned. In this case, each CU generated by applying a binary tree partition or a ternary tree partition to a CU corresponding to a leaf node of the quadtree is prevented from being partitioned again by the quadtree, whereby the partitioning of blocks and/or the signaling of block partition information can be effectively performed.
Quad partition information may be used to signal the partitioning of a CU corresponding to each node of the quadtree. Quad partition information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a quadtree form. Quad partition information having a second value (e.g., "0") may indicate that the corresponding CU is not partitioned in a quadtree form. The quad partition information may be a flag having a specific length (e.g., 1 bit).
There may be no priority between the binary tree partition and the ternary tree partition. That is, a CU corresponding to a leaf node of the quadtree may be partitioned in a binary tree form or a ternary tree form. Furthermore, a CU generated by a binary tree partition or a ternary tree partition may or may not be further partitioned in a binary tree form or a ternary tree form.
Partitioning performed when there is no priority between the binary tree partition and the ternary tree partition may be referred to as a "multi-type tree partition". That is, a CU corresponding to a leaf node of the quadtree may be a root node of a multi-type tree. The partitioning of a CU corresponding to each node of the multi-type tree may be signaled using at least one of information indicating whether the CU is partitioned in the multi-type tree form, partition direction information, and partition tree information. For the partitioning of a CU corresponding to each node of the multi-type tree, the information indicating whether the multi-type tree partitioning is performed, the partition direction information, and the partition tree information may be sequentially signaled.
For example, information indicating whether a CU is partitioned in a multi-type tree and has a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a multi-type tree form. The information indicating whether the CU is partitioned in the multi-type tree and has a second value (e.g., "0") may indicate that the corresponding CU is not partitioned in the multi-type tree form.
When a CU corresponding to each node of the multi-type tree is partitioned in the form of the multi-type tree, the corresponding CU may further include partition direction information.
The partition direction information may indicate a partition direction of the multi-type tree partition. Partition direction information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a vertical direction. Partition direction information having a second value (e.g., "0") may indicate that the corresponding CU is partitioned in the horizontal direction.
When a CU corresponding to each node of the multi-type tree is partitioned in the form of the multi-type tree, the corresponding CU may further include partition tree information. The partition tree information may indicate a tree that is used for multi-type tree partitions.
For example, partition tree information having a first value (e.g., "1") may indicate that the corresponding CU is partitioned in a binary tree form. Partition tree information having a second value (e.g., "0") may indicate that the corresponding CU is partitioned in a ternary tree form.
Here, each of the above information indicating whether partitioning by the multi-type tree is performed, partition tree information, and partition direction information may be a flag having a specific length (e.g., 1 bit).
At least one of the above-described quad partition information, the information indicating whether the multi-type tree partitioning is performed, the partition direction information, and the partition tree information may be entropy-encoded and/or entropy-decoded. In order to perform entropy encoding/decoding of such information, the information of a neighboring CU adjacent to the target CU may be used.
For example, it may be considered that the partition form (i.e., partitioned/not partitioned, the partition tree, and/or the partition direction) of the left CU and/or the above CU and the partition form of the target CU are similar to each other with a high probability. Therefore, based on the information of the neighboring CU, context information for the entropy encoding and/or entropy decoding of the information of the target CU may be derived. Here, the information of the neighboring CU may include at least one of 1) the quad partition information of the neighboring CU, 2) the information indicating whether the neighboring CU is partitioned in a multi-type tree form, 3) the partition direction information of the neighboring CU, and 4) the partition tree information of the neighboring CU.
In another embodiment of the binary tree partition and the ternary tree partition, the binary tree partition may be preferentially performed. That is, the binary tree partition may be applied first, and then a CU corresponding to a leaf node of the binary tree may be set as a root node of the ternary tree. In this case, a quadtree partition or a binary tree partition may not be performed on a CU corresponding to a node of the ternary tree.
A CU that is not further partitioned by a quadtree partition, a binary tree partition, and/or a ternary tree partition may be a unit of coding, prediction, and/or transform. That is, the CU may not be further partitioned for prediction and/or transform. Therefore, a partition structure for partitioning the CU into Prediction Units (PUs) and/or Transform Units (TUs), partition information thereof, and the like may not be present in the bitstream.
However, when the size of the CU as a unit of partition is greater than the size of the maximum transform block, the CU may be recursively partitioned until the size of the CU becomes less than or equal to the size of the maximum transform block. For example, when the size of a CU is 64×64 and the size of the largest transform block is 32×32, the CU may be partitioned into four 32×32 blocks in order to perform the transform. For example, when the size of a CU is 32×64 and the size of the largest transform block is 32×32, the CU may be partitioned into two 32×32 blocks.
In this case, information indicating whether the CU is partitioned for transformation may not be separately signaled. Without signaling, it may be determined whether the CU is partitioned via a comparison between the horizontal size (and/or vertical size) of the CU and the horizontal size (and/or vertical size) of the largest transform block. For example, a CU may be vertically halved when the horizontal size of the CU is greater than the horizontal size of the largest transform block. Further, when the vertical size of the CU is greater than that of the largest transform block, the CU may be horizontally halved.
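The following sketch illustrates this implicit (non-signaled) splitting rule: the CU is halved along any dimension that exceeds the maximum transform block size until every piece fits.

```python
def implicit_transform_split(w: int, h: int, max_tb: int = 64) -> list:
    """Recursively halve a CU along any dimension larger than the
    maximum transform block size; no flag is signaled for this."""
    if w <= max_tb and h <= max_tb:
        return [(w, h)]
    if w > max_tb:
        halves = [(w // 2, h), (w // 2, h)]   # vertical halving
    else:
        halves = [(w, h // 2), (w, h // 2)]   # horizontal halving
    out = []
    for sw, sh in halves:
        out += implicit_transform_split(sw, sh, max_tb)
    return out

# The examples from the paragraph above:
assert implicit_transform_split(64, 64, 32) == [(32, 32)] * 4
assert implicit_transform_split(32, 64, 32) == [(32, 32)] * 2
```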
Information about the maximum size and/or minimum size of the CU and information about the maximum size and/or minimum size of the transform block may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a tile group level, or a slice level. For example, the minimum size of the CU may be set to 4×4. For example, the maximum size of the transform block may be set to 64×64. For example, the minimum size of the transform block may be set to 4×4.
Information about the minimum size of a CU corresponding to a leaf node of the quadtree (i.e., the minimum size of the quadtree) and/or information about the maximum depth of the path from the root node of the multi-type tree to a leaf node thereof (i.e., the maximum depth of the multi-type tree) may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a slice level, a tile group level, or a tile level. The information about the minimum size of the quadtree and/or the information about the maximum depth of the multi-type tree may be signaled or determined separately for each of an intra slice and an inter slice.
Information about the difference between the size of the CTU and the maximum size of the transform block may be signaled or determined at a level higher than the level of the CU. For example, the higher level may be a sequence level, a picture level, a slice level, a tile group level, or a tile level. Information about the maximum size of a CU corresponding to each node of the binary tree (i.e., the maximum size of the binary tree) may be determined based on the size of the CTU and the difference information. The maximum size of a CU corresponding to each node of the ternary tree (i.e., the maximum size of the ternary tree) may have a different value depending on the slice type. For example, for an intra slice, the maximum size of the ternary tree may be 32×32. For example, for an inter slice, the maximum size of the ternary tree may be 128×128. For example, the minimum size of a CU corresponding to each node of the binary tree (i.e., the minimum size of the binary tree) and/or the minimum size of a CU corresponding to each node of the ternary tree (i.e., the minimum size of the ternary tree) may be set to the minimum size of the CU.
In another example, the maximum size of the binary tree and/or the maximum size of the ternary tree may be signaled or determined at the slice level. Further, the minimum size of the binary tree and/or the minimum size of the ternary tree may be signaled or determined at the slice level.
Based on the various block sizes and depths described above, the quad partition information, the information indicating whether the multi-type tree partitioning is performed, the partition tree information, and/or the partition direction information may or may not be present in the bitstream.
For example, when the size of the CU is not greater than the minimum size of the quadtree, the CU may not include the quad partition information, and the quad partition information of the CU may be inferred as the second value.
For example, when the size (the horizontal size and the vertical size) of a CU corresponding to each node of the multi-type tree is greater than the maximum size (the horizontal size and the vertical size) of the binary tree and/or the maximum size (the horizontal size and the vertical size) of the ternary tree, the CU may not be partitioned in a binary tree form and/or a ternary tree form. In this case, the information indicating whether the multi-type tree partitioning is performed may not be signaled, but may be inferred as the second value.
Alternatively, when the size (the horizontal size and the vertical size) of a CU corresponding to each node of the multi-type tree is equal to the minimum size (the horizontal size and the vertical size) of the binary tree, or when the size (the horizontal size and the vertical size) of the CU is equal to twice the minimum size (the horizontal size and the vertical size) of the ternary tree, the CU may not be partitioned in a binary tree form and/or a ternary tree form. In this case, the information indicating whether the multi-type tree partitioning is performed may not be signaled, but may be inferred as the second value. The reason for this is that, if the CU were partitioned in a binary tree form and/or a ternary tree form, a CU smaller than the minimum size of the binary tree and/or the minimum size of the ternary tree would be generated.
Alternatively, the binary tree partition or the ternary tree partition may be restricted based on the size of a virtual pipeline data unit (i.e., the size of the pipeline buffer). For example, the binary tree partition or the ternary tree partition may be restricted when, as a result of the binary tree partition or the ternary tree partition, a CU would be partitioned into sub-CUs that do not fit the size of the pipeline buffer. The size of the pipeline buffer may be equal to the maximum size of the transform block (e.g., 64×64).
For example, when the size of the pipeline buffer is 64×64, the following partitions may be restricted (see the sketch after this list).
Ternary tree partition for an N×M CU (where N and/or M is 128)
Horizontal binary tree partition for a 128×N CU (where N <= 64)
Vertical binary tree partition for an N×128 CU (where N <= 64)
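The following sketch encodes the three restrictions listed above as a predicate for a 64×64 pipeline buffer; the split-type labels are illustrative.

```python
def partition_allowed(w: int, h: int, split: str, buf: int = 64) -> bool:
    """Disallow the three cases listed above for a 64x64 pipeline
    buffer (virtual pipeline data unit)."""
    if split in ('ternary_hor', 'ternary_ver') and (w == 128 or h == 128):
        return False                       # ternary split of a 128 side
    if split == 'binary_hor' and w == 128 and h <= 64:
        return False                       # horizontal BT of 128xN, N <= 64
    if split == 'binary_ver' and h == 128 and w <= 64:
        return False                       # vertical BT of Nx128, N <= 64
    return True

assert not partition_allowed(128, 64, 'ternary_ver')
assert not partition_allowed(128, 32, 'binary_hor')
assert partition_allowed(128, 128, 'binary_hor')
```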
Alternatively, when the depth of a CU corresponding to each node of the multi-type tree is equal to the maximum depth of the multi-type tree, the CU may not be partitioned in a binary tree form and/or a ternary tree form. In this case, the information indicating whether the multi-type tree partitioning is performed may not be signaled, but may be inferred as the second value.
Alternatively, the information indicating whether the multi-type tree partitioning is performed may be signaled only when at least one of a vertical binary tree partition, a horizontal binary tree partition, a vertical ternary tree partition, and a horizontal ternary tree partition is possible for a CU corresponding to each node of the multi-type tree. Otherwise, the CU may not be partitioned in a binary tree form and/or a ternary tree form. In this case, the information indicating whether the multi-type tree partitioning is performed may not be signaled, but may be inferred as the second value.
Alternatively, for a CU corresponding to each node of the multi-type tree, the partition direction information may be signaled only when both a vertical binary tree partition and a horizontal binary tree partition are possible, or only when both a vertical ternary tree partition and a horizontal ternary tree partition are possible. Otherwise, the partition direction information may not be signaled, but may be inferred as a value indicating the direction in which the CU can be partitioned.
Alternatively, for a CU corresponding to each node of the multi-type tree, the partition tree information may be signaled only when both a vertical binary tree partition and a vertical ternary tree partition are possible, or only when both a horizontal binary tree partition and a horizontal ternary tree partition are possible. Otherwise, the partition tree information may not be signaled, but may be inferred as a value indicating the tree applicable to the partitioning of the CU.
Fig. 4 is a diagram illustrating a form of a prediction unit that an encoding unit can include.
Among the CUs partitioned from the LCU, a CU that is no longer partitioned may be divided into one or more Prediction Units (PUs). Such division may also be referred to as "partitioning".
A PU may be a base unit for prediction. The PU may be encoded and decoded in any one of a skip mode, an inter mode, and an intra mode. The PU may be partitioned into various shapes according to various modes. For example, the target block described above with reference to fig. 1 and the target block described above with reference to fig. 2 may both be PUs.
A CU may not be partitioned into PUs. When a CU is not divided into PUs, the size of the CU and the size of the PU may be equal to each other.
In the skip mode, no partitioning may be present in the CU. In the skip mode, a 2N×2N mode 410 may be supported without partitioning, wherein in the 2N×2N mode 410, the size of the PU and the size of the CU are identical to each other.
In the inter mode, 8 types of partition shapes may be present in the CU. For example, in the inter mode, the 2N×2N mode 410, a 2N×N mode 415, an N×2N mode 420, an N×N mode 425, a 2N×nU mode 430, a 2N×nD mode 435, an nL×2N mode 440, and an nR×2N mode 445 may be supported.
In the intra mode, the 2N×2N mode 410 and the N×N mode 425 may be supported.
In the 2N×2N mode 410, a PU of size 2N×2N may be encoded. A PU of size 2N×2N may represent a PU having the same size as the CU. For example, a PU of size 2N×2N may have a size of 64×64, 32×32, 16×16, or 8×8.
In the N×N mode 425, a PU of size N×N may be encoded.
For example, in intra prediction, when the size of a PU is 8×8, four partitioned PUs may be encoded. The size of each partitioned PU may be 4×4.
When encoding a PU in intra mode, the PU may be encoded using any of a plurality of intra prediction modes. For example, HEVC techniques may provide 35 intra-prediction modes, and a PU may be encoded in any of the 35 intra-prediction modes.
Which of the 2N×2N mode 410 and the N×N mode 425 is to be used to encode the PU may be determined based on the rate-distortion cost.
The encoding apparatus 100 may perform an encoding operation on a PU having a size of 2N×2N. Here, the encoding operation may be an operation of encoding the PU in each of a plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the best intra prediction mode for a PU of size 2N×2N can be derived. The optimal intra prediction mode may be the intra prediction mode that exhibits the minimum rate-distortion cost when encoding a PU having a size of 2N×2N, among the plurality of intra prediction modes that can be used by the encoding apparatus 100.
Further, the encoding apparatus 100 may sequentially perform encoding operations on the respective PUs obtained by performing N×N partitioning. Here, the encoding operation may be an operation of encoding the PU in each of a plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the best intra prediction mode for a PU of size N×N can be derived. The optimal intra prediction mode may be the intra prediction mode that exhibits the minimum rate-distortion cost when encoding a PU of size N×N, among the plurality of intra prediction modes that can be used by the encoding apparatus 100.
The encoding apparatus 100 may determine which of a PU of size 2N×2N and a PU of size N×N is to be encoded, based on a comparison between the rate-distortion cost of the PU of size 2N×2N and the rate-distortion cost of the PU of size N×N.
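The comparison described above can be summarized in a short sketch. The following Python fragment illustrates only the rate-distortion comparison; the cost values, the Lagrange multiplier, and the function names are hypothetical and not taken from any standard.

```python
def rd_cost(distortion, rate, lam):
    # Lagrangian rate-distortion cost: J = D + lambda * R
    return distortion + lam * rate

def choose_pu_partition(cost_2Nx2N, costs_NxN, lam=1.0):
    """Choose between a single 2Nx2N PU and four NxN PUs by
    comparing their total rate-distortion costs."""
    j_2Nx2N = rd_cost(*cost_2Nx2N, lam)
    # The cost of NxN partitioning is the sum over the four sub-PUs,
    # each encoded with its own optimal intra prediction mode.
    j_NxN = sum(rd_cost(d, r, lam) for d, r in costs_NxN)
    return "2Nx2N" if j_2Nx2N <= j_NxN else "NxN"

# Hypothetical (distortion, rate) pairs; here the 2Nx2N PU wins.
print(choose_pu_partition((100.0, 20), [(30.0, 8)] * 4))  # -> 2Nx2N
```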
A single CU may be partitioned into one or more PUs, and a PU may be partitioned into multiple PUs.
For example, when a single PU is partitioned into four PUs, the horizontal and vertical sizes of each of the four PUs generated by the partitioning may be half the horizontal and vertical sizes of the PU prior to being partitioned. When a PU of size 32×32 is partitioned into four PUs, the size of each of the four partitioned PUs may be 16×16. When a single PU is partitioned into four PUs, the PU may be considered to have been partitioned in a quadtree structure.
For example, when a single PU is partitioned into two PUs, the horizontal or vertical size of each of the two PUs generated by the partitioning may be half the horizontal or vertical size of the PU prior to being partitioned. When a PU of size 32×32 is vertically partitioned into two PUs, the size of each of the two partitioned PUs may be 16×32. When a PU of size 32×32 is horizontally partitioned into two PUs, the size of each of the two partitioned PUs may be 32×16. When a single PU is partitioned into two PUs, the PU may be considered to have been partitioned in a binary tree structure.
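The size relationships in the two preceding paragraphs reduce to simple halving, as the following minimal Python sketch shows (the function names are illustrative):

```python
def quadtree_split(w, h):
    # Four sub-blocks, each half the width and half the height.
    return [(w // 2, h // 2)] * 4

def binarytree_split(w, h, vertical):
    # Two sub-blocks; a vertical split halves the width,
    # a horizontal split halves the height.
    return [(w // 2, h)] * 2 if vertical else [(w, h // 2)] * 2

print(quadtree_split(32, 32))           # [(16, 16), (16, 16), (16, 16), (16, 16)]
print(binarytree_split(32, 32, True))   # [(16, 32), (16, 32)]
print(binarytree_split(32, 32, False))  # [(32, 16), (32, 16)]
```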
Fig. 5 is a diagram showing a form of a transform unit that can be included in an encoding unit.
A Transform Unit (TU) may be a basic unit in a CU used for processes such as transform, quantization, inverse transform, inverse quantization, entropy encoding, and entropy decoding.
The TUs may have a square shape or a rectangular shape. The shape of the TU may be determined based on the size and/or shape of the CU.
Among CUs partitioned from LCUs, a CU that is no longer partitioned into CUs may be partitioned into one or more TUs. Here, the partition structure of the TUs may be a quadtree structure. For example, as shown in FIG. 5, a single CU 510 may be partitioned one or more times according to a quadtree structure. With such partitioning, a single CU 510 may be composed of TUs having various sizes.
A CU may be considered to be recursively partitioned when a single CU is partitioned two or more times. By partitioning, a single CU may be composed of Transform Units (TUs) having various sizes.
Alternatively, a single CU may be partitioned into one or more TUs based on the number of vertical and/or horizontal lines that partition the CU.
A CU may be divided into symmetric TUs or asymmetric TUs. For division into asymmetric TUs, information about the size and/or shape of each TU may be signaled from the encoding apparatus 100 to the decoding apparatus 200. Alternatively, the size and/or shape of each TU may be derived from information about the size and/or shape of the CU.
A CU may not be partitioned into TUs. When a CU is not divided into TUs, the size of the CU and the size of the TUs may be equal to each other.
A single CU may be partitioned into one or more TUs, and a TU may be partitioned into multiple TUs.
For example, when a single TU is partitioned into four TUs, the horizontal and vertical sizes of each of the four TUs generated by the partitioning may be half the horizontal and vertical sizes of the TUs before being partitioned. When a TU of size 32×32 is partitioned into four TUs, the size of each of the four partitioned TUs may be 16×16. When a single TU is partitioned into four TUs, the TUs may be considered to have been partitioned in a quadtree structure.
For example, when a single TU is partitioned into two TUs, the horizontal size or vertical size of each of the two TUs generated by the partitioning may be half the horizontal size or vertical size of the TUs before being partitioned. When a TU of size 32×32 is vertically partitioned into two TUs, the size of each of the two partitioned TUs may be 16×32. When a TU of size 32×32 is horizontally partitioned into two TUs, the size of each of the two partitioned TUs may be 32×16. When a single TU is partitioned into two TUs, the TUs may be considered to have been partitioned in a binary tree structure.
The CUs may be partitioned differently than shown in fig. 5.
For example, a single CU may be divided into three CUs. The horizontal or vertical sizes of the three CUs generated by the division may be 1/4, 1/2, and 1/4 of the horizontal or vertical sizes of the original CU before the division, respectively.
For example, when a CU having a size of 32×32 is vertically divided into three CUs, the sizes of the three CUs generated by the division may be 8×32, 16×32, and 8×32, respectively. In this way, when a single CU is divided into three CUs, the CU may be considered to be divided in a ternary tree form.
One of the exemplary partition forms (i.e., quadtree partitioning, binary tree partitioning, and ternary tree partitioning) may be applied to the partitioning of a CU, and multiple partitioning schemes may also be combined and used together for the partitioning of a CU. Here, the case where multiple partitioning schemes are combined and used together may be referred to as "compound tree-type division".
Fig. 6 illustrates partitioning of blocks according to an example.
In the video encoding and/or decoding process, as shown in fig. 6, target blocks may be partitioned. For example, the target block may be a CU.
For the division of the target block, an indicator indicating the division information may be signaled from the encoding apparatus 100 to the decoding apparatus 200. The partition information may be information indicating how the target block is partitioned.
The division information may be one or more of a division flag (hereinafter referred to as "split_flag"), a quad-binary flag (hereinafter referred to as "QB_flag"), a quadtree flag (hereinafter referred to as "quadtree_flag"), a binary tree flag (hereinafter referred to as "binarytree_flag"), and a binary type flag (hereinafter referred to as "Btype_flag").
The "split_flag" may be a flag indicating whether a block is divided. For example, a split_flag value of 1 may indicate that the corresponding block is divided. A split_flag value of 0 may indicate that the corresponding block is not divided.
The "QB_flag" may be a flag indicating which of the quadtree form and the binary tree form corresponds to the shape in which the block is divided. For example, a QB_flag value of 0 may indicate that the block is divided in a quadtree form, and a QB_flag value of 1 may indicate that the block is divided in a binary tree form. Alternatively, a QB_flag value of 0 may indicate that the block is divided in a binary tree form, and a QB_flag value of 1 may indicate that the block is divided in a quadtree form.
The "quadtree_flag" may be a flag indicating whether the block is divided in a quadtree form. For example, a quadtree_flag value of 1 may indicate that the block is divided in a quadtree form, and a quadtree_flag value of 0 may indicate that the block is not divided in a quadtree form.
The "binarytree_flag" may be a flag indicating whether the block is divided in a binary tree form. For example, a binarytree_flag value of 1 may indicate that the block is divided in a binary tree form, and a binarytree_flag value of 0 may indicate that the block is not divided in a binary tree form.
The "Btype_flag" may be a flag indicating which of vertical division and horizontal division corresponds to the division direction when the block is divided in a binary tree form. For example, a Btype_flag value of 0 may indicate that the block is divided in the horizontal direction, and a Btype_flag value of 1 may indicate that the block is divided in the vertical direction. Alternatively, a Btype_flag value of 0 may indicate that the block is divided in the vertical direction, and a Btype_flag value of 1 may indicate that the block is divided in the horizontal direction.
For example, the partition information of the blocks in fig. 6 may be derived by signaling at least one of quadtree_flag, binarytree_flag, and Btype_flag, as shown in table 1 below.
TABLE 1
(The content of Table 1 is provided as an image in the original publication and is not reproduced here.)
For example, the partition information of the blocks in fig. 6 may be derived by signaling at least one of split_flag, QB_flag, and Btype_flag, as shown in table 2 below.
TABLE 2
(The content of Table 2 is provided as an image in the original publication and is not reproduced here.)
The partitioning method may be limited to a quadtree form or a binary tree form depending on the size and/or shape of the block. When this restriction is applied, split_flag may be a flag indicating whether the block is divided in a quadtree form or a flag indicating whether the block is divided in a binary tree form. The size and shape of the block may be derived from the depth information of the block, and the depth information may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
When the size of the block falls within a specific range, division in only a quadtree form is possible. For example, the specific range may be defined by at least one of a maximum block size and a minimum block size that can be divided only in a quadtree form.
Information indicating the maximum block size and the minimum block size for which division only in a quadtree form is possible may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. Furthermore, this information may be signaled for at least one unit among units such as a video, a sequence, a picture, a parameter, a tile group, and a slice.
Alternatively, the maximum block size and/or the minimum block size may be a fixed size predefined by the encoding apparatus 100 and the decoding apparatus 200. For example, when the block size is greater than 64×64 and less than 256×256, it is possible to divide only in a quadtree form. In this case, the split_flag may be a flag indicating whether to perform division in a quadtree form.
Division only in a quadtree form may be possible when the size of the block is larger than the maximum size of a transform block. Here, the sub-blocks generated by the division may be at least one of a CU and a TU.
In this case, the split_flag may be a flag indicating whether or not the CU is partitioned in a quadtree form.
When the size of the block falls within a specific range, division only in a binary tree form or a ternary tree form may be possible. For example, the specific range may be defined by at least one of the maximum block size and the minimum block size for which division only in a binary tree form or a ternary tree form is possible.
Information indicating the maximum block size and/or the minimum block size for which division only in a binary tree form or a ternary tree form is possible may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. Furthermore, this information may be signaled for at least one unit among units such as a sequence, a picture, and a slice.
Alternatively, the maximum block size and/or the minimum block size may be fixed sizes predefined by the encoding apparatus 100 and the decoding apparatus 200. For example, when the block size is greater than 8×8 and less than 16×16, division only in a binary tree form may be possible. In this case, split_flag may be a flag indicating whether division in a binary tree form or a ternary tree form is performed.
The above description of partitioning in a quadtree form may be applied equally to partitioning in a binary tree form and/or a ternary tree form.
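The size-range restrictions described above amount to comparing the block size against signaled or predefined bounds. The following Python sketch uses the example bounds mentioned above (64×64 to 256×256 for quadtree-only division, 8×8 to 16×16 for binary-tree-only division); treating the larger of the width and height as the block size is an assumption made only for illustration.

```python
def allowed_partition_forms(width, height,
                            qt_only=(64, 256), bt_only=(8, 16)):
    """Illustrative size-range test for which division forms are allowed.
    The bounds are the example values from the text, not normative ones."""
    size = max(width, height)  # assumption: compare the larger dimension
    if qt_only[0] < size < qt_only[1]:
        return ["quadtree"]    # only quadtree division possible
    if bt_only[0] < size < bt_only[1]:
        return ["binary tree"] # only binary tree division possible
    return ["quadtree", "binary tree", "ternary tree"]

print(allowed_partition_forms(128, 128))  # ['quadtree']
print(allowed_partition_forms(12, 12))    # ['binary tree']
```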
The partitioning of a block may be limited by previous partitioning. For example, when a block is partitioned in a specific binary tree form and multiple sub-blocks are generated by the partitioning, each sub-block may be further partitioned only in a specific tree form. Here, the specific tree form may be at least one of a binary tree form, a ternary tree form, and a quadtree form.
When the horizontal size or the vertical size of a block is a size that cannot be divided further, the above-described indicators may not be signaled.
Fig. 7 is a diagram for explaining an embodiment of intra prediction processing.
Arrows extending radially from the center of the graph in fig. 7 indicate the prediction direction of the intra prediction mode. Further, numbers appearing near the arrows indicate examples of mode values assigned to intra prediction modes or prediction directions of intra prediction modes.
In fig. 7, the number 0 may represent the planar mode, which is a non-directional intra prediction mode. The number 1 may represent the DC mode, which is a non-directional intra prediction mode.
Intra-coding and/or decoding may be performed using reference samples of neighboring blocks of the target block. The neighboring blocks may be reconstructed neighboring blocks. The reference sample point may represent a neighboring sample point.
For example, intra-coding and/or decoding may be performed using values of reference samples included in the reconstructed neighboring blocks or coding parameters of the reconstructed neighboring blocks.
The encoding apparatus 100 and/or the decoding apparatus 200 may generate a prediction block for a target block by performing intra prediction based on information about samples in the target image. When intra prediction is performed, the encoding apparatus 100 and/or the decoding apparatus 200 may perform directional prediction and/or non-directional prediction based on at least one reconstructed reference sample.
The prediction block may be a block generated as a result of performing intra prediction. The prediction block may correspond to at least one of a CU, PU, and TU.
The unit of the prediction block may have a size corresponding to at least one of a CU, a PU, and a TU. The prediction block may have a square shape with a size of 2N×2N or N×N. The size N×N may include 4×4, 8×8, 16×16, 32×32, 64×64, etc.
Alternatively, the prediction block may be a square block of size 2×2, 4×4, 8×8, 16×16, 32×32, 64×64, etc., or a rectangular block of size 2×8, 4×8, 2×16, 4×16, 8×16, etc.
Intra prediction may be performed considering an intra prediction mode for a target block. The number of intra prediction modes that the target block may have may be a predefined fixed value, and may be a value that is differently determined according to the properties of the prediction block. For example, the attribute of the prediction block may include the size of the prediction block, the type of the prediction block, and the like. Furthermore, the attribute of the prediction block may indicate the coding parameters for the prediction block.
For example, the number of intra prediction modes may be fixed to N regardless of the size of the prediction block. Alternatively, the number of intra prediction modes may be, for example, 3, 5, 9, 17, 34, 35, 36, 65, 67, or 95.
The intra prediction mode may be a non-directional mode or a directional mode.
For example, the intra prediction modes may include two non-directional modes and 65 directional modes corresponding to numbers 0 to 66 shown in fig. 7.
For example, in the case of using a specific intra prediction method, the intra prediction modes may include two non-directional modes and 93 directional modes corresponding to numbers -14 to 80 shown in fig. 7.
The two non-directional modes may include a DC mode and a planar mode.
A directional mode may be a prediction mode having a specific direction or a specific angle. A directional mode may also be referred to as an "angular mode".
The intra prediction mode may be represented by at least one of a mode number, a mode value, a mode angle, and a mode direction. In other words, the terms "intra prediction mode (mode) number", "intra prediction mode (mode) value", "intra prediction mode (mode) angle", and "intra prediction mode (mode) direction" may be used to have the same meaning and may be used interchangeably with each other.
The number of intra prediction modes may be M. The value of M may be 1 or greater. In other words, the number of intra prediction modes may be M, where M is the sum of the number of non-directional modes and the number of directional modes.
The number of intra prediction modes may be fixed to M regardless of the size and/or color components of the block. For example, the number of intra prediction modes may be fixed to any one of 35 and 67 regardless of the size of the block.
Alternatively, the number of intra prediction modes may be different according to the shape, size, and/or type of color component of the block.
For example, in fig. 7, the directional prediction modes indicated by dotted lines may be applied only to prediction for non-square blocks.
For example, the larger the block size, the greater the number of intra prediction modes. Alternatively, the larger the block size, the fewer the number of intra prediction modes. When the size of the block is 4×4 or 8×8, the number of intra prediction modes may be 67. When the block size is 16×16, the number of intra prediction modes may be 35. When the block size is 32×32, the number of intra prediction modes may be 19. When the size of the block is 64×64, the number of intra prediction modes may be 7.
For example, the number of intra prediction modes may be different depending on whether a color component is a luminance signal or a chrominance signal. Alternatively, the number of intra prediction modes corresponding to the luminance component block may be greater than the number of intra prediction modes corresponding to the chrominance component block.
For example, in a vertical mode with a mode value of 50, prediction may be performed in a vertical direction based on pixel values of reference samples. For example, in a horizontal mode with a mode value of 18, prediction may be performed in the horizontal direction based on the pixel value of the reference sample.
Even in directional modes other than the above-described modes, the encoding apparatus 100 and the decoding apparatus 200 may perform intra prediction on the target unit using reference samples according to the angles corresponding to the directional modes.
The intra prediction mode located at the right side with respect to the vertical mode may be referred to as a "vertical-right mode". The intra prediction mode located below the horizontal mode may be referred to as a "horizontal-below mode". For example, in fig. 7, the intra prediction mode in which the mode value is one of 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, and 66 may be a vertical-right mode. The intra prediction mode, in which the mode value is one of 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17, may be a horizontal-down mode.
The non-directional modes may include a DC mode and a planar mode. For example, the value of the DC mode may be 1. The value of the planar mode may be 0.
The directional modes may include angular modes. Among the multiple intra prediction modes, the remaining modes other than the DC mode and the planar mode may be directional modes.
When the intra prediction mode is a DC mode, a prediction block may be generated based on an average value of pixel values of a plurality of reference pixels. For example, the value of a pixel of the prediction block may be determined based on an average of pixel values of a plurality of reference pixels.
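As a concrete illustration of the DC mode, the Python sketch below fills the prediction block with the rounded mean of the upper and left reference samples; the exact rounding and the set of reference samples used may differ between standards, so this is only a sketch.

```python
def dc_predict(top_refs, left_refs, width, height):
    """DC intra prediction: every pixel of the prediction block takes
    the average of the upper and left reference sample values."""
    refs = list(top_refs) + list(left_refs)
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # rounded integer mean
    return [[dc] * width for _ in range(height)]

# Hypothetical reconstructed reference samples for a 4x4 target block.
block = dc_predict([100, 102, 104, 106], [98, 99, 101, 103], 4, 4)
print(block[0])  # [102, 102, 102, 102]
```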
The number of intra prediction modes and the mode values of the respective intra prediction modes described above are merely exemplary. The number of intra prediction modes described above and the mode values of the respective intra prediction modes may be defined differently according to embodiments, implementations, and/or requirements.
In order to perform intra prediction on a target block, a step of checking whether a sample included in a reconstructed neighboring block can be used as a reference sample of the target block may be performed. When a sample that cannot be used as a reference sample of the target block exists among samples in the neighboring blocks, a value generated via interpolation and/or copying using at least one sample value among samples included in the reconstructed neighboring blocks may replace a sample value of a sample that cannot be used as a reference sample. When a value generated via copying and/or interpolation replaces a sample value of an existing sample, the sample may be used as a reference sample for the target block.
When intra prediction is used, a filter may be applied to at least one of the reference samples and the prediction samples based on at least one of the size of the target block and the intra prediction mode.
The type of filter to be applied to at least one of the reference sample point and the prediction sample point may be different according to at least one of an intra prediction mode of the target block, a size of the target block, and a shape of the target block. The type of filter may be classified according to one or more of the length of the filter taps, the values of the filter coefficients, and the filter strength. The length of the filter taps may represent the number of filter taps. Furthermore, the number of filter taps may represent the length of the filter.
When the intra prediction mode is the planar mode, a sample value of each prediction target sample may be generated, when the prediction block of the target block is generated, using a weighted sum of the upper reference sample of the target block, the left reference sample of the target block, the upper-right reference sample of the target block, and the lower-left reference sample of the target block, according to the position of the prediction target sample in the prediction block.
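The weighted sum described above can be written out concretely. The sketch below assumes the HEVC-style planar formula for an N×N block; the exact weights and rounding are an assumption here and may differ in other standards.

```python
def planar_predict(top, left, top_right, bottom_left, n):
    """HEVC-style planar prediction for an n x n block (n a power of two).
    top/left hold the n upper/left reference samples; top_right and
    bottom_left are single reference sample values."""
    shift = n.bit_length()  # equals log2(n) + 1 when n is a power of two
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            horz = (n - 1 - x) * left[y] + (x + 1) * top_right
            vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            pred[y][x] = (horz + vert + n) >> shift
    return pred

# Flat reference samples produce a flat prediction block.
p = planar_predict([100] * 4, [100] * 4, 100, 100, 4)
print(p[0])  # [100, 100, 100, 100]
```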
When the intra prediction mode is a DC mode, an average value of a reference sample above the target block and a reference sample to the left of the target block may be used in generating a prediction block of the target block. Furthermore, filtering using the values of the reference samples may be performed on a particular row or a particular column in the target block. The particular row may be one or more upper rows adjacent to the reference sample point. The particular column may be one or more left columns adjacent to the reference sample point.
When the intra prediction mode is a directional mode, a prediction block may be generated using an upper reference sample, a left reference sample, an upper right reference sample, and/or a lower left reference sample of the target block.
To generate the above-described prediction samples, interpolation based on real-number precision may be performed.
The intra prediction mode of the target block may be predicted from the intra prediction modes of neighboring blocks adjacent to the target block, and information for prediction may be entropy encoded/entropy decoded.
For example, when intra prediction modes of a target block and a neighboring block are identical to each other, a predefined flag may be used to signal that the intra prediction modes of the target block and the neighboring block are identical.
For example, an indicator for indicating the same intra prediction mode as that of the target block among the intra prediction modes of the plurality of neighboring blocks may be signaled.
When intra prediction modes of a target block and neighboring blocks are different from each other, information regarding the intra prediction modes of the target block may be encoded and/or decoded using entropy encoding and/or entropy decoding.
Fig. 8 is a diagram illustrating reference samples used in an intra prediction process.
The reconstructed reference points for intra-prediction of the target block may include a lower left reference point, a left reference point, an upper left corner reference point, an upper reference point, and an upper right reference point.
For example, the left reference sample point may represent a reconstructed reference pixel adjacent to the left side of the target block. The upper reference sample may represent a reconstructed reference pixel adjacent to the top of the target block. The upper left corner reference sample point may represent a reconstructed reference pixel located at the upper left corner of the target block. The lower left reference sample point may represent a reference sample point located below a left sample point line composed of left reference sample points among sample points located on the same line as the left sample point line. The upper right reference sample point may represent a reference sample point located to the right of an upper sample point line composed of upper reference sample points among sample points located on the same line as the upper sample point line.
When the size of the target block is n×n, the number of lower left reference samples, upper reference samples, and upper right reference samples may be N.
By performing intra prediction on a target block, a prediction block may be generated. The process of generating the prediction block may include determining values of pixels in the prediction block. The target block and the prediction block may be the same size.
The reference points for intra-predicting the target block may be changed according to the intra-prediction mode of the target block. The direction of the intra prediction mode may represent a dependency relationship between the reference sample point and the pixels of the prediction block. For example, the value of the specified reference sample point may be used as the value of one or more specified pixels in the prediction block. In this case, the one or more specified pixels in the specified reference sample and prediction block may be samples and pixels located on a straight line along a direction of the intra prediction mode. In other words, the value of the specified reference sample point may be copied as a value of a pixel located in a direction opposite to the direction of the intra prediction mode. Alternatively, the value of a pixel in the prediction block may be a value of a reference sample located in the direction of the intra prediction mode with respect to the position of the pixel.
In an example, when the intra prediction mode of the target block is a vertical mode, the upper reference sample may be used for intra prediction. When the intra prediction mode is a vertical mode, the value of a pixel in the prediction block may be the value of a reference sample vertically above the position of the pixel. Thus, the upper reference sample adjacent to the top of the target block may be used for intra prediction. Furthermore, the values of the pixels in a row of the prediction block may be the same as the values of the pixels of the upper reference sample.
In an example, when the intra prediction mode of the target block is a horizontal mode, the left reference sample may be used for intra prediction. When the intra prediction mode is a horizontal mode, the value of a pixel in the prediction block may be the value of a reference sample horizontally located to the left of the position of the pixel. Thus, a left reference sample adjacent to the left side of the target block may be used for intra prediction. Furthermore, the values of pixels in a column of the prediction block may be the same as the values of pixels of the left reference sample point.
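The two examples above reduce to copying reference rows or columns, as this illustrative Python sketch shows:

```python
def vertical_predict(top_refs, width, height):
    # Vertical mode: each column copies the reference sample above it.
    return [[top_refs[x] for x in range(width)] for _ in range(height)]

def horizontal_predict(left_refs, width, height):
    # Horizontal mode: each row copies the reference sample to its left.
    return [[left_refs[y]] * width for y in range(height)]

print(vertical_predict([10, 20, 30, 40], 4, 2))  # both rows: [10, 20, 30, 40]
print(horizontal_predict([10, 20], 4, 2))        # rows: [10]*4 and [20]*4
```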
In an example, when the mode value of the intra prediction mode of the target block is 34, at least some of the left reference samples, the upper-left corner reference sample, and at least some of the upper reference samples may be used for intra prediction. When the mode value of the intra prediction mode is 34, the value of a pixel in the prediction block may be the value of the reference sample located diagonally above and to the left of the pixel.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 52 to 66, at least a part of the upper right reference sample may be used for intra prediction.
Further, in the case of an intra prediction mode in which the mode value is a value ranging from 2 to 17, at least a part of the lower left reference sample may be used for intra prediction.
Furthermore, in the case of an intra prediction mode in which the mode value is a value ranging from 19 to 49, the upper left corner reference sample may be used for intra prediction.
The number of reference samples used to determine the pixel value of one pixel in the prediction block may be 1 or 2 or more.
As described above, the pixel values of the pixels in the prediction block may be determined according to the positions of the pixels and the positions of the reference samples indicated by the direction of the intra prediction mode. When the position of the pixel and the position of the reference sample point indicated by the direction of the intra prediction mode are integer positions, the value of one reference sample point indicated by the integer position may be used to determine the pixel value of the pixel in the prediction block.
When the position of the pixel and the position of the reference sample indicated by the direction of the intra prediction mode are not integer positions, an interpolated reference sample may be generated based on the two reference samples closest to the indicated position. The value of the interpolated reference sample may be used to determine the pixel value of the pixel in the prediction block. In other words, when the position of a pixel in the prediction block and the position of the reference sample indicated by the direction of the intra prediction mode fall between two reference samples, an interpolated value based on the values of the two reference samples may be generated.
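A common realization of this interpolation is a two-tap linear filter over the two nearest reference samples. The sketch below assumes 1/32-sample fractional precision (the precision used in HEVC angular prediction); the precision and rounding are assumptions made for illustration.

```python
def interpolate_reference(refs, idx, frac):
    """Linearly interpolate between refs[idx] and refs[idx + 1],
    where frac is the fractional offset in 1/32-sample units."""
    return ((32 - frac) * refs[idx] + frac * refs[idx + 1] + 16) >> 5

refs = [100, 132]
print(interpolate_reference(refs, 0, 8))   # 8/32 of the way: 108
print(interpolate_reference(refs, 0, 16))  # halfway: 116
```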
The prediction block generated via prediction may be different from the original target block. In other words, there may be a prediction error, which is a difference between the target block and the prediction block, and there may also be a prediction error between the pixels of the target block and the pixels of the prediction block.
Hereinafter, the terms "difference", "error" and "residual" may be used to have the same meaning and may be used interchangeably with each other.
For example, in the case of directional intra prediction, the longer the distance between the pixels of the prediction block and the reference sample, the greater the prediction error that may occur. Such prediction errors may result in discontinuities between the generated prediction block and neighboring blocks.
In order to reduce prediction errors, a filtering operation for the prediction block may be used. The filtering operation may be configured to adaptively apply the filter to regions of the prediction block that are considered to have large prediction errors. For example, the region considered to have a large prediction error may be a boundary of a prediction block. In addition, regions considered to have a large prediction error in a prediction block may differ according to an intra prediction mode, and characteristics of a filter may also differ according to an intra prediction mode.
As shown in fig. 8, for intra prediction of a target block, at least one of reference line 0 to reference line 3 may be used.
Each reference line in fig. 8 may indicate a reference sample line including one or more reference samples. A smaller reference line number may indicate a reference sample line closer to the target block.
The samples in segments A and F may be obtained via padding using the samples closest to the target block in segments B and E, instead of being taken from reconstructed neighboring blocks.
Index information indicating a reference sample line to be used for intra prediction of a target block may be signaled. The index information may indicate a reference sample line to be used for intra prediction of the target block among a plurality of reference sample lines. For example, the index information may have a value corresponding to any one of 0 to 3.
When the upper boundary of the target block is the boundary of the CTU, only the reference sample line 0 may be available. Therefore, in this case, the index information may not be signaled. When an additional reference sample line other than the reference sample line 0 is used, filtering of a prediction block, which will be described later, may not be performed.
In the case of inter-color intra prediction, a prediction block of a target block of a second color component may be generated based on a corresponding reconstructed block of the first color component.
For example, the first color component may be a luminance component and the second color component may be a chrominance component.
To perform inter-color intra prediction, parameters of a linear model between the first color component and the second color component may be derived based on a template.
The template may comprise a reference sample above the target block (upper reference sample) and/or a reference sample to the left of the target block (left reference sample), and may comprise an upper reference sample and/or a left reference sample of the reconstructed block of the first color component corresponding to the reference sample.
For example, the following values may be used to derive parameters of the linear model: 1) a value of a sample of a first color component having a maximum value among samples in a template, 2) a value of a sample of a second color component corresponding to the sample of the first color component, 3) a value of a sample of a first color component having a minimum value among samples in a template, and 4) a value of a sample of a second color component corresponding to the sample of the first color component.
When deriving parameters of the linear model, a prediction block of the target block may be generated by applying the corresponding reconstructed block to the linear model.
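A minimal sketch of the linear-model derivation from the template samples described above follows. Variable names are illustrative; a real implementation would use integer arithmetic and guard against the degenerate case where the maximum and minimum first-component values are equal.

```python
def derive_linear_model(template_c1, template_c2):
    """Derive (alpha, beta) from the first-color-component template
    samples (c1) and their corresponding second-component samples (c2),
    using the samples with the maximum and minimum c1 values."""
    i_max = max(range(len(template_c1)), key=lambda i: template_c1[i])
    i_min = min(range(len(template_c1)), key=lambda i: template_c1[i])
    alpha = ((template_c2[i_max] - template_c2[i_min])
             / (template_c1[i_max] - template_c1[i_min]))
    beta = template_c2[i_min] - alpha * template_c1[i_min]
    return alpha, beta

def predict_second_component(rec_c1, alpha, beta):
    # Apply the linear model to the reconstructed first-component block.
    return [[alpha * s + beta for s in row] for row in rec_c1]

alpha, beta = derive_linear_model([60, 120, 90], [30, 60, 45])
print(alpha, beta)  # 0.5 0.0 for this hypothetical template
```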
Depending on the image format, sub-sampling may be performed on samples adjacent to the reconstructed block of the first color component and the corresponding reconstructed block of the first color component. For example, when one sample of the second color component corresponds to four samples of the first color component, one corresponding sample may be calculated by performing sub-sampling on the four samples of the first color component. When sub-sampling is performed, derivation of parameters of the linear model and inter-color intra prediction may be performed based on the corresponding samples that are sub-sampled.
Information regarding whether to perform inter-color intra prediction and/or the range of templates may be signaled in intra prediction mode.
The target block may be partitioned into two or four sub-blocks in the horizontal direction and/or the vertical direction.
The sub-blocks generated by the partitioning may be sequentially reconstructed. That is, when intra prediction is performed on each sub-block, a sub-prediction block for the sub-block may be generated. Further, when dequantization (inverse quantization) and/or an inverse transform are performed on each sub-block, a sub-residual block for the corresponding sub-block may be generated. A reconstructed sub-block may be generated by adding the sub-prediction block to the sub-residual block. The reconstructed sub-block may be used as reference samples for intra prediction of the sub-block with the next priority.
A sub-block may be a block that includes a certain number (e.g., 16) or more samples. For example, when the target block is an 8×4 block or a 4×8 block, the target block may be partitioned into two sub-blocks. Further, when the target block is a 4×4 block, the target block cannot be partitioned into sub-blocks. When the target block has another size, the target block may be partitioned into four sub-blocks.
Information on whether to perform intra prediction based on these sub-blocks and/or information on a partition direction (horizontal direction or vertical direction) may be signaled.
Such sub-block based intra prediction may be limited such that it is performed only when the reference sample line 0 is used. When sub-block-based intra prediction is performed, filtering of a prediction block, which will be described below, may not be performed.
The final prediction block may be generated by performing filtering on a prediction block generated through intra prediction.
The filtering may be performed by applying a specific weight to a filtering target sample point, a left reference sample point, an upper reference sample point, and/or an upper left reference sample point, which are targets to be filtered.
The weights for filtering and/or the reference samples (e.g., range of reference samples, location of reference samples, etc.) may be determined based on at least one of the block size, intra prediction mode, and location of the filtering target samples in the prediction block.
For example, filtering may be performed only in a particular intra prediction mode (e.g., DC mode, planar mode, vertical mode, horizontal mode, diagonal mode, and/or adjacent diagonal mode).
An adjacent diagonal mode may be a mode whose number is obtained by adding k to the number of a diagonal mode, or a mode whose number is obtained by subtracting k from the number of the diagonal mode. In other words, the number of an adjacent diagonal mode may be the sum of the number of the diagonal mode and k, or may be the difference between the number of the diagonal mode and k. For example, k may be a positive integer of 8 or less.
Intra-prediction modes of the target block may be derived using intra-prediction modes of neighboring blocks present near the target block, and such derived intra-prediction modes may be entropy encoded and/or entropy decoded.
For example, when the intra prediction mode of the target block is identical to the intra prediction mode of the neighboring block, specific flag information may be used to signal information indicating that the intra prediction mode of the target block is identical to the intra prediction mode of the neighboring block.
Further, for example, indicator information of neighboring blocks having the same intra prediction mode as that of the target block among the intra prediction modes of the plurality of neighboring blocks may be signaled.
For example, when the intra prediction mode of the target block is different from the intra prediction modes of the neighboring blocks, entropy encoding and/or entropy decoding may be performed on information regarding the intra prediction mode of the target block by performing entropy encoding and/or entropy decoding based on the intra prediction modes of the neighboring blocks.
Fig. 9 is a diagram for explaining an embodiment of an inter prediction process.
The rectangle shown in fig. 9 may represent an image (or screen). Further, in fig. 9, an arrow may indicate a prediction direction. An arrow pointing from the first picture to the second picture indicates that the second picture references the first picture. That is, each image may be encoded and/or decoded according to a prediction direction.
The image may be classified into an intra picture (I picture), a single predicted picture or a predictive coded picture (P picture), and a bi-predicted picture or a bi-predictive coded picture (B picture) according to the type of encoding. Each picture may be encoded and/or decoded according to the type of encoding of each picture.
When the target image that is the target to be encoded is an I picture, the target image may be encoded using data contained in the image itself without inter prediction with reference to other images. For example, an I picture may be encoded via intra prediction only.
When the target image is a P picture, the target image may be encoded via inter prediction using a reference picture existing in one direction. Here, the one direction may be a forward direction or a backward direction.
When the target image is a B picture, the image may be encoded via inter prediction using reference pictures existing in both directions, or may be encoded via inter prediction using reference pictures existing in one of a forward direction and a backward direction. Here, the two directions may be a forward direction and a backward direction.
P-pictures and B-pictures encoded and/or decoded using reference pictures may be considered as pictures using inter-prediction.
Hereinafter, inter prediction in inter mode according to an embodiment will be described in detail.
Inter prediction or motion compensation may be performed using the reference image and the motion information.
In the inter mode, the encoding apparatus 100 may perform inter prediction and/or motion compensation on the target block. The decoding apparatus 200 may perform inter prediction and/or motion compensation on the target block corresponding to the inter prediction and/or motion compensation performed by the encoding apparatus 100.
The motion information of the target block may be derived separately by the encoding apparatus 100 and the decoding apparatus 200 during inter prediction. The motion information may be derived using the motion information of the reconstructed neighboring block, the motion information of the col block, and/or the motion information of the block adjacent to the col block.
For example, the encoding apparatus 100 or the decoding apparatus 200 may perform prediction and/or motion compensation by using motion information of spatial candidates and/or temporal candidates as motion information of a target block. The target block may represent a PU and/or a PU partition.
The spatial candidates may be reconstructed blocks spatially adjacent to the target block.
The temporal candidate may be a reconstructed block corresponding to the target block in a previously reconstructed co-located picture (col picture).
In the inter prediction, the encoding apparatus 100 and the decoding apparatus 200 may improve encoding efficiency and decoding efficiency by using motion information of spatial candidates and/or temporal candidates. The motion information of the spatial candidate may be referred to as "spatial motion information". The motion information of the temporal candidates may be referred to as "temporal motion information".
Next, the motion information of the spatial candidate may be motion information of a PU including the spatial candidate. The motion information of the temporal candidate may be motion information of a PU including the temporal candidate. The motion information of the candidate block may be motion information of a PU including the candidate block.
Inter prediction may be performed using a reference picture.
The reference picture may be at least one of a picture preceding the target picture and a picture following the target picture. The reference picture may be an image for prediction of the target block.
In inter prediction, a region in a reference picture may be specified using a reference picture index (or refIdx) indicating the reference picture, a motion vector to be described later, or the like. Here, the region specified in the reference picture may indicate a reference block.
Inter prediction may select a reference picture, and may also select a reference block corresponding to the target block from the reference picture. Furthermore, inter prediction may use the selected reference block to generate a prediction block for the target block.
The motion information may be derived by each of the encoding apparatus 100 and the decoding apparatus 200 during inter prediction.
A spatial candidate may be a block that 1) exists in the target picture, 2) has been previously reconstructed via encoding and/or decoding, and 3) is adjacent to the target block or located at a corner of the target block. Here, a "block located at a corner of the target block" may be a block vertically adjacent to a neighboring block that is horizontally adjacent to the target block, or a block horizontally adjacent to a neighboring block that is vertically adjacent to the target block. Further, "a block located at a corner of the target block" may have the same meaning as "a block adjacent to a corner of the target block". The meaning of "a block located at a corner of the target block" may be included in the meaning of "a block adjacent to the target block".
For example, the spatial candidate may be a reconstructed block located to the left of the target block, a reconstructed block located above the target block, a reconstructed block located in the lower left corner of the target block, a reconstructed block located in the upper right corner of the target block, or a reconstructed block located in the upper left corner of the target block.
Each of the encoding apparatus 100 and the decoding apparatus 200 may identify a block existing in a location in the col picture that spatially corresponds to the target block. The position of the target block in the target picture and the position of the identified block in the col picture may correspond to each other.
Each of the encoding apparatus 100 and the decoding apparatus 200 may determine col blocks existing at predefined relevant locations for the identified blocks as time candidates. The predefined relevant locations may be locations that exist inside and/or outside the identified block.
For example, the col blocks may include a first col block and a second col block. When the coordinates of the identified block are (xP, yP) and the size of the identified block is expressed by (nPSW, nPSH), the first col block may be the block located at coordinates (xP + nPSW, yP + nPSH). The second col block may be the block located at coordinates (xP + (nPSW >> 1), yP + (nPSH >> 1)). The second col block may be selectively used when the first col block is unavailable.
The motion vector of the target block may be determined based on the motion vector of the col block. Each of the encoding apparatus 100 and the decoding apparatus 200 may scale the motion vector of the col block. The scaled motion vector of the col block may be used as the motion vector of the target block. Further, the motion vector of the motion information of the temporal candidate stored in the list may be a scaled motion vector.
The ratio of the motion vector of the target block to the motion vector of the col block may be the same as the ratio of the first temporal distance to the second temporal distance. The first temporal distance may be a distance between a reference picture and a target picture of the target block. The second temporal distance may be a distance between the reference picture and a col picture of the col block.
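This proportionality can be realized as a direct scaling of the col block's motion vector, as in the following Python sketch (floating-point for clarity; actual codecs use fixed-point arithmetic with clipping):

```python
def scale_temporal_mv(mv_col, td_target, td_col):
    """Scale the col block's motion vector by the ratio of the first
    temporal distance (target picture to its reference picture) to the
    second temporal distance (col picture to its reference picture)."""
    scale = td_target / td_col
    return (round(mv_col[0] * scale), round(mv_col[1] * scale))

# Hypothetical distances: the target's reference is 2 pictures away,
# the col block's reference is 4 pictures away.
print(scale_temporal_mv((8, -4), td_target=2, td_col=4))  # (4, -2)
```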
The scheme for deriving motion information may vary according to the inter prediction mode of the target block. For example, as an inter prediction mode applied to inter prediction, there may be an Advanced Motion Vector Predictor (AMVP) mode, a merge mode, a skip mode, a merge mode with a motion vector difference, a sub-block merge mode, a triangle partition mode, an inter-intra combined prediction mode, an affine inter mode, a current picture reference mode, and the like. The merge mode may also be referred to as a "motion merge mode". The respective modes will be described in detail below.
1) AMVP mode
When the AMVP mode is used, the encoding apparatus 100 may search for similar blocks in a neighboring area of the target block. The encoding apparatus 100 may acquire a prediction block by performing prediction on a target block using motion information of the found similar block. The encoding apparatus 100 may encode a residual block, which is a difference between the target block and the prediction block.
1-1) creating a list of predicted motion vector candidates
When AMVP mode is used as the prediction mode, each of the encoding apparatus 100 and the decoding apparatus 200 may create a list of prediction motion vector candidates using a motion vector of a spatial candidate, a motion vector of a temporal candidate, and a zero vector. The predicted motion vector candidate list may include one or more predicted motion vector candidates. At least one of a motion vector of a spatial candidate, a motion vector of a temporal candidate, and a zero vector may be determined and used as a predicted motion vector candidate.
Hereinafter, the terms "predicted motion vector (candidate)" and "motion vector (candidate)" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "prediction motion vector candidate" and "AMVP candidate" may be used to have the same meaning and may be used interchangeably with each other.
Hereinafter, the terms "prediction motion vector candidate list" and "AMVP candidate list" may be used to have the same meaning and may be used interchangeably with each other.
The spatial candidates may include reconstructed spatial neighboring blocks. In other words, the motion vectors of the reconstructed neighboring blocks may be referred to as "spatial prediction motion vector candidates".
The temporal candidates may include col blocks and blocks adjacent to the col blocks. In other words, the motion vector of the col block or the motion vector of the block adjacent to the col block may be referred to as a "temporal prediction motion vector candidate".
The zero vector may be a (0, 0) motion vector.
The predicted motion vector candidates may be motion vector predictors for predicting motion vectors. Further, in the encoding apparatus 100, each predicted motion vector candidate may be an initial search position for a motion vector.
1-2) searching for motion vectors using a list of predicted motion vector candidates
The encoding apparatus 100 may determine a motion vector to be used for encoding the target block within the search range using the list of predicted motion vector candidates. Further, the encoding apparatus 100 may determine a predicted motion vector candidate to be used as a predicted motion vector of the target block among the predicted motion vector candidates existing in the predicted motion vector candidate list.
The motion vector to be used for encoding the target block may be a motion vector that may be encoded at a minimum cost.
Further, the encoding apparatus 100 may determine whether to encode the target block using the AMVP mode.
1-3) transmission of inter prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether AMVP mode is used, 2) a prediction motion vector index, 3) a Motion Vector Difference (MVD), 4) a reference direction, and 5) a reference picture index.
Hereinafter, the terms "prediction motion vector index" and "AMVP index" may be used to have the same meaning and may be used interchangeably with each other.
Furthermore, the inter prediction information may include a residual signal.
When the mode information indicates that AMVP mode is used, the decoding apparatus 200 may acquire a prediction motion vector index, MVD, reference direction, and reference picture index from the bitstream through entropy decoding.
The prediction motion vector index may indicate a prediction motion vector candidate to be used for predicting the target block among prediction motion vector candidates included in the prediction motion vector candidate list.
1-4) inter prediction in AMVP mode using inter prediction information
The decoding apparatus 200 may derive a predicted motion vector candidate using the predicted motion vector candidate list, and may determine motion information of the target block based on the derived predicted motion vector candidate.
The decoding apparatus 200 may determine a motion vector candidate for the target block among the predicted motion vector candidates included in the predicted motion vector candidate list using the predicted motion vector index. The decoding apparatus 200 may select the predicted motion vector candidate indicated by the predicted motion vector index from among the predicted motion vector candidates included in the predicted motion vector candidate list as the predicted motion vector of the target block.
The encoding apparatus 100 may generate an entropy-encoded prediction motion vector index by applying entropy encoding to the prediction motion vector index, and may generate a bitstream including the entropy-encoded prediction motion vector index. The entropy-encoded prediction motion vector index may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract the entropy-encoded prediction motion vector index from the bitstream, and may acquire the prediction motion vector index by applying entropy decoding to the entropy-encoded prediction motion vector index.
The motion vector that will actually be used for inter prediction of the target block may not match the predicted motion vector. In order to indicate the difference between the motion vector that will actually be used for inter prediction of the target block and the predicted motion vector, MVD may be used. The encoding apparatus 100 may derive a prediction motion vector similar to a motion vector that will be actually used for inter prediction of a target block in order to use as small MVD as possible.
The Motion Vector Difference (MVD) may be a difference between a motion vector of the target block and a predicted motion vector. The encoding apparatus 100 may calculate an MVD, and may generate an entropy-encoded MVD by applying entropy encoding to the MVD. The encoding apparatus 100 may generate a bitstream including the entropy-encoded MVD.
The MVD may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract an entropy-encoded MVD from the bitstream, and may acquire the MVD by applying entropy decoding to the entropy-encoded MVD.
The decoding apparatus 200 may derive a motion vector of the target block by summing the MVD and the predicted motion vector. In other words, the motion vector of the target block derived by the decoding apparatus 200 may be the sum of the MVD and the motion vector candidates.
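Decoder-side motion vector reconstruction in AMVP mode therefore reduces to the sum sketched below; construction of the predicted motion vector candidate list itself is omitted, and the values are hypothetical.

```python
def reconstruct_mv(candidate_list, mvp_index, mvd):
    """Motion vector = predicted motion vector (selected from the
    candidate list by the signaled index) + motion vector difference."""
    mvp = candidate_list[mvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Hypothetical AMVP candidate list with a spatial and a temporal candidate.
candidates = [(4, 0), (0, 4)]
print(reconstruct_mv(candidates, mvp_index=0, mvd=(1, -2)))  # (5, -2)
```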
Further, the encoding apparatus 100 may generate entropy-encoded MVD resolution information by applying entropy encoding to the calculated MVD resolution information, and may generate a bitstream including the entropy-encoded MVD resolution information. The decoding apparatus 200 may extract entropy-encoded MVD resolution information from the bitstream, and may acquire the MVD resolution information by applying entropy decoding to the entropy-encoded MVD resolution information. The decoding apparatus 200 may adjust the resolution of the MVD using the MVD resolution information.
In addition, the encoding apparatus 100 may calculate MVDs based on affine models. The decoding apparatus 200 may derive an affine control motion vector of the target block from the sum of the MVD and affine control motion vector candidates, and may derive a motion vector of the sub-block using the affine control motion vector.
The reference direction may indicate a list of reference pictures to be used for predicting the target block. For example, the reference direction may indicate one of the reference picture list L0 and the reference picture list L1.
The reference direction only indicates a reference picture list to be used for prediction of the target block, and may not mean that the direction of the reference picture is limited to a forward direction or a backward direction. In other words, each of the reference picture list L0 and the reference picture list L1 may include pictures in a forward direction and/or a backward direction.
The reference direction being unidirectional may mean that a single reference picture list is used. The reference direction being bi-directional may mean that two reference picture lists are used. In other words, the reference direction may indicate one of the following: a case where only the reference picture list L0 is used, a case where only the reference picture list L1 is used, and a case where two reference picture lists are used.
The reference picture index may indicate a reference picture for predicting the target block among reference pictures existing in the reference picture list. The encoding apparatus 100 may generate an entropy-encoded reference picture index by applying entropy encoding to the reference picture index, and may generate a bitstream including the entropy-encoded reference picture index. The entropy-encoded reference picture index may be signaled from the encoding device 100 to the decoding device 200 through a bitstream. The decoding apparatus 200 may extract an entropy-encoded reference picture index from the bitstream, and may acquire the reference picture index by applying entropy decoding to the entropy-encoded reference picture index.
When two reference picture lists are used to predict a target block, a single reference picture index and a single motion vector may be used for each of the reference picture lists. Further, when two reference picture lists are used to predict a target block, two prediction blocks may be designated for the target block. For example, an average or weighted sum of two prediction blocks for a target block may be used to generate a (final) prediction block for the target block.
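The following Python sketch illustrates, under simplified assumptions, how two prediction blocks (one per reference picture list) may be combined into a final prediction block by a weighted average. The equal default weights and the rounding convention are illustrative assumptions.

def bi_predict(pred_l0, pred_l1, w0=1, w1=1):
    """Return the per-sample rounded weighted average of two prediction blocks."""
    total = w0 + w1
    return [
        [(w0 * a + w1 * b + total // 2) // total  # rounded weighted sum
         for a, b in zip(row0, row1)]
        for row0, row1 in zip(pred_l0, pred_l1)
    ]

p0 = [[100, 102], [104, 106]]
p1 = [[110, 108], [106, 104]]
print(bi_predict(p0, p1))  # [[105, 105], [105, 105]]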
The motion vector of the target block may be derived using the predicted motion vector index, the MVD, the reference direction, and the reference picture index.
The decoding apparatus 200 may generate a prediction block for the target block based on the derived motion vector and the reference picture index. For example, the prediction block may be a reference block indicated by a derived motion vector in a reference picture indicated by a reference picture index.
Since the predicted motion vector index and the MVD are encoded instead of the motion vector of the target block itself, the number of bits transmitted from the encoding apparatus 100 to the decoding apparatus 200 may be reduced, and encoding efficiency may be improved.
For the target block, motion information of reconstructed neighboring blocks may be used. In a specific inter prediction mode, the encoding apparatus 100 may not separately encode the actual motion information of the target block. Instead of encoding the motion information of the target block, additional information that enables the motion information of the target block to be derived from the motion information of the reconstructed neighboring blocks may be encoded. Since only this additional information is encoded, the number of bits transmitted to the decoding apparatus 200 may be reduced, and encoding efficiency may be improved.
For example, as an inter prediction mode in which motion information of a target block is not directly encoded, a skip mode and/or a merge mode may exist. Here, each of the encoding apparatus 100 and the decoding apparatus 200 may use an identifier and/or index of a unit, of which motion information is to be used as motion information of a target unit, among reconstructed neighboring units.
2) Merge mode
Merging is one scheme for deriving the motion information of a target block. The term "merge" may mean merging the motion of multiple blocks. "Merge" may mean that the motion information of one block is also applied to another block. In other words, the merge mode may be a mode in which the motion information of the target block is derived from the motion information of a neighboring block.
When the merge mode is used, the encoding apparatus 100 may predict motion information of the target block using motion information of spatial candidates and/or motion information of temporal candidates. The spatial candidates may include reconstructed spatially neighboring blocks that are spatially adjacent to the target block. The spatial neighboring blocks may include a left neighboring block and an upper neighboring block. The temporal candidates may include col blocks. The terms "spatial candidate" and "spatial merge candidate" may be used to have the same meaning and may be used interchangeably with each other. The terms "temporal candidates" and "temporal merging candidates" may be used to have the same meaning and may be used interchangeably with each other.
The encoding apparatus 100 may acquire a prediction block via prediction. The encoding apparatus 100 may encode a residual block, which is a difference between the target block and the prediction block.
2-1) creation of merge candidate list
When the merge mode is used, each of the encoding apparatus 100 and the decoding apparatus 200 may create a merge candidate list using motion information of spatial candidates and/or motion information of temporal candidates. The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may be unidirectional or bidirectional. The reference direction may represent an inter prediction indicator.
The merge candidate list may include merge candidates. The merge candidate may be motion information. In other words, the merge candidate list may be a list storing a plurality of pieces of motion information.
The merge candidate may be motion information of a plurality of temporal candidates and/or spatial candidates. In other words, the merge candidate list may include motion information of temporal candidates and/or spatial candidates, and the like.
Further, the merge candidate list may include new merge candidates generated by combining merge candidates already existing in the merge candidate list. In other words, the merge candidate list may include new motion information generated by combining pieces of motion information previously existing in the merge candidate list.
Further, the merge candidate list may include history-based merge candidates. The history-based merge candidate may be motion information of a block that was encoded and/or decoded prior to the target block.
Further, the merge candidate list may include merge candidates based on an average of two merge candidates.
The merge candidate may be a specific mode of deriving inter prediction information. The merge candidate may be information indicating a specific mode in which the inter prediction information is derived. Inter prediction information of the target block may be derived according to a specific mode indicated by the merge candidate. Further, the particular mode may include a process of deriving a series of inter prediction information. This particular mode may be an inter prediction information derivation mode or a motion information derivation mode.
The inter prediction information of the target block may be derived from a mode indicated by a merge candidate selected from among merge candidates in the merge candidate list by the merge index.
For example, the motion information derivation mode in the merge candidate list may be at least one of the following modes: 1) Motion information derivation mode for sub-block unit and 2) affine motion information derivation mode.
Further, the merge candidate list may include motion information of a zero vector. The zero vector may also be referred to as a "zero merge candidate".
In other words, the pieces of motion information in the merge candidate list may be at least one of the following information: 1) motion information of a spatial candidate, 2) motion information of a temporal candidate, 3) motion information generated by combining pieces of motion information previously existing in a merge candidate list, and 4) a zero vector.
The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may also be referred to as an "inter prediction indicator". The reference direction may be unidirectional or bidirectional. The unidirectional reference direction may indicate either L0 prediction or L1 prediction.
The merge candidate list may be created before prediction in the merge mode is performed.
The number of merging candidates in the merging candidate list may be defined in advance. Each of the encoding apparatus 100 and the decoding apparatus 200 may add the merge candidates to the merge candidate list according to a predefined scheme and a predefined priority such that the merge candidate list has a predefined number of merge candidates. The merge candidate list of the encoding device 100 and the merge candidate list of the decoding device 200 may be made identical to each other using a predefined scheme and a predefined priority.
Merging may be applied on a CU basis or a PU basis. When merging is performed on a CU basis or a PU basis, the encoding apparatus 100 may transmit a bitstream including predefined information to the decoding apparatus 200. For example, the predefined information may include 1) information indicating whether merging is to be performed for each block partition, and 2) information about which block, among the blocks that are spatial candidates and/or temporal candidates for the target block, is to be merged with the target block.
2-2) searching for motion vectors using merge candidate list
The encoding apparatus 100 may determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 may perform prediction on the target block using the merge candidates in the merge candidate list, and may generate a residual block for the merge candidates. The encoding apparatus 100 may encode the target block using a merge candidate that generates the minimum cost in the encoding of the prediction and residual blocks.
Further, the encoding apparatus 100 may determine whether to encode the target block using the merge mode.
2-3) transmission of inter prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The encoding apparatus 100 may generate entropy-encoded inter prediction information by performing entropy encoding on the inter prediction information, and may transmit a bitstream including the entropy-encoded inter prediction information to the decoding apparatus 200. The entropy-encoded inter prediction information may be signaled by the encoding apparatus 100 to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded inter prediction information from the bitstream, and may acquire the inter prediction information by applying entropy decoding to the entropy-encoded inter prediction information.
The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether a merge mode is used, 2) a merge index, and 3) correction information.
Furthermore, the inter prediction information may include a residual signal.
The decoding apparatus 200 may acquire the merge index from the bitstream only when the mode information indicates that the merge mode is used.
The mode information may be a merge flag. The unit of mode information may be a block. The information about the block may include mode information, and the mode information may indicate whether a merge mode is applied to the block.
The merge index may indicate a merge candidate to be used for predicting the target block among the merge candidates included in the merge candidate list. Alternatively, the merge index may indicate a block to be merged with the target block among neighboring blocks spatially or temporally adjacent to the target block.
The encoding apparatus 100 may select a merge candidate having the highest encoding performance among the merge candidates included in the merge candidate list, and may set a value of a merge index to indicate the selected merge candidate.
The correction information may be information for correcting a motion vector. The encoding apparatus 100 may generate correction information. The decoding apparatus 200 may correct the motion vector of the merge candidate selected by the merge index based on the correction information.
The correction information may include at least one of information indicating whether correction is to be performed, correction direction information, and correction size information. The prediction mode of correcting the motion vector based on the signaled correction information may be referred to as a "merge mode with motion vector difference".
2-4) inter prediction in merge mode using inter prediction information
The decoding apparatus 200 may perform prediction on the target block using the merge candidate indicated by the merge index among the merge candidates included in the merge candidate list.
The motion vector of the target block may be specified by the motion vector of the merge candidate indicated by the merge index, the reference picture index, and the reference direction.
3) Skip mode
The skip mode may be a mode in which motion information of a spatial candidate or motion information of a temporal candidate is applied to a target block without change. In addition, the skip mode may be a mode in which a residual signal is not used. In other words, when the skip mode is used, the reconstructed block may be identical to the predicted block.
The difference between the merge mode and the skip mode is whether to transmit or use a residual signal. That is, the skip mode may be similar to the merge mode except that the residual signal is not transmitted or used.
When the skip mode is used, the encoding apparatus 100 may transmit information on a block whose motion information is to be used as motion information of a target block among blocks that are spatial candidates or temporal candidates to the decoding apparatus 200 through a bitstream. The encoding apparatus 100 may generate entropy-encoded information by performing entropy encoding on the information, and may signal the entropy-encoded information to the decoding apparatus 200 through a bitstream. The decoding apparatus 200 may extract entropy-encoded information from the bitstream, and may acquire the information by applying entropy decoding to the entropy-encoded information.
Further, when the skip mode is used, the encoding apparatus 100 may not transmit other syntax information (such as MVD) to the decoding apparatus 200. For example, when the skip mode is used, the encoding apparatus 100 may not signal syntax elements related to at least one of the MVD, the encoded block flag, and the transform coefficient level to the decoding apparatus 200.
3-1) creation of merge candidate list
The skip mode may also use a merge candidate list. In other words, the merge candidate list may be used in both the merge mode and the skip mode. In this regard, the merge candidate list may also be referred to as a "skip candidate list" or a "merge/skip candidate list".
Alternatively, the skip mode may use an additional candidate list different from the candidate list of the merge mode. In this case, in the following description, the merge candidate list and the merge candidate may be replaced with a skip candidate list and a skip candidate, respectively.
The merge candidate list may be created before prediction in the skip mode is performed.
3-2) searching for motion vectors using merge candidate list
The encoding apparatus 100 may determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 may perform prediction on the target block using the merge candidates in the merge candidate list. The encoding apparatus 100 may encode the target block using the merge candidate that generates the minimum cost in prediction.
Further, the encoding apparatus 100 may determine whether to encode the target block using the skip mode.
3-3) transmission of inter prediction information
The encoding apparatus 100 may generate a bitstream including inter prediction information required for inter prediction. The decoding apparatus 200 may perform inter prediction on the target block using inter prediction information of the bitstream.
The inter prediction information may include 1) mode information indicating whether a skip mode is used and 2) a skip index.
The skip index may be the same as the merge index described above.
When the skip mode is used, the target block may be encoded without using a residual signal. The inter prediction information may not include a residual signal. Alternatively, the bitstream may not include a residual signal.
The decoding apparatus 200 may acquire the skip index from the bitstream only when the mode information indicates that the skip mode is used. As described above, the merge index and the skip index may be identical to each other. The decoding apparatus 200 may acquire the skip index from the bitstream only when the mode information indicates that the merge mode or the skip mode is used.
The skip index may indicate a merge candidate to be used for predicting the target block among the merge candidates included in the merge candidate list.
3-4) inter prediction in skip mode using inter prediction information
The decoding apparatus 200 may perform prediction on the target block using the merge candidate indicated by the skip index among the merge candidates included in the merge candidate list.
The motion vector of the target block may be specified by the motion vector of the merge candidate indicated by the skip index, the reference picture index, and the reference direction.
4) Current picture reference mode
The current picture reference mode may be a prediction mode that uses a previously reconstructed region in the target picture to which the target block belongs.
A motion vector for specifying a previously reconstructed region may be used. The reference picture index of the target block may be used to determine whether the target block has been encoded in the current picture reference mode.
A flag or index indicating whether the target block is a block encoded in the current picture reference mode may be signaled by the encoding apparatus 100 to the decoding apparatus 200. Alternatively, it may be inferred from the reference picture index of the target block whether the target block is a block encoded in the current picture reference mode.
When a target block is encoded in a current picture reference mode, the current picture may exist at a fixed position or at an arbitrary position in a reference picture list for the target block.
For example, the fixed position may be the position at which the value of the reference picture index is 0, or the last position in the list.
When a target picture exists at an arbitrary position in the reference picture list, an additional reference picture index indicating such an arbitrary position may be signaled by the encoding apparatus 100 to the decoding apparatus 200.
5) Sub-block merge mode
The sub-block merging mode may be a mode in which motion information is derived from sub-blocks of the CU.
When the sub-block merge mode is applied, a sub-block merge candidate list may be generated using the motion information of the co-located sub-block (col sub-block) of the target sub-block in the reference picture (i.e., a sub-block-based temporal merge candidate) and/or an affine control point motion vector merge candidate.
6) Triangle partition mode
In the triangle partition mode, the target block may be divided in a diagonal direction, generating sub-target blocks. For each sub-target block, motion information of the corresponding sub-target block may be derived, and prediction samples of each sub-target block may be derived using the derived motion information. The prediction samples of the target block may be derived as a weighted sum of the prediction samples of the sub-target blocks generated via the division.
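The following Python sketch illustrates one possible blending of the prediction samples of the two sub-target blocks along the main diagonal. The linear weight ramp and its 1/8-unit precision are illustrative assumptions rather than values defined by this disclosure.

def triangle_blend(pred_a, pred_b):
    """Blend two full-block predictions along the main diagonal of a
    square block, with a linear weight ramp across the diagonal."""
    n = len(pred_a)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            d = x - y                        # signed distance from the diagonal
            w_a = max(0, min(8, 4 - d))      # weight for pred_a in 1/8 units
            out[y][x] = (w_a * pred_a[y][x] + (8 - w_a) * pred_b[y][x] + 4) >> 3
    return out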
7) Combined inter-intra prediction mode
The combined inter-intra prediction mode may be a mode in which prediction samples of the target block are derived using a weighted sum of prediction samples generated via inter prediction and prediction samples generated via intra prediction.
In the above mode, the decoding apparatus 200 may autonomously correct the derived motion information. For example, the decoding apparatus 200 may search for motion information having a minimum Sum of Absolute Differences (SAD) in a specific region based on a reference block indicated by the derived motion information, and may derive the found motion information as corrected motion information.
In the above modes, the decoding apparatus 200 may use optical flow to compensate for prediction samples derived via inter prediction.
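The following Python sketch illustrates the SAD-based correction described above under simplified assumptions: an integer-pel search of a small window around the position indicated by the derived motion vector. The window size and the ref_patch accessor are illustrative assumptions.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2D blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def refine_mv(template, ref_patch, mv, search_range=1):
    """Return the candidate motion vector with the minimum SAD.

    ref_patch(x, y) is assumed to return the reference block at (x, y).
    """
    best_mv, best_cost = mv, float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            candidate = (mv[0] + dx, mv[1] + dy)
            cost = sad(template, ref_patch(*candidate))
            if cost < best_cost:
                best_cost, best_mv = cost, candidate
    return best_mv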
In the AMVP mode, merge mode, skip mode, and the like described above, the motion information to be used for predicting the target block among the pieces of motion information in the list may be specified using the index information of the list.
In order to improve encoding efficiency, the encoding apparatus 100 may signal only an index of an element generating the minimum cost in inter prediction of a target block among elements in a list. The encoding apparatus 100 may encode the index and may signal the encoded index.
Therefore, the encoding apparatus 100 and the decoding apparatus 200 must be able to derive the above-described lists (i.e., the predicted motion vector candidate list and the merge candidate list) from the same data using the same scheme. Here, the same data may include a reconstructed picture and a reconstructed block. Furthermore, in order to specify an element using an index, the order of the elements in the list must be fixed.
Fig. 10 illustrates spatial candidates according to an embodiment.
In fig. 10, the positions of the spatial candidates are shown.
The large block at the center of the figure may represent the target block. The five small blocks may represent the spatial candidates.
The coordinates of the target block may be (xP, yP), and the size of the target block may be represented by (nPSW, nPSH).
Spatial candidate A0 may be a block adjacent to the lower-left corner of the target block. A0 may be the block that occupies the pixel located at coordinates (xP-1, yP+nPSH).
Spatial candidate A1 may be a block adjacent to the left side of the target block. A1 may be the lowest block among the blocks adjacent to the left side of the target block. Alternatively, A1 may be the block adjacent to the top of A0. A1 may be the block that occupies the pixel located at coordinates (xP-1, yP+nPSH-1).
Spatial candidate B0 may be a block adjacent to the upper-right corner of the target block. B0 may be the block that occupies the pixel located at coordinates (xP+nPSW, yP-1).
Spatial candidate B1 may be a block adjacent to the top of the target block. B1 may be the rightmost block among the blocks adjacent to the top of the target block. Alternatively, B1 may be the block adjacent to the left of B0. B1 may be the block that occupies the pixel located at coordinates (xP+nPSW-1, yP-1).
Spatial candidate B2 may be a block adjacent to the upper-left corner of the target block. B2 may be the block that occupies the pixel located at coordinates (xP-1, yP-1).
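The coordinates listed above may be summarized by the following Python sketch; the function name and the dictionary representation are illustrative only.

def spatial_candidate_positions(xP, yP, nPSW, nPSH):
    """Pixel coordinates identifying spatial candidates A0, A1, B0, B1, B2
    for a target block at (xP, yP) with size (nPSW, nPSH)."""
    return {
        "A0": (xP - 1, yP + nPSH),       # below the lower-left corner
        "A1": (xP - 1, yP + nPSH - 1),   # lowest left neighbor
        "B0": (xP + nPSW, yP - 1),       # right of the upper-right corner
        "B1": (xP + nPSW - 1, yP - 1),   # rightmost top neighbor
        "B2": (xP - 1, yP - 1),          # upper-left corner
    }

print(spatial_candidate_positions(64, 32, 16, 16))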
Determination of availability of spatial and temporal candidates
In order to include motion information of a spatial candidate or motion information of a temporal candidate in a list, it is necessary to determine whether the motion information of the spatial candidate or the motion information of the temporal candidate is available.
Hereinafter, the candidate block may include a spatial candidate and a temporal candidate.
The determination may be performed, for example, by sequentially applying the following steps 1) to 4).
Step 1) when a PU including a candidate block is located outside the boundary of a picture, the availability of the candidate block may be set to "false". The expression "availability is set to false" may have the same meaning as "set to unavailable".
Step 2) when the PU including the candidate block is located outside the boundary of the stripe, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different stripes, the availability of the candidate block may be set to "false".
Step 3) when the PU including the candidate block is located outside the boundary of the parallel block, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different parallel blocks, the availability of the candidate block may be set to "false".
Step 4) when the prediction mode of the PU including the candidate block is an intra prediction mode, the availability of the candidate block may be set to "false". When the PU including the candidate block does not use inter prediction, the availability of the candidate block may be set to "false".
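The following Python sketch illustrates steps 1) to 4) above under simplified assumptions; the BlockInfo fields are a hypothetical modeling of the relevant encoding parameters.

from dataclasses import dataclass

@dataclass
class BlockInfo:               # hypothetical modeling of the relevant parameters
    outside_picture: bool
    slice_id: int              # which stripe the block belongs to
    tile_id: int               # which parallel block the block belongs to
    pred_mode: str             # "intra" or "inter"

def candidate_available(cand: BlockInfo, target: BlockInfo) -> bool:
    """Return False as soon as one of the four conditions above fails."""
    if cand.outside_picture:                # step 1: outside the picture boundary
        return False
    if cand.slice_id != target.slice_id:    # step 2: different stripe
        return False
    if cand.tile_id != target.tile_id:      # step 3: different parallel block
        return False
    if cand.pred_mode == "intra":           # step 4: not inter-predicted
        return False
    return True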
Fig. 11 illustrates a sequence of adding motion information of a spatial candidate to a merge list according to an embodiment.
As shown in fig. 11, when pieces of motion information of spatial candidates are added to a merge list, the order A1, B1, B0, A0, B2 may be used. That is, pieces of motion information of available spatial candidates may be added to the merge list in the order of A1, B1, B0, A0, and B2.
Method for deriving a merge list in merge mode and skip mode
As described above, the maximum number of merge candidates in the merge list may be set. "N" may be used to indicate the set maximum number. The set number may be transmitted from the encoding apparatus 100 to the decoding apparatus 200. The stripe header may include N. In other words, the maximum number of merge candidates in the merge list for a target block of a stripe may be set through the stripe header. For example, the value of N may be 5 by default.
The pieces of motion information (i.e., merge candidates) may be added to the merge list in the order of the following steps 1) to 4).
Step 1) Among the spatial candidates, available spatial candidates may be added to the merge list. The pieces of motion information of the available spatial candidates may be added to the merge list in the order shown in fig. 11. Here, when the motion information of an available spatial candidate overlaps with other motion information already present in the merge list, that motion information may not be added to the merge list. The operation of checking whether given motion information overlaps with other motion information present in the list may be referred to simply as an "overlap check".
The maximum number of pieces of motion information added may be N.
Step 2) When the number of pieces of motion information in the merge list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the merge list. Here, when the motion information of the available temporal candidate overlaps with other motion information already present in the merge list, that motion information may not be added to the merge list.
Step 3) When the number of pieces of motion information in the merge list is less than N and the type of the target stripe is "B", combined motion information generated via combined bi-prediction may be added to the merge list.
The target stripe may be a stripe that includes a target block.
The combined motion information may be a combination of L0 motion information and L1 motion information. The L0 motion information may be motion information referring to only the reference picture list L0. The L1 motion information may be motion information referring to only the reference picture list L1.
In the merge list, there may be one or more pieces of L0 motion information. Further, in the merge list, there may be one or more pieces of L1 motion information.
The combined motion information may include one or more pieces of combined motion information. When generating the combined motion information, L0 motion information and L1 motion information to be used for the step of generating the combined motion information among the one or more pieces of L0 motion information and the one or more pieces of L1 motion information may be predefined. One or more pieces of combined motion information may be generated in a predefined order via combined bi-prediction using a pair of different motion information in the merge list. One piece of motion information of the pair of different motion information may be L0 motion information, and the other piece of motion information of the pair of different motion information may be L1 motion information.
For example, the combined motion information added with the highest priority may be a combination of L0 motion information having a merge index 0 and L1 motion information having a merge index 1. When the motion information having the merge index 0 is not L0 motion information or when the motion information having the merge index 1 is not L1 motion information, the combination motion information may be neither generated nor added. Next, the combined motion information added with the next priority may be a combination of L0 motion information having a merge index 1 and L1 motion information having a merge index 0. The detailed combinations that follow may be consistent with other combinations in the video encoding/decoding arts.
Here, when the combined motion information overlaps with other motion information already existing in the merge list, the combined motion information may not be added to the merge list.
Step 4) When the number of pieces of motion information in the merge list is less than N, motion information of a zero vector may be added to the merge list.
The zero vector motion information may be motion information in which the motion vector is a zero vector.
The number of zero vector motion information may be one or more. The reference picture indexes of one or more pieces of zero vector motion information may be different from each other. For example, the value of the reference picture index of the first zero vector motion information may be 0. The value of the reference picture index of the second zero vector motion information may be 1.
The number of zero vector motion information pieces may be the same as the number of reference pictures in the reference picture list.
The reference direction of the zero vector motion information may be bidirectional. Both motion vectors may be zero vectors. The number of pieces of zero vector motion information may be the smaller of the number of reference pictures in the reference picture list L0 and the number of reference pictures in the reference picture list L1. Alternatively, when the number of reference pictures in the reference picture list L0 and the number of reference pictures in the reference picture list L1 differ, a unidirectional reference direction may be used for a reference picture index that is applicable to only a single reference picture list.
The encoding apparatus 100 and/or the decoding apparatus 200 may sequentially add zero vector motion information to the merge list while changing the reference picture index.
When the zero vector motion information overlaps with other motion information already present in the merge list, the zero vector motion information may not be added to the merge list.
The order of steps 1) to 4) described above is merely exemplary and may be changed. Furthermore, some of the above steps may be omitted according to predefined conditions.
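Under the default order of steps 1) to 4) described above, the merge list derivation may be sketched as follows in Python. The tuple modeling of motion information, the membership-based overlap check, and the pre-computed combined candidates (simplifying step 3) are illustrative assumptions.

def build_merge_list(spatial, temporal, combined, zero_vectors, n=5):
    """Build a merge list of at most n candidates from the four sources,
    skipping candidates that overlap entries already in the list."""
    merge_list = []

    def try_add(cand):
        if cand is not None and cand not in merge_list and len(merge_list) < n:
            merge_list.append(cand)

    for cand in spatial:        # step 1: available spatial candidates
        try_add(cand)
    for cand in temporal:       # step 2: available temporal candidate(s)
        try_add(cand)
    for cand in combined:       # step 3: combined bi-prediction (B stripes)
        try_add(cand)
    for cand in zero_vectors:   # step 4: zero-vector motion information
        try_add(cand)
    return merge_list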
Method for deriving a list of predicted motion vector candidates in AMVP mode
The maximum number of predicted motion vector candidates in the predicted motion vector candidate list may be predefined. The predefined maximum number may be indicated with N. For example, the predefined maximum number may be 2.
A plurality of pieces of motion information (i.e., predicted motion vector candidates) may be added to the predicted motion vector candidate list in the order of the following steps 1) to 3).
Step 1) Available spatial candidates among the spatial candidates may be added to the predicted motion vector candidate list. The spatial candidates may include a first spatial candidate and a second spatial candidate.
The first spatial candidate may be one of A0, A1, scaled A0, and scaled A1. The second spatial candidate may be one of B0, B1, B2, scaled B0, scaled B1, and scaled B2.
The plurality of pieces of motion information of the available spatial candidates may be added to the predicted motion vector candidate list in the order of the first spatial candidate and the second spatial candidate. In this case, when the motion information of the available spatial candidate overlaps with other motion information already existing in the predicted motion vector candidate list, the motion information of the available spatial candidate may not be added to the predicted motion vector candidate list. In other words, when the value of N is 2, if the motion information of the second spatial candidate is the same as the motion information of the first spatial candidate, the motion information of the second spatial candidate may not be added to the predicted motion vector candidate list.
The maximum number of motion information added may be N.
Step 2) When the number of pieces of motion information in the predicted motion vector candidate list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the predicted motion vector candidate list. In this case, when the motion information of the available temporal candidate overlaps with other motion information already present in the predicted motion vector candidate list, that motion information may not be added to the predicted motion vector candidate list.
Step 3) When the number of pieces of motion information in the predicted motion vector candidate list is less than N, zero vector motion information may be added to the predicted motion vector candidate list.
The zero vector motion information may include one or more pieces of zero vector motion information. The reference picture indexes of the one or more pieces of zero vector motion information may be different from each other.
The encoding apparatus 100 and/or the decoding apparatus 200 may sequentially add a plurality of pieces of zero vector motion information to the predicted motion vector candidate list while changing the reference picture index.
When the zero vector motion information overlaps with other motion information already present in the predicted motion vector candidate list, the zero vector motion information may not be added to the predicted motion vector candidate list.
The description of zero vector motion information made above in connection with the merge list also applies to the zero vector motion information here. A repeated description thereof will be omitted.
The order of steps 1) to 3) described above is merely exemplary and may be changed. Furthermore, some of the steps may be omitted according to predefined conditions.
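The following Python sketch illustrates steps 1) to 3) above for N = 2; the tuple modeling of candidates and of zero vector motion information is an illustrative assumption.

def build_amvp_list(first_spatial, second_spatial, temporal, n=2):
    """Build the predicted motion vector candidate list with pruning."""
    candidates = []
    for cand in (first_spatial, second_spatial):      # step 1: spatial
        if cand is not None and cand not in candidates:
            candidates.append(cand)
    if len(candidates) < n and temporal is not None:  # step 2: temporal
        if temporal not in candidates:
            candidates.append(temporal)
    ref_idx = 0
    while len(candidates) < n:                        # step 3: zero vectors
        zero = (0, 0, ref_idx)  # (mv_x, mv_y, reference picture index)
        if zero not in candidates:
            candidates.append(zero)
        ref_idx += 1
    return candidates[:n]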
Fig. 12 illustrates a transform and quantization process according to an example.
As shown in fig. 12, the level of quantization may be generated by performing a transform and/or quantization process on the residual signal.
The residual signal may be generated as a difference between the original block and the predicted block. Here, the prediction block may be a block generated via intra prediction or inter prediction.
The residual signal may be transformed into a signal in the frequency domain through a transform procedure, which forms part of the transform and quantization process.
The transform kernels for the transforms may include various DCT kernels, such as Discrete Cosine Transform (DCT) type 2 (DCT-II) and Discrete Sine Transform (DST) kernels.
These transform kernels may perform a separable transform or a two-dimensional (2D) non-separable transform on the residual signal. The separable transform may be a transform in which a one-dimensional (1D) transform is performed on the residual signal in each of the horizontal and vertical directions.
The DCT types and DST types adaptively used for 1D transform may include DCT-V, DCT-VIII, DST-I, and DST-VII in addition to DCT-II, as shown in each of the following tables 3 and 4.
TABLE 3
(Table 3 is presented as an image in the original document; its contents are not reproduced here.)
TABLE 4
Transform set   Transform candidates
0               DST-VII, DCT-VIII, DST-I
1               DST-VII, DST-I, DCT-VIII
2               DST-VII, DCT-V, DST-I
As shown in tables 3 and 4, when deriving the DCT type or DST type to be used for transformation, a transformation set may be used. Each transformation set may include a plurality of transformation candidates. Each transform candidate may be of the DCT type or the DST type.
Table 5 below shows examples of a transform set to be applied to a horizontal direction and a transform set to be applied to a vertical direction according to an intra prediction mode.
TABLE 5
Intra prediction mode:     0  1  2  3  4  5  6  7  8  9
Vertical transform set:    2  1  0  1  0  1  0  1  0  1
Horizontal transform set:  2  1  0  1  0  1  0  1  0  1

Intra prediction mode:    10 11 12 13 14 15 16 17 18 19
Vertical transform set:    0  1  0  1  0  0  0  0  0  0
Horizontal transform set:  0  1  0  1  2  2  2  2  2  2

Intra prediction mode:    20 21 22 23 24 25 26 27 28 29
Vertical transform set:    0  0  0  1  0  1  0  1  0  1
Horizontal transform set:  2  2  2  1  0  1  0  1  0  1

Intra prediction mode:    30 31 32 33 34 35 36 37 38 39
Vertical transform set:    0  1  0  1  0  1  0  1  0  1
Horizontal transform set:  0  1  0  1  0  1  0  1  0  1

Intra prediction mode:    40 41 42 43 44 45 46 47 48 49
Vertical transform set:    0  1  0  1  0  1  2  2  2  2
Horizontal transform set:  0  1  0  1  0  1  0  0  0  0

Intra prediction mode:    50 51 52 53 54 55 56 57 58 59
Vertical transform set:    2  2  2  2  2  1  0  1  0  1
Horizontal transform set:  0  0  0  0  0  1  0  1  0  1

Intra prediction mode:    60 61 62 63 64 65 66
Vertical transform set:    0  1  0  1  0  1  0
Horizontal transform set:  0  1  0  1  0  1  0
Table 5 shows the numbers of the vertical transform set to be applied to the vertical direction of the residual signal and the horizontal transform set to be applied to the horizontal direction, according to the intra prediction mode of the target block.
As illustrated in tables 4 and 5, the transform sets to be applied to the horizontal direction and the vertical direction may be predefined according to the intra prediction mode of the target block. The encoding apparatus 100 may perform transform and inverse transform on the residual signal using the transforms included in the transform set corresponding to the intra prediction mode of the target block. Further, the decoding apparatus 200 may perform inverse transform on the residual signal using the transforms included in the transform set corresponding to the intra prediction mode of the target block.
In the transform and inverse transform, as illustrated in tables 3, 4, and 5, the transform set to be applied to the residual signal may be determined without being signaled. Instead, transform indication information may be signaled from the encoding apparatus 100 to the decoding apparatus 200. The transform indication information may be information indicating which of the transform candidates included in the transform set applied to the residual signal is used.
For example, when the size of the target block is 64×64 or less, transform sets each having three transforms may be configured according to the intra prediction mode. The optimal transformation method may be selected from a total of nine transformation methods resulting from a combination of three transformations in the horizontal direction and three transformations in the vertical direction. By such an optimal transformation method, the residual signal may be encoded and/or decoded, and thus encoding efficiency may be improved.
Here, the information indicating which of the plurality of transforms belonging to each transform set has been used for at least one of the vertical transform and the horizontal transform may be entropy encoded and/or entropy decoded. Here, truncated unary binarization may be used to encode and/or decode such information.
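The following Python sketch illustrates how tables 4 and 5 may be combined to select a vertical/horizontal transform pair. Only a few rows of Table 5 are reproduced for brevity, and the function name and candidate indices are illustrative.

TRANSFORM_SETS = {  # Table 4
    0: ["DST-VII", "DCT-VIII", "DST-I"],
    1: ["DST-VII", "DST-I", "DCT-VIII"],
    2: ["DST-VII", "DCT-V", "DST-I"],
}

# (vertical set, horizontal set) per intra mode; excerpt of Table 5.
MODE_TO_SETS = {0: (2, 2), 1: (1, 1), 2: (0, 0), 14: (0, 2), 18: (0, 2)}

def transforms_for_mode(intra_mode, v_index, h_index):
    """Return the (vertical, horizontal) transform pair selected by the
    signaled candidate indices v_index and h_index."""
    v_set, h_set = MODE_TO_SETS[intra_mode]
    return (TRANSFORM_SETS[v_set][v_index], TRANSFORM_SETS[h_set][h_index])

print(transforms_for_mode(14, 0, 1))  # ('DST-VII', 'DCT-V')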
As described above, a method using various transforms may be applied to a residual signal generated via intra prediction or inter prediction.
The transformation may include at least one of a first transformation and a secondary transformation. The transform coefficients may be generated by performing a first transform on the residual signal, and the secondary transform coefficients may be generated by performing a secondary transform on the transform coefficients.
The first transformation may be referred to as a "primary transformation". Further, the first transform may also be referred to as an "adaptive multi-transform (AMT) scheme". As described above, AMT may represent the application of different transformations to the respective 1D directions (i.e., vertical and horizontal directions).
The secondary transform may be a transform for increasing the energy concentration of transform coefficients generated by the first transform. Similar to the first transformation, the secondary transformation may be a separable transformation or a non-separable transformation. Such an inseparable transformation may be an inseparable secondary transformation (NSST).
The first transformation may be performed using at least one of a predefined plurality of transformation methods. For example, the predefined plurality of transform methods may include Discrete Cosine Transform (DCT), discrete Sine Transform (DST), karhunen-Loeve transform (KLT), and the like.
Further, the first transform may be a transform having various types according to a kernel function defining a Discrete Cosine Transform (DCT) or a Discrete Sine Transform (DST).
For example, the transformation type may be determined based on at least one of: 1) a prediction mode (e.g., one of intra prediction and inter prediction) of the target block, 2) a size of the target block, 3) a shape of the target block, 4) an intra prediction mode of the target block, 5) a component (e.g., one of a luminance component and a chrominance component) of the target block, and 6) a partition type (e.g., one of a quadtree, a binary tree, and a trigeminal tree) applied to the target block.
For example, according to the transform kernels presented in Table 6 below, the first transform may include transforms such as DCT-2, DCT-5, DCT-8, DST-1, and DST-7. Table 6 illustrates various transform types and transform kernel functions for Multiple Transform Selection (MTS).
MTS may refer to the selection of a combination of one or more DCT and/or DST kernels to transform the residual signal in the horizontal and/or vertical directions.
TABLE 6
(Table 6 is presented as an image in the original document; its contents are not reproduced here.)
In Table 6, i and j may be integer values equal to or greater than 0 and less than or equal to N-1.
The secondary transform may be performed on transform coefficients generated by performing the first transform.
As in the first transform, a set of transforms may also be defined in the secondary transform. The method for deriving and/or determining the set of transforms described above may be applied not only to the first transform but also to the secondary transform.
The first transform and the secondary transform may be determined for a particular objective.
For example, the first transform and the secondary transform may be applied to signal components corresponding to one or more of a luminance (luma) component and a chrominance (chroma) component. Whether to apply the first transform and/or the secondary transform may be determined according to at least one of the encoding parameters for the target block and/or the neighboring blocks. For example, whether to apply the first transform and/or the secondary transform may be determined according to the size and/or shape of the target block.
In the encoding apparatus 100 and the decoding apparatus 200, transform information indicating the transform method to be used for a target may be derived using specified information.
For example, the transform information may include a transform index to be used for the primary transform and/or the secondary transform. Alternatively, the transformation information may indicate that the primary transformation and/or the secondary transformation is not used.
For example, when the target of the primary transform and the secondary transform is a target block, a transform method indicated by the transform information to be applied to the primary transform and/or the secondary transform may be determined according to at least one of the encoding parameters for the target block and/or blocks adjacent to the target block.
Alternatively, transformation information indicating a transformation method for a specific target may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
For example, for a single CU, the decoding apparatus 200 may derive, as transform information, whether a primary transform is used, an index indicating the primary transform, whether a secondary transform is used, and an index indicating the secondary transform. Alternatively, for a single CU, transform information indicating whether a primary transform is used, an index indicating the primary transform, whether a secondary transform is used, and an index indicating the secondary transform may be signaled.
Quantized transform coefficients (i.e., quantized levels) may be generated by performing quantization on a result generated by performing the first transform and/or the secondary transform or performing quantization on a residual signal.
Fig. 13 illustrates a diagonal scan according to an example.
Fig. 14 illustrates a horizontal scan according to an example.
Fig. 15 illustrates a vertical scan according to an example.
The quantized transform coefficients may be scanned via at least one of (upper right) diagonal scan, vertical scan, and horizontal scan according to at least one of an intra prediction mode, a block size, and a block shape. The block may be a Transform Unit (TU).
Each scan may be initiated at a particular start point and may terminate at a particular end point.
For example, quantized transform coefficients may be changed into a 1D vector form by scanning coefficients of a block using the diagonal scan of fig. 13. Alternatively, the horizontal scan of fig. 14 or the vertical scan of fig. 15 may be used according to the size of a block and/or an intra prediction mode, instead of using a diagonal scan.
The vertical scanning may be an operation of scanning the 2D block type coefficients in the column direction. The horizontal scanning may be an operation of scanning the 2D block type coefficients in the row direction.
In other words, which of the diagonal scan, the vertical scan, and the horizontal scan is to be used may be determined according to the size of the block and/or the intra prediction mode.
As shown in fig. 13, 14, and 15, the quantized transform coefficients may be scanned in a diagonal direction, a horizontal direction, or a vertical direction.
The quantized transform coefficients may be represented by a block shape. Each block may include a plurality of sub-blocks. Each sub-block may be defined according to a minimum block size or a minimum block shape.
In the scanning, a scanning order according to the type or direction of scanning may be first applied to the sub-blocks. Further, a scan order according to a scan direction may be applied to quantized transform coefficients in each sub-block.
For example, as shown in figs. 13, 14, and 15, when the size of the target block is 8×8, quantized transform coefficients may be generated by applying the first transform, the secondary transform, and quantization to the residual signal of the target block. Then, one of the three scan orders may be applied to the four 4×4 sub-blocks, and the quantized transform coefficients within each 4×4 sub-block may also be scanned according to that scan order.
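The following Python sketch illustrates such a two-level scan for an 8×8 block under simplified assumptions; the exact direction of traversal within each anti-diagonal may differ from a particular codec's definition.

def diagonal_order(n):
    """Upper-right diagonal order for an n x n array (bottom-left first
    within each anti-diagonal)."""
    order = []
    for s in range(2 * n - 1):           # anti-diagonals, top-left first
        for y in reversed(range(n)):
            x = s - y
            if 0 <= x < n:
                order.append((y, x))
    return order

def scan_8x8(block):
    """Scan an 8x8 coefficient block 4x4 sub-block by 4x4 sub-block."""
    out = []
    for sy, sx in diagonal_order(2):     # order of the 4x4 sub-blocks
        for y, x in diagonal_order(4):   # order within each sub-block
            out.append(block[4 * sy + y][4 * sx + x])
    return out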
The encoding apparatus 100 may generate entropy-encoded quantized transform coefficients by performing entropy encoding on the scanned quantized transform coefficients, and may generate a bitstream including the entropy-encoded quantized transform coefficients.
The decoding apparatus 200 may extract the entropy-encoded quantized transform coefficients from the bitstream, and may generate the quantized transform coefficients by performing entropy decoding on the entropy-encoded quantized transform coefficients. The quantized transform coefficients may be arranged in the form of 2D blocks via inverse scanning. Here, as a method of inverse scanning, at least one of upper right diagonal scanning, vertical scanning, and horizontal scanning may be performed.
In the decoding apparatus 200, inverse quantization may be performed on the quantized transform coefficients. The secondary inverse transform may be performed on the result of the inverse quantization, depending on whether the secondary inverse transform is to be performed. Further, the first inverse transform may be performed on the result of the secondary inverse transform, depending on whether the first inverse transform is to be performed. The reconstructed residual signal may be generated by performing the first inverse transform on the result generated via the secondary inverse transform.
For a luma component reconstructed via intra prediction or inter prediction, inverse mapping of the dynamic range may be performed before loop filtering.
The dynamic range may be divided into 16 equal segments and the mapping function of the corresponding segments may be signaled. Such mapping functions may be signaled at the stripe level or parallel block group level.
An inverse mapping function for performing inverse mapping may be derived based on the mapping function.
Loop filtering, storage of reference pictures, and motion compensation may be performed in the inverse mapping region.
A prediction block generated via inter prediction may be transformed into a mapping region by mapping using a mapping function, and the transformed prediction block may be used to generate a reconstructed block. However, since intra prediction is performed in the mapping region, a prediction block generated via intra prediction may be used to generate a reconstructed block without mapping and/or inverse mapping.
For example, when the target block is a residual block of a chrominance component, the residual block may be transformed to an inverse mapping region by scaling the chrominance component of the mapping region.
Whether scaling is available may be signaled at the stripe level or parallel block group level.
For example, scaling may be applied only to the case where the mapping is available for the luma component and the partitions of the chroma component follow the same tree structure.
Scaling may be performed based on an average of values of samples in the luma prediction block corresponding to the chroma prediction block. Here, when the target block uses inter prediction, the luminance prediction block may represent a mapped luminance prediction block.
The value required for scaling may be derived by referencing a look-up table using the index of the segment to which the average of the sample values of the luma prediction block belongs.
The residual block may be transformed to the inverse mapping region by scaling the residual block using the finally derived value. Thereafter, for blocks of the chrominance component, reconstruction, intra prediction, inter prediction, loop filtering, and storage of reference pictures may be performed in the inverse mapping region.
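The following Python sketch illustrates the scaling step described above under simplified assumptions: the average of the corresponding luma prediction samples selects a segment, a look-up table gives the scaling value, and the chroma residual is scaled with it. The table values, the bit depth, and the fixed-point precision are illustrative and are not values defined by this disclosure.

# Illustrative 16-entry table (identity scaling in 11-bit fixed point).
CHROMA_SCALE_LUT = [1 << 11] * 16

def scale_chroma_residual(residual, luma_pred, bit_depth=10):
    """Scale a chroma residual block using the segment selected by the
    average of the corresponding luma prediction samples."""
    avg = sum(sum(row) for row in luma_pred) // (len(luma_pred) * len(luma_pred[0]))
    segment = min(15, avg >> (bit_depth - 4))   # 16 equal segments
    scale = CHROMA_SCALE_LUT[segment]
    return [[(s * scale + (1 << 10)) >> 11 for s in row] for row in residual]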
For example, information indicating whether a mapping and/or inverse mapping of the luma component and the chroma component is available may be signaled through the sequence parameter set.
A prediction block of the target block may be generated based on the block vector. The block vector may indicate a displacement between the target block and the reference block. The reference block may be a block in the target image.
In this way, the prediction mode in which the prediction block is generated by referring to the target image may be referred to as an "Intra Block Copy (IBC) mode".
The IBC mode may be applied to CUs having a specific size. For example, the IBC mode may be applied to mxn CUs. Here, M and N may be less than or equal to 64.
IBC mode may include skip mode, merge mode, AMVP mode, and the like. In the case of the skip mode or the merge mode, a merge candidate list may be configured and a merge index may be signaled, and thus a single merge candidate may be designated among the merge candidates existing in the merge candidate list. The block vector of the specified merge candidate may be used as the block vector of the target block.
In the case of AMVP mode, differential block vectors may be signaled. Further, the prediction block vector may be derived from a left neighboring block and an upper neighboring block of the target block. Furthermore, an index indicating which neighboring block is to be used may be signaled.
The prediction block in IBC mode may be included in the target CTU or left CTU and may be limited to a block within the previously reconstructed region. For example, the value of the block vector may be limited such that the predicted block of the target block is located in a specific region. The specific region may be a region defined by three 64×64 blocks encoded and/or decoded before the 64×64 blocks including the target block. Limiting the values of the block vectors in this way may therefore reduce memory consumption and device complexity caused by implementation of IBC mode.
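The following Python sketch illustrates one possible validity check for a block vector under the constraint described above. The CTU size, the corner test, and the reconstructed-sample accessor are simplifying assumptions rather than the normative restriction.

CTU_SIZE = 128  # illustrative assumption

def bv_valid(x, y, w, h, bv, reconstructed):
    """Check an IBC block vector for the target block at (x, y), size w x h.

    reconstructed(px, py) is assumed to report whether the sample at
    (px, py) has already been decoded.
    """
    ref_x, ref_y = x + bv[0], y + bv[1]
    ctu_x = (x // CTU_SIZE) * CTU_SIZE
    ctu_y = (y // CTU_SIZE) * CTU_SIZE
    # The reference block must lie within the current or the left CTU ...
    if ref_x < ctu_x - CTU_SIZE or ref_x + w > ctu_x + CTU_SIZE:
        return False
    if ref_y < ctu_y or ref_y + h > ctu_y + CTU_SIZE:
        return False
    # ... and all four corners must already be reconstructed.
    corners = [(ref_x, ref_y), (ref_x + w - 1, ref_y),
               (ref_x, ref_y + h - 1), (ref_x + w - 1, ref_y + h - 1)]
    return all(reconstructed(cx, cy) for cx, cy in corners)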
Fig. 16 is a configuration diagram of an encoding apparatus according to an embodiment.
The encoding apparatus 1600 may correspond to the encoding apparatus 100 described above.
The encoding apparatus 1600 may include a processing unit 1610, a memory 1630, a User Interface (UI) input device 1650, a UI output device 1660, and a storage 1640 in communication with each other through a bus 1690. The encoding device 1600 may also include a communication unit 1620 connected to the network 1699.
The processing unit 1610 may be a Central Processing Unit (CPU) or a semiconductor device for executing processing instructions stored in the memory 1630 or the storage 1640. The processing unit 1610 may be at least one hardware processor.
The processing unit 1610 may generate and process signals, data, or information input to the encoding apparatus 1600, output from the encoding apparatus 1600, or used in the encoding apparatus 1600, and may perform checking, comparing, determining, etc. related to the signals, data, or information. In other words, in embodiments, the generation and processing of data or information, as well as the checking, comparing, and determining related to the data or information, may be performed by the processing unit 1610.
The processing unit 1610 may include an inter prediction unit 110, an intra prediction unit 120, a switcher 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy coding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
At least some of the inter prediction unit 110, the intra prediction unit 120, the switcher 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 may be program modules and may communicate with external devices or systems. The program modules may be included in the encoding device 1600 in the form of an operating system, application program modules, or other program modules.
The program modules may be physically stored in various types of well known storage devices. Furthermore, at least some of the program modules may also be stored in a remote storage device capable of communicating with the encoding apparatus 1600.
Program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations according to embodiments or for implementing abstract data types according to embodiments.
The program modules may be implemented using instructions or code executed by at least one processor of the encoding apparatus 1600.
The processing unit 1610 may run instructions or codes in the inter prediction unit 110, the intra prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy coding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190.
The storage unit may represent the memory 1630 and/or the storage 1640. Each of the memory 1630 and the storage 1640 may be any of various types of volatile or non-volatile storage media. For example, the memory 1630 may include at least one of Read Only Memory (ROM) 1631 and Random Access Memory (RAM) 1632.
The storage unit may store data or information for operation of the encoding apparatus 1600. In an embodiment, data or information of the encoding apparatus 1600 may be stored in a storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bitstreams, and the like.
The encoding apparatus 1600 may be implemented in a computer system that includes a computer readable storage medium.
The storage medium may store at least one module required for operation of the encoding apparatus 1600. The memory 1630 may store at least one module and may be configured to cause the at least one module to be executed by the processing unit 1610.
Functions related to communication of data or information of the encoding apparatus 1600 may be performed by the communication unit 1620.
For example, the communication unit 1620 may transmit a bitstream to a decoding apparatus 1700 to be described later.
Fig. 17 is a configuration diagram of a decoding apparatus according to an embodiment.
The decoding apparatus 1700 may correspond to the decoding apparatus 200 described above.
Decoding apparatus 1700 may include a processing unit 1710, a memory 1730, a User Interface (UI) input device 1750, a UI output device 1760, and a storage 1740 in communication with each other through a bus 1790. The decoding apparatus 1700 may also include a communication unit 1720 connected to a network 1799.
The processing unit 1710 may be a Central Processing Unit (CPU) or a semiconductor device for executing processing instructions stored in the memory 1730 or the storage 1740. The processing unit 1710 may be at least one hardware processor.
The processing unit 1710 may generate and process a signal, data, or information input to the decoding apparatus 1700, output from the decoding apparatus 1700, or used in the decoding apparatus 1700, and may perform checking, comparing, determining, or the like related to the signal, data, or information. In other words, in an embodiment, the generation and processing of data or information, as well as the checking, comparing, and determining related to the data or information, may be performed by the processing unit 1710.
The processing unit 1710 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transformation unit 230, an intra prediction unit 240, an inter prediction unit 250, a switch 245, an adder 255, a filter unit 260, and a reference picture buffer 270.
At least some of the entropy decoding unit 210, the inverse quantization unit 220, the inverse transformation unit 230, the intra prediction unit 240, the inter prediction unit 250, the adder 255, the switch 245, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 200 may be program modules and may communicate with external devices or systems. The program modules may be included in the decoding device 1700 in the form of an operating system, application program modules, or other program modules.
Program modules may be physically stored in various types of well known storage devices. Furthermore, at least some of the program modules may also be stored in a remote storage device capable of communicating with the decoding apparatus 1700.
Program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations according to embodiments or for implementing abstract data types according to embodiments.
The program modules may be implemented using instructions or code executed by at least one processor of decoding device 1700.
The processing unit 1710 may execute instructions or code in the entropy decoding unit 210, the inverse quantization unit 220, the inverse transformation unit 230, the intra prediction unit 240, the inter prediction unit 250, the switch 245, the adder 255, the filter unit 260, and the reference picture buffer 270.
The storage unit may represent the memory 1730 and/or the storage 1740. Each of the memory 1730 and the storage 1740 may be any of various types of volatile or non-volatile storage media. For example, the memory 1730 may include at least one of ROM 1731 and RAM 1732.
The storage unit may store data or information for the operation of the decoding apparatus 1700. In an embodiment, data or information of the decoding apparatus 1700 may be stored in a storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bitstreams, and the like.
Decoding device 1700 may be implemented in a computer system that includes a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the decoding apparatus 1700. The memory 1730 may store at least one module and may be configured to cause the at least one module to be executed by the processing unit 1710.
Functions related to communication of data or information of the decoding apparatus 1700 may be performed by the communication unit 1720.
For example, the communication unit 1720 may receive a bitstream from the encoding apparatus 1600.
Hereinafter, the processing unit may represent the processing unit 1610 of the encoding apparatus 1600 and/or the processing unit 1710 of the decoding apparatus 1700. For example, with respect to prediction-related functionality, the processing unit may represent the switch 115 and/or the switch 245. Regarding functions related to inter prediction, the processing unit may represent the inter prediction unit 110, the subtractor 125, and the adder 175, and may represent the inter prediction unit 250 and the adder 255. Regarding functions related to intra prediction, the processing unit may represent the intra prediction unit 120, the subtractor 125, and the adder 175, and may represent the intra prediction unit 240 and the adder 255. Regarding functions related to transformation, the processing unit may represent the transform unit 130 and the inverse transform unit 170, and may represent the inverse transformation unit 230. Regarding functions related to quantization, the processing unit may represent the quantization unit 140 and the inverse quantization unit 160, and may represent the inverse quantization unit 220. With respect to functions related to entropy encoding and/or entropy decoding, the processing unit may represent the entropy encoding unit 150 and/or the entropy decoding unit 210. With respect to functions related to filtering, the processing unit may represent the filter unit 180 and/or the filter unit 260. Regarding functions related to reference pictures, the processing unit may represent the reference picture buffer 190 and/or the reference picture buffer 270.
Fig. 18 illustrates a partitioning method for a block according to an example.
The block 1800 may be partitioned using a Quadtree (QT) partitioning method, a Binary Tree (BT) partitioning method, or a Ternary Tree (TT) partitioning method. Further, the BT partitioning method and the TT partitioning method may be applied horizontally or vertically.
Block 1800 may be a Coding Tree Unit (CTU) or a Coding Unit (CU).
In fig. 18, a block 1810 partitioned using the QT partition method, a block 1820 partitioned using the horizontal BT partition method, a block 1830 partitioned using the vertical BT partition method, a block 1840 partitioned using the horizontal TT partition method, and a block 1850 partitioned using the vertical TT partition method are shown.
With the above-described partitioning, not only square blocks but also rectangular blocks can be generated and used.
The above-described partitioning method can be applied recursively. In other words, CTUs and CUs may be partitioned recursively. Hereinafter, a block, CTU, CU, etc. may be referred to as a target block.
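As an illustration of the partitioning methods described above, the following Python sketch enumerates the sub-blocks produced by each partitioning method and applies them recursively. All names, the (x, y, w, h) block representation, the 1:2:1 TT split ratio, and the minimum block size are illustrative assumptions, not part of the embodiment.

    # Illustrative sketch of the QT/BT/TT partitioning methods; a block is
    # an (x, y, w, h) tuple, and the 1:2:1 TT split ratio is an assumption.
    def split(block, mode):
        x, y, w, h = block
        if mode == "QT":    # four equal quadrants
            return [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                    (x, y + h // 2, w // 2, h // 2),
                    (x + w // 2, y + h // 2, w // 2, h // 2)]
        if mode == "BT_H":  # horizontal binary split (top / bottom)
            return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
        if mode == "BT_V":  # vertical binary split (left / right)
            return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
        if mode == "TT_H":  # horizontal ternary split (1:2:1)
            return [(x, y, w, h // 4), (x, y + h // 4, w, h // 2),
                    (x, y + 3 * h // 4, w, h // 4)]
        if mode == "TT_V":  # vertical ternary split (1:2:1)
            return [(x, y, w // 4, h), (x + w // 4, y, w // 2, h),
                    (x + 3 * w // 4, y, w // 4, h)]
        return [block]      # non-partition mode

    # Recursive application: a CTU may be split, and each sub-block split again.
    def partition_recursively(block, choose_mode, min_size=8):
        mode = choose_mode(block)
        if mode is None or block[2] <= min_size or block[3] <= min_size:
            return [block]  # leaf block (e.g., a CU)
        leaves = []
        for child in split(block, mode):
            leaves += partition_recursively(child, choose_mode, min_size)
        return leaves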
Fig. 19 shows an angular distribution in a geometric partitioning pattern according to an example.
In fig. 19, an angular distribution that the encoding device 1600 and the decoding device 1700 may select among modes included in a Geometric Partition Mode (GPM) is illustrated.
The Geometric Partition Mode (GPM) may be a mode in which a target block is partitioned into two sub-blocks using geometric partitioning, and (different) predictions are applied to the two sub-blocks obtained by the partitioning, respectively. Each prediction may be intra-prediction and/or inter-prediction.
Here, each sub-block may have a rectangular shape, a triangular shape, or a trapezoidal shape.
In general, a target block may be partitioned into two sub-blocks having a rectangular shape. In the GPM, a target block may be partitioned into 1) two rectangular sub-blocks, 2) a trapezoid sub-block and a triangle sub-block, or 3) two trapezoid sub-blocks according to a mode of the GPM.
The target block may be partitioned along the boundary. In other words, the target block may be partitioned into two sub-blocks along a partition boundary. Alternatively, the boundary between two sub-blocks may define a partition.
Two different predictors may be generated for the two sub-blocks, respectively. The two predictors may be generated using different pieces of motion information. A final predictor for the target block may be generated as a weighted sum of the two predictors.
For example, each predictor may be generated using neighboring blocks of the target block and/or reference blocks for the target block, as described above in the aforementioned intra prediction mode, merge mode, AMVP mode, etc.
For example, as described above in the aforementioned intra prediction mode, merge mode, AMVP mode, etc., each predictor may refer to an (intermediate) prediction block generated using a neighboring block of the target block and/or a reference block for the target block, or may refer to information used to generate such a prediction block.
Different predictions may be used to generate the two predictors for the target block. The two predictors may include a first predictor and a second predictor.
Of the two predictors, the first predictor may be generated using intra prediction. For intra prediction of the first predictor, an intra prediction mode of the first predictor may be determined. Information for determining the intra prediction mode of the first predictor may be signaled through a bitstream.
For each of the two predictors, information specifying a prediction type for the corresponding predictor (e.g., a flag indicating one of intra-prediction and inter-prediction) may be signaled via a bitstream.
In an embodiment, when intra prediction is used for the respective predictor, an intra prediction mode available for intra prediction may be predefined. For example, a predefined intra prediction mode that may be used in intra prediction of a corresponding predictor may be limited according to the type of GPM for the target block.
For example, the predefined intra prediction mode may be determined according to the shape of the geometric partition. The predefined intra prediction mode may be determined by partition lines of a geometric partition.
For example, the predefined intra-prediction modes may include 1) modes parallel to the geometric partition and/or partition lines of the partition, and 2) modes perpendicular to the partition lines. Alternatively, the predefined intra prediction modes may include 1) modes parallel to geometric partitions and/or partition lines of partitions, 2) modes perpendicular to partition lines, and 3) planar modes.
The geometric partition and/or the partition line of the partition may refer to a boundary for partitioning the target block into two blocks. The terms "partition line" and "boundary" may be used interchangeably with each other.
In an embodiment, when one of the predefined intra prediction modes for a predictor is used, information specifying the intra prediction mode of the predictor among the predefined intra prediction modes may be signaled through a bitstream.
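The restriction of the intra prediction modes to those parallel and perpendicular to the partition line (plus, optionally, the planar mode) can be sketched as follows. This is a minimal Python illustration; the 65-mode directional mapping, the mode indices, and the helper names are assumptions, not the normative mapping of any codec.

    PLANAR = 0

    # Hypothetical mapping from an angle in [0, 180) degrees to one of 65
    # directional modes indexed 2..66; illustrative only.
    def angle_to_directional_mode(angle_deg):
        return 2 + round((angle_deg % 180.0) / 180.0 * 64) % 65

    # Candidate intra modes for a GPM predictor: the mode parallel to the
    # partition line, the mode perpendicular to it, and optionally planar.
    def predefined_intra_modes(partition_angle_deg, include_planar=True):
        modes = [angle_to_directional_mode(partition_angle_deg),
                 angle_to_directional_mode(partition_angle_deg + 90.0)]
        if include_planar:
            modes.append(PLANAR)
        return modes

    # The signaled information then only needs an index into this short list.
    candidates = predefined_intra_modes(45.0)
    signaled_index = 1                # assumed to be parsed from the bitstream
    intra_mode = candidates[signaled_index]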
Of the two predictors, inter prediction may be used to generate a second predictor. For inter prediction of the second predictor, an inter prediction mode (e.g., merge mode, AMVP mode, etc.) of the second predictor may be determined. Information for determining the inter prediction mode of the second predictor may be signaled through a bitstream.
Alternatively, two predictors may be generated separately using intra prediction.
Alternatively, two predictors may be generated separately using inter prediction.
The geometric partitions in the GPM may be specified by θ (Theta) and ρ (Rho).
θ may be an angle. θ may be an angle between a line passing through the center of the target block and perpendicular to the boundary and a horizontal line passing through the center of the target block. Hereinafter, the term "angle" may refer to θ.
ρ may be the (shortest) distance between the center of the target block and the boundary. Alternatively, ρ may be the distance, measured along the line that passes through the center of the target block and is perpendicular to the boundary, between the center of the target block and the point at which that line intersects the boundary. Hereinafter, the term "distance" may refer to ρ.
θ may be one of 20 predefined angles.
In fig. 19, a distribution of 20 predefined angles is shown. The distribution of predefined angles in fig. 19 is merely an example, and the predefined angles may be different from those illustrated in fig. 19.
ρ may be one of four predefined distances. The predefined distance may vary depending on the size of the target block. Alternatively, the predefined distance may be determined according to the size of the target block.
A predefined number of modes in the GPM may be determined from θ and ρ. For example, 64 modes in the GPM may be defined and used according to 20 predefined angles of θ and four predefined distances of ρ. (The number of modes may be smaller than the product of the numbers of angles and distances when redundant angle-distance combinations are excluded.)
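Under these definitions, a GPM mode index can be viewed as an enumeration of (angle index, distance index) pairs from which excluded combinations have been removed. The following Python sketch illustrates this bookkeeping; the exclusion predicate is a placeholder assumption and does not reproduce the exact mode set of any codec.

    NUM_ANGLES = 20
    NUM_DISTANCES = 4

    # Placeholder exclusion rule: the real rule is codec-specific and is only
    # assumed here to show why fewer than 20 * 4 = 80 modes may remain.
    def is_excluded(angle_idx, distance_idx):
        return distance_idx == 0 and angle_idx >= NUM_ANGLES // 2

    GPM_MODES = [(a, d)
                 for a in range(NUM_ANGLES)
                 for d in range(NUM_DISTANCES)
                 if not is_excluded(a, d)]

    def mode_to_angle_distance(mode_idx):
        return GPM_MODES[mode_idx]          # (angle index, distance index)

    def angle_distance_to_mode(angle_idx, distance_idx):
        return GPM_MODES.index((angle_idx, distance_idx))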
In an embodiment, the distribution of angles may be used in a predefined form by the encoding device 1600 and the decoding device 1700.
In an embodiment, the encoding device 1600 may derive an optimal distribution. Distribution information indicating the derived distribution may be signaled from the encoding device 1600 to the decoding device 1700. When signaling is applied, the distribution information may be encoded by the encoding device 1600, and the encoded distribution information may be signaled from the encoding device 1600 to the decoding device 1700. The encoded distribution information may be decoded by the decoding apparatus 1700 into original distribution information.
Hereinafter, the term "mode" of the GPM may have the same meaning as "shape" of the GPM or "boundary" of the GPM, and the terms "mode", "shape" and "boundary" may be used interchangeably with each other.
Hereinafter, the term "pattern" of a GPM may refer to values and/or indices for determining and/or identifying the shape and/or boundaries of the GPM.
The partitioning in the GPM may be conceptual. At least some regions of sub-blocks in the GPM may overlap each other.
In fact, the target block and each sub-block may be equal in size to each other during the processing of the encoding apparatus 1600 and the decoding apparatus 1700.
The target block may be a weighted sum of the first sub-block and the second sub-block.
The weight of a specific region corresponding to at least part of the first sub-block may be 0. In addition, the weight of the specific region corresponding to at least part of the second sub-block may be 1. Accordingly, only the value of the second sub-block may be reflected in the specific region corresponding to at least part of the target block, and the value of the first sub-block may not be reflected therein. In this regard, the first sub-block may be considered to be partitioned from the target block such that the first sub-block does not include the specific region corresponding to the at least part.
On the other hand, the weight of a specific region corresponding to at least part of the second sub-block may be 0. In addition, the weight of the specific region corresponding to at least part of the first sub-block may be 1. Thus, only the value of the first sub-block may be reflected in the specific region corresponding to at least part of the target block, and the value of the second sub-block may not be reflected therein. In this sense, the second sub-block may be considered to be partitioned from the target block such that the second sub-block does not include the specific region corresponding to the at least part.
In a specific pixel of the target block, the first weight of the first sub-block corresponding to the specific pixel may be k. k may be a real number equal to or greater than 0 and less than or equal to 1. The second weight of the second sub-block corresponding to the specific pixel may be 1-k.
Alternatively, in a specific pixel of the target block, the sum of the first weight of the first sub-block corresponding to the specific pixel and the second weight of the second sub-block corresponding to the specific pixel may be 1.
The first and second weights of the target block corresponding to each pixel may be determined based on the boundary of the GPM. For example, the first weight and the second weight of the target block corresponding to each pixel may be determined according to a distance from the boundary of the GPM to the corresponding pixel. For example, the first weight and the second weight may be 1/2 when a particular pixel of the target block is located at a boundary.
When the distance between the specific pixel of the target block and the boundary of the GPM is equal to or greater than the reference value, one of the first weight and the second weight may be 1, and the other may be 0. In other words, the boundary may be a point at which the value of the first sub-block and the value of the second sub-block are mixed with each other. In the region where the distance from the boundary of the target block is smaller than the reference value, a weighted average of the values of the first sub-block and the second sub-block may be used.
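The per-pixel weighting described above can be sketched as follows in Python. This assumes the boundary is the line cos(θ)·x + sin(θ)·y = ρ in coordinates centered on the block, and a linear blending ramp of half-width "ramp" around the boundary; actual codecs use integer arithmetic and fixed weight tables, so this is an illustration only.

    import numpy as np

    # Per-pixel GPM blending weights from the signed distance to the boundary.
    def gpm_weights(w, h, theta, rho, ramp=2.0):
        ys, xs = np.mgrid[0:h, 0:w]
        # Signed distance from each pixel to the partition boundary, in
        # coordinates centered on the target block.
        d = (np.cos(theta) * (xs - (w - 1) / 2.0)
             + np.sin(theta) * (ys - (h - 1) / 2.0) - rho)
        # First weight: 1 on one side, 0 on the other, linear blend where
        # |d| < ramp; at the boundary (d == 0) both weights are 1/2.
        w1 = np.clip(0.5 + d / (2.0 * ramp), 0.0, 1.0)
        return w1, 1.0 - w1                 # weights sum to 1 per pixel

    # The target block as a weighted sum of the two (equal-sized) predictors.
    def gpm_blend(pred1, pred2, theta, rho):
        h, w = pred1.shape
        w1, w2 = gpm_weights(w, h, theta, rho)
        return w1 * pred1 + w2 * pred2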
Such a GPM and a partition of a target block according to the GPM may provide more accurate predictions for various angles in boundary portions of objects within an actual image.
Fig. 20 shows a first example of partitioning in a geometric partitioning mode.
Fig. 21 shows a second example of partitioning in geometric partitioning mode.
In fig. 20 and 21, modes and/or boundaries that may be selected in the GPM according to θ and ρ are illustrated. Alternatively, in fig. 20 and 21, θ and ρ of modes that can be selected as GPM are illustrated.
As illustrated in fig. 20 and 21, a partition boundary may be determined according to θ and ρ, and a target block may be partitioned into two sub-blocks along the boundary.
Fig. 22 shows boundaries of geometric partition modes that can be selected by a ρ value depending on one θ in the target block.
Fig. 23 shows boundaries in a geometric partition mode that can be selected by a ρ value that depends on another θ in the target block.
In a single target block (or CU), boundaries may be specified according to the ρ values corresponding to one θ.
The number of values that ρ can have for a particular θ can be changed. In fig. 22 and 23, five values of ρ for a specific θ are available, and boundaries corresponding to the five available values are shown.
The number of modes of the GPM that can be selected at one angle may be N.
For example, N may be a value predefined by encoding device 1600 and/or decoding device 1700.
For example, the encoding device 1600 may calculate and determine an optimal N. N may be signaled from the encoding device 1600 to the decoding device 1700. When signaling is applied, N may be encoded by encoding device 1600, and the encoded N may be signaled from encoding device 1600 to decoding device 1700. The encoded N may be decoded by decoding device 1700 to the original N.
Fig. 24 is a flowchart of a process of performing determination of partition mode and prediction mode for a target block according to an embodiment.
The target block may be a CTU.
Encoding of the target block may be performed at the following steps 2410, 2420 and 2430.
The following steps 2410, 2420 and 2430 may be performed by a processing unit.
In step 2410, a prediction for the target block may be performed. The target block may also be a block and/or sub-block generated by partitioning the CTU.
The prediction may be intra-prediction and/or inter-prediction.
Two or more predictions may be applied to the target block.
In an embodiment, a target block may be partitioned into multiple sub-blocks. Here, "multiple" may indicate an integer equal to or greater than 2, such as 2, 3, 4, or 5 or more.
Alternatively, the target block may include a plurality of sub-blocks. For example, the plurality of sub-blocks may include a first sub-block and a second sub-block.
One of intra prediction and inter prediction may be applied to each of the plurality of sub-blocks.
The prediction types for the plurality of sub-blocks may be different from each other. For example, inter prediction may be applied to a first sub-block among a plurality of sub-blocks, and intra prediction may be applied to a second sub-block.
Alternatively, different intra predictions (i.e., different intra prediction modes) may be applied to the plurality of sub-blocks, respectively.
Alternatively, different inter predictions (i.e., different merge indexes for merge lists in merge mode) may be applied to the plurality of sub-blocks, respectively.
In an embodiment, the target block may be a weighted sum of a plurality of blocks. For example, the target block may be generated via prediction using the GPM. The plurality of blocks may include a first block and a second block.
The size of each of the plurality of blocks may be equal to the size of the target block. The region of each of the plurality of blocks may be equal to the region of the target block. Alternatively, the region of each of the plurality of blocks may also be a part of the region of the target block.
One of intra prediction and inter prediction may be applied to each of the plurality of blocks.
The prediction types for the plurality of blocks may be different from each other. For example, inter prediction may be applied to a first block among a plurality of blocks, and intra prediction may be applied to a second block.
Alternatively, different intra predictions (i.e., different intra prediction modes) may be applied to the plurality of blocks, respectively.
Alternatively, different inter predictions (i.e., different merge indexes for merge lists in merge mode) may be applied to the plurality of blocks, respectively.
For each of the plurality of blocks, information specifying a prediction type for the corresponding block (e.g., a flag indicating one of intra prediction and inter prediction) may be signaled through the bitstream.
In an embodiment, when intra prediction is used for a sub-block or block, an intra prediction mode available for intra prediction may be predefined.
For example, the predefined intra prediction modes that may be used in intra prediction for a sub-block or block may be limited according to the partition of the target block and/or the shape of the GPM.
In an embodiment, when one of the predefined intra prediction modes for a sub-block or block is used, information for specifying the intra prediction mode of the corresponding block in the predefined intra prediction modes may be signaled through the bitstream.
For example, the predefined intra prediction mode may be determined according to a geometric partition and/or a shape of the partition. The predefined intra prediction mode may be determined by a geometric partition and/or a partition line of the partition.
For example, the predefined intra-prediction modes may include 1) modes parallel to the geometric partition and/or partition lines of the partition, and 2) modes perpendicular to the partition lines. Alternatively, the predefined intra prediction modes may include 1) modes parallel to geometric partitions and/or partition lines of partitions, 2) modes perpendicular to partition lines, and 3) planar modes.
The geometric partition and/or the partition line of the partition may refer to a boundary for partitioning the target block into two sub-blocks (or two blocks). The terms "partition line" and "boundary" may be used interchangeably with each other.
In an embodiment, when an inter mode is used for a sub-block or block, information for specifying the inter mode of the block may be signaled through a bitstream.
At step 2420, a determination may be made as to whether a next partition mode exists among the plurality of available partition modes.
For example, the plurality of available partition modes may include 1) QT partition method, 2) horizontal BT partition method, 3) vertical BT partition method, 4) horizontal TT partition method, and 5) vertical TT partition method.
For example, the partition mode in the first execution phase at step 2410 may be a non-partition mode, and the next partition mode in the first execution phase at step 2420 may be a QT partition method.
When the next partition mode exists, the partition may be applied to the target block, and step 2410 may be iterated.
When the next partition mode does not exist, step 2430 may be performed.
In step 2430, when there are no more partition modes, an optimal prediction mode for the target block and/or sub-block may be determined among those obtained with the plurality of available partition modes.
The optimal prediction mode may be determined based on the rate-distortion costs of the plurality of available partition modes. The optimal prediction mode may be the prediction mode having the lowest rate-distortion cost among the plurality of available partition modes.
As will be described later, partition modes that satisfy a specific condition may be excluded from a plurality of available partition modes, and the rate-distortion cost may be calculated only for partition modes that are not excluded.
In steps 2410, 2420 and 2430, an (optimal) partition mode for the CTU may be determined, and an (optimal) prediction mode for the target block and/or sub-block under the determined partition mode may be adaptively determined.
The process for determining the optimal prediction mode may be a process of comparing rate-distortion costs with each other.
Through steps 2410, 2420 and 2430, encoding of the CTU may be completed.
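The loop of steps 2410 to 2430 can be summarized by the following sketch, in which every available partition mode is tried, prediction is performed for the resulting blocks, and the candidate with the lowest rate-distortion cost is kept. The helpers split_fn, predict, and rd_cost are hypothetical placeholders for encoder internals, not part of the embodiment.

    # None stands for the non-partition mode of the first execution phase.
    PARTITION_MODES = [None, "QT", "BT_H", "BT_V", "TT_H", "TT_V"]

    def search_best_mode(block, split_fn, predict, rd_cost):
        best_cost, best_mode, best_preds = None, None, None
        for partition_mode in PARTITION_MODES:            # step 2420
            # split_fn is assumed to return [block] for the None mode.
            sub_blocks = split_fn(block, partition_mode)
            preds = [predict(b) for b in sub_blocks]      # step 2410
            cost = sum(rd_cost(b, p) for b, p in zip(sub_blocks, preds))
            if best_cost is None or cost < best_cost:     # step 2430
                best_cost, best_mode, best_preds = cost, partition_mode, preds
        return best_mode, best_preds, best_cost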
Fig. 25 is a flowchart of a process of determining a partition mode and a prediction mode of a target block using texture attributes of the target block according to an embodiment.
The encoding apparatus 1600 and the decoding apparatus 1700 may perform intra prediction or inter prediction for a target block, and thereafter partition the target block. In this case, the texture property of the target block may be used to limit the number of partitions of the target block and cases for the partitions.
The encoding device 1600 and the decoding device 1700 may use texture properties of the target block to determine a partition mode for the target block. In other words, texture properties of the target block may be used to exclude some of the (all or available) partition modes from encoding and/or decoding the target block.
Here, excluding may mean excluding the corresponding pattern from the calculation target of the rate-distortion cost. Some partition modes may be excluded from processing of the search rate distortion costs corresponding to encoding and/or decoding the target block. By this exclusion, the rate-distortion cost search may be performed only for a smaller number of partition modes, rather than all partition modes available for the target block.
Alternatively, when the partition mode information indicates the partition mode of the target block, the excluded partition mode may not be included in the target indicated by the partition mode information. Therefore, by excluding, the range of values in the partition mode information can be reduced.
Texture attributes may include directionality in a target block. In addition, the directionality of the target block may also be derived using the Hough transform, which will be described later.
The directionality can be determined by a straight line detected in the target block.
Texture attributes may include one or more values, such as edges, variance, and average.
Encoding of the target block may be performed at the following steps 2510, 2520, 2525, 2530, 2535, 2540, 2545, 2550, 2555, 2560, 2565, 2570, 2575, 2580 and 2590.
Step 2510 may correspond to step 2410 described above with reference to fig. 24.
At step 2510, a prediction for the target block may be performed. The target block may be a block and/or sub-block generated by partitioning the CTU.
The prediction may be intra-prediction and/or inter-prediction.
Step 2420 described above with reference to fig. 24 may include step 2520, step 2525, step 2530, step 2535, step 2540, step 2545, step 2550, step 2555, step 2560, step 2565, step 2570, step 2575, and step 2580.
At steps 2520, 2525, 2530, 2535, 2540, 2545, 2550, 2555, 2560, 2565, 2570 and 2575, a partition mode of a target block of the plurality of available partition modes may be restricted based on texture properties of the target block.
Here, the texture property of the target block may include a detected edge in the target block and an edge map of the target block detected according to the detected edge.
Here, restricting the partition modes may mean either allowing only at least one specific partition mode to be selected from among (all of) the plurality of available partition modes, or excluding at least one specific partition mode from the selection.
At step 2520, edges may be detected in the target block.
Texture attributes of a target block may include edges in the target block.
The edges and/or edge maps may be detected by one or more methods, such as 1) the Sobel operation, 2) the Laplacian operation, and 3) Canny edge detection.
An edge map for the target block may be derived from the detected edges.
Through the use of edge maps, the amount of unnecessary data for straight line detection can be reduced while preserving important attributes and features in the block.
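As one possible realization of step 2520, the following sketch computes a Sobel gradient-magnitude edge map with plain NumPy. The kernel choice and border padding are conventional assumptions, and the Laplacian or Canny methods mentioned above could be substituted.

    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    SOBEL_Y = SOBEL_X.T

    def _filter3x3(img, kernel):
        h, w = img.shape
        pad = np.pad(img, 1, mode="edge")   # replicate border pixels
        out = np.zeros((h, w), dtype=np.float64)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * pad[dy:dy + h, dx:dx + w]
        return out

    # Edge map of the target block: per-pixel gradient magnitude.
    def edge_map(block):
        gx = _filter3x3(block.astype(np.float64), SOBEL_X)
        gy = _filter3x3(block.astype(np.float64), SOBEL_Y)
        return np.hypot(gx, gy)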
At step 2525, after an edge is detected, the value of the edge may be compared to a threshold T.
For example, the threshold T may be a value predefined by the encoding device 1600 and/or the decoding device 1700.
For example, the encoding device 1600 may calculate and determine an optimal threshold T, which may be signaled from the encoding device 1600 to the decoding device 1700. When signaling is applied, the threshold T may be encoded by the encoding device 1600 and the encoded threshold T may be signaled from the encoding device 1600 to the decoding device 1700. The encoded threshold T may be decoded by the decoding device 1700 to the original threshold T.
The threshold T may be a cumulative frequency threshold determined by the encoding device 1600.
The texture property of the target block may include a comparison between the value of the edge and a threshold T.
When the magnitude of the value of the edge is greater than the threshold T, step 2530 may be performed.
When the magnitude of the value of the edge is less than or equal to the threshold T, step 2590 may be performed without applying the partition to the target block.
A case where the magnitude of the value of the edge is less than or equal to the threshold T may mean that the characteristics of the target block or the sub-blocks of the target block are substantially similar to each other. Since the characteristics of the target block or the sub-blocks of the target block are substantially similar to each other, it may not be necessary to partition the target block into sub-blocks. Thus, when the characteristics of the sub-blocks of the target block are substantially similar to each other, the partition may not be applied to the target block, and step 2590 may be performed.
In step 2530, a straight-line search (line search) may be performed on the target block. A line search in the edge map may be performed to detect straight lines in the target block.
As a detection method in the line search, the Hough transform applied to the edge map of the target block may be used. When detecting a straight line by performing the Hough transform on the target block, variables (such as the probability and the cumulative frequency threshold of the Hough transform to be used for detection) may be adaptively determined.
When the cumulative frequency threshold value is a fixed value and a large value is used for this purpose, even if the target block has directivity, a straight line may not be detected in the target block having a relatively small size. On the other hand, when a small value is used, even if the target block does not have directivity, a straight line may be detected in the target block having a relatively large size due to noise. Accordingly, the cumulative frequency threshold value may be adaptively determined based on the size (or horizontal length and/or vertical length) of the target block, as represented by the following equation 1.
[Equation 1]

((log2 M + log2 N) >> 1) + 1
Here, M may be a horizontal length of the target block. N may be the vertical length of the target block.
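A sketch of steps 2520 to 2530 with the adaptive cumulative frequency threshold of Equation 1 is shown below, using OpenCV's standard Hough transform as the line detector. The Canny thresholds and the assumption that the block is given as 8-bit grayscale samples are illustrative.

    import math
    import cv2  # OpenCV, used here only as one possible line detector

    def detect_lines(block):
        # block: 8-bit grayscale samples of the target block (assumed).
        n, m = block.shape                  # N: vertical, M: horizontal length
        # Equation 1: ((log2 M + log2 N) >> 1) + 1
        acc_threshold = ((int(math.log2(m)) + int(math.log2(n))) >> 1) + 1
        edges = cv2.Canny(block, 100, 200)  # edge map (thresholds assumed)
        # Each detected line is returned as (rho, theta) in the Hough
        # parameterization; the accumulator threshold plays the role of the
        # cumulative frequency threshold.
        lines = cv2.HoughLines(edges, 1, math.pi / 180.0, acc_threshold)
        return [] if lines is None else [tuple(l[0]) for l in lines]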
The texture property of the target block may include one or more straight lines detected by line search and a type of the one or more straight lines.
One or more detected straight lines may be classified as 1) a horizontal straight line, 2) a vertical straight line, and 3) another straight line other than the vertical straight line and the horizontal straight line, respectively. In other words, the further straight line may be a straight line in other directions which are not horizontal or vertical. The further straight line may be a 45 ° diagonal and/or a 135 ° diagonal.
In step 2535, it may be determined whether a straight line exists in the target block through a line search.
Here, the case where there is a straight line (in a specific direction) may mean that a straight line (in a specific direction) has been detected by a line search. Here, the case where there is no straight line (in a specific direction) may mean that no straight line (in a specific direction) is detected by the line search.
When there is a straight line in the target block, step 2540 may be performed.
When there is no straight line in the target block, step 2575 may be performed.
In step 2540, it may be determined whether at least one of a vertical straight line and a horizontal straight line is absent from the target block.
When at least one of the vertical line or the horizontal line is not detected in the target block, step 2545 may be performed.
When both a vertical line and a horizontal line are detected in the target block, step 2570 may be performed.
In step 2545, it may be determined whether a vertical straight line exists in the target block.
When a vertical straight line is detected in the target block (i.e., when a vertical straight line is detected in the target block and a horizontal straight line is not detected), step 2565 may be performed.
When a vertical straight line is not detected in the target block (i.e., when a vertical straight line is not detected in the target block, but at least one of a horizontal straight line and another straight line is detected), step 2550 may be performed.
In step 2550, it may be determined whether a horizontal straight line exists in the target block.
When a horizontal straight line is detected in the target block (i.e., when a horizontal straight line is detected in the target block and a vertical straight line is not detected), step 2560 may be performed.
When no horizontal straight line is detected in the target block (i.e., when neither horizontal straight line nor vertical straight line is detected in the target block, but another straight line is detected in the target block), step 2555 may be performed.
In step 2555, horizontal and vertical partitions of the target block may be skipped. Due to this skipping, partitioning may be restricted such that (only) QT partitioning methods are used in the target block.
In the case where neither a horizontal line nor a vertical line is detected and another line is detected, it can be considered that both the horizontal directivity and the vertical directivity within the target block are low. Because of this consideration, when neither a horizontal line nor a vertical line is detected in the target block and another line is detected, the horizontal and vertical partitions of the target block may be skipped. In other words, the partitioning may be limited such that, among a plurality of partition modes available for the target block, a horizontal partition method and a vertical partition method are not used, and another partition method other than the horizontal partition method and the vertical partition method is used.
Here, the horizontal partition method may include at least some of a horizontal BT partition method and a horizontal TT partition method. The vertical partition methods may include at least some of a vertical BT partition method and a vertical TT partition method. The additional partitioning method other than the horizontal partitioning method and the vertical partitioning method may be a QT partitioning method.
In step 2560, non-vertical partitions may be applied to target blocks. In other words, the vertical partition of the target block may be skipped.
Since a vertical straight line is not detected in the target block and only a horizontal straight line is detected, the vertical directionality in the target block can be considered to be high. Because of this consideration, when a vertical straight line is not detected in the target block and a horizontal straight line is detected, the vertical partition of the target block may be skipped. In other words, the partitioning may be limited such that, among the plurality of partition modes available for the target block, the vertical partition method is not used for the target block, and partition methods other than the vertical partition method are used.
Here, the vertical partition method may include at least some of a vertical BT partition method and a vertical TT partition method. Additional partitioning methods other than the vertical partitioning method may include at least some of a QT partitioning method, a horizontal BT partitioning method, and a horizontal TT partitioning method.
In step 2565, non-horizontal partitions may be applied to target blocks. In other words, horizontal partitions of the target block may be skipped.
Since a horizontal straight line is not detected in the target block and only a vertical straight line is detected, the horizontal directionality in the target block can be considered to be high. Because of this consideration, when a horizontal straight line is not detected in the target block and a vertical straight line is detected, the horizontal partition of the target block may be skipped. In other words, the partitioning may be limited such that, among the plurality of partition modes available for the target block, the horizontal partition method is not used for the target block, and partition methods other than the horizontal partition method are used.
Here, the horizontal partition method may include at least some of a horizontal BT partition method and a horizontal TT partition method. Additional partitioning methods other than the horizontal partitioning method may include at least some of a QT partitioning method, a vertical BT partitioning method, and a vertical TT partitioning method.
At step 2570, all of the multiple partitioning methods available to the target block may be used.
Since both horizontal and vertical lines are detected in the target block, texture properties in the target block can be considered complex. Thus, partitioning can be performed on the target block without limitation in all directions and partition sizes. Thus, without limiting the partition modes for the target block, all of the multiple partition modes available for the target block may be used.
In step 2575, the partition may not be used for the target block.
When no straight line is detected in the target block, the features of the target block can be considered to be substantially similar to each other. Because of this consideration, the partition processing for the target block may not be performed. Thus, the partitioning may be restricted such that none of the plurality of partition modes is used.
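The restrictions of steps 2540 to 2575 can be condensed into the following sketch, which maps the detected line types to the set of partition modes that remain candidates for the rate-distortion cost search. The mode names follow the earlier partitioning sketch and are illustrative.

    # Allowed partition modes given which straight-line types were detected.
    def allowed_partition_modes(has_horizontal, has_vertical, has_other):
        if not (has_horizontal or has_vertical or has_other):
            return []                            # step 2575: no partition
        if has_horizontal and has_vertical:
            return ["QT", "BT_H", "BT_V", "TT_H", "TT_V"]  # step 2570: all
        if has_vertical and not has_horizontal:
            return ["QT", "BT_V", "TT_V"]        # step 2565: skip horizontal
        if has_horizontal and not has_vertical:
            return ["QT", "BT_H", "TT_H"]        # step 2560: skip vertical
        return ["QT"]                            # step 2555: only QT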
In step 2580, it may be determined whether a next partition mode exists among a plurality of available partition modes.
For example, the plurality of available partition modes may be those partition modes, among 1) the QT partition method, 2) the horizontal BT partition method, 3) the vertical BT partition method, 4) the horizontal TT partition method, and 5) the vertical TT partition method, that are not restricted at steps 2520, 2525, 2530, 2535, 2540, 2545, 2550, 2555, 2560, 2565, 2570, and 2575.
For example, the partition mode in the first execution phase at step 2510 may be a non-partition method and the next partition mode in the first execution phase at step 2580 may be a QT partition method.
When a next partition mode exists, the partition may be applied to the target block and step 2510 may be iterated.
When the next partition mode does not exist, step 2590 may be performed.
Step 2590 may correspond to step 2430.
In step 2590, when there are no more partition modes, an optimal prediction mode for the target block and/or sub-block may be determined.
FIG. 26 is a flowchart of a process for determining a GPM mode for a target block using texture attributes of the target block, according to an embodiment.
The encoding device 1600 and the decoding device 1700 may use texture properties of the target block to determine the mode of the GPM of the target block. In other words, texture attributes of the target block may be used to exclude some of the (all or available) modes of the GPM from encoding and/or decoding the target block.
Here, excluding may mean excluding the corresponding mode from the calculation targets of the rate-distortion cost. Some GPM modes may be excluded from the processing of the rate-distortion cost search corresponding to encoding and/or decoding the target block. By this exclusion, the rate-distortion cost search may be performed for fewer than the 64 GPM modes.
Alternatively, when the mode information of the GPM indicates the mode of the GPM for the target block, the excluded modes may not be included among the modes indicated by the mode information of the GPM. Therefore, by excluding, the range of values in the mode information of the GPM can be reduced.
Texture attributes may include directionality in a target block. The directionality of the target block may also be derived using the Hough transform, to be described later.
The directionality can be determined by a straight line detected in the target block.
Texture attributes may include one or more of values such as edges, variance, and average.
Step 2420 described above with reference to fig. 24 may include the following steps 2610, 2620, 2630, 2640, 2650, 2660, and 2670. Alternatively, steps 2610, 2620, 2630, 2640, 2650, 2660, and 2670 may be performed between steps 2410 and 2420 described above with reference to fig. 24.
At step 2610, step 2620, step 2630, step 2640, step 2650, step 2660, and step 2670, a mode of the GPM for the target block may be determined based on texture properties of the target block.
At step 2610, an edge may be detected in the target block.
Edges may be detected using methods such as 1) the Sobel operation, 2) the Laplacian operation, and 3) Canny edge detection.
An edge map for the target block may be derived from the detected edges.
Through the use of edge maps, the amount of unnecessary data for straight line detection can be reduced while preserving important attributes and features in the block.
At step 2620, the edges and the threshold T may be compared to each other.
For example, the threshold T may be a value predefined by the encoding device 1600 and/or the decoding device 1700.
For example, the encoding device 1600 may calculate and determine an optimal threshold T, which may be signaled from the encoding device 1600 to the decoding device 1700.
When signaling is applied, the threshold T may be encoded by the encoding device 1600 and the encoded threshold T may be signaled from the encoding device 1600 to the decoding device 1700. The encoded threshold T may be decoded by the decoding device 1700 to the original threshold T.
The texture properties of the target block may include a comparison between the edge and a threshold T.
When the magnitude of the value of the edge is greater than the threshold T, step 2630 may be performed.
When the magnitude of the value of the edge is less than or equal to the threshold T, the process may be terminated, or step 2420 may be performed without applying the GPM to the target block.
A case where the magnitude of the value of the edge is less than or equal to the threshold T may mean that the directivity of the target block is low. Because the directionality of the target block is low, the process may be terminated or step 2420 may be performed without applying the GPM to the target block.
In step 2630, straight line (line) detection for the target block may be performed.
As a detection method in line detection, the Hough transform applied to the edge map of the target block may be used. When detecting straight lines by performing the Hough transform on the target block, variables (such as the probability and the cumulative frequency threshold to be used for detection) may be adaptively determined.
When the cumulative frequency threshold value is a fixed value and a large value is used for this purpose, even if the target block has directivity, a straight line may not be detected in a target block having a relatively small size. On the other hand, when a small value is used, even if the target block does not have directivity, a straight line may be detected in a relatively large-sized target block due to noise. Accordingly, as described above with reference to equation 1, the cumulative frequency threshold value may be adaptively determined based on the size (or horizontal length and/or vertical length) of the target block.
In step 2640, it may be determined whether a straight line exists in the target block through line detection.
When a straight line is detected in the target block, step 2650 may be performed.
The detected straight lines may include one or more straight lines.
When no straight line is detected in the target block, the process may be terminated, or step 2420 may be performed.
The case where no straight line is detected in the target block may mean that the degree of directionality of the target block is low. Therefore, since the directionality of the target block is low, the GPM may not be applied to the target block, and the application of the mode of the GPM may be skipped.
Steps 2650, 2660, and 2670 may be performed on the detected one or more straight lines.
At step 2650, θ and ρ for the corresponding straight lines may be derived from each of the one or more straight lines.
In step 2660, a rate distortion cost search may be performed for θ and ρ for each line.
The mode of the GPM for each line may be selected according to θ and ρ of the corresponding line, and the rate-distortion cost of the mode for the GPM may be searched.
In an embodiment, when θ and ρ for each of the one or more lines are derived, the modes of the GPM for all of the one or more lines may be selected and the rate-distortion costs for the modes of the selected GPM may be searched.
Based on the derived θ and ρ, a restriction in the distribution of angles selectable by the GPM may be applied. For example, as described above with reference to fig. 19, the rate-distortion cost search process may be performed restrictively only in the mode of the GPM having θ and ρ corresponding to angles in the distribution.
In other words, as described above with reference to fig. 19, 20, 21, 22, and 23, 64 GPM patterns may be available according to 20 predefined angles for θ and four predefined distances for ρ. In an embodiment, θ and ρ for each of one or more straight lines may be obtained when deriving the one or more straight lines from the target block. The rate-distortion cost search may be performed only in modes of the GPM that depend on the θ and ρ pairs obtained for the one or more lines (rather than out of all 64 modes of the GPM).
In an embodiment, the θ value derived most frequently from the one or more straight lines of the target block may be identified. The modes of the GPM may be selected using that most frequently derived θ value and the ρ values of the straight lines having that θ. The rate-distortion costs may be searched in the selected modes of the GPM. In other words, the modes of the GPM for which the rate-distortion cost is searched may be limited to the GPM modes attributable to the most frequently derived θ value and the ρ values of the straight lines having that θ value.
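The restriction of the searched GPM modes to those attributable to the detected lines can be sketched as follows: each detected (θ, ρ) pair is quantized to the nearest predefined angle and distance, and, in the variant just described, only the most frequently derived θ is retained. The predefined angle and distance lists (and their units, which must match those of the detector) are assumptions.

    from collections import Counter

    def _nearest(value, candidates):
        return min(range(len(candidates)),
                   key=lambda i: abs(candidates[i] - value))

    def candidate_gpm_modes(lines, predefined_angles, predefined_distances):
        # lines: list of (theta, rho) pairs derived from the target block.
        if not lines:
            return [], []
        pairs = [(_nearest(theta, predefined_angles),
                  _nearest(rho, predefined_distances))
                 for theta, rho in lines]
        # Variant described above: keep only the most frequently derived
        # angle and the distances of the lines having that angle.
        top_angle = Counter(a for a, _ in pairs).most_common(1)[0][0]
        restricted = sorted({(a, d) for a, d in pairs if a == top_angle})
        return sorted(set(pairs)), restricted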
In step 2670, an optimal (best) rate-distortion cost may be detected among the rate-distortion costs searched in the selected mode of the GPM. Further, the line with the best rate distortion cost may be detected from one or more lines.
After the rate-distortion cost search process for the current target block is terminated, information about the detected straight line may be used for neighboring blocks adjacent to the target block.
Table 7 below illustrates the modes of GPM selected according to the type of straight line detected in the target block and the number of GPM modes.
TABLE 7

[The content of Table 7 is presented only as images in the original publication.]
FIG. 27 is a flowchart of a process for determining a mode of a GPM for a target block using texture attributes of the target block and prediction modes of neighboring blocks, according to an embodiment.
The target block may be a CU. The neighboring block may be a CU adjacent to the target block.
Step 2420 described above with reference to fig. 24 may include the following steps 2710, 2720, 2730, 2740, 2750, 2760, 2770, and 2780. Alternatively, steps 2710, 2720, 2730, 2740, 2750, 2760, 2770, and 2780 may be performed between steps 2410 and 2420 described above with reference to fig. 24.
At steps 2710, 2720, 2730, 2740, 2750, 2760, 2770, and 2780, a mode of GPM for a target block may be determined based on texture properties of the target block and prediction modes of neighboring blocks.
At step 2710, edges may be detected in the target block.
Edges may be detected using methods such as 1) the Sobel operation, 2) the Laplacian operation, and 3) Canny edge detection.
An edge map for the target block may be derived from the detected edges.
Through the use of edge maps, the amount of unnecessary data for straight line detection can be reduced while preserving important attributes and features in the block.
At step 2720, the edges and the threshold T may be compared to each other.
For example, the threshold T may be a value predefined by the encoding device 1600 and/or the decoding device 1700.
For example, the encoding device 1600 may calculate and determine an optimal threshold T, which may be signaled from the encoding device 1600 to the decoding device 1700.
The texture properties of the target block may include a comparison between the edge and a threshold T.
When the magnitude of the value of the edge is greater than the threshold T, step 2730 may be performed.
When the magnitude of the value of the edge is less than or equal to the threshold T, the process may be terminated, or step 2420 may be performed without applying the GPM to the target block.
A case where the magnitude of the value of the edge is less than or equal to the threshold T may mean that the directivity of the target block is low. Because the directionality of the target block is low, the process may be terminated or step 2420 may be performed without applying the GPM to the target block.
At step 2730, a straight line search (line search) may be performed on the target block.
As a detection method in the line search, the Hough transform applied to the edge map of the target block may be used. When detecting straight lines by performing the Hough transform on the target block, variables (such as the probability and the cumulative frequency threshold to be used for detection) may be adaptively determined.
When the cumulative frequency threshold value is a fixed value and a large value is used for this purpose, even if the target block has directivity, a straight line may not be detected in a target block having a relatively small size. On the other hand, when a small value is used, even if the target block does not have directivity, a straight line may be detected in the target block having a relatively large size due to noise. Accordingly, as described above with reference to equation 1, the cumulative frequency threshold value may be adaptively determined based on the size (or horizontal length and/or vertical length) of the target block.
At step 2740, it may be determined whether a straight line exists in the target block through line detection.
When there is a straight line in the target block, step 2750 may be performed.
The detected straight lines may include one or more straight lines.
In the case where there is no straight line in the target block, the process may be terminated, or step 2420 may be performed.
The case where no straight line is detected in the target block may mean that the degree of directionality of the target block is low. Therefore, since the directionality of the target block is low, the GPM may not be applied to the target block, and the application of the mode of the GPM may be skipped.
Steps 2750, 2760, 2770, and 2780 may be performed on the detected one or more straight lines.
At step 2750, θ and ρ for the respective straight lines may be derived from each of the one or more straight lines. The mode of the GPM may be selected according to θ and ρ of each line.
In step 2760, previously encoded and/or decoded information of neighboring blocks may be checked.
At step 2770, previously encoded and/or decoded information of neighboring blocks may be used to select a mode of the GPM that will perform a rate-distortion cost search.
The mode of the GPM that will perform the rate-distortion cost search may be determined based on the similarity.
The similarity may be obtained by comparison between pieces of prediction mode information of neighboring blocks. The neighboring blocks may be blocks using GPM. The prediction mode information may include a mode of the GPM.
The similarity may be determined and checked based on pieces of information of edges of neighboring blocks and/or pieces of information of straight lines detected in the neighboring blocks.
In other words, as described above with reference to fig. 19, 20, 21, 22, and 23, 64 GPM modes may be available according to 20 predefined angles for θ and four predefined distances for ρ.
In an embodiment, θ and ρ for each of one or more straight lines may be obtained when deriving the one or more straight lines from the target block. The rate-distortion cost search may be performed only in a mode of the GPM selected based on previously encoded and/or decoded information of neighboring blocks among modes of the GPM corresponding to θ and ρ pairs obtained from the one or more straight lines (instead of all 64 modes of the GPM).
In step 2780, an optimal (best) rate-distortion cost may be detected among the rate-distortion costs searched in the selected mode of the GPM. Further, the line with the best rate distortion cost may be detected from one or more lines.
After the rate-distortion cost search process for the current target block is terminated, information about the detected straight line may be used for neighboring blocks adjacent to the target block.
Fig. 28 is a flowchart illustrating a method for determining a pattern of a GPM in which a rate-distortion cost search is to be performed on a target block by comparing GPM patterns of neighboring blocks according to an example.
Each neighboring block may be a block encoded and/or decoded using a GPM.
The mode of the GPM that will perform the rate-distortion cost search on the target block may be determined by comparing the modes of the GPMs of the respective neighboring blocks with each other.
Step 2760, described above with reference to fig. 27, may include the following steps 2820, 2830, 2840, 2850 and 2860.
At step 2810, it may be checked whether a mode of the GPM is present for the neighboring blocks. In other words, it may be checked whether a block encoded and/or decoded in the GPM is present among the neighboring blocks.

Step 2820 may be performed when a mode of the GPM is present for the neighboring blocks (i.e., when a block encoded and/or decoded in the GPM is present among the neighboring blocks).

Step 2860 may be performed when no mode of the GPM is present for the neighboring blocks (i.e., when no block encoded and/or decoded in the GPM is present among the neighboring blocks).

In step 2820, it may be checked whether the number of modes of the GPM for the neighboring blocks is greater than 1. In other words, it may be checked whether two or more blocks encoded and/or decoded in the GPM are present among the neighboring blocks.

Step 2830 may be performed when the number of modes of the GPM for the neighboring blocks is greater than 1 (i.e., when the number of blocks encoded and/or decoded in the GPM among the neighboring blocks is 2 or more).

Step 2850 may be performed when the number of modes of the GPM for the neighboring blocks is not greater than 1 (i.e., when the number of blocks encoded and/or decoded in the GPM among the neighboring blocks is 1).

In step 2830, the similarities between the modes of the GPM for the neighboring blocks may be checked.

In figs. 29, 30, 31, and 32, which will be described later, the distribution of the angles in the angular modes of the GPM (described above with reference to fig. 19) is classified into four categories. In each of figs. 29, 30, 31, and 32, the angles indicated by solid lines may indicate one category.
The number of categories may be changed, and the number of angles and/or the types of angles included in each category may also be changed.
The modes of the GPM that can be used for the target block may be classified into n categories using the information of the straight lines detected in the target block, the information on the modes of the GPM for the neighboring blocks, and the like. n may be an integer of 2 or more.

The mode information may indicate the category, among the n categories, to which the mode used by the target block belongs. The mode information may be signaled from the encoding device 1600 to the decoding device 1700. When signaling is applied, the mode information may be encoded by the encoding device 1600, and the encoded mode information may be signaled from the encoding device 1600 to the decoding device 1700. The encoded mode information may be decoded into the original mode information by the decoding device 1700. Using the mode information, the decoding device 1700 may select the category used by the target block from among the n categories. Thereafter, prediction may be performed in the modes of the GPM corresponding to the selected category.

The categories may be classified into categories similar to the modes of the GPM for the neighboring blocks and categories dissimilar to the modes of the GPM for the neighboring blocks.

When there is similarity between the modes of the GPM for the neighboring blocks, the angles of the modes of the GPM may be classified into n categories using the information on the straight lines detected in the target block and the information on the modes of the GPM for the neighboring blocks. The rate-distortion cost search may be performed only in the modes of the GPM having angles corresponding to the selected category.

For example, as shown in figs. 29, 30, 31, and 32, the modes of the GPM for the neighboring blocks may be classified according to the angles of the modes. When the modes of the GPM for the neighboring blocks commonly belong to one of the categories shown in figs. 29, 30, 31, and 32, it may be determined that the modes of the GPM for the neighboring blocks are similar to each other. When the modes of the GPM for the neighboring blocks do not commonly belong to one of the categories shown in figs. 29, 30, 31, and 32, it may be determined that the modes of the GPM for the neighboring blocks are not similar to each other.

In step 2840, it may be checked whether the modes of the GPM for the neighboring blocks are similar to each other.

When the modes of the GPM for the neighboring blocks are similar to each other, step 2850 may be performed.

When the modes of the GPM for the neighboring blocks are not similar to each other, step 2860 may be performed.
At step 2850, a mode of the GPM to perform a rate-distortion cost search on the target block may be determined based on θ and ρ and the modes of the GPM for neighboring blocks.
At step 2860, a mode of the GPM to perform a rate-distortion cost search on the target block may be determined based on θ and ρ. Here, the mode of the GPM for the neighboring block may be independent of the determination of the mode of the GPM that will perform the rate-distortion cost search on the target block.
After performing step 2850 or step 2860, step 2770, described above with reference to fig. 27, may be performed.
Fig. 29 shows a first category of angles for a pattern of GPM according to an example.
Fig. 30 shows a second category of angles for a pattern of GPM according to an example.
Fig. 31 shows a third category of angles for a pattern of GPM according to an example.
Fig. 32 shows a fourth category of angles for a pattern of GPM according to an example.
In the drawings of fig. 29, 30, 31, and 32, the line indicated by the solid line may indicate an angle belonging to one category.
The directivity existing in the target block may be classified into horizontal directivity, vertical directivity, two main diagonal directivities (45 ° diagonal directivity and 135 ° diagonal directivity), and non-directivity.
Based on such classification of directivity, straight lines in the horizontal direction, the vertical direction, the 45 ° diagonal direction, and the 135 ° diagonal direction may be detected using hough transform for obtaining directivity of the target block, and when no straight line is detected, the target block may be regarded as non-directional.
As shown in fig. 29, 30, 31 and 32, the angles of the angle patterns of the GPM may be classified based on the detected angle of the straight line.
After the hough transform is performed on the target block based on this classification, the rate-distortion cost search is performed only in the modes of the GPM corresponding to the angles of the detected straight lines, thereby reducing the time required for encoding and/or decoding. For example, when there is no horizontal straight line in the target block, the horizontal directionality of the target block may be considered low, and thus the horizontal modes among the modes of the GPM may be skipped in the rate-distortion cost search.
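As a sketch of this direction-based skipping (the category contents below are illustrative placeholders, not the actual angle sets of figs. 29, 30, 31, and 32):

```python
# Illustrative placeholder categories: each detected line direction maps
# to a set of GPM angle indices (cf. the four categories of figs. 29-32).
ANGLE_CATEGORIES = {
    'horizontal': {0, 1, 19},
    'vertical': {9, 10, 11},
    'diag45': {4, 5, 6},
    'diag135': {14, 15, 16},
}

def angles_to_search(detected_directions):
    """Union of the categories matching the detected line directions;
    a category with no matching line (e.g. 'horizontal' when no
    horizontal straight line exists in the target block) is skipped
    from the rate-distortion cost search."""
    allowed = set()
    for direction in detected_directions:
        allowed |= ANGLE_CATEGORIES.get(direction, set())
    return allowed
```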
Fig. 33 is a flowchart illustrating a method for determining a mode of a GPM to perform a rate-distortion cost search on a target block based on prediction mode information of GPMs for neighboring blocks according to an example.
The previously encoded and/or decoded information of neighboring blocks may be used to determine the mode of the GPM that will perform a rate-distortion cost search on the target block. Here, in order to obtain the information of the neighboring blocks, the GPM and the angular modes among the intra prediction modes, both of which are angle-based prediction methods, may be used.
Step 2760, described above with reference to fig. 27, may include the following steps 3310, 3320, 3330, 3340, 3350, and 3360.
At step 3310, it may be checked whether there are neighboring blocks (at a particular location).
When there are neighboring blocks (at a particular location), step 3320 may be performed.
When there are no neighboring blocks (at a particular location), step 3310 may be iterated for the next location.
In an embodiment, the order of the particular positions of the neighboring blocks may be an order following the left, upper left, upper, and upper right positions of the target block. In other words, step 3310 may be iteratively performed in the order of a neighboring block adjacent to the left side of the target block, a neighboring block adjacent to the upper left position (diagonally) of the target block, a neighboring block adjacent to the upper side of the target block, and a neighboring block adjacent to the upper right position (diagonally) of the target block. The next position may be the position of the neighboring block that follows, in the above-described order, the neighboring block that is the processing target of the current step 3310.
In an embodiment, the next location may be determined using previously encoded and/or decoded information of the block at another location specified based on the location of the target block.
In an embodiment, the next location may be adaptively determined according to the location of the target block and/or the mode of the neighboring blocks. Alternatively, the neighboring block as the inspection target may be adaptively determined according to the position of the target block or the mode of the neighboring block.
For example, a block disposed at one of left, upper right, and lower left positions of the target block may be selected as a neighboring block (in order), and a pattern of the GPM to perform the rate-distortion cost search may be determined using pieces of information about the selected neighboring block.
In step 3320, it may be checked whether the corresponding neighboring block is a block encoded and/or decoded using GPM.
The above-described neighboring block information may include information indicating whether the neighboring block is a block encoded and/or decoded using GPM.
Step 3350 may be performed when the neighboring block is a block encoded and/or decoded using GPM.
Step 3330 may be performed when the neighboring block is not a block encoded and/or decoded using GPM.
In step 3330, it may be checked whether the corresponding neighboring block is a block encoded and/or decoded by intra prediction.
Step 3340 may be performed when the neighboring block is a block encoded and/or decoded by intra prediction.
When the neighboring block is not a block encoded and/or decoded by intra prediction, step 3310 may be performed iteratively for the next position.
In step 3340, it may be checked whether the neighboring block is a block encoded and/or decoded in an intra-predicted angular mode.
The angular mode may be a directional intra prediction mode. For example, the angular mode may indicate the remaining intra prediction modes other than the planar mode and the DC mode among all the intra prediction modes.
Step 3350 may be performed when the neighboring block is a block encoded and/or decoded in an intra-predicted angular mode.
Step 3310 may be performed iteratively for the next position when the neighboring block is not a block encoded and/or decoded in the intra-predicted angular mode.
At step 3350, mode indexes of neighboring blocks encoded and/or decoded in GPM or in an angular mode of intra prediction may be obtained. Here, the mode index may be information indicating a mode of the GPM or an angle mode of intra prediction.
At step 3360, it is determined whether the iterations of steps 3310, 3320, 3330, 3340, and 3350 for obtaining the mode index are terminated.
When the iteration has not been terminated (i.e., when the check for obtaining the mode indexes of all neighboring blocks has not been terminated), step 3310 may be performed again at the next location.
When the iteration is terminated (i.e., when the check for obtaining the mode indexes of all neighboring blocks is terminated), step 2770, described above with reference to fig. 27, may be performed.
When the iteration is terminated, pieces of information about all neighboring blocks of the target block may be obtained. Here, the information on the neighboring blocks may include a mode index for each neighboring block, and may include a mode of a GPM for the corresponding neighboring block and/or an angle of intra prediction of the corresponding neighboring block.
From the pieces of information obtained about neighboring blocks, the mode of the GPM that will perform the rate-distortion cost search can be determined.
Whether the information on each neighboring block is to be used for the rate-distortion cost search may be determined according to the difference between the value of the angle of the straight line detected in the target block and the value of the angle indicated by the information on the neighboring block.

In an embodiment, when the difference between the value of the angle of the straight line detected in the target block and the value of the angle indicated by the information on the neighboring block is large, the information on the neighboring block may not be used for the rate-distortion cost search.

When the difference between the value of the angle of the straight line detected in the target block and the value of the angle indicated by the information on the neighboring block is not large, the information on the target block and the information on the neighboring block may be used to determine the modes of the GPM in which the rate-distortion cost search is to be performed.
Here, the information of the straight line may indicate an angle of the straight line. The information about each neighboring block may indicate an angle of a mode of the GPM for the corresponding neighboring block or an angle of intra prediction of the corresponding neighboring block.
For example, when the difference between the value of the angle of the straight line detected in the target block and the value of the angle of the mode of the neighboring block is greater than the threshold J, the information about the neighboring block may not be used.
For example, when the difference between the value of the angle of the straight line detected in the target block and the value of the angle of the pattern of the neighboring block is less than or equal to the threshold J, the information on the target block and the information on the neighboring block are simultaneously used, and thus the pattern of the GPM to which the rate-distortion cost search is to be performed can be determined.
For example, the threshold J may be a value predefined by the encoding device 1600 and/or the decoding device 1700.
For example, the encoding device 1600 may calculate and determine an optimal threshold J, which may be signaled from the encoding device 1600 to the decoding device 1700. When signaling is applied, the threshold J may be encoded by the encoding device 1600, and the encoded threshold J may be signaled from the encoding device 1600 to the decoding device 1700. The encoded threshold J may be decoded by the decoding device 1700 into the original threshold J.
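A rough sketch of the scan of fig. 33 combined with the threshold-J filtering may read as follows; the NeighborInfo record and its fields are assumptions introduced only for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NeighborInfo:
    """Hypothetical record of previously decoded neighbor information."""
    uses_gpm: bool          # coded in GPM (step 3320)
    is_angular_intra: bool  # coded in an angular intra mode (steps 3330/3340)
    mode_angle: float       # angle implied by the mode index (step 3350)

def collect_neighbor_angles(neighbors: List[Optional[NeighborInfo]],
                            line_angle: float,
                            threshold_j: float) -> List[float]:
    """Iterate over the neighbor positions in order (e.g. left, upper-left,
    upper, upper-right); keep a neighbor's mode angle only when the
    neighbor exists, is coded in GPM or an angular intra mode, and its
    angle differs from the detected line angle by at most J."""
    kept = []
    for nb in neighbors:
        if nb is None:
            continue  # no block at this position (step 3310)
        if nb.uses_gpm or nb.is_angular_intra:
            if abs(nb.mode_angle - line_angle) <= threshold_j:
                kept.append(nb.mode_angle)
    return kept
```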
After performing step 3360, step 2770, described above with reference to fig. 27, may be performed.
Fig. 34 is a flowchart illustrating a method for determining a mode of the GPM in which a rate-distortion cost search is to be performed on a target block using information of straight lines detected in neighboring blocks, according to an example.
The previously encoded and/or decoded information of neighboring blocks may be used to determine the mode of the GPM that will perform a rate-distortion cost search on the target block. Here, information on the directionality of the neighboring block itself may be obtained (regardless of the prediction mode of the neighboring block for the target block), and the mode of the GPM in which the rate-distortion cost search is to be performed on the target block may be determined according to the information on the directionality of the neighboring block.
Step 2760, described above with reference to fig. 27, may include the following steps 3410, 3420, 3430, and 3440.
After performing step 3440, step 2770, described above with reference to fig. 27, may be performed.
At step 3410, it may be checked whether there are neighboring blocks (at a particular location).
When there are neighboring blocks (at a specific location), step 3420 may be performed.
When there are no neighboring blocks (at a particular location), step 3410 may be iterated for the next location.
In an embodiment, the order of the particular positions of the neighboring blocks may be an order following the left, upper left, upper, and upper right positions of the target block. In other words, step 3410 may be iteratively performed in the order of a neighboring block adjacent to the left side of the target block, a neighboring block adjacent to the upper left position (diagonally) of the target block, a neighboring block adjacent to the upper side of the target block, and a neighboring block adjacent to the upper right position (diagonally) of the target block. The next position may be the position of the neighboring block that follows, in the above-described order, the neighboring block that is the processing target of the current step 3410.
In an embodiment, the next location may be determined using previously encoded and/or decoded information of the block at another location specified based on the location of the target block.
In an embodiment, the next location may be adaptively determined according to the location of the target block and/or the mode of the neighboring blocks. Alternatively, the neighboring blocks as the inspection target may be adaptively determined according to the position of the target block or the mode of the neighboring blocks.
For example, a block disposed at one of left, upper right, and lower left positions of the target block may be selected as a neighboring block (in order), and a pattern of the GPM to perform the rate-distortion cost search may be determined using pieces of information about the selected neighboring block.
In step 3420, it may be checked whether there is straight line information for each neighboring block. The straight line information may indicate directionality of the respective neighboring blocks.
The straight line information of the corresponding neighboring block may be information about a straight line detected in the neighboring block.
When there is straight line information of the neighboring block, step 3430 may be performed.
When there is no straight line information of the neighboring block, step 3410 may be iteratively performed for the next position.
In step 3430, line information for neighboring blocks may be obtained.
In step 3440, it is determined whether the iteration of steps 3410, 3420, and 3430 for obtaining the straight line information of the neighboring blocks is terminated. In other words, it is determined whether pieces of straight line information of all neighboring blocks have been obtained.
When the iteration has not been terminated (i.e., when the check for obtaining the straight line information of all neighboring blocks has not been terminated), step 3410 may be iteratively performed again for the next position.
When the iteration is terminated (i.e., when the check for obtaining the straight line information of all neighboring blocks is terminated), step 2770 described above with reference to fig. 27 may be performed.
The straight line information of each neighboring block may indicate a straight line in the corresponding neighboring block and/or a directionality of the straight line in the corresponding neighboring block.
In an embodiment, only information of straight lines similar to those of the target block among straight lines in the neighboring blocks may be used to determine a pattern of the GPM in which the rate-distortion cost search is to be performed on the target block.
In another embodiment, all straight lines in neighboring blocks may be used to determine the GPM mode that will perform a rate-distortion cost search on the target block.
The determination of whether or not straight lines are similar to each other or the determination of the degree to which straight lines are similar to each other may be adaptively performed according to the size of the target block and/or the corresponding neighboring block.
The straight line may be quantized according to its angle. Alternatively, quantization may be applied to the angle of the corresponding straight line. It may be determined for the quantized lines whether the quantized lines are similar to each other or the degree to which the quantized lines are similar to each other. For example, assuming that two straight lines are quantized, when the two quantized straight lines are identical to each other, it may be determined that the two straight lines are similar to each other.
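For instance, a simple quantized comparison may be sketched as follows, assuming a fixed quantization step in degrees (the step size is an assumption; as described above, the comparison may instead be adapted to the sizes of the target block and the neighboring block).

```python
def quantize_angle(angle_deg: float, step: float = 15.0) -> float:
    """Quantize a line angle to the nearest multiple of `step` degrees."""
    return round(angle_deg / step) * step

def lines_similar(angle_a: float, angle_b: float, step: float = 15.0) -> bool:
    """Two straight lines are regarded as similar when their quantized
    angles are identical to each other."""
    return quantize_angle(angle_a, step) == quantize_angle(angle_b, step)
```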
Fig. 35 shows a motion vector list according to an example.
The Motion Vector (MV) list in fig. 35 may be used to derive an optimal MV for a sub-block in a specific GPM mode.
For example, a MV list may be used to derive MVs for each sub-block in a particular GPM mode.
In an embodiment, the number of Motion Vectors (MVs) included in the MV list may be predefined. For example, the number of MVs in the MV list may be a value predefined by the encoding device 1600 and/or the decoding device 1700.
For example, the encoding apparatus 1600 may calculate and determine the optimal number of MVs. The determined number of MVs may be signaled from the encoding device 1600 to the decoding device 1700.
In an embodiment, a predefined method may be used when configuring the MV list.
In an embodiment, when configuring the MV list, the MV list may be configured based on one or more pieces of information, such as 1) the type of the prediction mode of the target block and/or the neighboring block, 2) the MVs of the target block and/or the neighboring block, 3) the type of the reference picture for the target block and/or the neighboring block, and 4) the size of the target block and/or the neighboring block.
Fig. 36 is a flowchart of a method for searching for an optimal motion vector for each sub-block in a GPM mode according to an example.
When there are n GPM modes to be checked for costs to allow the encoding device 1600 and/or the decoding device 1700 to find an optimal mode for the GPM, the encoding device 1600 and/or the decoding device 1700 may compare the rate-distortion costs using the optimal MVs of the sub-blocks configured in each mode of the GPM.
In an embodiment, as shown in fig. 35, for each sub-block, the optimal MV may be derived from the configured MV list.
In an embodiment, when deriving the optimal MV for each sub-block, the rate-distortion costs for all MVs illustrated in fig. 35 may be checked.
In an embodiment, based on one or more pieces of information (such as 1) the type of the prediction mode of the target block and/or the neighboring block, 2) the MVs of the target block and/or the neighboring block, 3) the type of the reference picture for the target block and/or the neighboring block, and 4) the size of the target block and/or the neighboring block), only some MVs may be selected from the MVs in the MV list of fig. 35, and the cost for the selected MVs may be checked.
For example, only MVs identical to the MVs present in the neighboring blocks near the target block may be selected from the MV list of fig. 35, and the rate-distortion cost for the selected MVs may be checked.

Alternatively, n MVs starting from the first element in the MV list of fig. 35 may be selected, excluding duplicate MVs, and the rate-distortion cost may be checked only for the selected MVs.
At step 3610, the MVs in the list may be selected.
At step 3620, it may be determined whether a check for all MVs has been performed.
When the checks for all MVs have been performed, step 3640 may be performed.
When the checking for all MVs is not performed, step 3630 of checking the rate-distortion cost for the next MV may be performed.
At step 3630, the rate-distortion cost for the MV may be checked.
Next, step 3620 may be iterated to check the rate-distortion cost for the next MV.
In step 3640, an optimal MV may be selected from MVs whose cost is checked.
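The loop of steps 3610 to 3640 reduces to a minimum-cost selection, sketched below; rd_cost stands in for whatever rate-distortion cost measure the encoder uses and is an assumption of this sketch.

```python
def best_mv(mv_list, rd_cost):
    """Check the rate-distortion cost of every MV in the list
    (steps 3610-3630) and select the MV with the minimum cost
    (step 3640). `rd_cost` is a caller-supplied cost function."""
    best, best_cost = None, float('inf')
    for mv in mv_list:
        cost = rd_cost(mv)
        if cost < best_cost:
            best, best_cost = mv, cost
    return best
```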
Fig. 37 illustrates a method of configuring the MV list of fig. 35 using Motion Vectors (MVs) of neighboring blocks according to an embodiment.
Pieces of information about the MVs present in the upper left neighboring block, the upper neighboring block, the upper right neighboring block, the left neighboring block, and the lower left neighboring block near the target CU may be used.

Fig. 37 illustrates the positions at which the pieces of MV information are extracted from the respective neighboring blocks when the pieces of MV information of these neighboring blocks are used.
For example, MV information of an upper left neighboring block may be extracted from the location TL (i.e., a lower right location of the neighboring block).
For example, MV information of an upper neighboring block may be extracted from a position T (i.e., a lower right position of the neighboring block).
For example, MV information of an upper right neighboring block may be extracted from the position TR (i.e., a lower left position of the neighboring block).
For example, MV information of a left neighboring block may be extracted from a position L (i.e., a lower right position of the neighboring block).
For example, MV information of a lower left neighboring block may be extracted from the position BL (i.e., an upper right position of the neighboring block).

When extracting the pieces of MV information of one or more of the upper left neighboring block, the upper neighboring block, the upper right neighboring block, the left neighboring block, and the lower left neighboring block, the position from which the MV information is extracted may be changed according to information on the shape and size of the neighboring block.
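Purely as an illustration of the extraction positions of fig. 37, a sketch follows; the coordinate convention and the mapping are assumptions, and, as noted above, the actual positions may change with the shape and size of the neighboring block.

```python
def extraction_position(kind: str, x: int, y: int, w: int, h: int):
    """Sample position from which the MV of a neighboring block with
    top-left corner (x, y) and size w x h is read, per fig. 37."""
    if kind in ('upper_left', 'upper', 'left'):
        return (x + w - 1, y + h - 1)  # lower right position (TL, T, L)
    if kind == 'upper_right':
        return (x, y + h - 1)          # lower left position (TR)
    if kind == 'lower_left':
        return (x + w - 1, y)          # upper right position (BL)
    raise ValueError(f'unknown neighbor kind: {kind}')
```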
Fig. 38 is a flowchart of a method for adding MVs of neighboring blocks to an MV list according to an embodiment.
When a target block is encoded and/or decoded in GPM, neighboring blocks may be encoded and/or decoded in GPM, and MVs of the neighboring blocks may be added to the MV list.
In the case of encoding and/or decoding a target block in a GPM when configuring an MV list, MV information of each neighboring block may be adaptively obtained according to a prediction mode, shape, and size of the neighboring block so as to use the MV information of the neighboring block.
For example, when a neighboring block is encoded and/or decoded using the GPM and the neighboring block is located at the left side of the target block or at the lower left position of the target block, information about MV_A stored at the upper right position of the neighboring block may be compared with information about MV_B stored at the lower right position of the neighboring block, and the information about MV_A and/or the information about MV_B may be added to the MV list based on the result of the comparison.

For example, when a neighboring block is encoded and/or decoded using the GPM and the neighboring block is located at the upper left position, the upper position, or the upper right position of the target block, information about MV_A stored at the lower left position of the neighboring block may be compared with information about MV_B stored at the lower right position of the neighboring block, and the information about MV_A and/or the information about MV_B may be added to the MV list based on the result of the comparison.

For example, when MV_A and MV_B are identical to each other, MV_A may be added to the MV list.

For example, when MV_A and MV_B are different from each other and one of MV_A and MV_B is a bi-directional MV, the bi-directional MV may be added to the MV list.

For example, in the case where MV_A and MV_B are different from each other and both MV_A and MV_B are unidirectional MVs, when the reference picture of MV_A and the reference picture of MV_B are different from each other, MV_A and MV_B may be added to the MV list as one bi-directional MV. Here, one component of the bi-directional MV may be MV_A, and the other may be MV_B.

For example, in the case where MV_A and MV_B are different from each other and both MV_A and MV_B are unidirectional MVs, when the reference picture of MV_A and the reference picture of MV_B are identical to each other, MV_A and MV_B may be added (individually) to the MV list.
At step 3810, it may be checked whether MV_A and MV_B are identical to each other.

When MV_A and MV_B are identical to each other, step 3815 may be performed.

When MV_A and MV_B are not identical to each other, step 3820 may be performed.

In step 3815, MV_A may be added to the MV list. Alternatively, MV_B may be added to the MV list.

At step 3820, it may be checked whether MV_A or MV_B is a bi-directional MV.

When MV_A or MV_B is a bi-directional MV, step 3825 may be performed.

When neither MV_A nor MV_B is a bi-directional MV, step 3830 may be performed.

In step 3825, the bi-directional MV among MV_A and MV_B may be added to the MV list.

In step 3830, it may be checked whether the reference picture of MV_A and the reference picture of MV_B are different from each other.

When the reference picture of MV_A and the reference picture of MV_B are different from each other, step 3835 may be performed.

When the reference picture of MV_A and the reference picture of MV_B are identical to each other, step 3840 may be performed.

In step 3835, MV_A and MV_B may be added to the MV list as one bi-directional MV. Here, one component of the bi-directional MV may be MV_A, and the other may be MV_B.

At step 3840, MV_A and MV_B may be respectively added to the MV list.
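The decision tree of steps 3810 to 3840 may be sketched as follows. The MV record, its is_bi and ref_pic fields, and the tuple used to represent a combined bi-directional MV are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MV:
    """Hypothetical motion-vector record used for illustration."""
    x: int
    y: int
    ref_pic: int
    is_bi: bool = False

def add_to_mv_list(mv_a: MV, mv_b: MV, mv_list: list) -> None:
    """Decision tree of fig. 38 for two MVs read from a GPM-coded
    neighboring block."""
    if mv_a == mv_b:                    # step 3810 -> 3815
        mv_list.append(mv_a)
    elif mv_a.is_bi or mv_b.is_bi:      # step 3820 -> 3825
        mv_list.append(mv_a if mv_a.is_bi else mv_b)
    elif mv_a.ref_pic != mv_b.ref_pic:  # step 3830 -> 3835
        mv_list.append((mv_a, mv_b))    # combined as one bi-directional MV
    else:                               # step 3830 -> 3840
        mv_list.extend([mv_a, mv_b])    # added individually
```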
Fig. 39 illustrates a method for adding MVs of neighboring blocks to an MV list according to an example.
When encoding and/or decoding neighboring blocks using the GPM, MVs of the neighboring blocks may be added to the MV list using the method illustrated in fig. 38.
In the case where the left neighboring block is encoded and/or decoded using the GPM, when configuring the MV list of the target block, the motion vectors of the left neighboring block may be extracted from the position P (i.e., the upper right position of the left neighboring block) and the position L (i.e., the lower right position of the left neighboring block), respectively.

In the case where the upper right neighboring block is encoded and/or decoded using the GPM, when configuring the MV list of the target block, the motion vectors of the upper right neighboring block may be extracted from the position TR (i.e., the lower left position of the upper right neighboring block) and the position Y (i.e., the lower right position of the upper right neighboring block), respectively.
Fig. 40 shows a transformation of a coordinate system by the hough transform, according to an example.
The hough transform may be an algorithm capable of rapidly detecting straight lines in a two-dimensional (2D) image.
Fig. 40 shows an example of a hough transform. The left side (a) of fig. 40 may indicate the (x, y) coordinate system. The right side (b) of fig. 40 may indicate the (ρ, θ) coordinate system.
In (a) of fig. 40, an equation of a straight line passing through an arbitrary point (a, b) in the (x, y) coordinate system may be represented by ρ and θ, as shown in equation 2.
[ equation 2]
xcosθ+ysinθ=ρ
Here, θ ∈ [0, π] may be used as the range of θ so as to represent straight lines having all slopes in the (x, y) coordinate system.

When the ρ and θ obtained from equation 2 are expressed in the (ρ, θ) coordinate system, a curve (such as that shown in (b) of fig. 40) may be obtained. That is, the curve may refer to all straight lines passing through the point (a, b) in the (x, y) coordinate system.
After the points in the (x, y) coordinate system are transformed into the (ρ, θ) coordinate system using this relationship, if an intersection point at which the curves are accumulated at a specific frequency or higher is set as (ρ_accumulate, θ_accumulate), the intersection point may be transformed back into the (x, y) coordinate system, as shown in equation 3, and thus a straight line indicating the directionality of the corresponding region may be obtained.

[ equation 3]

y = -(cos θ_accumulate / sin θ_accumulate) · x + ρ_accumulate / sin θ_accumulate
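A minimal sketch of this accumulation is given below; it is not the detector of the embodiment, and the default threshold is an assumption (as described above, the cumulative frequency threshold may be determined based on the size of the target block). Restricting the sampled angles to the horizontal, vertical, 45° diagonal, and 135° diagonal directions would reproduce the four-direction detection described earlier.

```python
import numpy as np

def hough_lines(edge_map, num_thetas=180, threshold=None):
    """Minimal hough transform over a binary edge map: each edge pixel
    votes for every (rho, theta) satisfying equation 2, and the
    (rho, theta) pairs accumulated at the threshold frequency or
    higher are returned as detected straight lines."""
    h, w = edge_map.shape
    thetas = np.linspace(0.0, np.pi, num_thetas, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, num_thetas), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(num_thetas)] += 1
    if threshold is None:
        threshold = max(1, len(xs) // 4)  # assumed; may depend on block size
    r_idx, t_idx = np.nonzero(acc >= threshold)
    return [(float(r - diag), float(thetas[t])) for r, t in zip(r_idx, t_idx)]
```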
Fig. 41 is a flowchart of an encoding method according to an embodiment.
At step 4110, a prediction mode required for prediction of the target block may be determined.
In step 4120, information about the encoded target block may be generated by performing encoding on the target block using the prediction mode.
When encoding is performed, the information described in the above embodiments may be used.
In an embodiment, the target block may be encoded using GPM. The directionality in the target block may be used to determine the mode of the GPM.
In an embodiment, the texture properties of the target block may be used to determine the partition mode of the target block.
In step 4130, a bitstream may be generated.
The bitstream may include information about the encoded target block. Further, as exemplified in the foregoing embodiments, the information about the encoded target block may include information signaled from the encoding device 1600 to the decoding device 1700.
The bitstream may be transmitted from the encoding device 1600 to the decoding device 1700.
Fig. 42 is a flowchart of a decoding method according to an embodiment.
In step 4210, a bit stream may be obtained.
The bitstream may include information about the encoded target block. Further, as exemplified in the foregoing embodiments, the information about the encoded target block may include information signaled from the encoding device 1600 to the decoding device 1700.
The computer readable storage medium may store a bitstream. The computer readable storage medium may be a non-transitory storage medium.
The bitstream may be transmitted from the encoding device 1600 to the decoding device 1700.
In step 4220, a prediction mode for prediction of the target block may be determined using the bitstream.
In step 4230, decoding may be performed on the target block using the prediction mode and information about the encoded target block.
The above embodiments may be performed by the encoding apparatus 1600 and the decoding apparatus 1700 using the same and/or corresponding methods as each other. Further, for encoding and/or decoding of images, a combination of one or more of the above embodiments may be used.
In the encoding apparatus 1600 and the decoding apparatus 1700, the order in which the embodiments are applied may be different from each other, or the order may be (at least partially) the same as each other.
The embodiments may be performed for each of a luminance signal and a chrominance signal. The embodiments may be equally performed on luminance signals and chrominance signals.
The form of the block to which the embodiments of the present disclosure are applied may have a square or non-square shape.
Embodiments of the present disclosure may be applied according to a size of at least one of a target block, a coded block, a predicted block, a transformed block, a current block, a coded unit, a predicted unit, a transformed unit, a unit, and a current unit. Here, the size may be defined as a minimum size and/or a maximum size that allows the embodiment to be applied, and may be defined as a fixed size to which the embodiment is applied. Further, in the embodiments, the first embodiment may be applied to a first size, and the second embodiment may be applied to a second size. That is, the embodiments may be applied compositely according to size. Further, the embodiments of the present disclosure may be applied only to the case where the size is equal to or greater than the minimum size and less than or equal to the maximum size. That is, the embodiment can be applied only to the case where the block size falls within a specific range.
Further, the embodiments of the present disclosure may be applied only to a case where a condition that a size is equal to or larger than a minimum size and a condition that a size is smaller than or equal to a maximum size are satisfied, wherein each of the minimum size and the maximum size may be a size of one of the blocks described in the embodiments and the units described in the embodiments above. That is, the block targeted for the minimum size may be different from the block targeted for the maximum size. For example, the embodiments of the present disclosure may be applied only to a case where the size of the target block is equal to or greater than the minimum size of the block and less than or equal to the maximum size of the block.
For example, the embodiment may be applied only to a case where the size of the target block is equal to or larger than 8×8. For example, the embodiment may be applied only to a case where the size of the target block is equal to or larger than 16×16. For example, the embodiment may be applied only to the case where the size of the target block is equal to or larger than 32×32. For example, the embodiment may be applied only to the case where the size of the target block is equal to or larger than 64×64. For example, the embodiment may be applied only to the case where the size of the target block is equal to or greater than 128×128. For example, the embodiment may be applied only to the case where the size of the target block is 4×4. For example, the embodiment may be applied only to the case where the size of the target block is less than or equal to 8×8. For example, the embodiment may be applied only to the case where the size of the target block is less than or equal to 16×16. For example, the embodiment may be applied only to the case where the size of the target block is equal to or greater than 8×8 and less than or equal to 16×16. For example, the embodiment may be applied only to the case where the size of the target block is equal to or greater than 16×16 and less than or equal to 64×64.
Embodiments of the present disclosure may be applied according to a temporal layer. To identify the temporal layer to which the embodiment is applicable, a separate identifier may be signaled, and the embodiment may be applied to the temporal layer specified by the corresponding identifier. Here, the identifier may be defined as a lowest (bottom) layer and/or a highest (top) layer to which the embodiment is applicable, and may be defined as a specific layer indicating an application embodiment. Furthermore, a fixed temporal layer of the application embodiment may also be defined.
For example, the embodiment may be applied only in the case where the temporal layer of the target image is the lowest layer. For example, the embodiment may be applied only to a case where the temporal layer identifier of the target image is equal to or greater than 1. For example, the embodiment may be applied only in a case where the temporal layer of the target image is the highest layer.
A slice type or tile group type to which the embodiments of the present disclosure are applied may be defined, and the embodiments of the present disclosure may be applied according to the corresponding slice type or tile group type.
In the above-described embodiments, when it has been described that a specific condition is satisfied based on a specific coding parameter or that a specific determination is made based on a specific coding parameter during the application of a specific process to a specific target, the specific coding parameter may be replaced with an additional coding parameter. In other words, the coding parameter affecting the specific condition or the specific determination may be considered to be merely exemplary, and it may be understood that a combination of one or more additional coding parameters, in addition to the specific coding parameter, is used as the specific coding parameter.
In the above-described embodiments, although the method has been described based on a flowchart as a series of steps or units, the present disclosure is not limited to the order of the steps, and some steps may be performed in a different order from the order of the steps that have been described or simultaneously with other steps. Furthermore, those skilled in the art will appreciate that: the steps shown in the flowcharts are not exclusive and may include other steps as well, or one or more steps in the flowcharts may be deleted without departing from the scope of the present disclosure.
The above-described embodiments include examples of various aspects. Although not all possible combinations for indicating various aspects are described, one of ordinary skill in the art will appreciate that other combinations are possible in addition to the combinations explicitly described. Accordingly, it is to be understood that the present disclosure includes other alternatives, variations, and modifications, which fall within the scope of the appended claims.
The embodiments according to the present disclosure described above may be implemented as programs capable of being executed by various computer devices, and may be recorded on computer-readable storage media. The computer readable storage medium may include program instructions, data files, and data structures, alone or in combination. The program instructions recorded on the storage medium may be specially designed and configured for the present disclosure, or may be known or available to those having ordinary skill in the computer software arts.
The computer readable storage medium may include information used in embodiments of the present disclosure. For example, the computer-readable storage medium may include a bitstream, and the bitstream may include the information described above in embodiments of the present disclosure.
The computer-readable storage medium may include a non-transitory computer-readable medium.
Examples of computer-readable storage media may include all types of hardware devices that are specially configured for recording and executing program instructions, such as magnetic media (such as hard disks, floppy disks, and magnetic tape), optical media (such as compact disc (CD)-ROMs and digital versatile discs (DVDs)), magneto-optical media (such as floptical disks), ROM, RAM, and flash memory. Examples of program instructions include both machine code, such as that produced by a compiler, and high-level language code that can be executed by the computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to perform the operations of the present disclosure, and vice versa.
As noted above, although the present disclosure has been described based on specific details (such as detailed components and a limited number of embodiments and figures), the specific details are provided only for ease of understanding the entire disclosure, the present disclosure is not limited to these embodiments, and various changes and modifications will be practiced by those of skill in the art in light of the foregoing description.
It is, therefore, to be understood that the spirit of the present embodiments is not to be limited to the above-described embodiments and that the appended claims and equivalents thereof, and modifications thereto, fall within the scope of the present disclosure.

Claims (20)

1. A method of encoding, comprising:
determining a prediction mode required for prediction of the target block; and
information on the encoded target block is generated by performing encoding on the target block using the prediction mode.
2. The encoding method of claim 1, wherein:
the target block is encoded using a geometric partitioning mode (GPM), and
the directionality in the target block is used to determine a mode of the GPM.
3. The encoding method of claim 1, wherein the partition mode of the target block is determined using directionality in the target block.
4. The encoding method of claim 3, wherein the directionality is derived using a hough transform.
5. The encoding method of claim 4, wherein the hough transform is applied to an edge map of the target block.
6. The encoding method of claim 5, wherein edges in the edge map are detected using one or more of a Sobel operation, a Laplace operation, and a Canny operation.
7. The encoding method of claim 6, wherein the partition of the target block is not applied when the magnitude of the value of the edge is less than or equal to a threshold.
8. The encoding method of claim 4, wherein the cumulative frequency threshold value of the hough transform is determined based on a size of the target block.
9. The encoding method of claim 4, wherein the hough transform is used to detect straight lines in a horizontal direction, a vertical direction, a 45 ° diagonal direction, and a 135 ° diagonal direction.
10. The encoding method of claim 4, wherein the directionality is determined by a straight line detected in the target block.
11. The encoding method according to claim 1, wherein a part of available partition modes is excluded from encoding the target block based on a straight line detected in the target block.
12. The encoding method of claim 11, wherein the partition mode is not used when no straight line is detected in the target block.
13. The encoding method of claim 11, wherein when a horizontal straight line and a vertical straight line are not detected in the target block, a horizontal partitioning method and a vertical partitioning method are not applied to the target block.
14. The encoding method of claim 11, wherein when a vertical straight line is not detected in the target block, a vertical partition method is not applied to the target block.
15. The encoding method of claim 14, wherein the vertical partitioning method includes a vertical binary tree (BT) partitioning method and a vertical ternary tree (TT) partitioning method.
16. The encoding method of claim 1, wherein a texture property of the target block is used to determine a partition mode of the target block.
17. The encoding method of claim 16, wherein a portion of available partition modes are excluded from encoding the target block based on texture properties of the target block.
18. The encoding method of claim 16, wherein the texture attributes comprise one or more of edges, variances, and averages.
19. A decoding method, comprising:
determining a prediction mode required for prediction of the target block using the bitstream; and
decoding is performed on the target block using the prediction mode and information about the encoded target block.
20. A computer-readable storage medium storing a bitstream for image decoding, the bitstream comprising information about an encoded target block, wherein:
the prediction mode required for the prediction of the target block is determined using the bitstream, an
Decoding is performed on the target block using the prediction mode and information about the encoded target block.
CN202180064175.7A 2020-07-20 2021-07-20 Method, apparatus and recording medium for encoding/decoding image by using geometric partition Pending CN116325730A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR10-2020-0089813 2020-07-20
KR20200089813 2020-07-20
KR10-2020-0118551 2020-09-15
KR20200118551 2020-09-15
PCT/KR2021/009340 WO2022019613A1 (en) 2020-07-20 2021-07-20 Method, apparatus, and recording medium for encoding/decoding image by using geometric partitioning

Publications (1)

Publication Number Publication Date
CN116325730A true CN116325730A (en) 2023-06-23

Family

ID=80049593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180064175.7A Pending CN116325730A (en) 2020-07-20 2021-07-20 Method, apparatus and recording medium for encoding/decoding image by using geometric partition

Country Status (2)

Country Link
KR (1) KR20220011107A (en)
CN (1) CN116325730A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023195762A1 (en) * 2022-04-05 2023-10-12 한국전자통신연구원 Method, apparatus, and recording medium for image encoding/decoding
WO2023224279A1 (en) * 2022-05-16 2023-11-23 현대자동차주식회사 Method and apparatus for video coding using geometric motion prediction

Also Published As

Publication number Publication date
KR20220011107A (en) 2022-01-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination