WO2019117645A1 - Method and device for encoding and decoding an image using a prediction network

Method and device for encoding and decoding an image using a prediction network

Info

Publication number
WO2019117645A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
prediction
target block
unit
network
Prior art date
Application number
PCT/KR2018/015844
Other languages
English (en)
Korean (ko)
Inventor
조승현
이주영
김연희
석진욱
임웅
김종호
이대열
정세윤
김휘용
최진수
Original Assignee
한국전자통신연구원
Priority date
Filing date
Publication date
Application filed by 한국전자통신연구원
Priority to US16/772,443 (granted as US11166014B2)
Priority claimed from KR1020180160775A (KR102262554B1)
Publication of WO2019117645A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262: Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists

Definitions

  • The embodiments described below relate to a decoding method, a decoding apparatus, an encoding method, and an encoding apparatus for video, and in particular to a decoding method, a decoding apparatus, an encoding method, and an encoding apparatus that use a prediction network.
  • HDTV: High-Definition TV
  • FHD: Full HD
  • UHD: Ultra High Definition
  • An apparatus and method for encoding/decoding an image may use an inter prediction technique, an intra prediction technique, and an entropy coding technique to perform encoding/decoding of high-resolution, high-quality images.
  • The inter prediction technique may be a technique of predicting the value of a pixel included in a target picture using a temporally previous picture and/or a temporally subsequent picture.
  • the intra prediction technique may be a technique of predicting a value of a pixel included in a target picture by using information of a pixel in the target picture.
  • the entropy coding technique may be a technique of allocating a short code to a symbol having a high appearance frequency and allocating a long code to a symbol having a low appearance frequency.
  • One embodiment of the present invention can provide an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method for performing prediction on a target block using a prediction network.
  • One embodiment of the present invention can provide an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method for performing learning of a prediction network using specified information generated during encoding and/or decoding of an image.
  • Generating a prediction block for the target block by performing prediction on the target block using the prediction network; and generating a reconstructed block for the target block based on the prediction block.
  • the prediction network may be an intra prediction network that performs prediction on the target block using a spatial reference block.
  • the prediction network may be an inter prediction network that performs prediction on the target block using a temporal reference block.
  • the inter prediction network may perform a prediction on the target block using a spatial reference block.
  • There may be a plurality of prediction networks.
  • the plurality of prediction networks may be used for a plurality of color channels, a plurality of block sizes, or a plurality of quantization parameters, respectively.
  • the prediction network may generate the prediction block by performing prediction on the target block using a reference block.
  • Pre-processing may be performed on the samples of the reference block before the samples of the reference block are input to the prediction network.
  • A post-process, which is the inverse of the pre-process, may be performed on the samples of the prediction block.
  • The pre-processing may be mean subtraction, normalization, principal component analysis (PCA), or whitening.
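The pre-/post-processing described above can be illustrated with a minimal Python sketch: mean subtraction and normalization of the reference-block samples before network input, and the inverse de-normalization applied to the prediction-block samples afterward. The function names, per-block statistics, and 8x8 geometry are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def preprocess(ref_samples: np.ndarray):
    """Mean subtraction and normalization of reference-block samples
    (assumed per-block statistics), returning what is needed to invert it."""
    mean = ref_samples.mean()
    std = ref_samples.std() + 1e-8   # guard against flat blocks
    return (ref_samples - mean) / std, (mean, std)

def postprocess(pred_samples: np.ndarray, stats) -> np.ndarray:
    """Post-process: the inverse of preprocess(), applied to the prediction."""
    mean, std = stats
    return pred_samples * std + mean

# 8x8 reference block with 8-bit samples (Bd = 8), illustrative only.
ref = np.random.randint(0, 256, (8, 8)).astype(np.float32)
x, stats = preprocess(ref)
restored = postprocess(x, stats)     # network omitted, so this is the identity
assert np.allclose(restored, ref, atol=1e-3)
```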
  • Learning of the prediction network may be performed for a single image.
  • Learning of the prediction network may be performed per batch.
  • A batch normalization layer may be inserted at the input end of a hidden layer of the prediction network.
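A minimal PyTorch sketch of an intra prediction network of the kind described here and shown in FIG. 26: a fully connected network that maps flattened spatial reference samples to a predicted block, with a batch normalization layer at the input end of each hidden layer. The layer widths, the 8x8 block size, and the size of the reference region are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntraPredictionNetwork(nn.Module):
    """Predicts an 8x8 target block from flattened spatial reference samples
    (e.g., reconstructed blocks above and to the left of the target block)."""
    def __init__(self, ref_size: int = 3 * 8 * 8, block_size: int = 8):
        super().__init__()
        self.block_size = block_size
        self.net = nn.Sequential(
            nn.Linear(ref_size, 256),
            nn.BatchNorm1d(256),   # batch normalization feeding the hidden layer
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Linear(256, block_size * block_size),
        )

    def forward(self, ref: torch.Tensor) -> torch.Tensor:
        return self.net(ref).view(-1, self.block_size, self.block_size)

net = IntraPredictionNetwork()
ref = torch.randn(4, 3 * 8 * 8)   # a batch of 4 flattened reference regions
pred_blocks = net(ref)            # shape: (4, 8, 8)
```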
  • the loss function for learning the prediction network may be defined based on the prediction image and the original image.
  • the loss function may be defined based on the squared difference between the predicted image and the original image.
  • the loss function can be defined based on the absolute value of the difference between the predicted image and the original image.
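The two loss functions above correspond to the squared difference (L2) and the absolute difference (L1) between the predicted image and the original image; a short sketch, with illustrative tensor shapes:

```python
import torch
import torch.nn.functional as F

pred = torch.randn(4, 8, 8)   # predicted blocks (illustrative)
orig = torch.randn(4, 8, 8)   # original blocks

l2_loss = F.mse_loss(pred, orig)   # loss based on the squared difference
l1_loss = F.l1_loss(pred, orig)    # loss based on the absolute difference
```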
  • the loss function for on-line update of the prediction network may be determined based on the residual block.
  • the loss function for online update of the prediction network may be determined based on the reconstructed block.
  • the online updating of the network parameters of the prediction network can be continuously performed during the decoding of the video.
  • the network parameters of the prediction network may be initialized for a specified object.
  • The specified object may be a slice, a picture, or a picture whose temporal identifier differs from that of the previous picture.
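A hedged sketch of the online update described above: during decoding, the network parameters are refined against the reconstructed block (the original block is unavailable at the decoder), and re-initialized when a specified object such as a new slice or picture begins. The optimizer choice and learning rate are assumptions.

```python
import copy
import torch
import torch.nn.functional as F

net = IntraPredictionNetwork()                  # class from the earlier sketch
initial_state = copy.deepcopy(net.state_dict()) # snapshot for re-initialization
optimizer = torch.optim.SGD(net.parameters(), lr=1e-4)

def online_update(ref: torch.Tensor, reconstructed: torch.Tensor) -> None:
    """One update step using the reconstructed block as the training target."""
    optimizer.zero_grad()
    loss = F.mse_loss(net(ref), reconstructed)
    loss.backward()
    optimizer.step()

def reset_for_new_object() -> None:
    """Re-initialize the network parameters at a slice/picture boundary."""
    net.load_state_dict(initial_state)

ref = torch.randn(2, 3 * 8 * 8)   # batch > 1 so BatchNorm can train
recon = torch.randn(2, 8, 8)      # corresponding reconstructed blocks
online_update(ref, recon)
```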
  • In one embodiment, a computer-readable medium storing a bitstream for decoding an image may be provided. The bitstream includes information about a target block. A prediction block for the target block is generated by performing prediction on the target block using the prediction network, and a reconstructed block for the target block is generated based on the information about the target block and the prediction block.
  • Provided are an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method for performing prediction on a target block using a prediction network.
  • Provided are an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method for performing learning of a prediction network using specified information generated during encoding and/or decoding of an image.
  • FIG. 1 is a block diagram illustrating a configuration of an encoding apparatus to which the present invention is applied.
  • FIG. 2 is a block diagram illustrating a configuration of a decoding apparatus to which the present invention is applied.
  • FIG. 3 is a diagram schematically showing a division structure of an image when coding and decoding an image.
  • FIG. 4 is a diagram showing the forms of a prediction unit (PU) that a coding unit (CU) can include.
  • FIG. 5 is a diagram showing the forms of a transform unit (TU) that a coding unit (CU) can include.
  • TU: transform unit
  • CU: coding unit
  • FIG. 6 shows the partitioning of a block according to an example.
  • FIG. 7 is a diagram for explaining an embodiment of an intra prediction process.
  • FIG. 8 is a view for explaining the positions of reference samples used in the intra prediction process.
  • FIG. 9 is a diagram for explaining an embodiment of the inter prediction process.
  • FIG. 10 shows spatial candidates according to an example.
  • FIG. 11 shows an order of addition of motion information of spatial candidates to a merge list according to an example.
  • FIG. 12 illustrates a process of transforming and quantizing according to an example.
  • FIG. 13 illustrates diagonal scanning according to an example.
  • FIG. 16 is a structural diagram of an encoding apparatus according to an embodiment.
  • FIG. 17 is a structural diagram of a decoding apparatus according to an embodiment.
  • FIG. 18 is a flowchart of a method of generating a reconstructed block according to an embodiment.
  • FIG. 19 is a flowchart of a method of encoding an image according to an embodiment.
  • FIG. 20 is a flowchart of a method of decoding an image according to an embodiment.
  • FIG. 21 shows the generation of a prediction block using an intra prediction network according to an example.
  • FIG. 22 shows reference blocks for an intra prediction network according to an example.
  • FIG. 23 shows reference blocks for an intra prediction network according to an example.
  • FIG. 24 shows the generation of a prediction block using an inter prediction network according to an example.
  • FIG. 25 shows temporal reference blocks for an inter prediction network according to an example.
  • FIG. 26 shows an intra prediction network based on a fully connected layer according to an example.
  • FIG. 27 shows an intra prediction network based on a convolutional layer according to an example.
  • Terms such as "first" and "second" may be used in the present invention to describe various components, but the components should not be limited by these terms. The terms are used only to distinguish one component from another.
  • the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
  • The term "and/or" includes any combination of a plurality of related listed items, or any one of a plurality of related listed items.
  • When a component is said to be "connected" or "coupled" to another component, the two components may be directly connected or coupled to each other, or other components may be present between the two components. When a component is said to be "directly connected" or "directly coupled" to another component, it should be understood that no other component is present between the two components.
  • The components shown in the embodiments of the present invention are shown independently to represent distinct characteristic functions; this does not mean that each component consists of separate hardware or a single software unit. That is, each component is listed as a separate component for convenience of explanation. At least two of the components may be combined into one component, or one component may be divided into a plurality of components. Integrated embodiments and separate embodiments of these components are also included in the scope of the present invention as long as they do not depart from the essence of the present invention.
  • The description of "comprising" a specific configuration does not exclude configurations other than the specific configuration; additional configurations may be included within the implementation of the exemplary embodiments or within the scope of their technical idea.
  • an image may denote a picture constituting a video, or may represent a video itself.
  • "encoding and / or decoding of an image” may mean “encoding and / or decoding of video ", which means” encoding and / or decoding of one of the images constituting a video " It is possible.
  • video and “motion picture” may be used interchangeably and may be used interchangeably.
  • the target image may be a coding target image to be coded and / or a decoding target image to be decoded.
  • the target image may be an input image input to the encoding device or an input image input to the decoding device.
  • The terms "image", "picture", "frame", and "screen" may be used with the same meaning and may be used interchangeably.
  • the target block may be a current block to be coded and / or a current block to be decoded.
  • the target block may be the current block that is the current encoding and / or decoding target.
  • the terms "object block” and "current block” may be used interchangeably and may be used interchangeably.
  • block and “unit” may be used interchangeably and may be used interchangeably. Or “block” may represent a particular unit.
  • a specific signal may be a signal indicating a specific block.
  • an original signal may be a signal representing a target block.
  • the prediction signal may be a signal representing a prediction block.
  • the residual signal may be a signal representing the residual block.
  • each of the specified information, data, flags and elements, attributes, etc. may have a value.
  • the value "0" of information, data, flags and element, attribute, etc. may represent a logical false or a first predefined value. That is to say, the values “0 ", False, Logical False, and First Default values can be used interchangeably.
  • the value "1" of information, data, flags and elements, attributes, etc. may represent a logical true or a second predefined value. That is to say, the values "1 ", " true ", " logical "
  • i When a variable such as i or j is used to represent a row, column or index, the value of i may be an integer greater than or equal to 0 and may be an integer greater than or equal to one. In other words, in the embodiments, rows, columns, indexes, etc. may be counted from 0 and counted from 1.
  • Encoder: an apparatus that performs encoding.
  • Decoder: an apparatus that performs decoding.
  • a unit may represent a unit of encoding and decoding of an image.
  • The terms "unit" and "block" may be used with the same meaning and may be used interchangeably.
  • the unit may be an MxN array of samples.
  • M and N may be positive integers, respectively.
  • A unit can often refer to a two-dimensional array of samples.
  • a unit may be an area generated by the division of one image. That is to say, a unit may be a specified area in one image.
  • One image may be divided into a plurality of units.
  • A unit may mean the divided portion when one image is divided into subdivided portions and encoding or decoding is performed on those portions.
  • predetermined processing on the unit may be performed depending on the type of unit.
  • The type of unit may be a macro unit, a Coding Unit (CU), a Prediction Unit (PU), a residual unit, or a Transform Unit (TU).
  • The unit may include a block, a macroblock, a coding tree unit, a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, and a transform block.
  • a unit may refer to information comprising a luma component block and its corresponding chroma component block, and a syntax element for each block, to distinguish it from a block.
  • the size and shape of the unit may vary.
  • the unit may have various sizes and shapes.
  • the shape of the unit may include not only squares but also geometric figures that can be expressed in two dimensions, such as rectangles, trapezoids, triangles, and pentagons.
  • the unit information may include at least one of a unit type, a unit size, a unit depth, a unit encoding order, and a unit decoding order.
  • the type of unit may refer to one of CU, PU, residual unit, and TU.
  • one unit may be further subdivided into smaller units with a smaller size than the unit.
  • Depth can mean the degree of division of a unit. Unit depth can also indicate the level at which a unit is present when the unit is represented in a tree structure.
  • The unit partition information may include the depth of the unit.
  • The depth may indicate the number of times and/or the degree to which the unit is divided.
  • the depth of the root node is the shallowest and the depth of the leaf node is the deepest.
  • a unit may be hierarchically divided into a plurality of subunits with depth information based on a tree structure. That is to say, the unit and the lower unit generated by the division of the unit can correspond to the node and the child node of the node, respectively. Each divided subunit may have a depth. Since the depth indicates the number and / or degree of division of the unit, the division information of the lower unit may include information on the size of the lower unit.
  • The uppermost node may correspond to the first unit, which has not been partitioned.
  • The uppermost node may be referred to as the root node.
  • The uppermost node may have the minimum depth value. At this point, the uppermost node has a depth of level 0.
  • a node with a depth of level 1 can represent a unit created as the first unit is once partitioned.
  • a node with a depth of level 2 may represent a unit created as the first unit is divided twice.
  • a node with a depth of level n can represent a unit created as the first unit is divided n times.
  • The leaf node may be the lowest node, a node that cannot be divided further.
  • the depth of the leaf node may be the maximum level.
  • the default value of the maximum level may be three.
  • QT depth can indicate the depth for quad-tree splitting.
  • BT depth can represent the depth for binary-tree splitting.
  • TT depth can represent the depth for ternary-tree splitting.
  • a sample can be a base unit that makes up a block.
  • The samples can be represented as values from 0 to 2^Bd - 1, depending on the bit depth (Bd).
  • the sample may be a pixel or a pixel value.
  • In the following, the terms "pixel" and "sample" may be used with the same meaning and may be used interchangeably.
  • A CTU can consist of one luma component (Y) coding tree block and two chroma component (Cb, Cr) coding tree blocks related to the luma component coding tree block.
  • the CTU may also include the above blocks and the syntax elements for each block of the above blocks.
  • Each coding tree unit may be divided using one or more partitioning methods, such as a quad tree (QT), a binary tree (BT), and a ternary tree (TT), to construct lower units such as coding units.
  • Each coding tree unit may also be partitioned as a MultiType Tree (MTT) using one or more partitioning schemes.
  • QT: quad tree
  • BT: binary tree
  • TT: ternary tree
  • MTT: MultiType Tree
  • the CTU can be used as a term to refer to a pixel block, which is a processing unit in the process of decoding and encoding an image, as in the segmentation of an input image.
  • a coding tree block can be used as a term for designating any one of a Y coding tree block, a Cb coding tree block, and a Cr coding tree block.
  • a neighboring block may mean a block adjacent to a target block.
  • a neighboring block may mean a restored neighboring block.
  • The terms "neighboring block" and "adjacent block" may be used with the same meaning and may be used interchangeably.
  • a spatial neighbor block may be a block spatially adjacent to a target block.
  • the neighboring block may include a spatial neighboring block.
  • the target block and the spatial neighboring block may be included in the target picture.
  • A spatial neighboring block may refer to a block whose boundary is in contact with the target block, or a block located within a predetermined distance from the target block.
  • a spatial neighboring block may mean a block adjacent to the vertex of the target block.
  • a block adjacent to a vertex of a target block may be a block vertically adjacent to a neighboring block horizontally adjacent to the target block or a block horizontally adjacent to a neighboring block vertically adjacent to the target block.
  • A temporal neighboring block may be a block temporally adjacent to the target block.
  • the neighboring blocks may include temporal neighboring blocks.
  • the temporal neighboring block may include a co-located block (col block).
  • The col block may be a block in a co-located picture (col picture).
  • The position of the col block in the col picture may correspond to the position of the target block in the target picture.
  • The position of the col block in the col picture may be the same as the position of the target block in the target picture.
  • The col picture may be a picture included in the reference picture list.
  • the temporal neighboring block may be a block temporally adjacent to the spatial neighboring block of the target block.
  • Prediction unit: a base unit for prediction, such as inter prediction, intra prediction, inter compensation, intra compensation, and motion compensation.
  • one prediction unit may be divided into a plurality of partitions or lower prediction units having a smaller size.
  • the plurality of partitions may also be a base unit in performing prediction or compensation.
  • the partition generated by the division of the prediction unit may also be a prediction unit.
  • Prediction unit partition may mean the form in which a prediction unit is divided.
  • the reconstructed neighboring unit may be a unit that has already been decoded and reconstructed around the target unit.
  • the reconstructed neighboring unit may be a spatial adjacent unit or a temporal adjacent unit for the target unit.
  • The reconstructed spatial neighboring unit may be a unit in the target picture that has already been reconstructed through encoding and/or decoding.
  • the reconstructed temporal neighboring unit may be a unit in the reference image and a unit already reconstructed through coding and / or decoding.
  • the position in the reference picture of the reconstructed temporal neighboring unit may be the same as the position in the target picture of the target unit or may correspond to the position in the target picture of the target unit.
  • a parameter set may correspond to header information among structures in a bitstream.
  • The parameter set may include a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), an adaptation parameter set (APS), and the like.
  • VPS: video parameter set
  • SPS: sequence parameter set
  • PPS: picture parameter set
  • APS: adaptation parameter set
  • the parameter set may include slice header information and tile header information.
  • Rate-distortion optimization: an encoding apparatus may use rate-distortion optimization to provide high coding efficiency by using combinations of the size of a coding unit, a prediction mode, the size of a prediction unit, and motion information.
  • the rate-distortion optimization scheme can calculate the rate-distortion cost of each combination to select the optimal combination from among the combinations above.
  • the rate-distortion cost can be calculated using Equation 1 below.
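Equation 1 itself is not reproduced in this excerpt. The conventional rate-distortion cost used in video coding, consistent with the definitions of D and R that follow, is (reconstructed here as an assumption):

$$ J = D + \lambda \cdot R \qquad \text{(Equation 1)} $$

where J is the rate-distortion cost, D the distortion, R the bit rate, and λ a Lagrange multiplier trading distortion off against rate.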
  • the combination in which the rate-distortion cost is minimized can be selected as the optimum combination in the rate-distortion optimization method.
  • D may be the mean square error of the difference values between the original transform coefficients and the reconstructed transform coefficients in the transform unit.
  • R can represent the bit rate using related context information.
  • R may include coding parameter information such as a prediction mode, motion information, and coded block flag, as well as bits generated by coding the transform coefficients.
  • The encoder may perform inter prediction and/or intra prediction, transform, quantization, entropy encoding, inverse quantization, and inverse transform to calculate the exact D and R. These processes can greatly increase the complexity of the encoding apparatus.
  • Bitstream: a bit string containing encoded image information.
  • a parameter set may correspond to header information among structures in a bitstream.
  • the parameter set may comprise at least one of a video parameter set, a sequence parameter set, a picture parameter set and an adaptation parameter set.
  • the parameter set may also include information of a slice header and information of a tile header.
  • Parsing may mean determining the value of a syntax element by entropy-decoding the bitstream. Alternatively, parsing may mean entropy decoding itself.
  • Symbol may mean at least one of a syntax element, a coding parameter, and a transform coefficient of an encoding target unit and/or a decoding target unit.
  • the symbol may mean a target of entropy encoding or a result of entropy decoding.
  • a reference picture may refer to an image that a unit refers to for inter prediction or motion compensation.
  • the reference picture may be an image including a reference unit referred to by the target unit for inter prediction or motion compensation.
  • The terms "reference picture" and "reference image" may be used with the same meaning and may be used interchangeably.
  • Reference picture list may be a list including one or more reference pictures used for inter-prediction or motion compensation.
  • The types of the reference picture list may include List Combination (LC), List 0 (L0), List 1 (L1), List 2 (L2), List 3 (L3), and the like.
  • One or more reference picture lists may be used for inter prediction.
  • An inter prediction indicator may indicate the direction of inter prediction for a target unit.
  • the inter prediction may be one of a unidirectional prediction and a bidirectional prediction.
  • the inter prediction indicator may indicate the number of reference images used when generating the prediction unit of the target unit.
  • the inter prediction indicator may mean the number of prediction blocks used for inter prediction or motion compensation for the target unit.
  • the reference picture index may be an index indicating a specific reference picture in the reference picture list.
  • Motion vector: a motion vector may be a two-dimensional vector used in inter prediction or motion compensation.
  • a motion vector may mean an offset between a target image and a reference image.
  • An MV can be expressed in the form (mv_x, mv_y), where mv_x represents the horizontal component and mv_y represents the vertical component.
  • the search area may be a two-dimensional area where an MV search is performed during inter prediction.
  • the size of the search area may be MxN.
  • M and N may be positive integers, respectively.
  • Motion vector candidate may mean a block that is a prediction candidate when a motion vector is predicted, or the motion vector of such a block.
  • the motion vector candidate may be included in the motion vector candidate list.
  • a motion vector candidate list may refer to a list constructed using one or more motion vector candidates.
  • Motion vector candidate index may mean an indicator that points to a motion vector candidate in the motion vector candidate list.
  • the motion vector candidate index may be an index of a motion vector predictor.
  • Motion information may mean information including at least one of a motion vector, a reference picture index, and an inter prediction indicator, as well as reference picture list information, a reference picture, a motion vector candidate, a motion vector candidate index, a merge candidate, and a merge index.
  • a merge candidate list can mean a list constructed using merge candidates.
  • a merge candidate can mean a spatial merge candidate, a temporal merge candidate, a combined merge candidate, a combined bi-prediction merge candidate, and a zero merge candidate.
  • The merge candidate may include motion information such as prediction type information, a reference picture index for each list, and a motion vector.
  • the merge index may be an indicator that indicates the merge candidate in the merge candidate list.
  • The merge index may indicate the reconstructed unit from which the merge candidate is derived, among the reconstructed units spatially adjacent to the target unit and the reconstructed units temporally adjacent to the target unit.
  • the merge index may indicate at least one of the motion information of the merge candidate.
  • the transform unit can be a base unit in residual signal coding and / or residual signal decoding such as transform, inverse transform, quantization, inverse quantization, transform coefficient coding and transform coefficient decoding.
  • One transform unit can be divided into a plurality of transform units of smaller size.
  • Scaling can refer to the process of multiplying the transform coefficient level by a factor.
  • Scaling may also be referred to as dequantization.
  • the quantization parameter may refer to a value used when generating a transform coefficient level for a transform coefficient in quantization.
  • the quantization parameter may mean a value used when generating the transform coefficient by scaling the transform coefficient level in the inverse quantization.
  • the quantization parameter may be a value mapped to a quantization step size.
  • A delta quantization parameter means the differential value between the predicted quantization parameter and the quantization parameter of the target unit.
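As an illustration of the relationship between the quantization parameter and the quantization step size, the sketch below uses the HEVC-style mapping Qstep = 2^((QP - 4) / 6); the text above does not fix a particular mapping, so this choice is an assumption.

```python
import numpy as np

def qstep(qp: int) -> float:
    """HEVC-style QP-to-step-size mapping (assumed, not from the patent)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """Transform coefficients -> quantized transform coefficient levels."""
    return np.round(coeffs / qstep(qp)).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Quantized levels -> reconstructed coefficients (scaling/dequantization)."""
    return levels * qstep(qp)

coeffs = np.array([[100.0, -31.5], [7.2, 0.4]])
levels = quantize(coeffs, qp=22)
recon = dequantize(levels, qp=22)   # lossy: recon only approximates coeffs
```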
  • a scan may mean a method of arranging the order of coefficients within a unit, block, or matrix. For example, arranging a two-dimensional array in a one-dimensional array form can be referred to as a scan. Alternatively, arranging the one-dimensional arrays in the form of a two-dimensional array may be referred to as a scan or an inverse scan.
  • the transform coefficient may be a coefficient value generated as a result of performing the transform in the encoding apparatus.
  • the transform coefficient may be a coefficient value generated by performing at least one of entropy decoding and inverse quantization in the decoding apparatus.
  • the quantized level or quantized transform coefficient level generated by applying the quantization to the transform coefficients or residual signals may also be included in the meaning of the transform coefficients.
  • a quantized level may mean a value generated by performing a quantization on a transform coefficient or a residual signal in an encoding apparatus.
  • the quantized level may be a value to be subjected to inverse quantization in performing inverse quantization in the decoding apparatus.
  • the quantized transform coefficient levels resulting from the transform and quantization can also be included in the meaning of the quantized levels.
  • Non-zero transform coefficient may mean a transform coefficient having a value other than zero or a transform coefficient level having a non-zero value.
  • the non-zero transform coefficient may mean a transform coefficient whose magnitude is not zero or a transform coefficient level whose magnitude is not zero.
  • a quantization matrix may mean a matrix used in a quantization process or a dequantization process to improve the subjective or objective image quality of an image.
  • the quantization matrix may also be referred to as a scaling list.
  • Quantization matrix coefficient may refer to each element in the quantization matrix.
  • the quantization matrix coefficient may also be referred to as a matrix coefficient.
  • Default matrix: a quantization matrix predefined in the encoder and the decoder.
  • Non-default matrix: a quantization matrix that is not predefined in the encoder and the decoder.
  • the non-default matrix may be signaled from the encoder to the decoder.
  • The MPM (Most Probable Mode) may indicate an intra prediction mode that is likely to be used for intra prediction of the target block.
  • The encoding apparatus and the decoding apparatus may determine one or more MPMs based on coding parameters related to the target block and attributes of entities related to the target block.
  • the encoder and decoder may determine one or more MPMs based on the intra prediction mode of the reference block.
  • the reference block may be plural.
  • the plurality of reference blocks may include a spatial neighboring block to the left of the target block and a spatial neighboring block to the top of the target block. That is to say, one or more different MPMs may be determined depending on which intra prediction modes are used for the reference blocks.
  • One or more MPMs may be determined in the same manner in the encoder and decoder. That is to say, the encoder and decoder can share an MPM list that includes the same one or more MPMs.
  • the MPM list may be a list containing one or more MPMs.
  • the number of one or more MPMs in the MPM list may be predetermined.
  • The MPM indicator can indicate, among the one or more MPMs in the MPM list, the MPM to be used for intra prediction of the target block.
  • the MPM indicator may be an index to the MPM list.
  • Since the MPM list is determined in the same manner in the encoder and the decoder, the MPM list itself may not need to be transmitted from the encoder to the decoder.
  • the MPM indicator may be signaled from the encoder to the decoder. As the MPM indicator is signaled, the decoding device may determine the MPM to be used for intra prediction of the target block among the MPMs in the MPM list.
  • The MPM usage indicator can indicate whether the MPM usage mode is to be used for prediction of the target block.
  • The MPM usage mode may be a mode in which the MPM to be used for intra prediction of the target block is determined using the MPM list.
  • The MPM usage indicator may be signaled from the encoding apparatus to the decoding apparatus.
  • Signaling may indicate that information is sent from the encoding device to the decoding device.
  • signaling may mean including information in a bitstream or recording medium.
  • the information signaled by the encoding apparatus may be used by the decoding apparatus.
  • FIG. 1 is a block diagram illustrating a configuration of an encoding apparatus to which the present invention is applied.
  • the encoding apparatus 100 may be an encoder, a video encoding apparatus, or an image encoding apparatus.
  • the video may include one or more images.
  • the encoding apparatus 100 may sequentially encode one or more images of the video.
  • An encoding apparatus 100 includes an inter prediction unit 110, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
  • the encoding apparatus 100 may perform encoding of a target image using an intra mode and / or an inter mode.
  • the encoding apparatus 100 can generate a bitstream including encoding information through encoding of a target image, and output the generated bitstream.
  • the generated bit stream can be stored in a computer-readable recording medium and can be streamed through a wired / wireless transmission medium.
  • When the intra mode is used as the prediction mode, the switch 115 can be switched to intra. When the inter mode is used as the prediction mode, the switch 115 can be switched to inter.
  • The encoding apparatus 100 may generate a prediction block for the target block. After the prediction block is generated, the encoding apparatus 100 can encode the residual between the target block and the prediction block.
  • the intra prediction unit 120 can use the pixels of the already coded / decoded block around the target block as a reference sample.
  • the intra prediction unit 120 can perform spatial prediction of a target block using a reference sample and generate prediction samples of a target block through spatial prediction.
  • the inter prediction unit 110 may include a motion prediction unit and a motion compensation unit.
  • In the motion estimation process, the motion prediction unit can search a reference image for the region that best matches the target block, and can derive a motion vector for the target block and the searched region.
  • The reference image may be stored in the reference picture buffer 190 when its encoding and/or decoding has been completed.
  • the motion compensation unit may generate a prediction block for a target block by performing motion compensation using a motion vector.
  • the motion vector may be a two-dimensional vector used for inter prediction.
  • the motion vector may also indicate an offset between the target image and the reference image.
  • the motion prediction unit and the motion compensation unit can generate a prediction block by applying an interpolation filter to a part of the reference image when the motion vector has a non-integer value.
  • The methods of motion prediction and motion compensation for a PU included in a CU may include a skip mode, a merge mode, an advanced motion vector prediction (AMVP) mode, and a current picture reference mode, and inter prediction or motion compensation may be performed according to each mode.
  • the subtracter 125 may generate a residual block which is a difference between the target block and the prediction block.
  • the residual block may be referred to as a residual signal.
  • the residual signal may mean a difference between the original signal and the prediction signal.
  • the residual signal may be a signal generated by transforming, quantizing, or transforming and quantizing the difference between the original signal and the prediction signal.
  • the residual block may be a residual signal for a block unit.
  • the transforming unit 130 may perform a transform on the residual block to generate a transform coefficient, and output the generated transform coefficient.
  • the transform coefficient may be a coefficient value generated by performing a transform on the residual block.
  • The transform unit 130 may use one of a plurality of predefined transform methods to perform the transform.
  • The predefined plurality of transform methods may include the Discrete Cosine Transform (DCT), the Discrete Sine Transform (DST), and the Karhunen-Loeve Transform (KLT).
  • DCT: Discrete Cosine Transform
  • DST: Discrete Sine Transform
  • KLT: Karhunen-Loeve Transform
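Of the listed transforms, the 2D DCT is easy to sketch with SciPy; the residual-block contents and the 8x8 size are illustrative:

```python
import numpy as np
from scipy.fft import dctn, idctn

residual = np.random.randn(8, 8)               # residual block (illustrative)
coeffs = dctn(residual, type=2, norm='ortho')  # forward 2D DCT-II
restored = idctn(coeffs, type=2, norm='ortho') # inverse transform
assert np.allclose(restored, residual)
```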
  • The transform method used for transforming the residual block may be determined according to at least one of the coding parameters for the target block and/or a neighboring block.
  • the transformation method may be determined based on at least one of an inter prediction mode for the PU, an intra prediction mode for the PU, a size of the TU, and a type of the TU.
  • Transform information indicating the transform method may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
  • the transforming unit 130 may omit the transform for the residual block.
  • a quantized transform coefficient level or a quantized level can be generated by applying quantization to the transform coefficients.
  • the quantized transform coefficient level and the quantized level may also be referred to as a transform coefficient.
  • the quantization unit 140 may generate a quantized transform coefficient level (i.e., a quantized level or a quantized coefficient) by quantizing the transform coefficient in accordance with the quantization parameter.
  • the quantization unit 140 may output the generated quantized transform coefficient levels. At this time, the quantization unit 140 can quantize the transform coefficient using the quantization matrix.
  • The entropy encoding unit 150 can generate a bitstream by performing entropy encoding according to a probability distribution based on the values calculated by the quantization unit 140 and/or the coding parameter values calculated in the encoding process.
  • the entropy encoding unit 150 may output the generated bitstream.
  • the entropy encoding unit 150 may perform entropy encoding on information about pixels of an image and information for decoding an image.
  • the information for decoding the image may include a syntax element or the like.
  • When entropy coding is applied, a small number of bits can be assigned to a symbol with a high probability of occurrence, and a large number of bits can be assigned to a symbol with a low probability of occurrence. As symbols are represented through this allocation, the size of the bit string for the symbols to be encoded can be reduced. Therefore, the compression performance of image encoding can be improved through entropy encoding.
  • For entropy encoding, the entropy encoding unit 150 may use coding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • the entropy encoding unit 150 may perform entropy encoding using a Variable Length Coding / Code (VLC) table.
  • VLC: Variable Length Coding/Code
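Among the listed entropy coding methods, exponential Golomb is simple enough to sketch: frequent (small) values receive short codewords and rare (large) values receive long ones. This is the standard order-0 exp-Golomb code for unsigned integers, not an implementation taken from the patent.

```python
def exp_golomb_encode(v: int) -> str:
    """Order-0 exponential-Golomb codeword for an unsigned integer v >= 0."""
    bits = bin(v + 1)[2:]                 # binary representation of v + 1
    return '0' * (len(bits) - 1) + bits   # leading zeros, then the value

for v in range(5):
    print(v, exp_golomb_encode(v))  # 0->'1', 1->'010', 2->'011', 3->'00100', ...
```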
  • the entropy encoding unit 150 may derive a binarization method for a target symbol.
  • the entropy encoding unit 150 may derive a probability model of a target symbol / bin.
  • the entropy encoding unit 150 may perform arithmetic encoding using the derived binarization method, the probability model, and the context model.
  • The entropy encoding unit 150 may change the coefficients of a two-dimensional block form into a one-dimensional vector form through a transform coefficient scanning method in order to encode the quantized transform coefficient levels.
  • the coding parameters may be information required for coding and / or decoding.
  • the coding parameters may include information that is encoded in the encoding apparatus 100 and transferred from the encoding apparatus 100 to the decoding apparatus, and may include information that can be inferred in the encoding or decoding process. For example, as information transmitted to the decoding apparatus, there is a syntax element.
  • Coding parameters may include not only information (or flags, indexes, etc.), such as syntax elements, that is encoded by the encoding apparatus and signaled from the encoding apparatus to the decoding apparatus, but also information derived in the encoding or decoding process.
  • the coding parameters may include information required in coding or decoding an image.
  • the primary transformation selection information may represent a primary transformation applied to the target block.
  • the secondary transformation selection information may represent a quadratic transformation applied to the target block.
  • the residual signal may represent a difference between the original signal and the prediction signal.
  • the residual signal may be a signal generated by transforming the difference between the original signal and the prediction signal.
  • the residual signal may be a signal generated by converting and quantizing the difference between the original signal and the prediction signal.
  • the residual block may be a residual signal for the block.
  • Signaling a flag or an index may mean that the encoding apparatus 100 includes, in a bitstream, the entropy-encoded flag or the entropy-encoded index generated by performing entropy encoding on the flag or the index, and that the decoding apparatus 200 obtains the flag or the index by performing entropy decoding on the entropy-encoded flag or the entropy-encoded index extracted from the bitstream.
  • The encoded target image can be used as a reference image for other image(s) to be processed later. Accordingly, the encoding apparatus 100 can reconstruct or decode the encoded target image and store the reconstructed or decoded image as a reference image in the reference picture buffer 190. For this decoding, inverse quantization and inverse transform of the encoded target image can be performed.
  • the quantized level may be inversely quantized in the inverse quantization unit 160 and may be inversely transformed in the inverse transformation unit 170.
  • the inverse quantization unit 160 may generate inverse quantized coefficients by performing inverse quantization on the quantized levels.
  • the inverse transform unit 170 performs inverse transform on the inversely quantized coefficients so that the reconstructed residual block can be generated. That is to say, the reconstructed residual block may be inverse quantized and inverse transformed coefficients.
  • the dequantized and inverse transformed coefficients may be combined with a prediction block via an adder 175.
  • a reconstructed block may be generated by summing the dequantized and / or inverse transformed coefficients and the prediction block.
  • the dequantized and / or inverse transformed coefficient may mean a coefficient on which at least one of dequantization and inverse-transformation is performed, and may mean a reconstructed residual block.
  • the reconstructed block may pass through filter portion 180.
  • The filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), an adaptive loop filter (ALF), and a non-local filter (NLF) to the reconstructed block or the reconstructed picture.
  • the filter unit 180 may be referred to as an in-loop filter.
  • The deblocking filter can remove block distortion occurring at the boundary between blocks. Whether to apply the deblocking filter to a target block may be determined based on the pixels included in a few columns or rows of the block.
  • the applied filter may differ depending on the strength of the required deblocking filtering. In other words, a filter determined according to the strength of deblocking filtering among different filters can be applied to the target block.
  • a deblocking filter is applied to a target block, one of a strong filter and a weak filter may be applied to the target block according to the strength of the required deblocking filtering.
  • horizontal filtering and vertical filtering can be processed in parallel.
  • SAO may add an appropriate offset to the pixel value of the pixel to compensate for coding errors.
  • For an image to which deblocking has been applied, SAO can correct, in units of pixels, the difference from the original image using an offset.
  • A method of dividing the pixels included in an image into a predetermined number of regions, determining a region to be offset among the divided regions, and applying the offset to the determined region may be used, or a method of applying an offset in consideration of the edge information of each pixel of the image may be used.
  • ALF can perform filtering based on the comparison of the reconstructed image and the original image. After dividing the pixels included in the image into predetermined groups, a filter to be applied to each divided group can be determined, and different filtering can be performed for each group. For a luma signal, information related to whether or not to apply an adaptive loop filter may be signaled per CU. The shape and filter coefficients of the ALF to be applied to each block may be different for each block. Alternatively, regardless of the characteristics of the block, a fixed form of ALF may be applied to the block.
  • the non-local filter can perform filtering based on reconstructed blocks similar to the target block.
  • a region similar to the target block can be selected, and the filtering of the target block can be performed using the statistical properties of the selected similar regions.
  • Information related to whether or not to apply a non-local filter may be signaled to the CU.
  • the shapes of the non-local filters to be applied to the blocks and the filter coefficients may differ from each other depending on the blocks.
  • the reconstructed block or reconstructed image through the filter unit 180 may be stored in the reference picture buffer 190.
  • the reconstructed block through the filter unit 180 may be part of the reference picture. That is to say, the reference picture may be a reconstructed picture composed of reconstructed blocks via the filter unit 180.
  • the stored reference picture can then be used for inter prediction.
  • FIG. 2 is a block diagram illustrating a configuration of a decoding apparatus to which the present invention is applied.
  • the decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus.
  • The decoding apparatus 200 includes an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an intra prediction unit 240, an inter prediction unit 250, a switch 245, an adder 255, a filter unit 260, and a reference picture buffer 270.
  • The decoding apparatus 200 can receive the bitstream output from the encoding apparatus 100. The decoding apparatus 200 can receive a bitstream stored in a computer-readable recording medium, and can receive a bitstream streamed through a wired/wireless transmission medium.
  • the decoding apparatus 200 may perform decoding of an intra mode and / or an inter mode with respect to a bit stream.
  • the decoding apparatus 200 can generate a reconstructed image or a decoded image through decoding, and output the reconstructed image or the decoded image.
  • Switching to the intra mode or the inter mode according to the prediction mode used for decoding may be performed by the switch 245. When the prediction mode used for decoding is the intra mode, the switch 245 can be switched to intra. When the prediction mode used for decoding is the inter mode, the switch 245 can be switched to inter.
  • the decoding apparatus 200 can obtain a reconstructed residual block by decoding the input bitstream, and can generate a prediction block. Once the reconstructed residual block and the prediction block are obtained, the decoding apparatus 200 can generate the reconstructed block to be decoded by adding the reconstructed residual block and the prediction block.
  • the entropy decoding unit 210 may generate the symbols by performing entropy decoding on the bitstream based on the probability distribution of the bitstream.
  • the generated symbols may include symbols in the form of a quantized transform coefficient level (i.e., a quantized level or a quantized coefficient).
  • the entropy decoding method may be similar to the above-described entropy encoding method.
  • the entropy decoding method may be the inverse of the above-described entropy encoding method.
  • the entropy decoding unit 210 may change the coefficient of the one-dimensional vector form into a two-dimensional block form through a transform coefficient scanning method to decode the quantized transform coefficient levels.
  • The coefficients may be changed into a two-dimensional block form by scanning the coefficients of the block using an up-right diagonal scan.
  • It may be determined which of the up-right diagonal scan, the vertical scan, and the horizontal scan will be used.
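A sketch of the up-right diagonal scan mentioned above, mapping a 2D block of quantized levels to a 1D vector and back. The 4x4 size is illustrative, and exact scan orders vary between codecs, so treat the traversal below as an assumption:

```python
import numpy as np

def up_right_diagonal_order(n: int):
    """Scan positions for an n x n block: each anti-diagonal is traversed
    from bottom-left to top-right."""
    order = []
    for d in range(2 * n - 1):
        for y in range(min(d, n - 1), max(0, d - n + 1) - 1, -1):
            order.append((y, d - y))
    return order

def scan(block: np.ndarray) -> np.ndarray:
    """2D block -> 1D vector (encoder side, before entropy encoding)."""
    return np.array([block[y, x] for y, x in up_right_diagonal_order(block.shape[0])])

def inverse_scan(vec: np.ndarray, n: int) -> np.ndarray:
    """1D vector -> 2D block (decoder side, after entropy decoding)."""
    block = np.zeros((n, n), dtype=vec.dtype)
    for value, (y, x) in zip(vec, up_right_diagonal_order(n)):
        block[y, x] = value
    return block

levels = np.arange(16).reshape(4, 4)
assert np.array_equal(inverse_scan(scan(levels), 4), levels)
```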
  • the quantized coefficients may be inversely quantized in the inverse quantization unit 220.
  • the inverse quantization unit 220 may generate inverse quantized coefficients by performing inverse quantization on the quantized coefficients.
  • the inverse quantized coefficient may be inversely transformed by the inverse transform unit 230.
  • the inverse transform unit 230 may generate the reconstructed residual block by performing an inverse transform on the inversely quantized coefficient.
  • the reconstructed residual block can be generated.
  • the inverse quantization unit 220 may apply the quantization matrix to the quantized coefficients in generating the reconstructed residual block.
  • The intra prediction unit 240 can generate a prediction block by performing spatial prediction using the pixel values of already decoded blocks around the target block.
  • the inter prediction unit 250 may include a motion compensation unit.
  • the inter prediction unit 250 may be named as a motion compensation unit.
  • the motion compensation unit may generate a prediction block by performing motion compensation using a motion vector and a reference image stored in the reference picture buffer 270.
  • the motion compensation unit can apply an interpolation filter to a part of the reference image and generate a prediction block using the reference image to which the interpolation filter is applied.
  • The motion compensation unit may determine which of the skip mode, the merge mode, the AMVP mode, and the current picture reference mode is used as the motion compensation method for the PU included in the CU, and may perform motion compensation according to the determined mode.
  • the reconstructed residual block and the prediction block may be added through an adder 255.
  • the adder 255 may generate the reconstructed block by adding the reconstructed residual block and the prediction block.
  • the reconstructed block may pass through filter portion 260.
  • the filter unit 260 may apply at least one of the deblocking filter, SAO, ALF, and non-local filter to the reconstructed block or the reconstructed image.
  • the reconstructed image may be a picture including a reconstructed block.
  • The reconstructed image that has passed through the filter unit 260 can be output by the decoding apparatus 200 and can be used by the decoding apparatus 200.
  • the reconstructed image through the filter unit 260 can be stored in the reference picture buffer 270 as a reference picture.
  • the reconstructed block through the filter unit 260 may be part of the reference picture. That is to say, the reference picture may be an image composed of reconstructed blocks through the filter unit 260.
  • the stored reference picture may then be used for inter prediction.
  • FIG. 3 is a diagram schematically showing a division structure of an image when coding and decoding an image.
  • FIG. 3 schematically shows an example in which one unit is divided into a plurality of lower units.
  • a unit may be a term collectively referred to as 1) a block containing image samples and 2) a syntax element.
  • "Division of a unit" may mean "division of a block corresponding to the unit".
  • the CU can be used as a base unit of image encoding / decoding. Also, the CU can be used as a unit to which one of the intra mode and the inter mode is applied in image encoding / decoding. That is to say, in the image coding / decoding, it is possible to determine which of intra mode and inter mode is applied to each CU.
  • the CU may also be a base unit for prediction, transform, quantization, inverse transform, dequantization, and encoding / decoding of transform coefficients.
  • An image 300 may be sequentially divided in units of a Largest Coding Unit (LCU), and a partition structure may be determined for each LCU.
  • LCU: Largest Coding Unit
  • The LCU can be used in the same sense as a Coding Tree Unit (CTU).
  • CTU: coding tree unit
  • the division of a unit may mean division of a block corresponding to the unit.
  • the block partitioning information may include depth information about the depth of the unit.
• the depth information may indicate the degree and / or the number of times the unit is divided.
  • One unit may be hierarchically subdivided with depth information based on a tree structure. Each divided subunit may have depth information.
  • the depth information may be information indicating the size of the CU. Depth information can be stored for each CU.
  • Each CU can have depth information. If the CU is partitioned, the CUs generated by partitioning may have an increased depth by one in the depth of the partitioned CU.
• the partition structure may mean the distribution of CUs for efficiently encoding the image within the LCU 310. This distribution can be determined depending on whether or not one CU is to be divided into a plurality of CUs.
• the number of divided CUs may be a positive integer of two or more, such as 2, 4, 8, and 16.
  • the horizontal size and the vertical size of the CU generated by the division may be smaller than the horizontal size and the vertical size of the CU before division according to the number of CUs generated by the division.
  • the divided CUs can be recursively divided into a plurality of CUs in the same manner.
  • the size of at least one of the horizontal and vertical sizes of the partitioned CUs can be reduced compared to at least one of the horizontal and vertical sizes of the CUs before partitioning.
  • the partitioning of the CU can be done recursively up to a predetermined depth or a predetermined size.
  • the depth of the CU may have a value from 0 to 3.
  • the size of the CU may range from 64x64 to 8x8 depending on the depth of the CU.
  • the depth of the LCU may be zero, and the depth of the Smallest Coding Unit (SCU) may be a predetermined maximum depth.
  • the LCU may be a CU having a maximum coding unit size as described above, and the SCU may be a CU having a minimum coding unit size.
  • the partitioning may be started from the LCU 310 and the depth of the CU may increase by one each time the horizontal and / or vertical size of the CU is reduced by partitioning.
  • the unpartitioned CU may have a size of 2Nx2N.
  • a CU having a size of 2Nx2N can be divided into four CUs having an NxN size. The size of N can be reduced by half each time the depth is increased by one.
• an LCU having a depth of 0 may be 64x64 pixels or a 64x64 block. 0 may be the minimum depth.
• an SCU having a depth of 3 may be 8x8 pixels or an 8x8 block. 3 may be the maximum depth.
• the CU of the 64x64 block, which is the LCU, can be represented by depth 0.
  • the CU of a 32x32 block can be represented by a depth of one.
  • the CU of a 16x16 block can be represented by a depth of two.
  • the CU of an 8x8 block that is an SCU can be represented by a depth of 3.
  • the division information may be 1-bit information. All CUs except SCU can contain partition information. For example, the value of the partition information of the unpartitioned CU may be 0, and the value of the partition information of the partitioned CU may be 1.
• when one CU is divided into four CUs, the horizontal size and the vertical size of each of the four divided CUs may be half of the horizontal size and half of the vertical size of the CU before division.
• for example, when a 32x32 CU is divided into four CUs, the sizes of the four divided CUs may be 16x16.
• when one CU is divided into two CUs, the horizontal size or the vertical size of each of the two divided CUs may be half of the horizontal size or half of the vertical size of the CU before division.
• for example, when a 32x32 CU is vertically divided into two CUs, the sizes of the two divided CUs may be 16x32.
• for example, when a 32x32 CU is horizontally divided into two CUs, the sizes of the two divided CUs may be 32x16.
• both quad-tree type partitioning and binary-tree type partitioning may be applied together.
• a 64x64 Coding Tree Unit can be divided into a number of smaller CUs by a recursive quad-tree structure.
  • One CU may be divided into four CUs having the same sizes.
  • CUs can be recursively partitioned, and each CU can have a quadtree structure.
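• A minimal Python sketch of the recursive quad-tree partitioning described above; the split_decision callback is hypothetical and stands in for the encoder's actual decision logic:

```python
def quadtree_partition(x, y, size, depth, max_depth, split_decision):
    """Recursively divide a CU into four equal sub-CUs (quad-tree sketch).

    `split_decision` is a hypothetical callback standing in for the
    encoder's actual choice (e.g. a rate-distortion decision).
    """
    if depth < max_depth and split_decision(x, y, size, depth):
        half = size // 2
        leaves = []
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            leaves += quadtree_partition(x + dx, y + dy, half,
                                         depth + 1, max_depth, split_decision)
        return leaves
    return [(x, y, size, depth)]  # leaf CU: size = 64 >> depth when the LCU is 64x64

# Example: split everything down to the 8x8 SCU (depth 3) from a 64x64 LCU (depth 0).
leaves = quadtree_partition(0, 0, 64, 0, 3, lambda x, y, s, d: s > 8)
```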
• FIG. 4 is a diagram showing forms of a prediction unit (PU) that a coding unit (CU) can include.
  • a CU that is not further divided among the CUs divided from the LCU may be divided into one or more Prediction Units (PUs).
  • the PU can be a base unit for prediction.
  • the PU may be coded and decoded in either a skip mode, an inter mode, or an intra mode.
  • the PU can be divided into various forms according to each mode.
  • the target block described above with reference to FIG. 1 and the target block described above with reference to FIG. 2 may be a PU.
  • the CU may not be divided into PUs. If the CU is not partitioned into PUs, the size of the CU and the size of the PU may be the same.
• in the skip mode, there may be no division within the CU.
• in the skip mode, the 2Nx2N mode 410, in which the sizes of the PU and the CU are the same without division, may be supported.
• in the inter mode, eight divided forms within the CU may be supported.
• for example, in the inter mode, the 2Nx2N mode 410, the 2NxN mode 415, the Nx2N mode 420, the NxN mode 425, the 2NxnU mode 430, the 2NxnD mode 435, the nLx2N mode 440, and the nRx2N mode 445 may be supported.
• in the intra mode, the 2Nx2N mode 410 and the NxN mode 425 may be supported.
• in the 2Nx2N mode, a PU of size 2Nx2N may be encoded.
  • a PU of size 2Nx2N can mean a PU of the same size as a CU.
  • a PU of size 2Nx2N may have a size of 64x64, 32x32, 16x16, or 8x8.
• in the NxN mode, a PU of size NxN can be encoded.
• for example, when the size of the CU is 8x8, four divided PUs can be encoded.
• the size of each divided PU may be 4x4.
• when the PU is encoded in the intra mode, the PU may be encoded using one of a plurality of intra prediction modes.
  • the High Efficiency Video Coding (HEVC) technique may provide 35 intra prediction modes, and the PU may be coded into one of the 35 intra prediction modes.
• which of the 2Nx2N mode 410 and the NxN mode 425 is used to encode the PU can be determined based on rate-distortion cost.
  • the encoding apparatus 100 can perform the encoding operation on the 2Nx2N size PU.
  • the encoding operation may be to encode the PU in each of a plurality of intra prediction modes that the encoding apparatus 100 can use.
  • the optimal intra prediction mode for the 2Nx2N size PU can be derived through the encoding operation.
  • the optimal intra prediction mode may be an intra prediction mode in which a minimum rate-distortion cost is incurred for encoding 2Nx2N sized PUs among a plurality of intra prediction modes available for use by the encoding apparatus 100.
  • the encoding apparatus 100 can sequentially perform encoding operations on each PU of PUs divided into NxN.
  • the encoding operation may be to encode the PU in each of a plurality of intra prediction modes that the encoding apparatus 100 can use.
  • An optimal intra prediction mode for an NxN size PU can be derived through an encoding operation.
  • the optimal intra prediction mode may be an intra prediction mode in which a minimum rate-distortion cost is incurred for encoding of NxN-sized PUs among a plurality of intra prediction modes available for use by the encoding apparatus 100.
  • the encoding apparatus 100 may determine which of 2Nx2N sized PU and NxN sized PUs to encode based on a comparison of the rate-distortion cost of the 2Nx2N sized PU and the rate-distortion costs of the NxN sized PUs.
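• A minimal sketch of this rate-distortion comparison, assuming the costs of each candidate partition have already been measured (the function name and cost values are illustrative):

```python
def choose_intra_pu_partition(cost_2nx2n, costs_nxn):
    """Pick between one 2Nx2N PU and four NxN PUs by total rate-distortion cost."""
    total_nxn = sum(costs_nxn)  # the four NxN PUs are encoded and costed separately
    if cost_2nx2n <= total_nxn:
        return "2Nx2N", cost_2nx2n
    return "NxN", total_nxn

# Example: the single 2Nx2N PU wins if its cost does not exceed the summed NxN costs.
mode, cost = choose_intra_pu_partition(120.0, [35.0, 30.0, 40.0, 25.0])
```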
  • One CU may be divided into one or more PUs, and a PU may be divided into a plurality of PUs.
• when one PU is divided into four PUs, the horizontal size and the vertical size of each of the four divided PUs may be half of the horizontal size and half of the vertical size of the PU before division.
  • the sizes of the 4 divided PUs may be 16x16.
• when one PU is divided into two PUs, the horizontal size or the vertical size of each of the two divided PUs may be half of the horizontal size or half of the vertical size of the PU before division.
  • the sizes of the two divided PUs may be 16x32.
  • the sizes of the two divided PUs may be 32x16.
• FIG. 5 is a diagram showing forms of a transform unit (TU) that can be included in a coding unit (CU).
  • a Transform Unit can be a basic unit used for transform, quantization, inverse transform, inverse quantization, entropy coding, and entropy decoding processes in a CU.
  • the TU may have a square shape or a rectangular shape.
  • the form of the TU may be determined depending on the size and / or shape of the CU.
• a CU that is no longer divided into further CUs may be divided into one or more TUs.
  • the partition structure of the TU may be a quad-tree structure.
  • one CU 510 may be divided one or more times according to the quad-tree structure.
  • one CU 510 can be composed of TUs of various sizes.
  • the CU can be considered to be recursively partitioned.
• through the partitioning, one CU can be composed of TUs of various sizes.
  • one CU may be divided into one or more TUs based on the number of vertical and / or horizontal lines dividing the CU.
  • the CU may be divided into symmetric TUs and may be divided into asymmetric TUs.
  • information about the size and / or type of the TU may be signaled from the encoding device 100 to the decoding device 200.
  • the size and / or shape of the TU may be derived from information about the size and / or shape of the CU.
  • the CU may not be divided into TUs. If the CU is not divided into TUs, the size of the CU and the size of the TU may be the same.
  • One CU may be divided into one or more TUs, and a TU may be divided into a plurality of TUs.
• when one TU is divided into four TUs, the horizontal size and the vertical size of each of the four divided TUs may be half of the horizontal size and half of the vertical size of the TU before division.
  • the sizes of the 4 TUs divided may be 16x16.
• when one TU is divided into two TUs, the horizontal size or the vertical size of each of the two divided TUs may be half of the horizontal size or half of the vertical size of the TU before division.
  • the sizes of the two divided TUs may be 16x32.
  • the sizes of the two TUs divided may be 32x16.
  • the CU may be divided in a manner other than that shown in Fig.
  • one CU may be divided into three CUs.
  • the horizontal size or the vertical size of the three divided CUs may be 1/4, 1/2 and 1/4 of the horizontal size or vertical size of the CU before division, respectively.
  • the sizes of the three divided CUs may be 8x32, 16x32, and 8x32, respectively.
• such division may be referred to as division in the form of a ternary tree (triple tree).
• For the partitioning of a CU, one of the illustrated quad-tree type partitioning, binary-tree type partitioning, and triple-tree type partitioning may be applied, and a plurality of partitioning schemes may be used in combination for partitioning the CU.
  • a case where a plurality of division methods are used in combination is referred to as a division of the form of a composite tree.
• FIG. 6 shows the partitioning of a block according to an example.
  • the target block may be divided as shown in FIG.
  • an indicator indicating the division information may be signaled from the coding apparatus 100 to the decoding apparatus 200.
  • the partition information may be information indicating how the target block is divided.
• the indicator may include a split_flag (a split flag), a quadtree_flag (a quad-tree flag), a binarytree_flag (a binary-tree flag), a QB_flag (a quad-binary flag), and a Btype_flag (a binary type flag).
  • the split_flag may be a flag indicating whether or not the block is divided. For example, the value 1 of split_flag may indicate that the block is partitioned. A value of 0 in split_flag may indicate that the block is not partitioned.
  • QB_flag may be a flag indicating whether the block is divided into a quad tree form or a binary tree form. For example, a value of 0 in QB_flag may indicate that the block is partitioned into a quadtree form. A value of 1 in QB_flag may indicate that the block is partitioned into a binary tree. Alternatively, a value of 0 in QB_flag may indicate that the block is partitioned into a binary tree form. A value of 1 in QB_flag may indicate that the block is partitioned into a quadtree form.
  • the quadtree_flag may be a flag indicating whether the block is divided into a quad tree form. For example, a value of 1 in quadtree_flag may indicate that the block is partitioned into a quadtree form. A value of 0 in the quadtree_flag may indicate that the block is not partitioned into quadtrees.
  • the binarytree_flag may be a flag indicating whether the block is divided into a binary tree form. For example, a value of 1 in the binarytree_flag may indicate that the block is partitioned into a binary tree. A value of binarytree_flag of 0 may indicate that the block is not partitioned into a binary tree.
  • Btype_flag may be a flag indicating whether the block is divided into a vertical division or a horizontal division when the block is divided into a binary tree form. For example, a value of 0 for Btype_flag may indicate that the block is divided horizontally. A value of 1 for Btype_flag may indicate that the block is vertically partitioned. Alternatively, a value of 0 for Btype_flag may indicate that the block is vertically split. A value of 1 for Btype_flag may indicate that the block is split horizontally.
  • the partition information for the block of FIG. 6 can be derived by signaling at least one of quadtree_flag, binarytree_flag, and Btype_flag as shown in Table 1 below.
  • the partition information for the block of FIG. 6 can be derived by signaling at least one of split_flag, QB_flag, and Btype_flag as shown in Table 2 below.
  • the partitioning method may be limited to a quadtree only according to the size and / or type of the block, or may be limited to a binary tree only.
  • the split_flag may be a flag indicating whether to split into a quadtree form or a flag indicating whether to divide it into a binary tree form.
  • the size and shape of the block may be derived according to the depth information of the block, and the depth information may be signaled from the encoding device 100 to the decoding device 200.
• partitioning may be possible only in a quad-tree form when the size of the block falls within a specified range. In this case, the specified range may be defined by at least one of a maximum block size and a minimum block size for which only quad-tree partitioning is possible.
  • the information indicating the maximum block size and / or the minimum block size that can be divided only in the quadtree form can be signaled from the encoding apparatus 100 to the decoding apparatus 200 through the bit stream.
  • such information may be signaled for at least one of a video, a sequence, a picture, and a slice (or segment).
  • the maximum block size and / or minimum block size may be a fixed size predefined in the encoding device 100 and the decoding device 200. For example, if the block size is greater than or equal to 64x64, and less than or equal to 256x256, only a quadtree-type partition may be possible.
• in this case, the split_flag may be a flag indicating whether the block is divided in a quad-tree form.
• similarly, partitioning may be possible only in a binary-tree form when the size of the block falls within a specified range. In this case, the specified range may be defined by at least one of a maximum block size and a minimum block size for which only binary-tree partitioning is possible.
  • Information indicating the maximum block size and / or minimum block size that can be divided only in the binary tree form can be signaled from the encoding apparatus 100 to the decoding apparatus 200 through the bit stream. This information may also be signaled for at least one of a sequence, a picture, and a slice (or segment).
  • the maximum block size and / or minimum block size may be a fixed size predefined in the encoding device 100 and the decoding device 200. For example, if the block size is greater than or equal to 8x8 and less than or equal to 16x16, then only a binary tree-like partition may be possible.
• in this case, the split_flag may be a flag indicating whether the block is divided in a binary-tree form.
  • the partitioning of the block can be limited by the previous partitioning. For example, when a block is divided into a binary tree form to generate a plurality of divided blocks, each divided block can be further divided into a binary tree form only.
  • the aforementioned indicator may not be signaled.
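• As an illustration, the following hypothetical function interprets the partition flags for one block, using the first of the two flag conventions stated above (Btype_flag 0 = horizontal division, 1 = vertical division):

```python
def interpret_partition_flags(quadtree_flag, binarytree_flag=0, btype_flag=0):
    """Interpret per-block partition flags (a sketch of one stated convention)."""
    if quadtree_flag == 1:
        return "quad-tree split into four blocks"
    if binarytree_flag == 1:
        # Btype_flag distinguishes the direction of a binary split.
        return "binary split, horizontal" if btype_flag == 0 else "binary split, vertical"
    return "no further split"
```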
  • FIG. 7 is a diagram for explaining an embodiment of an intra prediction process.
  • the arrows from the center to the outline of the graph of FIG. 7 may indicate the prediction directions of the intra-prediction modes.
  • the number indicated close to the arrow may represent an example of the mode value assigned to the prediction direction of the intra-prediction mode or the intra-prediction mode.
  • Intra coding and / or decoding may be performed using reference samples of the units around the target block.
• here, a neighboring block may be a reconstructed neighboring block.
  • intra-coding and / or decoding may be performed using values or coding parameters of reference samples included in the reconstructed neighboring blocks.
  • the encoding apparatus 100 and / or the decoding apparatus 200 can generate a prediction block by performing intra prediction on a target block based on information of samples in the target image.
  • the encoding apparatus 100 and / or the decoding apparatus 200 may perform directional prediction and / or non-directional prediction based on at least one reconstructed reference sample.
  • the prediction block may refer to a block generated as a result of performing intra prediction.
  • the prediction block may correspond to at least one of CU, PU, and TU.
  • the unit of the prediction block may be at least one of CU, PU, and TU.
  • the prediction block may have the form of a square having a size of 2Nx2N or a size of NxN.
  • the size of NxN may include 4x4, 8x8, 16x16, 32x32 and 64x64.
• the prediction block may be a square block having a size of 2x2, 4x4, 8x8, 16x16, 32x32, or 64x64, or may be a rectangular block having a size such as 2x8, 4x8, 2x16, or 4x16.
  • Intra prediction may be performed according to the intra prediction mode for the target block.
• the number of intra prediction modes that the target block can have may be a predefined fixed value, or may be a value determined differently according to the properties of the prediction block.
  • the attributes of the prediction block may include the size of the prediction block and the type of the prediction block.
  • the number of intra prediction modes can be fixed to 35 irrespective of the size of the prediction block.
  • the number of intra prediction modes may be 3, 5, 9, 17, 34, 35 or 36, and so on.
  • the intra prediction mode may be a non-directional mode or a directional mode.
  • the intra prediction mode may include two non-directional modes and 33 directional modes as shown in FIG.
  • the two non-directional modes may include a DC mode and a Planar mode.
  • the directional modes may be a prediction mode having a specific direction or a specific angle.
  • the intra prediction mode may be represented by at least one of a mode number, a mode value, and a mode angle.
• the number of intra prediction modes may be M. M may be at least one. That is to say, the number of intra prediction modes may be M, which includes the number of non-directional modes and the number of directional modes.
  • the number of intra prediction modes may be fixed to M, regardless of the size of the block and / or the color component.
  • the number of intra prediction modes can be fixed to either 35 or 67 regardless of the size of the block.
  • the number of intra prediction modes may differ depending on the size of the block and / or the type of color component.
• for example, the larger the block size, the larger the number of intra prediction modes may be.
• alternatively, the larger the block size, the smaller the number of intra prediction modes may be. If the block size is 4x4 or 8x8, the number of intra prediction modes may be 67. If the size of the block is 16x16, the number of intra prediction modes may be 35. If the block size is 32x32, the number of intra prediction modes may be 19. If the block size is 64x64, the number of intra prediction modes may be 7.
  • the number of intra prediction modes may be different depending on whether the color component is a luma signal or a chroma signal.
  • the number of intra prediction modes of the luma component block may be greater than the number of intra prediction modes of the chroma component block.
• for example, in the vertical mode, prediction can be performed in the vertical direction based on the pixel values of reference samples.
• in the horizontal mode, prediction can be performed in the horizontal direction based on the pixel values of reference samples.
• the encoding apparatus 100 and the decoding apparatus 200 can perform intra prediction on a target unit using reference samples according to the angle corresponding to a directional mode, even for directional modes other than the above-described modes.
  • the intra prediction mode located on the right side of the vertical mode may be referred to as a vertical-right mode.
• the intra prediction mode located below the horizontal mode may be referred to as a horizontal-below mode.
  • the intra prediction modes in which the mode value is one of 27, 28, 29, 30, 31, 32, 33, and 34 may be vertical right modes 613.
  • Intra prediction modes where the mode value is one of 2, 3, 4, 5, 6, 7, 8, and 9 may be horizontal lower modes 616.
  • the non-directional mode may include a DC mode and a planar mode.
  • the mode value of the DC mode may be one.
• the mode value of the planar mode may be zero.
  • the directional mode may include an angular mode.
  • the remaining modes except for the DC mode and the planar mode may be the directional mode.
• in the DC mode, a prediction block may be generated based on the average of the pixel values of a plurality of reference samples. For example, the value of a pixel of the prediction block may be determined based on the average of the pixel values of a plurality of reference samples.
  • the number of intra prediction modes described above and the mode value of each intra prediction mode may be exemplary only.
  • the number of intra prediction modes described above and the mode value of each intra prediction mode may be differently defined according to the embodiment, implementation and / or necessity.
• in order to perform intra prediction on the target block, a step of checking whether the samples included in reconstructed neighboring blocks can be used as reference samples of the target block may be performed.
• if, among the samples of the neighboring blocks, there is a sample that cannot be used as a reference sample of the target block, its value may be replaced with a value generated by copying and / or interpolation using at least one of the samples included in the reconstructed neighboring blocks. Once the generated value replaces the sample value, the sample may be used as a reference sample of the target block.
  • a filter may be applied to at least one of a reference sample or a prediction sample based on at least one of an intra prediction mode and a size of a target block.
  • the kind of filter applied to at least one of the reference sample and the prediction sample may be different depending on at least one of an intra prediction mode of a target block, a size of a target block, and a shape of a target block.
  • the type of the filter can be classified according to one or more of the number of filter taps, the value of the filter coefficient, and the filter strength.
• when the intra prediction mode is the planar mode, a weighted sum of the upper reference sample of the prediction target sample, the left reference sample of the prediction target sample, the upper-right reference sample of the target block, and the lower-left reference sample of the target block, depending on the position of the prediction target sample within the prediction block, may be used to generate the sample value of the prediction target sample.
• when the intra prediction mode is the DC mode, the average of the upper reference samples and the left reference samples of the target block may be used in generating the prediction block.
• filtering may be performed using the values of reference samples for specified rows or specified columns in the target block.
  • the specified rows may be one or more top rows adjacent to the reference sample.
  • the specified columns may be one or more left columns adjacent to the reference sample.
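• The following sketch illustrates planar and DC sample generation in the HEVC style described above; the reference-array indexing and the exact rounding shifts are assumptions of the sketch:

```python
import numpy as np

def planar_prediction(top, left, top_right, bottom_left, n):
    """HEVC-style planar prediction for an n x n block (sketch)."""
    shift = int(n).bit_length()  # log2(n) + 1 for power-of-two n
    pred = np.empty((n, n), dtype=np.int32)
    for y in range(n):
        for x in range(n):
            horiz = (n - 1 - x) * left[y] + (x + 1) * top_right
            vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            pred[y, x] = (horiz + vert + n) >> shift
    return pred

def dc_prediction(top, left, n):
    """DC mode: fill the block with the average of the top and left references."""
    dc = (int(np.sum(top[:n])) + int(np.sum(left[:n])) + n) >> int(n).bit_length()
    return np.full((n, n), dc, dtype=np.int32)
```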
  • a prediction block can be generated using the upper reference sample, the left reference sample, the upper right reference sample, and / or the lower left reference sample of the target block.
• interpolation in units of real numbers may be performed to generate the prediction samples described above.
  • the intra prediction mode of the target block can be predicted from the intra prediction mode of the neighboring block of the target block, and the information used for prediction can be entropy encoded / decoded.
• if the intra prediction modes of the target block and a neighboring block are the same, the fact that the intra prediction modes of the target block and the neighboring block are the same may be signaled using a predefined flag.
  • an indicator indicating an intra prediction mode that is the same as the intra prediction mode of the target block among the intra prediction modes of a plurality of neighboring blocks may be signaled.
  • the information of the intra prediction mode of the target block can be encoded and / or decoded using entropy encoding and / or decoding.
  • FIG. 8 is a view for explaining the positions of reference samples used in the intra prediction process.
• the reconstructed reference samples used for intra prediction of the target block may include lower-left reference samples 831, left reference samples 833, an upper-left corner reference sample 835, upper reference samples 837, and upper-right reference samples 839.
  • the left reference samples 833 may refer to reconstructed reference pixels adjacent to the left of the target block.
  • Top reference samples 837 may refer to a reconstructed reference pixel adjacent the top of the target block.
• the upper-left corner reference sample 835 may refer to a reconstructed reference pixel located at the upper-left corner of the target block.
  • the lower left reference samples 831 may refer to a reference sample located at the lower end of the left sample line among the samples located on the same line as the left sample line composed of the left reference samples 833.
• the upper-right reference samples 839 may refer to reference samples located to the right of the upper sample line, among the samples located on the same line as the upper sample line composed of the upper reference samples 837.
• there may be N lower-left reference samples 831, N left reference samples 833, N upper reference samples 837, and N upper-right reference samples 839, respectively.
  • a prediction block can be generated through intraprediction of a target block.
  • the generation of the prediction block may include determining the value of the pixels of the prediction block.
  • the size of the target block and the size of the prediction block may be the same.
  • the reference sample used for intra prediction of the target block may be changed depending on the intra prediction mode of the target block.
  • the direction of the intra-prediction mode may indicate a dependency between the reference samples and the pixels of the prediction block.
  • the value of the specified reference sample may be used as the value of one or more specified pixels of the prediction block.
  • the specified reference sample and the specified one or more pixels of the prediction block may be samples and pixels designated by a straight line in the direction of the intra prediction mode. That is to say, the value of the specified reference sample can be copied to the value of the pixel located in the reverse direction of the intra prediction mode.
  • the value of the pixel of the prediction block may be the value of the reference sample located in the direction of the intra-prediction mode with respect to the position of the pixel.
• for example, when the intra prediction mode of the target block is the vertical mode with a mode value of 26, the upper reference samples 837 may be used for intra prediction.
• when the intra prediction mode is the vertical mode, the value of a pixel of the prediction block may be the value of the reference sample located vertically above the position of the pixel.
  • top reference samples 837 that are near the top of the target block may be used for intra prediction.
  • the values of the pixels of a row of the prediction block may be the same as the values of the upper reference samples 837.
• when the intra prediction mode of the target block is the horizontal mode, the left reference samples 833 can be used for intra prediction.
  • the value of the pixel of the prediction block may be the value of the reference sample located horizontally on the left side of the pixel.
  • the left reference samples 833 to the left of the target block may be used for intra prediction.
  • the values of the pixels in a column of the prediction block may be the same as the values of the left reference samples 833.
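• A minimal sketch of the copying behavior of the vertical and horizontal modes described above (NumPy is used for brevity; the array layout is an assumption):

```python
import numpy as np

def vertical_mode(top_refs, n):
    # Each row of the prediction block repeats the n upper reference samples.
    return np.tile(np.asarray(top_refs[:n]), (n, 1))

def horizontal_mode(left_refs, n):
    # Each column of the prediction block repeats the n left reference samples.
    return np.tile(np.asarray(left_refs[:n]).reshape(n, 1), (1, n))
```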
• when the mode value of the intra prediction mode of the target block is 18, at least some of the left reference samples 833, the upper-left corner reference sample 835, and at least some of the upper reference samples 837 may be used for intra prediction. If the mode value of the intra prediction mode is 18, the value of a pixel of the prediction block may be the value of the reference sample located diagonally to the upper left of the pixel.
  • At least some of the upper right reference samples 839 may be used for intra prediction.
  • At least a part of the lower left reference samples 831 may be used for intra prediction.
  • upper left corner reference sample 835 may be used for intra prediction.
  • the reference sample used to determine the pixel value of one pixel of the prediction block may be one, or may be two or more.
• the pixel value of a pixel of the prediction block may be determined according to the position of the pixel and the position of the reference sample indicated by the direction of the intra prediction mode. If the position of the reference sample indicated by the direction of the intra prediction mode from the position of the pixel is an integer position, the value of the single reference sample at the integer position can be used to determine the pixel value of the pixel of the prediction block.
• if the position of the reference sample indicated by the direction of the intra prediction mode is not an integer position, an interpolated reference sample may be generated based on the two reference samples closest to the indicated position.
• the value of the interpolated reference sample may be used to determine the pixel value of the pixel of the prediction block. That is, when the position of the pixel of the prediction block and the direction of the intra prediction mode point between two reference samples, an interpolated value is generated based on the values of the two samples.
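• A sketch of such two-sample interpolation, assuming 1/32-sample fractional precision in the style of HEVC angular prediction (the precision and rounding are assumptions):

```python
def interpolated_reference(ref_line, idx, frac, frac_bits=5):
    """Linearly interpolate between the two nearest reference samples (sketch).

    `frac` is the fractional offset in 1/2^frac_bits sample units; the 1/32
    precision (frac_bits=5) is an assumption for illustration.
    """
    one = 1 << frac_bits
    return ((one - frac) * ref_line[idx]
            + frac * ref_line[idx + 1]
            + (one >> 1)) >> frac_bits
```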
  • the prediction block generated by the prediction may not be the same as the original target block. That is, there may be a prediction error which is a difference between the target block and the prediction block, and a prediction error may exist between the pixels of the target block and the pixels of the prediction block.
  • Filtering for the prediction block may be used to reduce the prediction error.
  • the filtering may be adaptively applying a filter to an area of the prediction block that is considered to have a large prediction error.
  • the region considered as having a large prediction error may be the boundary of the prediction block.
• depending on the intra prediction mode, the region regarded as having a large prediction error within the prediction block may differ, and the characteristics of the filter to be applied may differ accordingly.
  • FIG. 9 is a diagram for explaining an embodiment of the inter prediction process.
  • the rectangle shown in FIG. 9 may represent an image (or a picture).
  • arrows may indicate the prediction direction. That is, the image can be encoded and / or decoded according to the prediction direction.
  • Each image can be classified into an I picture (Intra Picture), a P picture (Uni-prediction Picture), and a B picture (Bi-prediction Picture) according to the coding type.
  • Each picture can be coded and / or decoded according to the coding type of each picture.
• when the target image to be encoded is an I picture, the target image can be encoded using the data within the image itself, without inter prediction referring to other images.
  • an I-picture can be encoded only by intra prediction.
• when the target image is a P picture, the target image can be encoded via inter prediction using a reference picture existing in only one direction.
  • the unidirectional may be forward or reverse.
• when the target image is a B picture, the target image can be encoded via inter prediction using reference pictures existing in both directions, or via inter prediction using reference pictures existing in one of the forward and backward directions. Here, the two directions may be the forward direction and the backward direction.
  • P-pictures and B-pictures that are encoded and / or decoded using reference pictures can be regarded as pictures in which inter-prediction is used.
  • Inter prediction can be performed using motion information.
  • the encoding apparatus 100 can perform inter prediction and / or motion compensation on a target block.
  • the decoding apparatus 200 may perform inter-prediction and / or motion compensation corresponding to inter-prediction and / or motion compensation in the encoding apparatus 100 with respect to a target block.
  • the motion information for the target block can be derived during inter-prediction by the encoding apparatus 100 and the decoding apparatus 200, respectively.
• the motion information may be derived using the motion information of a reconstructed neighboring block, the motion information of a col block, and / or the motion information of a block adjacent to the col block.
• the encoding apparatus 100 or the decoding apparatus 200 may perform prediction and / or motion compensation by using the motion information of a spatial candidate and / or a temporal candidate as the motion information of the target block.
  • the target block may refer to a PU and / or PU partition.
  • the spatial candidate may be a reconstructed block spatially adjacent to the target block.
  • the temporal candidate may be a reconstructed block corresponding to a target block in a collocated picture (col picture) that has already been reconstructed.
  • the coding apparatus 100 and the decoding apparatus 200 can improve coding efficiency and decoding efficiency by using motion information of spatial candidates and / or temporal candidates.
  • the motion information of the spatial candidate may be referred to as spatial motion information.
  • the temporal candidate motion information may be referred to as temporal motion information.
  • the motion information of the spatial candidate may be the motion information of the PU including the spatial candidate.
  • the motion information of the temporal candidate may be the motion information of the PU including the temporal candidate.
  • the motion information of the candidate block may be the motion information of the PU including the candidate block.
  • Inter prediction can be performed using a reference picture.
  • the reference picture may be at least one of a previous picture of a target picture or a subsequent picture of a target picture.
  • the reference picture may refer to an image used for prediction of a target block.
  • an area in a reference picture can be specified by using a reference picture index (or refIdx) indicating a reference picture and a motion vector or the like to be described later.
  • the specified area in the reference picture may indicate a reference block.
  • Inter prediction can select a reference picture and can select a reference block corresponding to a target block in a reference picture.
  • the inter prediction can generate a prediction block for a target block using the selected reference block.
  • the motion information may be derived during inter-prediction by the encoding apparatus 100 and the decoding apparatus 200, respectively.
  • the spatial candidate may be 1) existing in the target picture, 2) already reconstructed through encoding and / or decoding, and 3) adjacent to the target block or a block located at the corner of the target block.
• a block located at a corner of the target block may be a block vertically adjacent to a neighboring block that is horizontally adjacent to the target block, or a block horizontally adjacent to a neighboring block that is vertically adjacent to the target block.
  • the "block located at the corner of the target block” may have the same meaning as "the block adjacent to the corner of the target block ".
  • the "block located at the corner of the target block” may be included in the "block adjacent to the target block ".
• the spatial candidate may be a reconstructed block located to the left of the target block, a reconstructed block located above the target block, a reconstructed block located at the lower-left corner of the target block, a reconstructed block located at the upper-right corner of the target block, or a reconstructed block located at the upper-left corner of the target block.
  • Each of the encoding apparatus 100 and the decoding apparatus 200 can identify a block existing in a position spatially corresponding to a target block in a col picture.
• the position of the target block in the target picture and the position of the identified block in the col picture may correspond to each other.
  • Each of the encoding apparatus 100 and the decoding apparatus 200 can determine a col block existing at a predetermined relative position with respect to the identified block as a temporal candidate.
  • the predetermined relative position may be a position inside the identified block and / or an outside position.
• the col block may include a first col block and a second col block.
• the first col block may be a block located at the coordinates (xP + nPSW, yP + nPSH).
• the second col block may be a block located at the coordinates (xP + (nPSW >> 1), yP + (nPSH >> 1)).
• the second col block may optionally be used when the first col block is unavailable.
• the motion vector of the target block may be determined based on the motion vector of the col block.
• each of the encoding apparatus 100 and the decoding apparatus 200 can scale the motion vector of the col block.
• the scaled motion vector of the col block can be used as the motion vector of the target block.
• the motion vector of the temporal candidate motion information stored in the list may be a scaled motion vector.
• the ratio of the motion vector of the target block to the motion vector of the col block may be the same as the ratio of the first distance to the second distance.
• the first distance may be the distance between the reference picture of the target block and the target picture.
• the second distance may be the distance between the reference picture of the col block and the col picture.
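• A minimal sketch of this scaling, where first_distance and second_distance are the two distances defined above (the rounding behavior and the tuple representation of motion vectors are assumptions):

```python
def scale_col_motion_vector(mv_col, first_distance, second_distance):
    """Scale the col block's motion vector by the ratio of the two distances.

    first_distance:  distance between the target block's reference picture
                     and the target picture.
    second_distance: distance between the col block's reference picture
                     and the col picture.
    """
    scale = first_distance / second_distance
    return round(mv_col[0] * scale), round(mv_col[1] * scale)

# Example: the motion halves when the target's reference is half as far away.
mv = scale_col_motion_vector((8, -4), first_distance=1, second_distance=2)  # (4, -2)
```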
  • the derivation method of the motion information can be changed according to the inter prediction mode of the target block.
• for example, inter prediction modes applied for inter prediction may include an Advanced Motion Vector Predictor (AMVP) mode and a merge mode.
  • merge mode may also be referred to as a motion merge mode.
  • the encoding apparatus 100 can search for similar blocks in the vicinity of the target block.
• the encoding apparatus 100 can obtain a prediction block by performing prediction on the target block using the motion information of the found similar block.
  • the encoding apparatus 100 may encode a residual block which is a difference between the target block and the prediction block.
• when the AMVP mode is used, each of the encoding apparatus 100 and the decoding apparatus 200 can generate a predicted motion vector candidate list using a spatial candidate motion vector, a temporal candidate motion vector, and a zero vector.
  • the predicted motion vector candidate list may include one or more predicted motion vector candidates. At least one of a spatial candidate motion vector, a temporal candidate motion vector, and a zero vector may be determined and used as a predicted motion vector candidate.
• the terms "predicted motion vector (candidate)" and "motion vector (candidate)" may have the same meaning and may be used interchangeably.
• the terms "predicted motion vector candidate" and "AMVP candidate" may have the same meaning and may be used interchangeably.
• the terms "predicted motion vector candidate list" and "AMVP candidate list" may have the same meaning and may be used interchangeably.
  • Spatial candidates may include reconstructed spatial neighboring blocks.
  • the motion vector of the reconstructed neighboring block may be referred to as a spatial prediction motion vector candidate.
• the temporal candidate may include a col block and a block adjacent to the col block.
• the motion vector of the col block or the motion vector of a block adjacent to the col block may be referred to as a temporal prediction motion vector candidate.
  • the zero vector may be a (0, 0) motion vector.
  • the predicted motion vector candidate may be a motion vector predictor for predicting the motion vector. Also, in the encoding apparatus 100, the predicted motion vector candidate may be a motion vector initial search position.
  • the encoding apparatus 100 may use the predicted motion vector candidate list to determine a motion vector to be used for encoding the target block within the search range. Also, the encoding apparatus 100 can determine a predicted motion vector candidate to be used as a predicted motion vector of a target block among predicted motion vector candidates of the predicted motion vector candidate list.
  • a motion vector to be used for coding a target block may be a motion vector that can be encoded at a minimum cost.
  • the encoding apparatus 100 can determine whether to use the AMVP mode in encoding the target block.
  • the encoding apparatus 100 can generate a bitstream including inter prediction information required for inter prediction.
  • the decoding apparatus 200 may perform inter prediction on a target block using inter prediction information of a bit stream.
• the inter prediction information may include 1) mode information indicating whether the AMVP mode is used, 2) a predicted motion vector index, 3) a motion vector difference (MVD), 4) a reference direction, and 5) a reference picture index.
• the terms "predicted motion vector index" and "AMVP index" may have the same meaning and may be used interchangeably.
  • the inter prediction information may include a residual signal.
  • the decoding apparatus 200 can obtain the predicted motion vector index, the motion vector difference, the reference direction, and the reference picture index from the bitstream through entropy decoding.
  • the predicted motion vector index may indicate a predicted motion vector candidate used for predicting a target block among the predicted motion vector candidates included in the predicted motion vector candidate list.
  • the decoding apparatus 200 can derive a predicted motion vector candidate using the predicted motion vector candidate list and determine the motion information of the target block based on the derived predicted motion vector candidate.
  • the decoding apparatus 200 can determine a motion vector candidate for a target block from among the predicted motion vector candidates included in the predicted motion vector candidate list using the predicted motion vector index.
  • the decoding apparatus 200 can select a predicted motion vector candidate pointed to by the predicted motion vector index among the predicted motion vector candidates included in the predicted motion vector candidate list as a predicted motion vector of the target block.
  • the motion vector to be actually used for inter prediction of the target block may not coincide with the predicted motion vector.
  • the MVD may be used to represent the difference between the motion vector to be actually used for inter prediction of the target block and the predicted motion vector.
  • the encoding apparatus 100 can derive a predictive motion vector similar to a motion vector to be actually used for inter prediction of a target block in order to use an MVD as small as possible.
  • the MVD may be a difference between a motion vector of a target block and a predicted motion vector.
  • the encoding apparatus 100 can calculate the MVD and entropy-encode the MVD.
  • the MVD may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream.
  • the decoding apparatus 200 can decode the received MVD.
  • the decoding apparatus 200 can derive a motion vector of a target block by adding the decoded MVD and the predicted motion vector.
  • the motion vector of the target block derived from the decoding apparatus 200 may be the sum of the entropy-decoded MVD and the motion vector candidate.
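• A one-line sketch of this derivation (the tuple representation of motion vectors is an assumption):

```python
def derive_motion_vector(predicted_mv, mvd):
    # The decoder adds the entropy-decoded MVD to the predicted motion vector
    # selected from the candidate list by the predicted motion vector index.
    return predicted_mv[0] + mvd[0], predicted_mv[1] + mvd[1]
```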
  • the reference direction may indicate a reference picture list used for predicting a target block.
  • the reference direction may indicate one of the reference picture list L0 and the reference picture list L1.
  • the reference direction may refer to a reference picture list used for prediction of a target block, but may not indicate that the directions of the reference pictures are limited in a forward direction or a backward direction. That is to say, each of the reference picture list L0 and the reference picture list L1 may include forward and / or backward pictures.
  • the reference direction being uni-directional may mean that one reference picture list is used.
• a bi-directional reference direction may mean that two reference picture lists are used. That is to say, the reference direction may indicate one of the case where only the reference picture list L0 is used, the case where only the reference picture list L1 is used, and the case where both reference picture lists are used.
  • the reference picture index may indicate a reference picture used for prediction of a target block among reference pictures of the reference picture list.
  • the reference picture index can be entropy-encoded by the encoding apparatus 100.
  • the entropy encoded reference picture index may be signaled from the encoding apparatus 100 to the decoding apparatus 200 through the bit stream.
• when the reference direction is bi-directional, two reference picture lists may be used for prediction of the target block.
  • One reference picture index and one motion vector may be used for each reference picture list.
  • two prediction blocks can be specified for a target block. For example, a (final) prediction block of a target block may be generated through an average or a weighted sum of two prediction blocks for a target block.
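• A minimal sketch of combining the two prediction blocks, assuming equal weights by default:

```python
import numpy as np

def bi_prediction(pred_l0, pred_l1, w0=0.5, w1=0.5):
    """Combine two prediction blocks into a final prediction block.

    Equal weights give a plain average; unequal weights give a weighted sum.
    """
    final = w0 * pred_l0.astype(np.float64) + w1 * pred_l1.astype(np.float64)
    return np.rint(final).astype(pred_l0.dtype)
```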
  • the motion vector of the target block can be derived by the predicted motion vector index, the MVD, the reference direction, and the reference picture index.
  • the decoding apparatus 200 may generate a prediction block for a target block based on the derived motion vector and the reference picture index.
  • the prediction block may be a reference block pointed to by the derived motion vector in the reference picture pointed to by the reference picture index.
• by encoding the predicted motion vector index and the MVD instead of encoding the motion vector of the target block itself, the amount of bits transmitted from the encoding apparatus 100 to the decoding apparatus 200 can be reduced, and the encoding efficiency can be improved.
  • Motion information of the reconstructed neighboring blocks may be used for the target block.
  • the encoding apparatus 100 may not separately encode the motion information on the target block.
• in other words, the motion information of the target block itself is not coded; instead, other information from which the motion information of the target block can be derived through the motion information of a reconstructed neighboring block may be encoded.
  • the amount of bits to be transmitted to the decoding apparatus 200 can be reduced, and the coding efficiency can be improved.
  • a skip mode and / or a merge mode may be an inter prediction mode in which motion information of the target block is not directly encoded.
  • the encoding apparatus 100 and the decoding apparatus 200 may use an identifier and / or index indicating which one of the reconstructed neighboring units is used as motion information of the target unit.
• a merge may mean a merging of the motions of a plurality of blocks.
  • Merging may mean applying motion information of one block to another block as well.
  • the merge mode may mean a mode in which motion information of a target block is derived from motion information of a neighboring block.
  • the encoding apparatus 100 can predict motion information of a target block using motion information of a spatial candidate and / or motion information of a temporal candidate.
• the spatial candidate may include reconstructed spatial neighboring blocks that are spatially adjacent to the target block.
  • the spatial neighboring block may include a left adjacent block and an upper adjacent block.
  • the temporal candidate may include a call block.
  • the encoding apparatus 100 can obtain a prediction block through prediction.
  • the encoding apparatus 100 can encode a residual block that is a difference between the target block and the prediction block.
  • each of the encoding apparatus 100 and the decoding apparatus 200 can generate a merge candidate list using motion information of a spatial candidate and / or motion information of a temporal candidate.
  • the motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction.
  • the reference direction may be unidirectional or bidirectional.
  • the merge candidate list may include merge candidates.
  • the merge candidate may be motion information. That is to say, the merge candidate list may be a list in which motion information is stored.
  • the merge candidates may be motion information such as temporal candidates and / or spatial candidates.
  • the merge candidate list may include a new merge candidate generated by a combination of merge candidates already present in the merge candidate list. That is, the merge candidate list may include new motion information generated by a combination of motion information already present in the merge candidate list.
  • the merge candidates may be specified modes for deriving inter prediction information.
  • the merge candidate may be information indicating a specified mode for deriving inter prediction information.
  • the inter prediction information of the target block may be derived according to the specified mode indicated by the merge candidate.
  • the specified mode may include a process of deriving a series of inter prediction information.
• This specified mode may be an inter prediction information derivation mode or a motion information derivation mode.
  • the inter prediction information of the target block may be derived according to the mode indicated by the merge candidate selected by the merge index among the merge candidates in the merge candidate list.
  • the motion information derivation modes in the merge candidate list may be at least one of 1) a motion information derivation mode in sub-block units, and 2) an affine motion information derivation mode.
  • the merge candidate list may include motion information of a zero vector.
  • Zero vectors may also be called zero-merge candidates.
• the motion information in the merge candidate list may be 1) motion information of a spatial candidate, 2) motion information of a temporal candidate, 3) motion information generated by a combination of pieces of motion information already present in the merge candidate list, and / or 4) motion information of a zero vector.
  • the motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction.
  • the reference direction may be referred to as an inter prediction indicator.
  • the reference direction may be unidirectional or bidirectional.
  • the unidirectional reference direction may represent L0 prediction or L1 prediction.
  • the merge candidate list can be generated before the prediction by merge mode is performed.
  • the number of merge candidates in the merge candidate list can be predetermined.
• the encoding apparatus 100 and the decoding apparatus 200 may add merge candidates to the merge candidate list according to a predefined scheme and a predefined priority so that the merge candidate list has the predetermined number of merge candidates.
• the merge candidate list of the encoding apparatus 100 and the merge candidate list of the decoding apparatus 200 can be made identical through the predefined scheme and the predefined priority.
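• A sketch of one plausible list-construction scheme consistent with the description above; the candidate ordering, the duplicate check, and the zero-candidate representation are assumptions, since the actual predefined scheme and priority are codec-specific:

```python
def build_merge_candidate_list(spatial_candidates, temporal_candidates, max_merge_cands):
    """Build a merge candidate list of a fixed size (sketch).

    Candidates are (motion_vector, reference_picture_index, reference_direction)
    tuples; unavailable candidates are passed as None.
    """
    merge_list = []
    for cand in list(spatial_candidates) + list(temporal_candidates):
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
        if len(merge_list) == max_merge_cands:
            return merge_list
    while len(merge_list) < max_merge_cands:  # pad with zero-merge candidates
        merge_list.append(((0, 0), 0, "L0"))
    return merge_list
```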
  • the merge can be applied in CU units or PU units.
• the encoding apparatus 100 may transmit a bitstream including predefined information to the decoding apparatus 200.
• the predefined information may include 1) information indicating whether merging is to be performed for each block partition, and 2) information indicating which block, among the blocks that are spatial candidates and / or temporal candidates for the target block, the target block is to be merged with.
  • the encoding apparatus 100 can determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 may use the merge candidates of the merge candidate list to perform predictions on the target block, and generate residual blocks for the merge candidates. The encoding apparatus 100 can use a merge candidate that requires a minimum cost in prediction and encoding of the residual block for encoding the target block.
  • the encoding apparatus 100 can determine whether to use the merge mode in encoding the target block.
  • the encoding apparatus 100 can generate a bitstream including inter prediction information required for inter prediction.
  • the encoding apparatus 100 may generate entropy-encoded inter prediction information by performing entropy encoding on the inter prediction information, and may transmit the bit stream including the entropy-encoded inter prediction information to the decoding apparatus 200.
  • entropy-encoded inter prediction information can be signaled from the encoding apparatus 100 to the decoding apparatus 200.
  • the decoding apparatus 200 may perform inter prediction on a target block using inter prediction information of a bit stream.
  • the inter prediction information may include 1) mode information indicating whether the merge mode is used, and 2) a merge index.
  • the inter prediction information may include a residual signal.
  • the decoding apparatus 200 can obtain a merge index from the bit stream only when the mode information indicates that the merge mode is used.
  • the mode information may be a merge flag.
  • the unit of mode information may be a block.
  • the information about the block may include mode information, and the mode information may indicate whether the merge mode is applied to the block.
  • the merge index may indicate a merge candidate used for predicting a target block among merge candidates included in the merge candidate list.
  • the merge index may indicate which of the neighboring blocks spatially or temporally adjacent to the target block is merged with.
• the encoding apparatus 100 can select the merge candidate providing the best coding performance among the merge candidates included in the merge candidate list, and can set the value of the merge index so that it points to the selected merge candidate.
  • the decoding apparatus 200 can perform the prediction on the target block using merge candidates indicated by the merge index among merge candidates included in the merge candidate list.
  • the motion vector of the target block may be specified by the motion vector of the merge candidate pointed to by the merge index, the reference picture index, and the reference direction.
  • the skip mode may be a mode in which motion information of a spatial candidate or motion information of a temporal candidate is directly applied to a target block.
  • the skip mode may be a mode in which the residual signal is not used. That is to say, when the skip mode is used, the reconstructed block may be a prediction block.
  • the difference between the merge mode and the skip mode may be the transmission or use of the residual signal. That is to say, the skip mode may be similar to the merge mode, except that the residual signal is not transmitted or used.
• the encoding apparatus 100 may transmit, to the decoding apparatus 200 through the bitstream, information indicating which of the blocks that are spatial candidates or temporal candidates is to be used as the motion information of the target block.
  • the encoding apparatus 100 may generate entropy-encoded information by performing entropy encoding on the information, and may signal entropy-encoded information to the decoding apparatus 200 through the bitstream.
• the encoding apparatus 100 may not transmit other syntax element information, such as the MVD, to the decoding apparatus 200.
• the encoding apparatus 100 may not signal syntax elements related to at least one of the MVD, the coded block flag, and the transform coefficient level to the decoding apparatus 200.
  • the merge candidate list can be used in both merge mode and skip mode.
• the merge candidate list may be named a "skip candidate list" or a "merge/skip candidate list".
  • the skip mode may use a separate candidate list different from the merge mode.
  • the merge candidate list and merge candidate in the following description can be replaced with a skip candidate list and a skip candidate, respectively.
  • the merge candidate list can be generated before the prediction by the skip mode is performed.
  • the encoding apparatus 100 can determine a merge candidate to be used for encoding the target block. For example, the encoding apparatus 100 can perform predictions on a target block using merge candidates of a merge candidate list. The encoding apparatus 100 can use the merge candidate requiring minimum cost in prediction for encoding the target block.
  • the encoding apparatus 100 can determine whether to use the skip mode in encoding the target block.
  • the encoding apparatus 100 can generate a bitstream including inter prediction information required for inter prediction.
  • the decoding apparatus 200 may perform inter prediction on a target block using inter prediction information of a bit stream.
  • the inter prediction information may include 1) mode information indicating whether a skip mode is used, and 2) a skip index.
  • the skip index may be the same as the merge index described above.
  • the target block can be encoded without a residual signal.
  • the inter prediction information may not include the residual signal.
  • the bitstream may not include the residual signal.
  • the decoding apparatus 200 can acquire the skip index from the bit stream only when the mode information indicates that the skip mode is used. As described above, the merge index and the skip index may be the same. The decoding apparatus 200 can acquire the skip index from the bit stream only when the mode information indicates that the merge mode or the skip mode is used.
  • the skip index may indicate a merge candidate used for predicting a target block among merge candidates included in the merge candidate list.
  • the decoding apparatus 200 can perform prediction on the target block using merge candidates indicated by the skip index among merge candidates included in the merge candidate list.
  • the motion vector of the target block may be specified by the motion vector of the merge candidate pointed to by the skip index, the reference picture index, and the reference direction.
• the current picture reference mode may mean a prediction mode using a previously reconstructed region in the target picture to which the target block belongs.
• a motion vector may be used to specify the previously reconstructed region. Whether the target block is coded in the current picture reference mode can be determined using the reference picture index of the target block.
• a flag or an index indicating whether the target block is a block coded in the current picture reference mode may be signaled from the encoding apparatus 100 to the decoding apparatus 200. Alternatively, whether the target block is a block coded in the current picture reference mode may be inferred through the reference picture index of the target block.
• when the target block is coded in the current picture reference mode, the target picture may be at a fixed position or an arbitrary position in the reference picture list for the target block.
  • the fixed position may be the position where the value of the reference picture index is 0 or the last position.
  • a separate reference picture index indicating this arbitrary position may be signaled from the coding apparatus 100 to the decoding apparatus 200.
  • motion information to be used for prediction of a target block among the motion information in the list can be specified through the index for the list.
  • the coding apparatus 100 can signal only the index of the element causing the minimum cost in the inter prediction of the target block among the elements of the list.
  • the encoding apparatus 100 can encode an index and signal the encoded index.
• the above-described lists may have to be derived in the same manner based on the same data in the encoding apparatus 100 and the decoding apparatus 200.
  • the same data may include reconstructed pictures and reconstructed blocks.
  • the order of the elements in the list may have to be constant.
• FIG. 10 shows spatial candidates according to an example.
  • a large block in the middle can represent a target block.
  • the five small blocks may represent spatial candidates.
  • the coordinates of the target block may be (xP, yP), and the size of the target block may be (nPSW, nPSH).
  • the spatial candidate A 0 may be a block adjacent to the lower left corner of the target block.
• A 0 may be a block occupying a pixel of coordinates (xP - 1, yP + nPSH + 1).
  • the spatial candidate A 1 may be a block adjacent to the left of the target block.
• A 1 may be the lowermost block among the blocks adjacent to the left of the target block.
• A 1 may be a block adjacent to the top of A 0 .
• A 1 may be a block occupying a pixel of coordinates (xP - 1, yP + nPSH).
  • the spatial candidate B 0 may be a block adjacent to the upper right corner of the target block.
  • B 0 may be a block occupying a pixel of coordinates (xP + nPSW + 1, yP-1).
  • the spatial candidate B 1 may be a block adjacent to the top of the target block.
  • B 1 may be the rightmost block among the blocks adjacent to the top of the target block.
  • B 1 may be a block adjacent to the left of B 0 .
  • B 1 may be a block occupying the pixels of the coordinates (xP + nPSW, yP-1).
  • the spatial candidate B 2 may be a block adjacent to the upper left corner of the target block.
  • B 2 may be a block occupying pixels of coordinates (xP-1, yP-1).
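• The candidate positions above can be collected into a small helper; this is a sketch that simply restates the coordinates of the example, with (xP, yP) the target block position and (nPSW, nPSH) its size:

```python
def spatial_candidate_positions(xP, yP, nPSW, nPSH):
    """Pixel coordinates occupied by the spatial candidates of FIG. 10."""
    return {
        "A0": (xP - 1, yP + nPSH + 1),  # adjacent to the lower-left corner
        "A1": (xP - 1, yP + nPSH),      # lowermost block left of the target
        "B0": (xP + nPSW + 1, yP - 1),  # adjacent to the upper-right corner
        "B1": (xP + nPSW, yP - 1),      # rightmost block above the target
        "B2": (xP - 1, yP - 1),         # adjacent to the upper-left corner
    }
```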
  • the candidate block may include spatial candidates and temporal candidates.
• the determination of whether a candidate block is available can be made by sequentially applying steps 1) to 4) below.
• Step 1) If the PU including the candidate block is outside the boundary of the picture, the availability of the candidate block may be set to false. "Availability is set to false" may be synonymous with "set to unavailable".
• Step 2) If the PU including the candidate block is outside the boundary of the slice, the availability of the candidate block may be set to false. If the target block and the candidate block are located in different slices, the availability of the candidate block may be set to false.
• Step 3) If the PU including the candidate block is outside the boundary of the tile, the availability of the candidate block may be set to false. If the target block and the candidate block are located in different tiles, the availability of the candidate block may be set to false.
• Step 4) If the prediction mode of the PU including the candidate block is the intra prediction mode, the availability of the candidate block may be set to false. If the PU including the candidate block does not use inter prediction, the availability of the candidate block may be set to false.
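• The four availability checks can be sketched as a single predicate; the containment test and the attribute names (contains, slice_id, tile_id, pred_mode) are hypothetical helpers, not defined by the description:

```python
def candidate_available(candidate_pu, target_block, picture):
    """Sketch of the availability determination, applying steps 1) to 4) in order."""
    # Step 1) candidate PU outside the picture boundary -> unavailable.
    if not picture.contains(candidate_pu):
        return False
    # Step 2) candidate and target located in different slices -> unavailable.
    if candidate_pu.slice_id != target_block.slice_id:
        return False
    # Step 3) candidate and target located in different tiles -> unavailable.
    if candidate_pu.tile_id != target_block.tile_id:
        return False
    # Step 4) candidate PU coded with intra prediction (no inter prediction) -> unavailable.
    if candidate_pu.pred_mode == "INTRA":
        return False
    return True
```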
  • FIG. 11 shows an order of addition of motion information of a spatial candidate to a merge list according to an example.
  • the order of A 1 , B 1 , B 0 , A 0, and B 2 may be used. That is, the motion information of available spatial candidates can be added to the merged list in the order of A 1 , B 1 , B 0 , A 0, and B 2 .
  • the maximum number of merge candidates in the merge list can be set.
  • the set maximum number is denoted by N.
  • the set number can be transferred from the encoding apparatus 100 to the decoding apparatus 200.
  • the slice header of the slice may contain N.
  • the maximum number of merge candidates of the merge list for the target block of the slice can be set by the slice header.
• by default, the value of N may be 5.
• motion information (i.e., merge candidates) may be added to the merge list in the order of steps 1) to 4) below.
  • Step 1) Available spatial candidates among the spatial candidates can be added to the merged list.
• the motion information of the available spatial candidates may be added to the merge list in the order shown in FIG. 11. At this time, if the motion information of an available spatial candidate overlaps with other motion information already present in the merge list, the motion information may not be added to the merge list.
• checking whether motion information overlaps with other motion information already present in the list may be referred to as a "redundancy check".
• a maximum of N pieces of motion information may be added.
• Step 2) If the number of pieces of motion information in the merge list is smaller than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the merge list. At this time, if the motion information of the available temporal candidate overlaps with other motion information already present in the merge list, the motion information may not be added to the merge list.
• Step 3) If the number of pieces of motion information in the merge list is smaller than N and the type of the target slice is "B", combined motion information generated by combined bi-prediction may be added to the merge list.
  • the target slice may be a slice containing the target block.
  • the combined motion information may be a combination of L0 motion information and L1 motion information.
  • the L0 motion information may be motion information that refers to only the reference picture list L0.
  • the L1 motion information may be motion information referring to only the reference picture list L1.
  • the L0 motion information may be one or more.
  • the L1 motion information may be one or more.
  • the combined motion information may be one or more. In generating the combined motion information, it is possible to determine which L0 motion information and which L1 motion information to use among one or more L0 motion information and one or more L1 motion information.
  • the one or more combined motion information may be generated in a predetermined order by combined bidirectional prediction using a pair of different motion information in the merge list. One of the pairs of different motion information may be the L0 motion information and the other may be the L1 motion information.
• the first combined motion information may be a combination of the L0 motion information having a merge index of 0 and the L1 motion information having a merge index of 1. If the motion information whose merge index is 0 is not L0 motion information, or if the motion information whose merge index is 1 is not L1 motion information, the combined motion information may not be generated and added.
• the next combined motion information may be a combination of the L0 motion information having a merge index of 1 and the L1 motion information having a merge index of 0. Subsequent specific combinations may follow other combinations defined in the field of video encoding/decoding.
• if the combined motion information overlaps with other motion information already present in the merge list, the combined motion information may not be added to the merge list.
• Step 4) If the number of pieces of motion information in the merge list is smaller than N, zero vector motion information may be added to the merge list.
• the zero vector motion information may be motion information whose motion vector is a zero vector.
  • the zero vector motion information may be one or more.
  • the reference picture indexes of one or more zero vector motion information may be different from each other.
  • the value of the reference picture index of the first zero vector motion information may be zero.
  • the value of the reference picture index of the second zero vector motion information may be one.
  • the number of zero vector motion information may be equal to the number of reference pictures in the reference picture list.
  • the reference direction of the zero vector motion information may be bi-directional.
  • the two motion vectors may all be zero vectors.
  • the number of zero vector motion information may be the smaller of the number of reference pictures in the reference picture list L0 and the number of reference pictures in the reference picture list L1.
  • a unidirectional reference direction can be used for a reference picture index that can be applied to only one reference picture list.
  • the coding apparatus 100 and / or the decoding apparatus 200 can sequentially add the zero vector motion information to the merged list while changing the reference picture index.
  • the zero vector motion information may not be added to the merge list.
• the addition of motion information according to steps 1) to 4) above is merely exemplary, and the order of the steps may be interchanged. In addition, some of the steps may be omitted depending on predefined conditions. A sketch of this construction process follows below.
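• A minimal sketch of the construction process, assuming the candidate motion information has already been derived and ordered as described above (the redundancy check is the `not in` test):

```python
def build_merge_list(spatial, temporal, combined_bi, zero_vectors, N, slice_type):
    """Sketch of merge candidate list construction following steps 1) to 4)."""
    merge_list = []

    def try_add(mi):
        # Add only while the list holds fewer than N entries and the
        # motion information does not duplicate an existing entry.
        if mi is not None and len(merge_list) < N and mi not in merge_list:
            merge_list.append(mi)

    for mi in spatial:        # Step 1) available spatial candidates (A1, B1, B0, A0, B2)
        try_add(mi)
    for mi in temporal:       # Step 2) available temporal candidates
        try_add(mi)
    if slice_type == "B":     # Step 3) combined bi-prediction candidates, B slices only
        for mi in combined_bi:
            try_add(mi)
    for mi in zero_vectors:   # Step 4) zero-vector candidates, increasing reference index
        try_add(mi)
    return merge_list
```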
  • the maximum number of predicted motion vector candidates in the predicted motion vector candidate list can be predetermined.
  • the default maximum number is denoted by N.
  • the default maximum number may be two.
  • the motion information (i.e., the predicted motion vector candidate) may be added to the predicted motion vector candidate list in the order of the following steps 1) to 3).
  • Step 1) Available spatial candidates among the spatial candidates can be added to the predicted motion vector candidate list.
  • the spatial candidates may include a first spatial candidate and a second spatial candidate.
  • the first spatial candidate may be one of A 0 , A 1 , scaled A 0, and scaled A 1 .
  • the second spatial candidate may be one of B 0 , B 1 , B 2 , Scaled B 0 , Scaled B 1, and Scaled B 2 .
  • the motion information of the available spatial candidates may be added to the predicted motion vector candidate list in the order of the first spatial candidate and the second spatial candidate.
• if the motion information of an available spatial candidate overlaps with other motion information already present in the predicted motion vector candidate list, the motion information may not be added to the predicted motion vector candidate list. That is, if the value of N is 2 and the motion information of the second spatial candidate is the same as the motion information of the first spatial candidate, the motion information of the second spatial candidate may not be added to the predicted motion vector candidate list.
  • the motion information to be added may be a maximum of N pieces.
• Step 2) If the number of pieces of motion information in the predicted motion vector candidate list is smaller than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the predicted motion vector candidate list. In this case, if the motion information of the available temporal candidate overlaps with other motion information already present in the predicted motion vector candidate list, the motion information may not be added to the predicted motion vector candidate list.
• Step 3) If the number of pieces of motion information in the predicted motion vector candidate list is smaller than N, zero vector motion information may be added to the predicted motion vector candidate list.
  • the zero vector motion information may be one or more.
  • the reference picture indexes of one or more zero vector motion information may be different from each other.
  • the encoding apparatus 100 and / or the decoding apparatus 200 may sequentially add the zero vector motion information to the predicted motion vector candidate list while changing the reference picture index.
  • the zero vector motion information may not be added to the predicted motion vector candidate list if the zero vector motion information overlaps with other motion information already present in the predicted motion vector candidate list.
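• Analogously, a hedged sketch of predicted motion vector candidate list construction with the default maximum N = 2, following steps 1) to 3) above:

```python
def build_mvp_candidate_list(first_spatial, second_spatial, temporal,
                             zero_vectors, N=2):
    """Sketch of steps 1) to 3) for the predicted motion vector candidate list."""
    mvp_list = []

    def try_add(mv):
        if mv is not None and len(mvp_list) < N and mv not in mvp_list:
            mvp_list.append(mv)

    try_add(first_spatial)    # Step 1) first spatial candidate (A0, A1, or scaled)
    try_add(second_spatial)   #         then second spatial candidate (B0, B1, B2, or scaled)
    try_add(temporal)         # Step 2) temporal candidate, if the list is not yet full
    for mv in zero_vectors:   # Step 3) zero-vector candidates with distinct reference indexes
        try_add(mv)
    return mvp_list
```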
  • FIG. 12 illustrates a process of transform and quantization according to an example.
• a quantized level may be generated by performing a transform and/or quantization process on the residual signal, as shown in FIG. 12.
  • the residual signal can be generated as a difference between the original block and the prediction block.
  • the prediction block may be a block generated by intra prediction or inter prediction.
  • the residual signal can be transformed into the frequency domain through a transform process that is part of the quantization process.
• the transform kernel used for the transform may include various DCT kernels, such as Discrete Cosine Transform (DCT) type 2 (DCT-II), and Discrete Sine Transform (DST) kernels.
  • transform kernels may perform a separable transform or a two-dimensional (2D) non-separable transform on the residual signal.
• the separable transform may be a transform that performs a one-dimensional (1D) transform on the residual signal in each of the horizontal and vertical directions.
• the DCT types and DST types adaptively used for the 1D transform may include DCT-V, DCT-VIII, DST-I, and DST-VII in addition to DCT-II, as shown in Tables 3 and 4 below.
  • a transform set may be used in deriving the DCT type or DST type to be used for the transform.
  • Each transform set may include a plurality of transform candidates.
• each transform candidate may be a DCT type or a DST type.
  • Table 5 below shows an example of a transform set applied in the horizontal direction and a transform set applied in the vertical direction according to the intra-prediction mode.
  • the transform sets applied in the horizontal direction and the vertical direction may be predetermined depending on the intra-prediction mode of the target block.
• the encoding apparatus 100 can perform the transform and the inverse transform on the residual signal using a transform included in the transform set corresponding to the intra prediction mode of the target block.
  • the decoding apparatus 200 can perform inverse transform on the residual signal using the transform included in the transform set corresponding to the intra-prediction mode of the target block.
• the transform set applied to the residual signal may be determined as illustrated in Table 3 and Table 4, and information on the transform set may not be signaled.
• transform indication information can be signaled from the encoding apparatus 100 to the decoding apparatus 200.
• the transform indication information may be information indicating which transform candidate is used among the plurality of transform candidates included in the transform set applied to the residual signal.
• a total of three transform sets may be constructed according to the intra prediction mode, as in the example of Table 4.
• an optimal transform method can be selected from the nine multiple transform methods resulting from the combinations of three transforms in the horizontal direction and three transforms in the vertical direction. Encoding efficiency can be improved by encoding and/or decoding the residual signal using this optimal transform method.
• information on which of the transforms belonging to the transform set is used may be entropy-encoded and/or entropy-decoded. Truncated unary binarization may be used for encoding and/or decoding such information, as sketched below.
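• As a hedged sketch of this mechanism, the following maps per-direction transform sets to concrete transform choices; the table contents here are placeholders, not the values of Table 5:

```python
# Hypothetical transform sets of three candidates each; the actual mapping from
# intra prediction mode to transform set follows Table 5 of the description.
TRANSFORM_SETS = {
    0: ["DST-VII", "DCT-VIII", "DST-I"],
    1: ["DST-VII", "DST-I", "DCT-VIII"],
    2: ["DST-VII", "DCT-V", "DCT-VIII"],
}

def select_transforms(set_index_h, set_index_v, h_index, v_index):
    """Pick the horizontal and vertical transforms for the residual signal.

    set_index_h / set_index_v would come from the intra prediction mode
    (Table 5); h_index / v_index correspond to the signaled transform
    indication information.  Three candidates per direction give nine
    possible combinations."""
    horizontal = TRANSFORM_SETS[set_index_h][h_index]
    vertical = TRANSFORM_SETS[set_index_v][v_index]
    return horizontal, vertical

# Example: horizontal set 1 and vertical set 2, candidate 0 in each direction.
print(select_transforms(1, 2, 0, 0))  # ('DST-VII', 'DST-VII')
```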
• the method of using various transforms as described above can be applied to a residual signal generated by intra prediction or inter prediction.
  • the transform may include at least one of a primary transform and a secondary transform.
• a transform coefficient may be generated by performing the primary transform on the residual signal, and a secondary transform coefficient may be generated by performing the secondary transform on the transform coefficient.
• the first transform may be named the primary transform.
• the primary transform may also be named Adaptive Multiple Transform (AMT).
  • AMT may mean that different transforms are applied for each of the 1D directions (i.e., vertical and horizontal directions) as described above.
• the secondary transform may be a transform for improving the energy concentration of the transform coefficients generated by the primary transform.
• like the primary transform, the secondary transform may be a separable transform or a non-separable transform.
  • the non-separable transform may be a non-separable secondary transform (NSST).
• the primary transform may be performed using at least one of a plurality of predefined transform methods.
• the plurality of predefined transform methods may include Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve Transform (KLT), and the like.
• the primary transform may be one of transforms of various types depending on the kernel function that defines the DCT or the DST.
• the primary transform may include transforms such as DCT-2, DCT-5, DCT-8, DST-1, and DST-7, according to the transform kernels shown in Table 6 below.
• in Table 6 below, various transform types and transform kernel functions for Multiple Transform Selection (MTS) are illustrated.
• MTS may mean that a combination of one or more DCT and/or DST transform kernels is selected for the transform in the horizontal and/or vertical direction of the residual signal.
  • i and j may be an integer value of 0 or more and N-1 or less.
  • a secondary transform may be performed on the transform coefficients generated by performing the primary transform.
• a transform set may also be defined for the secondary transform.
• the methods for deriving and/or determining the transform set described above may be applied to the secondary transform as well as the primary transform.
• the primary transform and the secondary transform may be determined for a specified target.
• the primary transform and the secondary transform may be applied to one or more signal components among a luma component and a chroma component.
• whether to apply the primary transform and/or the secondary transform may be determined according to at least one of the coding parameters for the target block and/or a neighboring block.
• whether to apply the primary transform and/or the secondary transform may be determined by the size and/or shape of the target block.
• transform information indicating the transform method used for the target may be derived using the specified information.
• the transform information may include an index of the transform to be used for the primary transform and/or the secondary transform.
• the transform information may indicate that the primary transform and/or the secondary transform is not used.
• the transform method(s) applied to the primary transform and/or the secondary transform indicated by the transform information may be determined according to at least one of the coding parameters for the target block and/or a neighboring block.
• the transform information for the specified target may be signaled from the encoding apparatus 100 to the decoding apparatus 200.
• alternatively, whether a primary transform is used, an index indicating the primary transform, whether a secondary transform is used, and an index indicating the secondary transform may be derived as the transform information by the decoding apparatus 200.
• alternatively, transform information indicating whether a primary transform is used, an index indicating the primary transform, whether a secondary transform is used, and an index indicating the secondary transform may be signaled.
• quantized transform coefficients (i.e., quantized levels) can be generated by performing quantization on the result generated by performing the primary transform and/or the secondary transform, or on the residual signal.
• FIG. 13 illustrates diagonal scanning according to an example.
• the quantized transform coefficients may be scanned using at least one of (up-right) diagonal scanning, vertical scanning, and horizontal scanning, depending on at least one of the intra prediction mode, the block size, and the block shape.
• the block may be a transform unit.
  • Each scanning can start at a specified starting point and end at a specified ending point.
• the quantized transform coefficients can be changed to a one-dimensional vector form by scanning the coefficients of the block using the diagonal scanning of FIG. 13.
  • horizontal scanning of FIG. 14 or vertical scanning of FIG. 15 may be used instead of diagonal scanning depending on the block size and / or intra prediction mode.
• vertical scanning may be scanning the coefficients of the two-dimensional block in the column direction.
• horizontal scanning may be scanning the coefficients of the two-dimensional block in the row direction.
  • the quantized transform coefficients may be scanned along the diagonal direction, the horizontal direction, or the vertical direction.
  • the quantized transform coefficients may be expressed in block form.
  • a block may include a plurality of sub-blocks. Each sub-block may be defined according to a minimum block size or a minimum block type.
  • the scanning order according to the type or direction of scanning may be first applied to the sub-blocks.
  • the scanning order according to the scanning direction may be applied to the quantized transform coefficients in the sub-block.
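• A sketch consistent with the up-right diagonal scanning of FIG. 13, flattening a 2D block of quantized transform coefficients into a 1D list (each anti-diagonal is traversed from bottom-left to top-right):

```python
def up_right_diagonal_scan(block):
    """Flatten a 2D coefficient block by up-right diagonal scanning."""
    h, w = len(block), len(block[0])
    out = []
    for d in range(h + w - 1):                    # anti-diagonal index = row + col
        for row in range(min(d, h - 1), -1, -1):  # bottom-left to top-right
            col = d - row
            if col < w:
                out.append(block[row][col])
    return out

# 4x4 example: the scan visits (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ...
coeffs = [[1, 3, 6, 10],
          [2, 5, 9, 13],
          [4, 8, 12, 15],
          [7, 11, 14, 16]]
print(up_right_diagonal_scan(coeffs))  # [1, 2, 3, ..., 16]
```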
  • the scanned quantized transform coefficients may be entropy encoded and the bitstream may comprise entropy encoded quantized transform coefficients.
  • the decoding apparatus 200 can generate quantized transform coefficients through entropy decoding on the bit stream.
  • the quantized transform coefficients may be arranged in a two-dimensional block form through inverse scanning.
• for inverse scanning, at least one of up-right diagonal scanning, vertical scanning, and horizontal scanning may be performed.
  • the inverse quantization may be performed on the quantized transform coefficients.
• the secondary inverse transform can be performed on the result generated by performing the inverse quantization.
• the primary inverse transform can be performed on the result generated by performing the secondary inverse transform.
• the reconstructed residual signal can be generated by performing the primary inverse transform on the result generated by performing the secondary inverse transform.
• FIG. 16 is a structural diagram of an encoding apparatus according to an embodiment.
  • the encoding apparatus 1600 may correspond to the encoding apparatus 100 described above.
• the encoding apparatus 1600 may include a processing unit 1610, a memory 1630, a user interface (UI) input device 1650, a UI output device 1660, and a storage 1640, which communicate with each other via a bus 1690.
  • the encoding apparatus 1600 may further include a communication unit 1620 connected to the network 1699.
• the processing unit 1610 may be a central processing unit (CPU) or a semiconductor device that executes processing instructions stored in the memory 1630 or the storage 1640.
  • the processing unit 1610 may be at least one hardware processor.
• the processing unit 1610 may generate and process signals, data, or information that are input to, output from, or used within the encoding apparatus 1600, and may perform inspection, comparison, and judgment related to such data or information. In other words, in the embodiment, the generation and processing of data or information and the inspection, comparison, and judgment related to the data or information can be performed by the processing unit 1610.
• the processing unit 1610 may include an inter prediction unit 110, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy coding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
• at least some of the inter prediction unit 110, the intra prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy coding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 may be program modules and may communicate with an external device or system.
  • the program modules may be included in the encoding device 1600 in the form of an operating system, application program modules, and other program modules.
  • the program modules may be physically stored on various known storage devices. At least some of these program modules may also be stored in a remote storage device capable of communicating with the encoding device 1600.
• the program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures that perform functions or operations according to an embodiment or that implement abstract data types according to an embodiment.
  • Program modules may be comprised of instructions or code that are executed by at least one processor of the encoding device 1600.
• the processing unit 1610 may execute instructions or code of the inter prediction unit 110, the intra prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy coding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190.
• the storage unit may represent the memory 1630 and/or the storage 1640.
  • Memory 1630 and storage 1640 can be various types of volatile or non-volatile storage media.
• the memory 1630 may include at least one of a ROM 1631 and a RAM 1632.
  • the storage unit may store data or information used for the operation of the encoding apparatus 1600.
  • the data or information possessed by the encoding apparatus 1600 can be stored in the storage unit.
  • the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bit streams, and the like.
• the encoding apparatus 1600 can be implemented in a computer system including a computer-readable recording medium.
  • the recording medium may store at least one module required for the encoding apparatus 1600 to operate.
  • the memory 1630 may store at least one module, and at least one module may be configured to be executed by the processing unit 1610.
  • the function related to the communication of data or information of the encoding apparatus 1600 may be performed through the communication unit 1620.
  • the communication unit 1620 can transmit the bit stream to the decoding apparatus 1700 to be described later.
• FIG. 17 is a structural diagram of a decoding apparatus according to an embodiment.
  • the decoding apparatus 1700 may correspond to the decoding apparatus 200 described above.
• the decoding apparatus 1700 may include a processing unit 1710, a memory 1730, a user interface (UI) input device 1750, a UI output device 1760, and a storage 1740, which communicate with each other via a bus 1790.
• the decoding apparatus 1700 may further include a communication unit 1720 connected to the network 1799.
• the processing unit 1710 may be a central processing unit (CPU) or a semiconductor device that executes processing instructions stored in the memory 1730 or the storage 1740.
  • the processing unit 1710 may be at least one hardware processor.
• the processing unit 1710 may generate and process signals, data, or information that are input to, output from, or used within the decoding apparatus 1700, and may perform inspection, comparison, and judgment related to such data or information.
• the generation and processing of data or information and the inspection, comparison, and judgment related to the data or information can be performed by the processing unit 1710.
• the processing unit 1710 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an intra prediction unit 240, an inter prediction unit 250, a switch 245, an adder 255, a filter unit 260, and a reference picture buffer 270.
• at least some of the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the switch 245, the adder 255, the filter unit 260, and the reference picture buffer 270 may be program modules and may communicate with an external device or system.
• the program modules may be included in the decoding apparatus 1700 in the form of an operating system, application program modules, and other program modules.
• the program modules may be physically stored on various known storage devices. At least some of these program modules may also be stored in a remote storage device capable of communicating with the decoding apparatus 1700.
• the program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures that perform functions or operations according to an embodiment or that implement abstract data types according to an embodiment.
  • the program modules may be comprised of instructions or code that are executed by at least one processor of the decoding apparatus 1700.
• the processing unit 1710 may execute instructions or code of the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the switch 245, the adder 255, the filter unit 260, and the reference picture buffer 270.
• the storage unit may represent the memory 1730 and/or the storage 1740.
  • Memory 1730 and storage 1740 can be various types of volatile or non-volatile storage media.
  • the memory 1730 may include at least one of a ROM 1731 and a RAM 1732.
  • the storage unit may store data or information used for the operation of the decoding apparatus 1700.
• the data or information possessed by the decoding apparatus 1700 can be stored in the storage unit.
  • the storage unit may store pictures, blocks, lists, motion information, inter prediction information, bit streams, and the like.
• the decoding apparatus 1700 can be implemented in a computer system including a computer-readable recording medium.
• the recording medium may store at least one module required for the decoding apparatus 1700 to operate.
  • the memory 1730 may store at least one module, and at least one module may be configured to be executed by the processing unit 1710.
• the function related to the communication of data or information of the decoding apparatus 1700 may be performed through the communication unit 1720.
  • the communication unit 1720 can receive the bit stream from the encoding device 1600.
  • FIG. 18 is a flowchart of a method of generating a reconstructed block according to an embodiment.
  • the generation of the reconstructed block can be performed in the encoding apparatus 100 and the decoding apparatus 200.
  • the processing unit may be the processing unit 1610 of the encoding apparatus 1600 or the processing unit 1710 of the decoding apparatus 1700.
  • the inverse quantization unit may be an inverse quantization unit 160 of the encoding apparatus 100 or an inverse quantization unit 220 of the decoding apparatus 200.
  • the inverse transform unit may be an inverse transform unit 170 of the coding apparatus 100 or an inverse transform unit 230 of the decoding apparatus 200.
  • the intra prediction unit may be an intra prediction unit 120 of the encoding apparatus 100 or an intra prediction unit 240 of the decoding apparatus 200.
  • the inter prediction unit may be an inter prediction unit 110 of the encoding apparatus 100 or an inter prediction unit 250 of the decoding apparatus 200.
  • the adder may be an adder 175 of the encoding apparatus 100 or an adder 255 of the decoding apparatus 200.
• at step 1810, the processing unit may generate a reconstructed residual block.
  • Step 1810 may include steps 1811 and 1812.
• at step 1811, the inverse quantization unit may generate inverse-quantized coefficients by performing inverse quantization on the quantized levels.
• at step 1812, the inverse transform unit may generate a reconstructed residual block by performing an inverse transform on the inverse-quantized coefficients.
• at step 1820, the processing unit may generate a prediction block for a target block by performing prediction on the target block using a prediction network.
  • the processing unit may include an intra prediction unit and an inter prediction unit.
  • the prediction network may include an intra prediction network and an inter prediction network.
  • the intra prediction unit may generate a prediction block for the target block by performing prediction on the target block using the intra prediction network.
• the inter prediction unit may generate a prediction block for the target block by performing prediction on the target block using the inter prediction network.
• at step 1830, the processing unit may generate a reconstructed block based on the prediction block and the reconstructed residual block.
  • the adder can generate the reconstructed block by summing the prediction block and the reconstructed residual block.
• FIG. 19 is a flowchart of a method of encoding an image according to an embodiment.
• at step 1910, the processing unit 1610 may generate a prediction block for a target block by performing prediction on the target block using a prediction network.
  • the intra prediction unit may generate a prediction block for the target block by performing prediction on the target block using the intra prediction network.
• the inter prediction unit may generate a prediction block for the target block by performing prediction on the target block using the inter prediction network.
• at step 1920, the processing unit 1610 may generate a residual block for the target block based on the prediction block.
  • the subtracter 125 may generate a residual block which is a difference between the target block and the prediction block.
• at step 1930, the transform unit 130 may generate transform coefficients for the target block by performing a transform on the residual block.
• at step 1940, the quantization unit 140 may generate a quantized level for the target block by performing quantization on the transform coefficients.
• at step 1950, the entropy encoding unit 150 may generate a bitstream.
  • the bitstream may be stored in storage 1640.
  • the bit stream may be transmitted to the decoding apparatus 1700 through the communication unit 1620.
  • the bitstream may include information about a target block.
  • the information on the target block may include a coding parameter related to the target block and a quantized level for the target block.
  • the information on the target block can be entropy-encoded by the entropy encoding unit 150.
  • the entropy encoding unit 150 may generate information on a target block that is entropy-encoded by performing entropy encoding on information on the target block.
  • the bitstream may include information about an entropy-encoded target block.
• at step 1960, the processing unit 1610 may generate a reconstructed block for the target block.
  • Step 1960 may include the steps 1810, 1820 and 1830 described above with reference to FIG.
• FIG. 20 is a flowchart of a method of decoding an image according to an embodiment.
• at step 2010, the communication unit 1720 can receive the bitstream.
  • the bitstream may include information about a target block.
  • the information on the target block may include a coding parameter related to the target block and a quantized level for the target block.
  • the bitstream may contain information about the entropy encoded target block.
  • the entropy decoding unit 210 may generate information on the target block by performing entropy decoding on the information on the entropy-encoded target block.
• at step 2020, the processing unit 1710 may generate a reconstructed block for the target block using the information on the target block.
  • Step 2020 may include the steps 1810, 1820 and 1830 described above with reference to FIG.
  • FIG. 21 shows the generation of a prediction block using an intra prediction network according to an example.
  • the prediction network described with reference to FIG. 18 may be an intra prediction network.
  • the intra prediction network can perform prediction on a target block using a spatial reference block.
  • the spatial reference block may be a spatial neighbor block as described above.
  • the spatial reference block may be a reconstructed block.
  • the spatial reference block may be plural.
  • FIG. 22 shows reference blocks for an intra prediction network according to an example.
  • a plurality of reference blocks for a target block may be determined based on a position relative to a target block.
  • the plurality of reference blocks for the target block may include a reconstructed block adjacent to the upper left of the target block, a reconstructed block adjacent to the upper end of the target block, and a reconstructed block adjacent to the left of the target block.
  • the shape of the reference block and the shape of the target block may be different.
  • the size of the reference block and the size of the target block may be different from each other.
  • FIG 23 shows reference blocks for an intra prediction network according to an example.
  • the plurality of reference blocks for the target block may include a reconstructed block adjacent to the upper left of the target block, a reconstructed block adjacent to the upper end of the target block, and a reconstructed block adjacent to the left of the target block.
  • the shape of the reference block and the shape of the target block may be the same.
  • the size of the reference block and the size of the target block may be the same.
  • FIG. 24 shows the generation of a prediction block using an inter prediction network according to an example.
  • the prediction network described with reference to FIG. 18 may be an inter prediction network.
  • the inter prediction network can perform a prediction on a target block using a temporal reference block.
• the temporal reference block may be a temporal neighbor block as described above.
  • the temporal reference block may be a reconstructed block.
  • the temporal reference block may be plural.
  • the inter prediction network can perform prediction on a target block using a spatial reference block. That is to say, the spatial reference block may be an additional input to the inter prediction network.
  • the spatial reference block may be a reconstructed block.
  • the spatial reference block may be plural.
  • the description related to the spatial neighboring block, the spatial reference block, the temporal neighboring block, and the temporal reference block in the above-described embodiment is also applicable to this embodiment.
• FIG. 25 shows temporal reference blocks for an inter prediction network according to an example.
• a reconstructed block belonging to a reference picture encoded and/or decoded prior to the encoding and/or decoding of the target picture can be used as a reference block.
  • the reference picture may be a picture in the specified reference picture list.
  • the temporal reference block may be a reconstructed block which is a part of the reference picture in the L0 direction.
  • the temporal reference block may be a reconstructed block that is a part of a region within the reference picture in the L1 direction.
  • the prediction network may be plural.
  • the intra prediction network described above may be plural within each of the encoding apparatus 100 and the decoding apparatus 200.
  • the above-described inter prediction network may be plural in each of the encoding apparatus 100 and the decoding apparatus 200.
• the coding parameter associated with the target block may have one of a plurality of values.
  • the plurality of values may represent different states of the coding parameters.
  • the plurality of prediction networks may be configured and used for different values of the coding parameters associated with the target block, respectively.
  • the plurality of prediction networks may include a plurality of intra prediction networks and / or a plurality of inter prediction networks.
  • a plurality of prediction networks may be used for a plurality of color channels, a plurality of block sizes, or a plurality of quantization parameters, respectively.
  • the color channel of the target block may be plural.
  • a plurality of prediction networks may be used for each of the plurality of color channels.
  • a plurality of intra prediction networks may be used for the plurality of color channels, respectively.
  • a plurality of inter prediction networks may be used for a plurality of color channels, respectively.
  • the block sizes of the prediction blocks for the target blocks may be different from each other.
  • a plurality of prediction networks may be used for a plurality of block sizes, respectively.
  • a plurality of intra prediction networks may be used for a plurality of block sizes, respectively.
  • a plurality of inter prediction networks may be used for a plurality of block sizes, respectively.
  • the quantization parameters for the target blocks may be different.
  • a plurality of prediction networks may be used for a plurality of quantization parameters, respectively.
  • a plurality of intra prediction networks may be used for a plurality of quantization parameters, respectively.
  • a plurality of inter prediction networks may be used for a plurality of quantization parameters, respectively.
• FIG. 26 shows an intra prediction network based on fully connected layers according to an example.
  • the prediction network may be an artificial neural network including an input layer, one or more hidden layers, and an output layer.
  • One layer belonging to the artificial neural network described above may include one or more nodes.
  • a node can output a value represented as a floating point or an integer. Also, one or more values may be input to the node.
  • the prediction network may be based on a fully connected layer. That is to say, each layer of the prediction network can be a fully connected layer.
  • connecting two nodes means that the output end of the node of the previous layer is connected to the input end of the node of the following layer.
  • the input layer may be a layer into which a reference block is input.
  • the reference block may be plural.
• each reference sample of the plurality of reference blocks may be connected to all nodes of the hidden layer H 1 .
  • the H 1 layer may be the first hidden layer. Further, each node of the hidden layer H 1 may be connected to all the reference samples of the plurality of reference blocks, respectively.
  • Each node in the hidden layer H n-1 can be connected to all nodes of the hidden layer H n , respectively.
• each node in the hidden layer H n may be connected to all nodes of the hidden layer H n-1 .
• the output of the hidden layer H n can be expressed as Equation 2 below.
  • n may be an integer of 1 or more and L or less.
  • L may be the number of hidden layers.
  • X n may be the input vector of layer H n .
  • W n may be a weight vector of the layer H n .
  • b n may be a bias.
• f(x) may be an activation function.
• a sigmoid, a hyperbolic tangent (tanh), a rectified linear unit (ReLU), a softmax, or an identity function can be used as the activation function.
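• Based on the definitions of X n , W n , b n , and f above, Equation 2 plausibly has the standard fully connected form:

$$H_n = f\left(W_n X_n + b_n\right)$$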
• the output of the hidden layer H L , which is the last hidden layer, can determine the values of the samples of the prediction block.
• alternatively, the layers of the prediction network may be layers other than fully connected layers.
• such a differently connected layer may be, for example, a convolutional layer, a pooling layer, a deconvolutional layer, or an unpooling layer.
• FIG. 27 illustrates an intra prediction network based on convolutional layers according to an example.
  • the prediction network may be an artificial neural network that includes an input layer, one or more hidden layers, and an output layer.
  • the input layer may be a layer into which a reference block and a target block are inputted.
  • the reference block may be plural.
• the specified regions of the plurality of reference blocks and the specified region of the target block may be connected to one node of the hidden layer H 1 .
  • the specified region may be a region of specified width and specified height.
• the width of the specified region may be w.
  • the height of the specified region may be h.
  • Each of h and w may be an integer of 1 or more.
  • an area may refer to samples in an area. That is to say, the samples in the specified areas of the plurality of reference blocks and the samples in the specified area of the target block can be connected to one node of the hidden layer H 1 .
  • Samples of the target block may each have a default value.
  • the samples of the target block may each have a random value.
  • a target block with default values or random values can be input to the input layer.
• the output of the hidden layer H n can be expressed as Equation 3 below.
  • n may be an integer of 1 or more and L or less.
  • L may be the number of hidden layers.
  • X n may be an input of layer H n .
  • W n may be a weight of the layer H n .
  • b n may be a bias.
  • the operator * can represent a convolution.
• f(x) can be an activation function.
• a sigmoid, a hyperbolic tangent (tanh), a ReLU, a softmax, or an identity function can be used as the activation function.
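• Based on the definitions above, with * denoting convolution, Equation 3 plausibly has the standard convolutional form:

$$H_n = f\left(W_n * X_n + b_n\right)$$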
• the output of the hidden layer H L , which is the last hidden layer, can determine the values of the samples of the prediction block.
• alternatively, the layers of the prediction network may be layers other than convolutional layers.
• such a differently connected layer may be, for example, a fully connected layer, a pooling layer, a deconvolutional layer, or an unpooling layer.
  • the input and output of the prediction network may be a gray scale image.
  • the input and output of the prediction network may be images represented in a specified color space such as RGB, YUV or YCbCr.
  • the input and output of the prediction network may be a plurality of image planes separated according to a plurality of channels of the color space.
• before the samples of the reference block are input to the prediction network, pre-processing may be performed on the samples of the reference block.
• correspondingly, post-processing may be performed on the samples of the prediction block.
  • the post-processing may be the inverse of the preprocessing.
  • the sample of the prediction block can be used for prediction etc. after the post-processing is performed.
  • the preprocessing may be mean subtraction, normalization, principal component analysis (PCA), or whitening.
• the pre-processing may be mean subtraction, and the post-processing may be the inverse of the mean subtraction.
• the pre-processing may be normalization, and the post-processing may be the inverse of the normalization.
• the pre-processing may be principal component analysis, and the post-processing may be the inverse of the principal component analysis.
• the pre-processing may be whitening, and the post-processing may be the inverse of the whitening.
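• A minimal sketch of one such pre-processing/post-processing pair (mean subtraction and its inverse); the mean value and the NumPy usage are illustrative assumptions:

```python
import numpy as np

def preprocess(samples, mean):
    """Mean-subtraction pre-processing applied to reference-block samples
    before they are input to the prediction network."""
    return np.asarray(samples, dtype=np.float32) - mean

def postprocess(predicted, mean):
    """The inverse of the pre-processing, applied to the prediction-block
    samples output by the prediction network."""
    return np.asarray(predicted, dtype=np.float32) + mean
```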
• the unit of learning of the prediction network may be an image. That is, learning of the prediction network may be performed for one image.
• the unit of learning of the prediction network may be a batch or a mini-batch.
• the batch may be a plurality of images. That is, learning of the prediction network may be performed on the batch.
• a batch normalization layer may be inserted at the input end of a hidden layer of the prediction network.
• a batch normalization layer may be inserted at the input end of each of one or more hidden layers of the prediction network.
  • Learning of the prediction network can be accomplished by iteratively updating the network parameters of the prediction network.
  • the network parameters may be iteratively updated by a method such as error back-propagation using a gradient descent.
• the network parameters may include the weights and biases of the nodes of the layers of the prediction network. Weights and biases for all nodes of all layers of the prediction network may be set as network parameters. That is to say, the network parameters may refer to all learnable variables that affect the output of the prediction network. For example, the network parameters may include W n and b n in Equation 2 and Equation 3.
  • a learning rate can be used to adjust the degree of learning.
  • the learning rate may be a value that adjusts a change in network parameters due to one learning.
• the learning rate can be multiplied by a value associated with the learning. For example, the learning rate can be multiplied by the gradient in the gradient descent algorithm.
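• Illustratively, with a learning rate η scaling the gradient in gradient descent, one update of the network parameters θ takes the standard form (a generic formulation, not quoted from the description):

$$\theta \leftarrow \theta - \eta \, \nabla_{\theta} L(\theta)$$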
  • the loss function for learning of the prediction network can be defined based on the prediction image and the original image.
• the original image can be the image that is input for the learning of the prediction network.
• the prediction image may be the image output by the prediction network during learning.
  • the loss function can be defined based on the square of the difference between the predicted image and the original image as shown in Equation 4 below.
• alternatively, the loss function can be defined based on the absolute value of the difference between the prediction image and the original image, as shown in Equation 5 below.
• θ can represent the network parameters.
• m may be the size of the batch or the size of the mini-batch.
• X i can be the input to the prediction network.
• F(X i ) may be the output of the prediction network.
• Y i can mean the original image.
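• Based on the definitions above, Equations 4 and 5 plausibly take the standard squared-error and absolute-error forms, respectively:

$$L(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( F(X_i) - Y_i \right)^2 \qquad \text{(Equation 4)}$$

$$L(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left| F(X_i) - Y_i \right| \qquad \text{(Equation 5)}$$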
  • prediction network learning can be performed in a direction in which the difference between the predicted image and the original image is reduced.
• the loss function of the prediction network can be defined based on an adversarial loss function in addition to the prediction image and the original image.
• the loss function L total can be defined as shown in Equation 6 below.
• L recon. may be a loss function according to Equation 4 or Equation 5 described above.
• that is to say, L recon. may be a loss function defined based on the square of the difference between the prediction image and the original image, or a loss function defined based on the absolute value of the difference between the prediction image and the original image.
• L adv. may be an adversarial loss function.
• α can be a real number between 0 and 1 inclusive.
• using α, the relative contributions of L recon. and L adv. can be adjusted.
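• One plausible reconstruction of Equation 6, with α weighting the two terms (the exact weighting used by the description may differ):

$$L_{total} = \alpha \, L_{recon.} + (1 - \alpha) \, L_{adv.}$$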
• a default parameter θ default can be defined.
  • the default parameter can be a predefined value.
• the default parameters can be used in the encoding apparatus 100 and the decoding apparatus 200 as initial values of the network parameters of the prediction network. Initialization of the network can occur at various points in time.
  • the network parameters of the prediction network may be repeatedly initialized within one video sequence.
• the network parameters of the prediction network may be initialized for a specified target, and the specified target may be 1) a slice, 2) a picture, 3) an instantaneous decoding refresh (IDR) picture, 4) a picture having a temporal identifier (ID) different from the temporal ID of the preceding picture, or the like.
  • all the network parameters of the prediction network can be initialized at the beginning of the encoding or decoding of the IDR picture.
  • all network parameters of the prediction network can be initialized at the beginning of the encoding or decoding of each slice.
  • the network parameters of the intra prediction network can be initialized at the start of encoding or decoding of each picture.
  • the network parameters of the inter prediction network may be initialized at the beginning of the encoding or decoding of the first picture having different temporal identifiers in a group of pictures (GOP).
  • GOP: group of pictures
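A minimal sketch of the initialization triggers listed in the preceding items is given below; the class, method, and event names are hypothetical illustrations, not syntax defined by the embodiments.

```python
# Hypothetical sketch: resetting prediction-network parameters to the
# predefined defaults (theta_default) at the events described above.
class PredictionNet:
    def __init__(self, name, theta_default):
        self.name = name
        self.theta_default = list(theta_default)
        self.params = list(theta_default)

    def initialize(self):
        self.params = list(self.theta_default)  # reset to default parameters

def on_event(nets, event):
    if event in ("idr_picture_start", "slice_start"):
        for net in nets.values():               # all network parameters
            net.initialize()
    elif event == "picture_start":
        nets["intra"].initialize()              # intra prediction network only
    elif event == "gop_new_temporal_id_picture_start":
        nets["inter"].initialize()              # inter prediction network only

nets = {"intra": PredictionNet("intra", [0.0]),
        "inter": PredictionNet("inter", [0.0])}
on_event(nets, "idr_picture_start")
```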
  • the network parameters of the predictive network may be initialized non-iteratively (i.e., only once) at a particular point in time within a video sequence.
  • the initialization of the network parameters of the prediction network can be done using data related to the above initialization.
  • the data associated with the initialization may include an initial value of a network parameter.
  • the data relating to the initialization of the network parameters of the prediction network may be a predetermined value or a value derived from other data.
  • data related to the initialization of the network parameters may be signaled from the encoding apparatus 100 to the decoding apparatus 200 via the bit stream.
  • the bitstream may include data related to the initialization of the network parameters.
  • the target of the signaling may be a parameter set or a block.
  • one or more of the VPS, SPS, PPS, and slice headers may contain data related to the initialization of the network parameters.
  • one or more of the CTU, CU, PU, and TU may comprise data related to the initialization of the network parameters. Data relating to the initialization of the network parameters may be used for encoding and decoding the entity including the data itself.
  • the network parameters of the prediction network may be updated online using sample values of samples of the residual block in the course of encoding and / or decoding the image.
  • a bitstream may be received.
  • a quantized transform coefficient level (i.e., a quantized level) may be obtained from the bitstream.
  • a residual block for a target block may be generated.
  • the residual block may be a reconstructed residual block.
  • the online update of the network parameters of the predicted network can be performed using the residual block.
  • an online update of the network parameters of the predictive network may be performed using the reconstructed residual block.
  • the online update of the network parameters can be performed in each of the plurality of prediction networks.
  • Various loss functions can be defined for online updating of the network parameters of the prediction network.
  • the loss function can be determined based on the residual block.
  • the loss function may be defined based on the square of the sample value of the sample of the residual block, as described in Equation 7 below.
  • the loss function can be defined based on the absolute value of the sample value of the sample of the residual block, as described in Equation 8 below.
  • θ_online may be a network parameter of the prediction network.
  • s may be the number of values R_i that are input to the prediction network.
  • R_i may be the sum of the sample values of the samples in a reconstructed residual block.
  • R_i may be generated by decoding one or more TUs.
  • for example, when the size of the residual block R_i is 16x16, the residual block R_i may be generated by decoding a single TU having a size of 16x16.
  • alternatively, the residual block R_i may be generated by decoding four adjacent TUs each having a size of 8x8.
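Equations 7 and 8 are not reproduced in this text. Using the definitions above (θ_online, s, R_i), a plausible reconstruction is the following; the normalization by s is an assumption.

```latex
% Hedged reconstruction of Equations 7 and 8 from the surrounding definitions.
L(\theta_{online}) = \frac{1}{s} \sum_{i=1}^{s} R_i^{2}              % Equation 7
L(\theta_{online}) = \frac{1}{s} \sum_{i=1}^{s} \left| R_i \right|   % Equation 8
```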
  • the loss function L_total may be used as a loss function for the online update of the network parameters of the prediction network.
  • the loss function for online updating of the network parameters may be determined based on the reconstructed block.
  • the loss function may be determined based on the reconstructed block.
  • the loss function can be defined based on the sample value of the sample of the reconstructed block.
  • R_i may represent the difference between the reconstructed block and the prediction block output from the prediction network.
  • R_i may be the sum of the difference values.
  • the difference value may be the difference between the sample value of a sample of the reconstructed block and the sample value of the corresponding sample of the prediction block.
  • s may be the number of values R_i.
  • the loss function can be defined based on the square of R_i.
  • the loss function may be defined based on the absolute value of R_i.
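The following toy sketch illustrates the direction of such an online update: the parameters are adjusted so that the residual (the difference between the reconstruction target and the prediction) shrinks. It uses a one-parameter linear predictor rather than the prediction network of the embodiments, and all names are hypothetical.

```python
import numpy as np

def residual_loss(r, use_abs=False):
    """Equation 7/8-style loss over residual values R_i."""
    r = np.asarray(r, dtype=float)
    return float(np.mean(np.abs(r)) if use_abs else np.mean(r ** 2))

def online_update(theta, reference, target, update_rate=0.1):
    """One online step for a toy predictor: prediction = theta * reference."""
    residual = target - theta * reference               # residual samples
    grad = -2.0 * float(np.mean(residual * reference))  # d(mean residual^2)/d theta
    return theta - update_rate * grad                   # change scaled by update rate

theta = 0.5
reference, target = np.ones(16), np.full(16, 0.9)
for _ in range(100):                                    # repeated online updates
    theta = online_update(theta, reference, target)
print(round(theta, 3))                                      # -> 0.9
print(round(residual_loss(target - theta * reference), 6))  # -> 0.0
```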
  • the online updating of the network parameters of the prediction network can be performed continuously during the decoding of the video. That is to say, the online update of the network parameters of the prediction network can be repeatedly performed within one video sequence.
  • the network parameters of the predictive network may be updated for a specified object, where the specified object may be 1) a slice, 2) a picture, 3) an IDR picture, or 4) a picture having a temporal identifier different from the temporal identifiers of previously coded pictures, or the like.
  • online updating of all network parameters of the prediction network can be performed at the completion of coding or decoding of the IDR picture.
  • online updating of all network parameters of the prediction network can be performed at the completion of encoding or decoding of each slice.
  • online updating of the network parameters of the intra prediction network can be performed at the completion of encoding or decoding of each picture.
  • the online update of the network parameters of the inter prediction network may be performed at the completion of coding or decoding of the first picture having different temporal identifiers in the group of pictures.
  • the online update of the network parameters of the predictive network may be performed non-iteratively at a specified point within a video sequence.
  • the degree of update can be adjusted through an update rate.
  • the online update of the network parameters of the prediction network can be done using data related to the above-mentioned online update.
  • the data related to the online update may include information indicating the timing of the online update, the update rate of the online update, the number of residual blocks input for the online update, and the like.
  • the data relating to the online updating of the network parameters may be a predetermined value or a value derived from other data.
  • the data related to the online update of the network parameters may be signaled from the encoding apparatus 100 to the decoding apparatus 200 via the bit stream.
  • the bitstream may include data related to online updating of network parameters.
  • the target of the signaling may be a parameter set or a block.
  • one or more of the VPS, SPS, PPS, and slice headers may contain data related to online updating of network parameters.
  • one or more of the CTU, CU, PU, and TU may include data related to the online update of the network parameters.
  • the data relating to the online update of the network parameters may be used for encoding and decoding of objects including itself.
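As a hedged illustration of the signaled data described above, the sketch below bundles the online-update fields into one container; the field names and default values are assumptions, not normative syntax elements.

```python
from dataclasses import dataclass

# Hypothetical container for online-update data carried in a parameter set,
# slice header, or block (field names are illustrative assumptions).
@dataclass
class OnlineUpdateData:
    update_time: str = "slice_end"    # when the online update is performed
    update_rate: float = 1e-4         # scales the change to the parameters
    num_residual_blocks: int = 4      # s: residual blocks fed to one update

signaled = OnlineUpdateData(update_time="picture_end", update_rate=5e-5)
print(signaled)
```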
  • the order in which the steps, operations, and procedures of the embodiments are applied may differ between the encoding apparatus 1600 and the decoding apparatus 1700. Alternatively, the order in which the steps, operations, and procedures of the embodiments are applied may be the same in the encoding apparatus 1600 and the decoding apparatus 1700.
  • the embodiments may be applied differently to the luma signal and the chroma signal.
  • embodiments may be performed identically for a luma signal and a chroma signal.
  • the prediction network of the embodiments may be applied to only some of the plurality of channels of the target block; these channels may be luma channels or chroma channels.
  • the prediction network of the embodiments may be applied only when the shape of the target block is a specified shape.
  • the specified shape may be a square shape or a non-square shape.
  • Embodiments may be determined based on the size of at least one of the CU, PU, TU and target block.
  • the size may be defined as a minimum size and / or a maximum size for an embodiment to be applied to an object, or may be defined as a fixed size for an embodiment to be applied to an object.
  • for example, a first embodiment may be applied at a first size, and a second embodiment may be applied at a second size. That is, the embodiments can be applied in combination according to the size of the object. The embodiments may also be applied only when the size of the object is at least the minimum size and less than the maximum size. That is, the embodiments may be applied only when the size of the object falls within a specified range.
  • the embodiments can be applied only when the size of the target block is larger than the specified size.
  • the specified size may be 8x8, 16x16, 32x32 or 64x64.
  • the embodiments may be applied only when the size of the target block is 4x4.
  • the embodiments can be applied only when the size of the target block is smaller than or equal to the specified size.
  • the specified size may be 8x8, 16x16, 32x32 or 64x64.
  • the embodiments can be applied only when the size of the target block is 8x8 or more and 16x16 or less.
  • the embodiments can be applied only when the size of the target block is 16x16 or more and 64x64 or less.
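A minimal predicate capturing the size-based conditions above might look as follows; the thresholds are the example values from the preceding items, and the function name is hypothetical.

```python
# Hypothetical sketch: apply an embodiment only when the target-block size
# falls within a specified range (e.g., 16x16 or more and 64x64 or less).
def embodiment_applies(width, height, min_size=16, max_size=64):
    return min_size <= width <= max_size and min_size <= height <= max_size

assert embodiment_applies(32, 32)
assert not embodiment_applies(8, 8)
assert not embodiment_applies(128, 128)
```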
  • Embodiments can be determined depending on a temporal layer.
  • a separate identifier may be signaled to identify the temporal layer to which the embodiments are applied.
  • Embodiments may optionally be applied to temporal layers specified by identifiers.
  • the identifier may indicate the lowest layer and/or the highest layer to which the embodiments are applied, or may indicate a specific layer to which the embodiments are applied.
  • the temporal layer to which the embodiment is applied may be predetermined.
  • the embodiments can be applied only when the temporal layer of the target image is the lowest layer.
  • the embodiments can be applied only when the temporal layer identifier of the target image is zero.
  • the embodiments may be applied only when the temporal layer identifier of the target image is one or more.
  • the embodiments can be applied only when the temporal layer of the target image is the highest layer.
  • a slice type to which the embodiments are applied can be defined. Embodiments may be selectively applied depending on the type of slice.
  • the target block may be divided into sub-blocks, and the intra-prediction of the embodiment may be performed on the divided sub-blocks.
  • the embodiments of the present invention described above can be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium.
  • the computer-readable recording medium may include program commands, data files, data structures, and the like, alone or in combination.
  • the program instructions recorded on the computer-readable recording medium may be those specially designed and constructed for the present invention or may be those known and used by those skilled in the computer software arts.
  • the computer-readable recording medium may include information used in embodiments according to the present invention.
  • the computer readable recording medium may comprise a bit stream, and the bit stream may comprise the information described in embodiments according to the present invention.
  • the computer-readable recording medium may comprise a non-transitory computer-readable medium.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include machine language code such as those generated by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules for performing the processing according to the present invention, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed are a video decoding method, a decoding apparatus, an encoding method, and an encoding apparatus. A prediction block for a target block is generated by performing prediction for the target block using a prediction network, and a reconstructed block for the target block is generated based on the prediction block and a reconstructed residual block. The prediction network comprises an intra prediction network and an inter prediction network and, when performing prediction, uses a spatial reference block and/or a temporal reference block. A loss function is defined for training the prediction network, and training of the prediction network is performed according to the loss function.
PCT/KR2018/015844 2017-12-14 2018-12-13 Procédé et dispositif de codage et de décodage d'image utilisant un réseau de prédiction WO2019117645A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/772,443 US11166014B2 (en) 2017-12-14 2018-12-13 Image encoding and decoding method and device using prediction network

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20170172537 2017-12-14
KR10-2017-0172537 2017-12-14
KR10-2018-0160775 2018-12-13
KR1020180160775A KR102262554B1 (ko) 2017-12-14 2018-12-13 예측 네트워크를 사용하는 영상의 부호화 및 복호화를 위한 방법 및 장치

Publications (1)

Publication Number Publication Date
WO2019117645A1 true WO2019117645A1 (fr) 2019-06-20

Family

ID=66819306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/015844 WO2019117645A1 (fr) 2017-12-14 2018-12-13 Procédé et dispositif de codage et de décodage d'image utilisant un réseau de prédiction

Country Status (1)

Country Link
WO (1) WO2019117645A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501010A (zh) * 2020-10-28 2022-05-13 Oppo广东移动通信有限公司 图像编码方法、图像解码方法及相关装置
WO2022111233A1 (fr) * 2020-11-30 2022-06-02 华为技术有限公司 Procédé de codage de mode de prédiction intra, et appareil
CN115209147A (zh) * 2022-09-15 2022-10-18 深圳沛喆微电子有限公司 摄像头视频传输带宽优化方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110194599A1 (en) * 2008-10-22 2011-08-11 Nippon Telegraph And Telephone Corporation Scalable video encoding method, scalable video encoding apparatus, scalable video encoding program, and computer readable recording medium storing the program
JP2014236264A (ja) * 2013-05-31 2014-12-15 ソニー株式会社 画像処理装置、画像処理方法及びプログラム
US20150281691A1 (en) * 2014-03-31 2015-10-01 JVC Kenwood Corporation Video image coding data transmitter, video image coding data transmission method, video image coding data receiver, and video image coding data transmission and reception system
WO2016140090A1 (fr) * 2015-03-04 2016-09-09 ソニー株式会社 Dispositif et procédé de codage d'image
KR20170124499A (ko) * 2009-12-16 2017-11-10 한국전자통신연구원 영상 부호화 및 복호화를 위한 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18888056

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18888056

Country of ref document: EP

Kind code of ref document: A1