WO2019190197A1 - Method and apparatus for processing a video signal

Method and apparatus for processing a video signal

Info

Publication number
WO2019190197A1
Authority
WO
WIPO (PCT)
Prior art keywords
latitude
region
face
image
boundary
Application number
PCT/KR2019/003583
Other languages
English (en)
Korean (ko)
Inventor
이배근
Original Assignee
주식회사 케이티
Application filed by 주식회사 케이티
Publication of WO2019190197A1

Classifications

    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 19/597: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628: Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Definitions

  • the present invention relates to a video signal processing method and apparatus.
  • High-efficiency image compression techniques can be used to address the problems caused by high-resolution, high-quality image data.
  • Image compression techniques include an inter prediction technique for predicting pixel values included in the current picture from a picture before or after the current picture, and an intra prediction technique for predicting pixel values included in the current picture using pixel information within the current picture.
  • An object of the present invention is to provide a method and apparatus for projection-converting a 360 degree image in two dimensions.
  • An object of the present invention is to provide a method of adding a padding area to a boundary or a face boundary of a 360 degree image.
  • An object of the present invention is to provide a method of performing padding using a neighboring face neighboring a current face in three-dimensional space.
  • An object of the present invention is to provide a method for calculating the value of an inactive sample in consideration of the proximity in three-dimensional space.
  • the image encoding method may include converting a 360 degree image onto a two-dimensional plane based on an SSP projection transformation technique, and encoding the two-dimensional image projected and converted on the two-dimensional plane.
  • In the image encoding method according to the present invention, the high latitude region face may include an active region corresponding to a first circle region generated by projection-transforming the region above an extended reference latitude of the 360-degree image, and an inactive region corresponding to the area of a square enclosing the first circle region from which a second circle region, generated by projection-transforming the region between the reference latitude and the extended reference latitude, has been cropped.
  • An image decoding method according to the present invention includes decoding information on a projection transformation technique of a 360-degree projection image, decoding the 360-degree projection image projected and transformed by an SSP projection transformation technique based on the information, and back-projecting the decoded 360-degree projection image.
  • In the image decoding method, the high latitude region face may likewise include an active region corresponding to a first circle region generated by projection-transforming the region above an extended reference latitude of the 360-degree image, and an inactive region corresponding to the area of a square enclosing the first circle region from which a second circle region, generated by projection-transforming the region between the reference latitude and the extended reference latitude, has been cropped.
  • the extended reference latitude may be derived by adding or subtracting an offset from the reference latitude.
  • At least one of information on the position of the boundary where the first circle region contacts the square or information on the position of the boundary where the second circle region contacts a vertex of the square may be encoded in the bitstream.
  • In the image encoding method, the projection transformation includes frame packing the high latitude region faces and the mid latitude region face, wherein the frame packing comprises arranging the high latitude region faces and the mid latitude region face in a line.
  • the frame packing may include resizing the sizes of the high latitude region faces to be equal to the size of the mid latitude region face.
  • the encoding / decoding efficiency can be improved by projecting and converting a 360 degree image in two dimensions.
  • a padding area is added to a boundary or a face boundary of a 360 degree image to increase encoding / decoding efficiency.
  • padding is performed using a neighboring face neighboring the current face in a three-dimensional space, thereby preventing the deterioration of the image quality.
  • According to the present invention, it is possible to determine whether to add a padding area to the boundary of the current face in consideration of continuity in three-dimensional space, thereby improving encoding / decoding efficiency.
  • According to the present invention, the value of an inactive sample can be calculated in consideration of its proximity in three-dimensional space.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a partition mode that can be applied to a coding block when the coding block is encoded by inter-screen prediction.
  • FIGS. 4 to 6 are diagrams illustrating a camera apparatus for generating a panoramic image.
  • FIG. 7 is a block diagram of a 360 degree video data generating device and a 360 degree video playing device.
  • FIG. 8 is a flowchart illustrating operations of a 360 degree video data generating device and a 360 degree video playing device.
  • FIG. 10 illustrates a 2D projection method using a cube projection technique.
  • FIG. 11 illustrates a 2D projection method using an icosahedron projection technique.
  • FIG. 12 illustrates a 2D projection method using an octahedron projection technique.
  • FIG. 13 illustrates a 2D projection method using a truncated pyramid projection technique.
  • FIG. 15 is a diagram illustrating the conversion between face 2D coordinates and 3D coordinates.
  • FIG. 16 is a diagram for explaining an example in which padding is performed on an ERP projection image.
  • FIG. 17 is a diagram for explaining an example in which the lengths of the padding area in the horizontal direction and the vertical direction are different in an ERP projection image.
  • FIG. 18 is a diagram illustrating an example in which padding is performed at a boundary of a face.
  • FIG. 19 illustrates an example of determining a sample value of a padding area between faces.
  • FIGS. 20 and 21 are diagrams for describing an example in which padding is performed in an SSP-based projection image.
  • FIGS. 22 and 23 illustrate examples of resampling an area where padding is performed.
  • FIG. 25 is a diagram illustrating an example of determining a high latitude region and a mid-latitude region based on an extended reference latitude.
  • FIG. 26 is a diagram illustrating an example in which a face of a high latitude region is set smaller than a face of a middle latitude region.
  • FIG. 27 is a diagram illustrating an example in which a padding region is added to a high latitude region face in the modified SSP projection transformation technique.
  • FIG. 28 is a diagram illustrating examples in which various padding regions are added to the mid-latitude region.
  • FIG. 29 is a diagram illustrating a plurality of sub inactive regions.
  • FIG. 30 is a diagram illustrating an example in which a high latitude region face is generated.
  • first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present invention.
  • the image encoding apparatus 100 may include a picture splitter 110, predictors 120 and 125, a transformer 130, a quantizer 135, a reordering unit 160, an entropy encoder 165, an inverse quantizer 140, an inverse transformer 145, a filter unit 150, and a memory 155.
  • each of the components shown in FIG. 1 is illustrated independently to represent different characteristic functions in the image encoding apparatus, and this does not mean that each component is made up of separate hardware or a single software unit.
  • That is, each component is listed separately for convenience of description, and at least two of the components may be combined into one component, or one component may be divided into a plurality of components to perform its functions.
  • Integrated and separated embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
  • In addition, some components may not be essential components that perform essential functions of the present invention, but may be optional components merely for improving performance.
  • The present invention can be implemented by including only the components essential for realizing the essence of the invention, excluding those used merely for performance improvement, and a structure including only the essential components, excluding the optional performance-improvement components, is also included within the scope of the present invention.
  • the picture dividing unit 110 may divide the input picture into at least one processing unit.
  • the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
  • the picture dividing unit 110 may divide one picture into a combination of a plurality of coding units, prediction units, and transform units, and may select one combination of coding units, prediction units, and transform units based on a predetermined criterion (e.g., a cost function) to encode the picture.
  • one picture may be divided into a plurality of coding units.
  • a recursive tree structure such as a quad tree structure, a binary tree structure, or a ternary tree structure may be used.
  • a coding unit that is split into other coding units, with one image or the largest coding unit as the root, may be split with as many child nodes as the number of split coding units. A coding unit that is no longer split according to certain restrictions becomes a leaf node.
  • One coding unit may be divided into two, three, or four coding units. When a quad tree structure is used, one coding unit may be divided into four square coding units.
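  • As an illustration of the recursive partitioning described above, the following sketch enumerates the leaf coding units produced when a block is split by quad, binary, or ternary split decisions; the split_decision callback is a hypothetical stand-in for the encoder's actual decision process and is not part of the disclosed apparatus.

```python
# Sketch: recursive coding unit partitioning with quad / binary / ternary splits.
# split_decision is a hypothetical callback returning one of
# "none", "quad", "bin_h", "bin_v", "ter_h", "ter_v".

def split_cu(x, y, w, h, split_decision, depth=0):
    """Return the leaf coding units as (x, y, width, height, depth) tuples."""
    mode = split_decision(x, y, w, h, depth)
    if mode == "none":
        return [(x, y, w, h, depth)]
    if mode == "quad":                      # four square children
        hw, hh = w // 2, h // 2
        children = [(x, y, hw, hh), (x + hw, y, hw, hh),
                    (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    elif mode == "bin_h":                   # two children stacked vertically
        children = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    elif mode == "bin_v":                   # two children side by side
        children = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif mode == "ter_h":                   # 1:2:1 horizontal ternary split
        q = h // 4
        children = [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
    else:                                   # "ter_v": 1:2:1 vertical ternary split
        q = w // 4
        children = [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    leaves = []
    for cx, cy, cw, ch in children:         # each child is one depth level deeper
        leaves += split_cu(cx, cy, cw, ch, split_decision, depth + 1)
    return leaves

# Example: quad-split a 64x64 root coding tree unit once, then stop.
leaves = split_cu(0, 0, 64, 64, lambda x, y, w, h, d: "quad" if d == 0 else "none")
print(len(leaves))   # 4 leaf coding units at depth 1
```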
  • a coding unit may be used as a unit for encoding or may be used as a unit for decoding.
  • the prediction unit may be split into at least one square or rectangular shape of the same size within one coding unit, or one prediction unit among the prediction units split from one coding unit may be split so as to have a shape and/or size different from another prediction unit.
  • When intra prediction is performed based on a coding unit, the prediction may be performed without splitting the coding unit into a plurality of NxN prediction units.
  • the predictors 120 and 125 may include an inter predictor 120 that performs inter prediction and an intra predictor 125 that performs intra prediction. Whether to use inter prediction or intra prediction for a prediction unit may be determined, and specific information (e.g., an intra prediction mode, a motion vector, a reference picture, etc.) according to each prediction method may be determined. In this case, the processing unit in which the prediction is performed may differ from the processing unit in which the prediction method and its details are determined. For example, the prediction method and the prediction mode may be determined per prediction unit, while the prediction itself may be performed per transform unit. The residual value (residual block) between the generated prediction block and the original block may be input to the transformer 130.
  • prediction mode information and motion vector information used for prediction may be encoded by the entropy encoder 165 together with the residual value and transmitted to the decoder.
  • the original block may be encoded as it is and transmitted to the decoder without generating the prediction block through the prediction units 120 and 125.
  • the inter predictor 120 may predict a prediction unit based on information of at least one of a previous picture or a subsequent picture of the current picture, and in some cases, may predict a prediction unit based on information of a partial region within the current picture for which encoding has been completed.
  • the inter predictor 120 may include a reference picture interpolator, a motion predictor, and a motion compensator.
  • the reference picture interpolator may receive reference picture information from the memory 155 and generate pixel information of an integer pixel or less in the reference picture.
  • a DCT based 8-tap interpolation filter having different filter coefficients may be used to generate pixel information of integer pixels or less in units of 1/4 pixels.
  • a DCT-based interpolation filter having different filter coefficients may be used to generate pixel information of an integer pixel or less in units of 1/8 pixels.
  • the motion predictor may perform motion prediction based on the reference picture interpolated by the reference picture interpolator.
  • various methods such as full search-based block matching algorithm (FBMA), three step search (TSS), and new three-step search algorithm (NTS) may be used.
  • the motion vector may have a motion vector value in units of 1/2, 1/4, 1/8, or 1/16 pixels based on the interpolated pixels.
  • the motion prediction unit may predict the current prediction unit by using a different motion prediction method.
  • various methods such as a skip method, a merge method, an advanced motion vector prediction (AMVP) method, an intra block copy method, and the like may be used.
  • AMVP advanced motion vector prediction
  • the intra predictor 125 may generate a prediction unit based on reference pixel information around the current block, which is pixel information in the current picture. If a neighboring block of the current prediction unit is a block on which inter prediction has been performed, so that a reference pixel is a pixel resulting from inter prediction, the reference pixel included in the block on which inter prediction has been performed may be replaced with reference pixel information of a neighboring block on which intra prediction has been performed. That is, when a reference pixel is not available, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
  • In intra prediction, the prediction modes may include a directional prediction mode that uses reference pixel information according to a prediction direction and a non-directional mode that does not use directional information when performing prediction.
  • the mode for predicting the luminance information and the mode for predicting the color difference information may be different, and the intra prediction mode information or the predicted luminance signal information used for predicting the luminance information may be utilized to predict the color difference information.
  • When performing intra prediction, if the size of the prediction unit and the size of the transform unit are the same, intra prediction for the prediction unit may be performed based on the pixels to the left of, above and to the left of, and above the prediction unit. However, if the size of the prediction unit differs from the size of the transform unit when performing intra prediction, intra prediction may be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN splitting may be used only for the minimum coding unit.
  • the intra prediction method may generate a prediction block after applying an adaptive intra smoothing (AIS) filter to a reference pixel according to a prediction mode.
  • the type of AIS filter applied to the reference pixel may be different.
  • the intra prediction mode of the current prediction unit may be predicted from the intra prediction mode of the prediction unit existing around the current prediction unit.
  • In predicting the prediction mode of the current prediction unit using mode information predicted from a neighboring prediction unit, if the intra prediction modes of the current prediction unit and the neighboring prediction unit are the same, information indicating that the two prediction modes are the same may be transmitted using predetermined flag information; if the prediction modes of the current prediction unit and the neighboring prediction unit are different, entropy encoding may be performed to encode the prediction mode information of the current block.
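  • As a minimal sketch of the flag-based signaling just described, the following example encodes an intra prediction mode either as an index into the modes of neighboring prediction units or as an explicitly signaled mode; the candidate construction and syntax element names are simplifying assumptions, not the exact scheme of the disclosure.

```python
# Sketch: flag-based intra mode signaling using the modes of neighboring
# prediction units as predictors (simplified, illustrative only).

def encode_intra_mode(current_mode, left_mode, above_mode):
    """Return (syntax_element, value) pairs describing the current intra mode."""
    candidates = []
    for m in (left_mode, above_mode):
        if m is not None and m not in candidates:
            candidates.append(m)
    if current_mode in candidates:
        # The mode matches a neighbor: send a flag plus a candidate index.
        return [("mode_matches_neighbor_flag", 1),
                ("candidate_index", candidates.index(current_mode))]
    # Otherwise the mode itself is signaled (entropy coded in practice).
    return [("mode_matches_neighbor_flag", 0),
            ("remaining_mode", current_mode)]

# Example: current mode 26, left neighbor mode 26, above neighbor mode 10.
print(encode_intra_mode(26, 26, 10))
# [('mode_matches_neighbor_flag', 1), ('candidate_index', 0)]
```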
  • a residual block including residual information, which is the difference between the prediction unit generated by the predictors 120 and 125 and the original block of that prediction unit, may be generated.
  • the generated residual block may be input to the transformer 130.
  • the transformer 130 may transform the residual block, which includes the residual information between the original block and the prediction unit generated by the predictors 120 and 125, using a transform method such as a discrete cosine transform (DCT), a discrete sine transform (DST), or a KLT. Whether to apply DCT, DST, or KLT to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate the residual block.
  • the quantization unit 135 may quantize the values converted by the transformer 130 into the frequency domain.
  • the quantization coefficient may change depending on the block or the importance of the image.
  • the value calculated by the quantization unit 135 may be provided to the inverse quantization unit 140 and the reordering unit 160.
  • the reordering unit 160 may reorder coefficient values with respect to the quantized residual value.
  • the reordering unit 160 may change the two-dimensional block shape coefficients into a one-dimensional vector form through a coefficient scanning method. For example, the reordering unit 160 may scan from DC coefficients to coefficients in the high frequency region by using a Zig-Zag scan method and change them into one-dimensional vectors.
  • In addition, instead of the zig-zag scan, a vertical scan that scans the two-dimensional block-shaped coefficients in the column direction or a horizontal scan that scans the two-dimensional block-shaped coefficients in the row direction may be used. That is, depending on the size of the transform unit and the intra prediction mode, it may be determined which of the zig-zag scan, the vertical scan, and the horizontal scan is used.
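  • The sketch below illustrates the three scan orders mentioned above by flattening a 2D coefficient block into a one-dimensional vector; the zig-zag traversal shown is the classic JPEG-style order and is meant only as an illustration of the idea.

```python
# Sketch: reorder a 2D block of quantized coefficients into a 1D vector using
# a zig-zag, vertical, or horizontal scan (illustrative only).
import numpy as np

def scan_coefficients(block, method="zigzag"):
    """Flatten an NxN coefficient block into a list in the chosen scan order."""
    n = block.shape[0]
    if method == "vertical":            # column by column
        return [block[r, c] for c in range(n) for r in range(n)]
    if method == "horizontal":          # row by row
        return [block[r, c] for r in range(n) for c in range(n)]
    # Zig-zag: walk the anti-diagonals from the DC coefficient toward the
    # high-frequency corner, alternating the direction on each diagonal.
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return [block[r, c] for r, c in order]

block = np.arange(16).reshape(4, 4)
print(scan_coefficients(block, "zigzag"))
# [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```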
  • the entropy encoder 165 may perform entropy encoding based on the values calculated by the reordering unit 160. Entropy encoding may use various encoding methods such as, for example, Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC).
  • the entropy encoder 165 may receive various information, such as residual value coefficient information of a coding unit, block type information, prediction mode information, partition unit information, prediction unit information, transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information, from the reordering unit 160 and the predictors 120 and 125, and may encode this information.
  • the entropy encoder 165 may entropy encode a coefficient value of a coding unit input from the reordering unit 160.
  • the inverse quantizer 140 and the inverse transformer 145 inverse quantize the quantized values in the quantizer 135 and inversely transform the transformed values in the transformer 130.
  • the residual value generated by the inverse quantizer 140 and the inverse transformer 145 may be combined with the prediction unit predicted by the motion estimator, the motion compensator, and the intra predictor included in the predictors 120 and 125 to generate a reconstructed block.
  • the filter unit 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
  • the deblocking filter may remove block distortion caused by boundaries between blocks in the reconstructed picture.
  • it may be determined whether to apply a deblocking filter to the current block based on the pixels included in several columns or rows included in the block.
  • a strong filter or a weak filter may be applied according to the required deblocking filtering strength.
  • when performing vertical filtering and horizontal filtering for the deblocking filter, the horizontal filtering and the vertical filtering may be processed in parallel.
  • the offset correction unit may correct, on a pixel-by-pixel basis, the offset between the deblocked image and the original image.
  • in order to perform offset correction on a specific picture, a method of dividing the pixels included in the image into a predetermined number of regions, determining a region to which an offset is to be applied, and applying the offset to that region, or a method of applying an offset in consideration of edge information of each pixel, may be used.
  • Adaptive Loop Filtering (ALF) may be performed based on a value obtained by comparing the filtered reconstructed image with the original image. After dividing the pixels included in the image into predetermined groups, one filter to be applied to each group may be determined, and filtering may be performed differently for each group. Information on whether to apply ALF may be transmitted per coding unit (CU) for the luminance signal, and the shape and filter coefficients of the ALF filter to be applied may vary for each block. Alternatively, an ALF filter of the same form (fixed form) may be applied regardless of the characteristics of the target block.
  • the memory 155 may store the reconstructed block or picture calculated by the filter unit 150, and the stored reconstructed block or picture may be provided to the predictors 120 and 125 when performing inter prediction.
  • FIG. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
  • the image decoder 200 may include an entropy decoder 210, a reordering unit 215, an inverse quantizer 220, an inverse transformer 225, predictors 230 and 235, a filter unit 240, and a memory 245.
  • the input bitstream may be decoded by a procedure opposite to that of the image encoder.
  • the entropy decoder 210 may perform entropy decoding in a procedure opposite to that of the entropy encoding performed by the entropy encoder of the image encoder. For example, various methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be applied to the method performed by the image encoder.
  • the entropy decoder 210 may decode information related to intra prediction and inter prediction performed by the encoder.
  • the reordering unit 215 may reorder the bitstream entropy-decoded by the entropy decoder 210 based on the reordering method used by the encoder. The coefficients expressed in the form of a one-dimensional vector may be reordered by reconstructing them into coefficients in a two-dimensional block form.
  • the reordering unit 215 may receive information related to the coefficient scanning performed by the encoder and perform reordering through a method of inverse scanning based on the scanning order used by that encoder.
  • the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient values of the rearranged block.
  • the inverse transformer 225 may perform an inverse transform, that is, an inverse DCT, an inverse DST, or an inverse KLT, on the result of the transform performed by the image encoder, that is, the DCT, DST, or KLT. The inverse transform may be performed based on the transmission unit determined by the image encoder.
  • the inverse transform unit 225 of the image decoder may selectively perform a transform scheme (eg, DCT, DST, KLT) according to a plurality of pieces of information such as a prediction method, a size of a current block, and a prediction direction.
  • the inverse transform unit 225 of the image decoder may select a transform scheme based on the information signaled from the encoding apparatus.
  • the information may indicate a transformation technique for the horizontal direction and a transformation technique for the vertical direction.
  • the prediction units 230 and 235 may generate the prediction block based on the prediction block generation related information provided by the entropy decoder 210 and previously decoded blocks or picture information provided by the memory 245.
  • Intra prediction for a prediction unit is performed based on pixels around the prediction unit; however, when performing intra prediction, if the size of the prediction unit and the size of the transform unit are different, intra prediction may be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN splitting may be used only for the minimum coding unit.
  • the predictors 230 and 235 may include a prediction unit determiner, an inter predictor, and an intra predictor.
  • the prediction unit determiner may receive various information, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, input from the entropy decoder 210, distinguish the prediction unit within the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction.
  • the inter predictor 230 may perform inter prediction on the current prediction unit based on information included in at least one of a previous picture or a subsequent picture of the current picture containing the current prediction unit, using the information required for inter prediction of the current prediction unit provided by the image encoder. Alternatively, inter prediction may be performed based on information of a partial region already reconstructed within the current picture containing the current prediction unit.
  • in order to perform inter prediction, it may be determined, on a coding unit basis, whether the motion prediction method of the prediction unit included in the corresponding coding unit is the skip mode, the merge mode, the AMVP mode, or the intra block copy mode.
  • the intra predictor 235 may generate a prediction block based on pixel information in the current picture.
  • intra prediction may be performed based on intra prediction mode information of the prediction unit provided by the image encoder.
  • the intra predictor 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolator, and a DC filter.
  • the AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter may be determined according to the prediction mode of the current prediction unit.
  • AIS filtering may be performed on the reference pixel of the current block by using the prediction mode and the AIS filter information of the prediction unit provided by the image encoder. If the prediction mode of the current block is a mode that does not perform AIS filtering, the AIS filter may not be applied.
  • the reference pixel interpolator may generate a reference pixel having an integer value or less by interpolating the reference pixel. If the prediction mode of the current prediction unit is a prediction mode for generating a prediction block without interpolating the reference pixel, the reference pixel may not be interpolated.
  • the DC filter may generate the prediction block through filtering when the prediction mode of the current block is the DC mode.
  • the reconstructed block or picture may be provided to the filter unit 240.
  • the filter unit 240 may include a deblocking filter, an offset correction unit, and an ALF.
  • Information on whether a deblocking filter has been applied to the corresponding block or picture and, if so, whether a strong filter or a weak filter has been applied may be provided by the image encoder.
  • the deblocking filter of the image decoder may receive the deblocking filter related information provided by the image encoder, and the image decoder may perform deblocking filtering on the corresponding block.
  • the offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction and offset value information applied to the image during encoding.
  • the ALF may be applied to a coding unit based on ALF application information, ALF coefficient information, and the like provided from the encoder. Such ALF information may be provided included in a specific parameter set.
  • the memory 245 may store the reconstructed picture or block to use as a reference picture or reference block, and may provide the reconstructed picture to the output unit.
  • In the following, the term coding unit is used as a unit of encoding for convenience of description, but it may also be a unit in which decoding as well as encoding is performed.
  • the current block represents a block to be encoded/decoded and, depending on the encoding/decoding step, may represent a coding tree block (or coding tree unit), a coding block (or coding unit), a transform block (or transform unit), a prediction block (or prediction unit), or the like.
  • 'unit' may indicate a basic unit for performing a specific encoding / decoding process
  • 'block' may indicate a sample array having a predetermined size.
  • 'block' and 'unit' may be used interchangeably.
  • the terms coding block and coding unit may be understood to have the same meaning.
  • One picture may be divided into square or non-square basic blocks and encoded / decoded.
  • the basic block may be referred to as a coding tree unit.
  • a coding tree unit may be defined as the largest coding unit allowed in a sequence or slice. Information regarding whether the coding tree unit is square or non-square or the size of the coding tree unit may be signaled through a sequence parameter set, a picture parameter set or a slice header.
  • the coding tree unit may be divided into smaller sized partitions.
  • the partition generated by dividing the coding tree unit may be referred to as having depth 1, and the partition generated by dividing a partition having depth 1 may be defined as having depth 2. That is, a partition generated by dividing a partition of depth k within the coding tree unit may be defined as having depth k+1.
  • a partition of any size generated as the coding tree unit is split may be defined as a coding unit.
  • the coding unit may be split recursively or split into basic units for performing prediction, quantization, transform, or in-loop filtering.
  • That is, a partition of arbitrary size generated as the coding unit is split may be defined as a coding unit, or may be defined as a transform unit or a prediction unit, which is a basic unit for performing prediction, quantization, transform, or in-loop filtering.
  • a prediction block having the same size as the coding block or a size smaller than the coding block may be determined through prediction division of the coding block.
  • Predictive partitioning of a coding block may be performed by a partition mode (Part_mode) indicating a partition type of a coding block.
  • the size or shape of the prediction block may be determined according to the partition mode of the coding block.
  • the division type of the coding block may be determined through information specifying any one of partition candidates.
  • the partition candidates available to the coding block may include an asymmetric partition shape (eg, nLx2N, nRx2N, 2NxnU, 2NxnD) according to the size, shape, or coding mode of the coding block.
  • a partition candidate available to a coding block may be determined according to an encoding mode of the current block.
  • FIG. 3 is a diagram illustrating a partition mode that may be applied to a coding block when the coding block is encoded by inter prediction.
  • any one of eight partition modes may be applied to the coding block, as shown in the example illustrated in FIG. 3.
  • partition mode PART_2Nx2N or PART_NxN may be applied to the coding block.
  • PART_NxN may be applied when the coding block has a minimum size.
  • the minimum size of the coding block may be predefined in the encoder and the decoder.
  • information about the minimum size of the coding block may be signaled through the bitstream.
  • the minimum size of the coding block is signaled through the slice header, and accordingly, the minimum size of the coding block may be defined for each slice.
  • the partition candidates available to the coding block may be determined differently according to at least one of the size or shape of the coding block.
  • the number or type of partition candidates that a coding block may use may be differently determined according to at least one of the size or shape of the coding block.
  • the type or number of asymmetric partition candidates among partition candidates available to the coding block may be limited according to the size or shape of the coding block.
  • the number or type of asymmetric partition candidates that a coding block may use may be differently determined according to at least one of the size or shape of the coding block.
  • the size of the prediction block may have a size of 64x64 to 4x4.
  • the prediction block may not have a 4x4 size in order to reduce the memory bandwidth.
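  • As an illustration of how a partition mode maps to prediction block sizes, the sketch below lists the prediction unit dimensions produced by each of the eight inter partition modes of FIG. 3 for a 2Nx2N coding block; the helper function is hypothetical and only reflects the commonly used symmetric and asymmetric splits named above.

```python
# Sketch: prediction unit sizes for each inter partition mode of a square
# coding block (widths and heights in samples; illustrative only).

def prediction_unit_sizes(part_mode, cb_size):
    """Return a list of (width, height) tuples for the given partition mode."""
    n = cb_size // 2
    modes = {
        "PART_2Nx2N": [(cb_size, cb_size)],
        "PART_2NxN":  [(cb_size, n)] * 2,
        "PART_Nx2N":  [(n, cb_size)] * 2,
        "PART_NxN":   [(n, n)] * 4,
        # Asymmetric modes split one direction in a 1:3 ratio.
        "PART_2NxnU": [(cb_size, cb_size // 4), (cb_size, 3 * cb_size // 4)],
        "PART_2NxnD": [(cb_size, 3 * cb_size // 4), (cb_size, cb_size // 4)],
        "PART_nLx2N": [(cb_size // 4, cb_size), (3 * cb_size // 4, cb_size)],
        "PART_nRx2N": [(3 * cb_size // 4, cb_size), (cb_size // 4, cb_size)],
    }
    return modes[part_mode]

# Example: a 32x32 coding block with PART_2NxnU yields a 32x8 PU and a 32x24 PU.
print(prediction_unit_sizes("PART_2NxnU", 32))   # [(32, 8), (32, 24)]
```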
  • FIGS. 4 to 6 illustrate an example of capturing up, down, left, and right sides simultaneously using a plurality of cameras.
  • Video generated by stitching a plurality of videos may be referred to as panoramic video.
  • an image having a degree of freedom based on a predetermined central axis may be referred to as 360 degree video.
  • the 360 degree video may be an image having rotation degrees of freedom for at least one of Yaw, Roll, and Pitch.
  • the structure (or camera arrangement) of the camera for capturing 360 degree video may be in a circular arrangement, as in the example shown in FIG. 4.
  • it may be a one-dimensional vertical / horizontal arrangement as in the example shown in (a) of FIG. 5.
  • it may be a two-dimensional arrangement (that is, a mixture of vertical arrangement and horizontal arrangement).
  • the spherical device may be equipped with a plurality of cameras.
  • FIG. 7 is a block diagram of a 360 degree video data generating apparatus and a 360 degree video playing apparatus
  • FIG. 8 is a flowchart illustrating operations of the 360 degree video data generating apparatus and 360 degree video playing apparatus.
  • Referring to FIG. 7, the 360-degree video data generating apparatus may include a projection unit 710, a frame packing unit 720, an encoding unit 730, and a transmission unit 740, and the 360-degree video playing apparatus may include a file parsing unit 750, a decoding unit 760, a frame depacking unit 770, and a reverse projection unit 780.
  • the encoding unit and the decoding unit illustrated in FIG. 7 may correspond to the image encoding apparatus and the image decoding apparatus illustrated in FIGS. 1 and 2, respectively.
  • the data generating apparatus may determine a projection conversion technique of the 360 degree image generated by stitching the images photographed by the plurality of cameras.
  • the projection unit 710 may determine the 3D form of the 360 degree video according to the determined projection transformation technique, and project the 360 degree video onto the 2D plane according to the determined 3D form (S801).
  • the projection transformation technique may represent the 3D form in which the 360-degree video is approximated and the manner in which the 360-degree video is developed onto the 2D plane.
  • the 360-degree image may be approximated as having a form of sphere, cylinder, cube, octahedron or icosahedron in 3D space, according to a projection transformation technique.
  • an image generated by projecting a 360 degree video onto a 2D plane may be referred to as a 360 degree projection image.
  • the 360 degree projection image may be composed of at least one face according to a projection conversion technique.
  • each surface constituting the polyhedron may be defined as a face.
  • the specific surface constituting the polyhedron may be divided into a plurality of regions, and the divided regions may be set to form separate faces.
  • a plurality of faces on the polyhedron may be set to constitute one face.
  • one face and the padding area on the polyhedron may be configured to constitute one face.
  • the 360 degree video approximated in the shape of a sphere may have a plurality of faces according to the projection transformation technique.
  • a face that is a signal processing object will be referred to as a 'current face'.
  • the current face may mean a face that is an object of encoding / decoding or a frame packing / frame depacking according to a signal processing step.
  • the frame packing may be performed in the frame packing unit 720 (S802).
  • Frame packing may include at least one of reordering, resizing, warping, rotating, or flipping a face.
  • the 360-degree projection image may be converted into a form (eg, a rectangle) having high encoding / decoding efficiency, or discontinuity data between faces may be removed.
  • Frame packing may also be referred to as frame reordering or region-wise packing. Frame packing may be selectively performed to improve encoding / decoding efficiency for the 360 degree projection image.
  • the encoding unit 730 may perform encoding on the 360 degree projection image or the 360 degree projection image on which the frame packing is performed (S803).
  • the encoder 730 may encode information indicating a projection transformation technique for the 360 degree video.
  • the information indicating the projection transformation technique may be index information indicating any one of the plurality of projection transformation techniques.
  • the encoder 730 may encode information related to frame packing for the 360 degree video.
  • the information related to the frame packing may include at least one of whether frame packing is performed, the number of faces, the position of the face, the size of the face, the shape of the face, or the rotation information of the face.
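  • As an illustration of the kind of side information described above, the sketch below groups a projection transformation index and per-face frame packing information into a simple metadata structure; the field names and the example layout are assumptions for illustration, not the actual bitstream syntax of the disclosure.

```python
# Sketch: metadata an encoder might signal for a 360-degree projection image.
# Field names are illustrative, not the actual syntax elements of the disclosure.
from dataclasses import dataclass, field
from typing import List, Tuple

PROJECTION_FORMATS = ["ERP", "CMP", "ISP", "OHP", "TPP", "SSP", "ECP", "RSP"]

@dataclass
class FacePackingInfo:
    face_index: int
    position: Tuple[int, int]       # top-left sample position in the packed frame
    size: Tuple[int, int]           # width and height after any resizing
    rotation_degrees: int = 0       # rotation applied during frame packing
    flipped: bool = False           # whether the face was flipped

@dataclass
class ProjectionMetadata:
    projection_format_index: int    # index into PROJECTION_FORMATS
    frame_packing_enabled: bool
    faces: List[FacePackingInfo] = field(default_factory=list)

# Example: an SSP-style layout with two circular high-latitude faces and one
# rectangular mid-latitude face arranged in a line (sizes are arbitrary).
meta = ProjectionMetadata(
    projection_format_index=PROJECTION_FORMATS.index("SSP"),
    frame_packing_enabled=True,
    faces=[
        FacePackingInfo(0, (0, 0), (512, 512)),
        FacePackingInfo(1, (512, 0), (512, 512)),
        FacePackingInfo(2, (1024, 0), (2048, 512)),
    ],
)
print(PROJECTION_FORMATS[meta.projection_format_index], len(meta.faces))   # SSP 3
```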
  • the transmitter 740 may encapsulate the bitstream and transmit the encapsulated data to the player terminal (S804).
  • the file parsing unit 750 may parse the file received from the content providing device (S805).
  • the decoding unit 760 may decode the 360 degree projection image using the parsed data (S806).
  • the frame depacking unit 770 may perform frame depacking (region-wise depacking), which is the inverse of the frame packing performed on the content providing side (S807).
  • Frame depacking may be to restore the frame packed 360 degree projection image to before frame packing is performed.
  • frame depacking may be to reverse the reordering, resizing, warping, rotation, or flipping of a face performed in the data generating device.
  • the inverse projection unit 780 may inversely project the 360 degree projection image on the 2D plane in a 3D form according to a projection transformation technique of the 360 degree video (S808).
  • Projection transformation techniques may include at least one of equirectangular projection (ERP), cube map projection (CMP), icosahedral projection (ISP), octahedron projection (OHP), truncated pyramid projection (TPP), sphere segment projection (SSP), equatorial cylindrical projection (ECP), or rotated sphere projection (RSP).
  • the equirectangular projection (ERP) method is a method of projecting the pixels of a sphere onto a rectangle having an aspect ratio of N:1, and is the most widely used 2D transformation technique.
  • N may be 2, or may be a real number smaller than or larger than 2.
  • when ERP is used, the actual length on the sphere corresponding to a unit length on the 2D plane becomes shorter toward the poles of the sphere.
  • for example, the coordinates of both ends of a unit length on the 2D plane may correspond to a distance of 20 cm near the equator of the sphere, while corresponding to a distance of only 5 cm near the pole of the sphere.
  • accordingly, the equirectangular method has the disadvantage that image distortion is large near the poles of the sphere and coding efficiency is lowered there.
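  • To make the pole-ward oversampling concrete, the sketch below maps spherical coordinates to ERP pixel coordinates and reports the horizontal stretch factor 1/cos(latitude); the mapping convention is the usual one for ERP and is given only as an illustration.

```python
# Sketch: ERP mapping from (latitude, longitude) to pixel coordinates and the
# horizontal oversampling factor near the poles (illustrative only).
import math

def erp_pixel(lat_deg, lon_deg, width, height):
    """Map latitude/longitude in degrees to (x, y) in a width x height ERP image."""
    x = (lon_deg + 180.0) / 360.0 * width     # longitude spans the full width
    y = (90.0 - lat_deg) / 180.0 * height     # latitude spans the full height
    return x, y

def horizontal_stretch(lat_deg):
    """How many ERP samples cover the sphere distance of one equatorial sample."""
    return 1.0 / math.cos(math.radians(lat_deg))

for lat in (0, 45, 75, 89):
    print(f"latitude {lat:2d}: stretch factor {horizontal_stretch(lat):6.2f}")
# Near 75 degrees the same sphere distance is spread over about four times as
# many samples as at the equator, consistent with the 20 cm / 5 cm example above.
```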
  • FIG. 10 illustrates a 2D projection method using a cube projection technique.
  • the cube projection technique involves approximating a 360-degree video to a cube and then converting the cube into 2D.
  • since continuity between the faces is high, the cube projection method has the advantage of higher coding efficiency than the equirectangular method.
  • encoding / decoding may be performed by rearranging the 2D projection-converted image into a quadrangle form.
  • FIG. 11 illustrates a 2D projection method using an icosahedron projection technique.
  • the icosahedron projection technique is a method of approximating a 360 degree video to an icosahedron and converting it into 2D.
  • the icosahedral projection technique is characterized by strong continuity between faces.
  • encoding / decoding may be performed by rearranging faces in the 2D projection-converted image.
  • FIG. 12 illustrates a 2D projection method using an octahedron projection technique.
  • the octahedral projection method is a method of approximating a 360 degree video to an octahedron and converting it into 2D.
  • the octahedral projection technique is characterized by strong continuity between faces.
  • encoding / decoding may be performed by rearranging faces in the 2D projection-converted image.
  • FIG. 13 illustrates a 2D projection method using a truncated pyramid projection technique.
  • the truncated pyramid projection technique is a method of approximating a 360 degree video to a truncated pyramid and converting it into 2D.
  • in the truncated pyramid projection technique, frame packing may be performed such that the face corresponding to a specific viewpoint has a different size from the neighboring faces.
  • the front face may have a larger size than the side face and the back face.
  • the SSP is a method of dividing a spherical 360-degree video into high- and mid-latitude regions and performing 2D projection transformation.
  • the high latitude region includes an arctic region in which the latitude value is greater than or equal to the reference value in the northern hemisphere, and an antarctic region in which the latitude value is greater than or equal to the reference value in the southern hemisphere.
  • the mid-latitude region represents the remaining region, except for the high latitude region.
  • two north and south high latitude regions of the sphere may be mapped to two circles on the 2D plane, and the mid latitude regions of the sphere may be mapped to rectangles on the 2D plane, such as ERP.
  • the boundary between high and mid latitude may be 45 degrees latitude.
  • alternatively, a latitude value smaller than or larger than 45 degrees may be set as the reference latitude.
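  • As a rough illustration of the SSP partitioning just described, the sketch below classifies a spherical sample by latitude against the reference latitude (45 degrees by default) and assigns it to one of the two circular high-latitude faces or to the ERP-like mid-latitude rectangle; the face names are assumptions and the actual sample mapping of the disclosure may differ.

```python
# Sketch: classify a sphere sample for SSP by latitude (illustrative only).
def ssp_region(lat_deg, reference_lat_deg=45.0):
    """Return which SSP face a sample at the given latitude belongs to."""
    if lat_deg >= reference_lat_deg:
        return "north_circle_face"     # arctic high-latitude region
    if lat_deg <= -reference_lat_deg:
        return "south_circle_face"     # antarctic high-latitude region
    return "mid_latitude_rectangle"    # remaining region, mapped like ERP

print(ssp_region(60))    # north_circle_face
print(ssp_region(-50))   # south_circle_face
print(ssp_region(10))    # mid_latitude_rectangle
```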
  • ECP converts spherical 360-degree video into cylindrical form and then converts cylindrical 360-degree video into 2D projection. Specifically, when the ECP is followed, the top and bottom of the cylinder can be mapped to two circles on the 2D plane, and the body of the cylinder can be mapped to the rectangle on the 2D plane.
  • RSP like a tennis ball, represents a method of converting a spherical 360 degree video into two ellipses on a 2D plane.
  • Each sample of the 360 degree projection image may be identified by face 2D coordinates.
  • the face 2D coordinates may include an index f for identifying the face where the sample is located, a coordinate (m, n) representing a sample grid in a 360 degree projection image.
  • FIG. 15 is a diagram illustrating a conversion between a face 2D coordinate and a 3D coordinate.
  • in this case, conversion between three-dimensional coordinates (x, y, z) and face 2D coordinates (f, m, n) may be performed using Equations 1 to 3 below.
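  • Equations 1 to 3 are not reproduced in this extract. Purely as an illustration of the kind of conversion involved, the sketch below uses a common cube map (CMP) convention to turn face 2D coordinates (f, m, n) into a 3D direction; the face ordering, normalization, and sign conventions are assumptions, and the equations of the disclosure may differ.

```python
# Sketch: convert cube-map face 2D coordinates (f, m, n) into a 3D direction
# (x, y, z). Face order and signs are assumed for illustration only.
def face2d_to_3d(f, m, n, face_size):
    """f: face index 0..5, (m, n): sample position on the face, face_size: samples per side."""
    # Map the sample position to [-1, 1] on the face plane (sample-center offset).
    u = 2.0 * (m + 0.5) / face_size - 1.0
    v = 2.0 * (n + 0.5) / face_size - 1.0
    # Assumed face order: +X, -X, +Y, -Y, +Z, -Z.
    directions = {
        0: (1.0, -v, -u),    # +X face
        1: (-1.0, -v, u),    # -X face
        2: (u, 1.0, v),      # +Y face
        3: (u, -1.0, -v),    # -Y face
        4: (u, -v, 1.0),     # +Z face
        5: (-u, -v, -1.0),   # -Z face
    }
    x, y, z = directions[f]
    norm = (x * x + y * y + z * z) ** 0.5   # normalize to a unit direction
    return x / norm, y / norm, z / norm

print(face2d_to_3d(0, 128, 128, 256))   # a direction near the centre of the +X face
```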
  • the current picture in the 360 degree projection image may include at least one or more faces.
  • the number of faces may be 1, 2, 3, 4 or more natural numbers, depending on the projection method.
  • f may be set to a value equal to or smaller than the number of faces.
  • the current picture may include at least one or more faces having the same temporal order or output order (POC).
  • the number of faces constituting the current picture may be fixed or variable.
  • the number of faces constituting the current picture may be limited not to exceed a predetermined threshold.
  • the threshold value may be a fixed value previously promised by the encoder and the decoder.
  • information about the maximum number of faces constituting one picture may be signaled through a bitstream.
  • the faces may be determined by partitioning the current picture using at least one of a horizontal line, a vertical line, or a diagonal line, depending on the projection method.
  • Each face within a picture may be assigned an index to identify each face.
  • Each face may be parallelized, such as a tile or a slice. Accordingly, when performing intra prediction or inter prediction of the current block, neighboring blocks belonging to different faces from the current block may be determined to be unavailable.
  • Faces (or non-parallel regions) for which parallel processing is not allowed may be defined, or faces with interdependencies may be defined. For example, faces that do not allow parallel processing, or faces with interdependencies, may be encoded/decoded sequentially rather than in parallel. Accordingly, even a neighboring block belonging to a different face from the current block may be determined to be available for intra prediction or inter prediction of the current block, depending on whether parallel processing between faces is possible or whether dependencies exist.
  • padding may be performed at a picture or face boundary.
  • the padding may be performed as part of the frame packing step S802, or may be performed as a separate step before the frame packing step.
  • the padding may be performed as a preprocessing process before encoding the 360 degree projection image in which the frame packing is performed, or the padding may be performed as part of the encoding step S803.
  • the padding may be performed in consideration of the continuity of the 360 degree image.
  • the continuity of the 360-degree image may mean whether regions of the image are spatially adjacent when the 360-degree image is projected back onto the sphere or polyhedron.
  • spatially adjacent faces may be understood to have continuity in 3D space. Padding between picture or face boundaries can be performed using spatially continuous samples.
  • FIG. 16 is a diagram for explaining an example in which padding is performed on an ERP projection image.
  • a 360-degree image approximated by a sphere can be developed into a rectangle with a 2:1 aspect ratio to obtain a two-dimensional 360-degree projection image.
  • the left boundary of the 360 degree projection image has continuity with the right boundary.
  • for example, pixels A, B, and C outside the left boundary may be expected to have values similar to pixels A', B', and C' inside the right boundary, and pixels D, E, and F outside the right boundary may be expected to have values similar to pixels D', E', and F' inside the left boundary.
  • in addition, the left portion of the upper boundary has continuity with the right portion of the upper boundary.
  • for example, pixels G and H outside the left portion of the upper boundary can be expected to be similar to pixels G' and H' inside the right portion of the upper boundary, and pixels I and J outside the right portion of the upper boundary can be expected to be similar to pixels I' and J' inside the left portion of the upper boundary.
  • likewise, the left portion of the lower boundary has continuity with the right portion of the lower boundary.
  • for example, pixels K and L outside the left portion of the lower boundary can be expected to be similar to pixels K' and L' inside the right portion of the lower boundary, and pixels M and N outside the right portion of the lower boundary can be expected to be similar to pixels M' and N' inside the left portion of the lower boundary.
  • the padding may be performed at the boundary of the 360-degree projection image or the boundary between faces.
  • the padding may be performed using samples included inside the boundary having continuity with the boundary where the padding is performed.
  • for example, at the left boundary of the 360-degree projection image, padding is performed using samples adjacent to the right boundary, and at the right boundary of the 360-degree projection image, padding is performed using samples adjacent to the left boundary. That is, at the A, B, and C positions outside the left boundary, padding may be performed using samples at the A', B', and C' positions inside the right boundary, and at the D, E, and F positions outside the right boundary, padding may be performed using samples at the D', E', and F' positions inside the left boundary.
  • in addition, at the left portion of the upper boundary, padding may be performed using samples adjacent to the right portion of the upper boundary, and at the right portion of the upper boundary, padding may be performed using samples adjacent to the left portion of the upper boundary. That is, at the G and H positions of the left portion of the upper boundary, padding is performed using samples at the G' and H' positions inside the right portion of the upper boundary, and at the I and J positions of the right portion of the upper boundary, padding may be performed using samples at the I' and J' positions inside the left portion of the upper boundary.
  • similarly, at the left portion of the lower boundary, padding may be performed using samples adjacent to the right portion of the lower boundary, and at the right portion of the lower boundary, padding may be performed using samples adjacent to the left portion of the lower boundary. That is, at the K and L positions of the left portion of the lower boundary, padding is performed using samples at the K' and L' positions inside the right portion of the lower boundary, and at the M and N positions of the right portion of the lower boundary, padding may be performed using samples at the M' and N' positions inside the left portion of the lower boundary.
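  • The sketch below pads an ERP image by k samples on each side using the continuity described above: wrap-around copying at the left and right boundaries, and a half-width longitude shift combined with a vertical flip across the top and bottom boundaries. It is a simplified illustration; corner regions are left unfilled.

```python
# Sketch: pad an ERP image using spherical continuity (illustrative only).
# Left/right boundaries wrap around horizontally; samples crossing the top or
# bottom boundary come from the columns shifted by half the image width.
import numpy as np

def pad_erp(img, k):
    """img: HxW array, k: padding length. Returns an (H+2k) x (W+2k) array."""
    h, w = img.shape
    out = np.zeros((h + 2 * k, w + 2 * k), dtype=img.dtype)
    out[k:k + h, k:k + w] = img
    # Left and right padding: copy from the opposite boundary (wrap-around).
    out[k:k + h, :k] = img[:, w - k:]
    out[k:k + h, k + w:] = img[:, :k]
    # Top and bottom padding: cross the pole, i.e. shift by half the width
    # and mirror vertically.
    shifted = np.roll(img, w // 2, axis=1)
    out[:k, k:k + w] = shifted[k - 1::-1, :]          # rows above the top boundary
    out[k + h:, k:k + w] = shifted[:h - k - 1:-1, :]  # rows below the bottom boundary
    return out

padded = pad_erp(np.arange(8 * 16, dtype=np.uint8).reshape(8, 16), 2)
print(padded.shape)   # (12, 20)
```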
  • An area where padding is performed may be referred to as a padding area, and the padding area may include a plurality of sample lines.
  • the number of sample lines included in the padding area may be defined as the length or padding size of the padding area.
  • the padding area has a length k in both the horizontal and vertical directions.
  • the length of the padding area may be set differently according to the horizontal direction or the vertical direction, or differently according to the face boundary.
  • a method of adaptively setting the length of the padding area or using a smoothing filter may be considered according to the degree of distortion.
  • FIG. 17 is a diagram for explaining an example in which the lengths of the padding area in the horizontal direction and the vertical direction are different in an ERP projection image.
  • the length of the arrow indicates the length of the padding area.
  • the length of the padding area performed in the horizontal direction and the length of the padding area performed in the vertical direction may be set differently. For example, if k columns of samples are generated through the padding in the horizontal direction, the padding may be performed such that 2k rows of samples are generated in the vertical direction.
  • alternatively, padding may first be performed with the same length in both the vertical direction and the horizontal direction, and then the length of the padding area may be extended through interpolation in at least one of the two directions.
  • for example, k sample lines may be generated in the vertical direction and the horizontal direction, and k sample lines may be additionally generated in the vertical direction through interpolation. That is, after generating k sample lines in both the horizontal and vertical directions (see FIG. 16), k sample lines may be additionally generated in the vertical direction so that the length in the vertical direction becomes 2k (see FIG. 17).
  • Interpolation may be performed using at least one of a sample included inside the boundary or a sample included outside the boundary.
  • an additional padding area may be generated by copying samples adjacent to the bottom boundary outside the padding area adjacent to the top boundary and then interpolating the copied samples and the samples included in the padding area adjacent to the top boundary.
  • the interpolation filter may include at least one of a vertical filter and a horizontal filter. Depending on the position of the sample to be produced, one of the filter in the vertical direction and the filter in the horizontal direction may be selectively used. Alternatively, a sample included in the additional padding area may be generated using a filter in the vertical direction and a filter in the horizontal direction at the same time.
  • the length n in the horizontal direction of the padding area and the length m in the vertical direction of the padding area may have the same value or may have different values.
  • n and m are natural numbers greater than or equal to 0, and may have the same value, or one of m and n may have a smaller value than the other.
  • m and n may be encoded by the encoder and signaled through the bitstream.
  • the length n in the horizontal direction and the length m in the vertical direction may be predefined in the encoder and the decoder.
  • the padding area may be generated by copying samples located inside the image.
  • the padding area adjacent to the predetermined boundary may be generated by copying a sample located inside another boundary adjacent to the predetermined boundary in 3D space.
  • the padding area located at the left boundary of the image may be generated by copying a sample adjacent to the right boundary of the image.
  • the padding area may be generated using at least one sample included in the inside of the boundary to be padded and at least one sample located outside the boundary.
  • the sample value of the padding area may be determined by copying the samples spatially adjacent to the boundary to be padded to the outside of the boundary, and then performing a weighted average or averaging operation between the copied samples and the samples included inside the boundary.
  • the sample value of the padding area positioned at the left boundary of the image may be determined by a weighted average or averaging of at least one sample adjacent to the left boundary of the image and at least one sample adjacent to the right boundary of the image.
  • the weight applied to each sample in the weighted average calculation may be determined based on a distance from the boundary where the padding area is located. For example, among the samples in the padding area located at the left boundary, samples close to the left boundary may be derived by giving a larger weight to samples located inside the left boundary, while samples farther from the left boundary may be derived by giving a larger weight to samples located outside the left boundary (that is, samples adjacent to the right boundary of the image).
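  • The following Python sketch illustrates the distance-based weighting just described for a padding area at the left image boundary; the linear weight formula and the function name are assumptions, since the text only requires that samples nearer the boundary weight the inside samples more heavily.

```python
import numpy as np

def pad_left_weighted(img: np.ndarray, n: int) -> np.ndarray:
    """Illustrative sketch: build an n-column padding area at the left boundary as a
    distance-weighted average of the column inside the left boundary and the column
    adjacent to the right boundary (its neighbour in 3D space)."""
    h, w = img.shape[:2]
    left = img[:, 0].astype(np.float64)       # sample inside the left boundary
    right = img[:, w - 1].astype(np.float64)  # sample adjacent to the right boundary

    pad = np.empty((h, n), dtype=np.float64)
    for j in range(n):                         # column j = 0 is farthest from the boundary
        d = n - j                              # distance of the padding column from the left boundary
        w_left = (n + 1 - d) / (n + 1)         # larger weight on the inside sample near the boundary
        pad[:, j] = w_left * left + (1.0 - w_left) * right
    return np.hstack([pad, img.astype(np.float64)]).astype(img.dtype)
```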
  • frame packing may be performed by adding a padding area between the faces. That is, a 360 degree projection image may be generated by adding a padding area to the face boundary.
  • FIG. 18 is a diagram illustrating an example in which padding is performed at a boundary of a face.
  • the face located at the top of the 360 degree projection image is referred to as the upper face
  • the face located at the bottom of the 360 degree projection image is referred to as the lower face.
  • the upper face may represent any one of faces 1, 2, 3, and 4
  • the lower face may represent any one of faces 5, 6, 7, and 8.
  • a padding area of a shape surrounding the given face can be set.
  • a padding area including m samples may be generated for a triangular face.
  • the padding area is set to surround the face, but the padding area may be set only at a part of the face boundary. That is, unlike the example illustrated in FIG. 18B, the frame packing may be performed by adding a padding area only at an image boundary or by adding a padding area only between faces.
  • a padding area may be added only to a boundary of faces that are not continuous in 3D space.
  • the length of the padding area between the faces may be set identically or differently depending on the position.
  • the length n (i.e., the horizontal length) of the padding area located on the left or right side of a given face and the length m of the padding area located at the top or bottom of the given face may have the same value or different values.
  • n and m are natural numbers greater than or equal to 0, and may have the same value, or one of m and n may have a smaller value than the other.
  • m and n may be encoded by the encoder and signaled through the bitstream.
  • the length n in the horizontal direction and the length m in the vertical direction may be predefined in the encoder and the decoder according to the projection transformation method, the position of the face, the size of the face or the shape of the face.
  • the sample value of the padding area may be determined based on a sample included in a predetermined face, or based on samples included in the predetermined face and in a face adjacent to the predetermined face.
  • the sample value of the padding area adjacent to a boundary of a predetermined face may be generated by copying a sample included in the face or interpolating the samples included in the face.
  • the upper extension region U of the upper face is generated by copying a sample adjacent to the boundary of the upper face or interpolating a predetermined number of samples adjacent to the boundary of the upper face.
  • the lower extension region D of the lower face may be generated by copying a sample adjacent to the boundary of the lower face or interpolating a predetermined number of samples adjacent to the boundary of the lower face.
  • the sample value of the padding area adjacent to the boundary of the predetermined face may be generated using the sample value included in the face spatially adjacent to the face.
  • the inter-face proximity may be determined based on whether the faces have continuity when the 360-degree projection image is back projected on the 3D space.
  • a sample value of a padding area adjacent to a boundary of a predetermined face may be generated by copying a sample included in a face spatially adjacent to the face, or by interpolating samples included in the face and samples included in a face spatially adjacent to the face. For example, the left part of the upper extension region of the second face may be generated based on the samples included in the first face, and the right part may be generated based on the samples included in the third face.
  • FIG. 19 illustrates an example of determining a sample value of a padding area between faces.
  • the padding area between the first face and the second face may be obtained by weighted averaging at least one sample included in the first face and at least one sample included in the second face.
  • the padding area between the upper face and the lower face may be obtained by weighted averaging the upper extension area U and the lower extension area D.
  • the weight w may be determined based on information encoded and signaled by the encoder. Alternatively, the weight w may be variably determined according to the position of the sample in the padding area. For example, the weight w may be determined based on the distance from the position of the sample in the padding area to the first face and the distance from the position of the sample in the padding area to the second face.
  • Equations 4 and 5 show examples in which the weight w is variably determined according to the position of the sample.
  • a sample value of the padding area may be generated based on Equation 4 in the lower extension region close to the lower face, and based on Equation 5 in the upper extension region close to the upper face.
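  • Since Equations 4 and 5 are not reproduced in this text, the sketch below simply blends the upper extension region U and the lower extension region D with a weight that varies linearly with the row position inside the padding area; the linear weight is an assumption standing in for the two equations.

```python
import numpy as np

def blend_inter_face_padding(U: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Illustrative sketch of the padding area between an upper face and a lower face,
    obtained as a weighted average of the upper extension region U and the lower
    extension region D covering the same area (rows ordered from the upper face
    toward the lower face)."""
    assert U.shape == D.shape
    rows = U.shape[0]
    out = np.empty(U.shape, dtype=np.float64)
    for r in range(rows):
        w = (r + 1) / (rows + 1)          # weight grows toward the lower face
        out[r] = (1.0 - w) * U[r] + w * D[r]
    return out
```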
  • the filter for the weighting operation may be applied in the vertical direction, the horizontal direction, or at a predetermined angle.
  • a sample included in the first face and a sample included in the second face may be used to determine the value of a sample in the padding area.
  • the padding area may be generated using only samples included in one of the first and second faces. For example, when a sample included in the first face or a sample included in the second face is not available, padding may be performed using only the available samples. Alternatively, padding may be performed by replacing unavailable samples with surrounding available samples.
  • padding may be performed on the same principle as the described embodiments even in a projection conversion method other than the illustrated projection conversion method.
  • padding may be performed at a face boundary or an image boundary in a 360 degree projection image based on CMP, OHP, ECP, RSP, TPP, and the like.
  • padding related information may be signaled through the bitstream.
  • the padding related information may include whether padding is performed, a location of a padding area, or a padding size.
  • the padding related information may be signaled in units of sequences, pictures, slices, or faces. For example, information indicating whether padding is performed on the upper boundary, the lower boundary, the left boundary, or the right boundary, and the padding size, may be signaled in units of faces.
  • the 360-degree image may be converted into two circles onto which the two high-latitude regions are projected, and a rectangle onto which the mid-latitude region is projected and transformed similarly to the ERP technique.
  • an inactive region may be added to the boundary of the circle to generate a rectangular face including the circle.
  • the region corresponding to the circle in the square-shaped face may be defined as an active region, and the region outside the circle may be defined as an inactive region.
  • Frame packing may be performed by arranging high and mid latitude regions in a line.
  • the faces can be arranged according to a predetermined arrangement order.
  • the predetermined arrangement order may be determined in consideration of data continuity between faces.
  • For example, the arrangement order may be the two high latitude regions followed by the mid-latitude region. Alternatively, the two high latitude regions may be arranged on either side of the mid-latitude region, respectively.
  • the arrangement direction may be a horizontal direction or a vertical direction.
  • the frame packing may be performed in such a manner that the high latitude region and the middle latitude region are sequentially arranged horizontally.
  • a rectangular frame whose width is longer than the height can be generated.
  • frame packing may be performed in which the high latitude regions and the mid latitude regions are arranged longitudinally. As a result, a frame in which the rectangular frame is rotated 90 degrees may be generated.
  • the rectangular frame can be divided into a plurality of faces.
  • the face may have an mxm size.
  • m may have the same size as the diameter of the circle onto which the high latitude region is projected.
  • the size of the diameter m of the circle may be predefined.
  • the diameter m of the circle may be adaptively determined according to the latitude value for distinguishing the high latitude region.
  • two high latitude regions may be assigned Face IDs 0 and 1, respectively.
  • the face ID 0 may identify a face corresponding to an image of an arctic region (e.g., a region of 45 degrees or more north latitude)
  • the face ID 1 may identify a face corresponding to an image of an antarctic region (e.g., a region of 45 degrees or more south latitude).
  • the width or height of the rectangle in which the mid-latitude region is projected and converted may be set to an integer multiple of the diameter m of the circle.
  • the mid-latitude region may be projected and transformed into a rectangle of size mx4m.
  • the rectangle corresponding to the mid-latitude area may be divided into four faces of size mxm. That is, a rectangle of size mxNm can be divided into N faces. Accordingly, the 360 degree projection image frame-packed based on the SSP may have a size of mx6m in which the size of each face is mxm.
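  • As a small layout sketch of the packing just described (an assumption-laden illustration: the long side is drawn vertically here, and the face order follows the 0-5 numbering used in this description), the packed SSP frame can be modelled as six m x m faces stacked in a line:

```python
def ssp_face_rects(m: int):
    """Illustrative layout: two m x m high-latitude faces (IDs 0 and 1) followed by
    four m x m mid-latitude faces (IDs 2-5), giving an m x 6m packed frame.  The
    vertical stacking order is assumed; the frame may equally be laid out
    horizontally (rotated 90 degrees) as noted earlier."""
    rects = {face_id: (0, face_id * m, m, m)   # (x, y, width, height) inside the frame
             for face_id in range(6)}
    return rects, (m, 6 * m)                   # (frame width, frame height)

rects, size = ssp_face_rects(256)
assert size == (256, 1536) and rects[5] == (0, 1280, 256, 256)
```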
  • the face(s) included in the rectangle obtained by projection-transforming the mid-latitude region will be referred to as faces of the mid-latitude region.
  • the face(s) including the circle onto which the high latitude region is projection-transformed will be referred to as faces of the high latitude region.
  • the mid-latitude region is shown to be divided into four faces identified by face IDs 2 to 5.
  • the number of faces constituting the mid-latitude region is not limited to four, and the mid-latitude region may be configured with more or fewer faces.
  • the rectangular size in which the mid-latitude region is converted by projection may be variably determined according to the size of the circle in which the high-latitude region is converted by projection, or the latitude that separates the high-latitude region from the mid-latitude region.
  • the sizes and shapes of the faces are illustrated to be the same, but it is also possible to set at least one of a size or a shape of a face constituting the high latitude region and a face forming the mid-latitude region differently.
  • face artifacts may occur between the high-latitude region and the mid-latitude region when the decoded 360-degree projection image is back projected in 3D space. That is, face artifacts may occur at the image boundary of the arctic region and the mid-latitude region, and at the image boundary of the south pole region and the mid-latitude region.
  • padding may be performed on at least one boundary of an image corresponding to a high latitude region or an image corresponding to a mid latitude region. Padding may be performed using at least one of an inactive sample or a sample adjacent to an image boundary.
  • FIGS. 20 and 21 are diagrams for describing an example in which padding is performed in an SSP-based projection image.
  • padding may be performed at a boundary of an image corresponding to a high latitude region. For example, as in the example illustrated in FIG. 20, a padding area having a shape surrounding a boundary of a circle on which a high latitude area is projected and converted may be set.
  • the radius of the circle in the face may increase by the padding size.
  • the size of the face corresponding to the high latitude region may also increase.
  • the width, height, or size of the rectangular projection of the mid-latitude region can be expanded by the diameter increment of the circle. For example, when the padding size in the high latitude region is k, the mid-latitude region image may be projected and converted into a rectangle having a width of m + 2k.
  • padding is performed only at a boundary of an image corresponding to a high latitude region. Unlike the illustrated example, padding may be performed at a boundary of an image corresponding to a mid-latitude region.
  • padding may be performed not only on an image boundary corresponding to a high latitude region but also on an image boundary corresponding to a mid latitude region. As in the example illustrated in FIG. 21, padding may be performed on all boundaries of an image corresponding to a mid-latitude region.
  • at the faces located at both ends of the mid-latitude region image, padding regions may be added to three boundaries, and at the centered faces (i.e., face 3 and face 4), padding regions may be added to two boundaries. That is, the padding area may be added to the remaining boundaries except for the boundary between adjacent faces.
  • Padding may be performed only at some boundaries of the mid-latitude region image. For example, padding may be performed only on a boundary adjacent to a high latitude region in 3D space. Alternatively, for faces included in the rectangle, padding may be performed only at boundaries of faces that are not adjacent to each other on the 2D plane but are adjacent to each other on the 3D space.
  • samples included in the padding area will be referred to as padding samples or padded samples.
  • the image boundary sample may include at least one of a sample adjacent to the boundary where the padding area is set, or a sample that becomes adjacent to that boundary when the 360 degree projection image is back projected onto the 3D space.
  • the value of a padding sample included in the padding region surrounding the arctic region image may be calculated using at least one of a sample adjacent to the boundary of the arctic region image or a sample adjacent to the boundary of the mid-latitude region image that is adjacent to the boundary of the arctic region image in 3D space.
  • the padding sample may be generated by copying a sample of a predetermined position or by interpolating a plurality of samples included in an arctic region image.
  • the padding sample may be generated by a weighted sum operation or an average operation between the sample included in the arctic region image and the sample included in the mid-latitude region image.
  • the value of the padding sample may be calculated based on weighted prediction between the sample located at the boundary of the image and the sample included in the inactive area.
  • the padding sample in the padding area adjacent to the high latitude region image may be generated based on a weighted sum operation between a sample included in the active area adjacent to the boundary and a sample included in the inactive area adjacent to the boundary, with respect to the boundary between the active area and the inactive area.
  • a weight applied to each of the sample included in the active region and the sample included in the inactive region may be determined based on a distance from the padding sample. For example, as the padding sample is closer to the active region, the weight applied to the sample included in the active region may increase, and as the padding sample is closer to the inactive region, the weight applied to the sample included in the inactive region may increase.
  • the weighted prediction using the samples included in the inactive region may be referred to as 'inactive weighted prediction'.
  • When padding samples are generated based on inactive weighted prediction, a gradual pixel value change may occur between padding samples in the padding area.
  • face artifacts may be reduced.
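  • A minimal sketch of inactive weighted prediction for a single padding sample is shown below; the inverse-distance weights are an assumption, since the text only fixes that the weight of each side grows as the padding sample approaches it.

```python
def inactive_weighted_sample(active_val: float, inactive_val: float,
                             dist_to_active: float, dist_to_inactive: float) -> float:
    """Illustrative 'inactive weighted prediction': weighted sum of a sample from the
    active area and a sample from the inactive area around their common boundary."""
    w_active = dist_to_inactive / (dist_to_active + dist_to_inactive)
    return w_active * active_val + (1.0 - w_active) * inactive_val

# closer to the active area -> larger weight on the active sample
print(inactive_weighted_sample(200.0, 128.0, dist_to_active=1.0, dist_to_inactive=3.0))  # 182.0
```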
  • the padding sample may be generated based on at least one of an image boundary sample or a specific value.
  • the specific value may be a value of a sample included in the inactive region, or may be a value predefined by the encoder / decoder or a value derived by the same rule in the encoder / decoder.
  • the padding sample generation method in the high latitude region and the mid latitude region may be the same.
  • the padding sample generation method in the high latitude region and the padding sample generation method in the mid latitude region may be different.
  • a padding sample may be generated based on a weighted sum operation of the active region and the inactive region.
  • padding samples may be generated by copying samples adjacent to the face boundary or interpolating a plurality of samples adjacent to the face boundary.
  • the padding size may be predefined in the encoder and the decoder. Alternatively, the padding size may be adaptively determined according to the position of the padding area. For example, a padding size for each projection transformation technique may be previously defined in an encoder and a decoder. Alternatively, information indicating the padding size may be signaled through a sequence, picture, or slice header. The padding sizes for the high latitude region image and the mid-latitude region image may be set identically or differently.
  • When padding is performed, the size of the image is increased.
  • As the image size increases, the data to be stored in the line buffer increases, thereby increasing the memory occupancy, and thus encoding/decoding efficiency may decrease.
  • the padded image / face may be resampled. That is, the image size before and after the padding may be maintained the same.
  • FIGS. 22 and 23 illustrate examples of resampling an area where padding is performed.
  • the high latitude region face to which the padding region is added may be resampled to maintain the same size of the high latitude region face before and after padding is performed.
  • the rectangular mid-latitude region image having the padding region added thereto may be resampled to maintain the same size of the rectangle before and after the padding is performed.
  • the type or property of a filter used for resampling may be determined based on at least one of a padding size, an image size, or information signaled through a bitstream.
  • the attribute of the filter may include at least one of the number of taps of the filter, the strength of the filter, or the coefficient of the filter.
  • the number of filter taps may be set in proportion to the padding size.
  • the number of filter taps may be set to an integer multiple of the padding size.
  • For example, when the padding size is k, the number of filter taps may be set to 2k. Equation 6 below shows an example in which resampling is performed using a 4-tap filter when the padding size is 2.
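  • Because Equation 6 itself is not reproduced in this text, the following sketch only illustrates the general idea of resampling a padded sample row back to its original width with a 2k-tap kernel; the symmetric [1, 3, 3, 1] coefficients used for the 4-tap (padding size 2) case are an assumption.

```python
import numpy as np

def resample_row(row: np.ndarray, out_len: int, taps: np.ndarray) -> np.ndarray:
    """Illustrative resampling of one padded row to out_len samples so the image
    size before and after padding stays the same."""
    in_len = len(row)
    half = len(taps) // 2
    out = np.empty(out_len, dtype=np.float64)
    for i in range(out_len):
        x = (i + 0.5) * in_len / out_len - 0.5       # map output position into the padded row
        base = int(np.floor(x)) - (half - 1)
        acc = norm = 0.0
        for t, c in enumerate(taps):
            idx = min(max(base + t, 0), in_len - 1)  # clamp at the row ends
            acc += c * row[idx]
            norm += c
        out[i] = acc / norm
    return out

padded = np.arange(16, dtype=np.float64)             # width 12 plus padding of size 2 on each side
out = resample_row(padded, out_len=12, taps=np.array([1.0, 3.0, 3.0, 1.0]))
assert out.shape == (12,)
```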
  • the padding region may be limited to an area that is outside the boundary of the face or the image.
  • the padding region may be added only to the inactive region of the high latitude region face.
  • the padding may be performed only on a part of the face or image boundary so that no padding area outside the face or image boundary occurs.
  • when the padding area is set to surround the active area (that is, the circle), the padding area has a circular band shape surrounding the circle.
  • the size of the active area is increased by 2k, as in the example shown in FIG.
  • the padding area may have a shape in which the region beyond the mxm square is deleted from the circular band area surrounding the active area of diameter m. Accordingly, padding regions may only be added to inactive regions within the high latitude region face.
  • the face size can be prevented from increasing, and the amount of data to be stored in the line buffer can be lowered to efficiently use the memory.
  • the above limitation may be applied to at least one of a high latitude region image or a mid latitude region image.
  • the size of the high latitude region image may be maintained at mxm, and the width or height of the mid-latitude region image may be increased by the padding size.
  • padding regions are illustrated as being added to boundaries of faces that are not adjacent to each other in the 2D plane but adjacent to each other in 3D space. Looking at each face, the padding area may be added only at one boundary of each of the faces located at both ends of the mid-latitude region image.
  • padding may be performed only at a boundary continuous with a high latitude region image in 3D space.
  • a portion of the mid-latitude region image may be overlapped at the boundary of the high latitude region image, or a portion of the high-latitude region image may be overlapped at the boundary of the mid-latitude region image.
  • the boundary of at least one of the high latitude region or the mid-latitude region may be extended beyond the reference latitude.
  • FIG. 25 is a diagram illustrating an example of determining a high latitude region and a mid-latitude region based on an extended reference latitude.
  • the boundary of the high latitude region may be extended from the reference latitude by the first offset.
  • the boundary of the high latitude region may be determined as a latitude line of 45-f degrees.
  • face 0 may be configured based on an arctic region image (i.e., an image above north latitude 45-f degrees) extended by the first offset from the reference latitude, and face 1 may be configured based on an antarctic region image (i.e., an image above south latitude 45-f degrees) extended by the first offset from the reference latitude.
  • the image extended by the first offset may be repeatedly inserted into the high-latitude region image and the mid-latitude region image.
  • the boundary of the mid-latitude region may be extended by a second offset at the reference latitude.
  • the boundary of the mid-latitude region may be determined as a latitude line of 45 + g degrees.
  • faces 2 to 5 may be configured based on the mid-latitude region image (i.e., the image between north latitude 45+g degrees and south latitude 45+g degrees) extended by the second offset from the reference latitude.
  • the image extended by the second offset may be repeatedly inserted into the high latitude region image and the middle latitude region image.
  • Information indicating whether the reference latitude has been changed, or information for calculating the changed reference latitude, may be transmitted from the content providing apparatus to the content reproducing apparatus. Specifically, the information may be provided through a bitstream. The information indicating whether the reference latitude is changed may be a 1-bit flag. The information for calculating the changed reference latitude may indicate an offset.
  • The image extended by the first offset may be overlapped with the high latitude region image and the mid-latitude region image.
  • the 360-degree projected image may be back-projected on the 3D space so that the overlapped image is superimposed on the high-latitude region image and the mid-latitude region image.
  • the final image may be reconstructed by weighted prediction of an overlapping image between the high latitude region and the mid-latitude region.
  • the diameter of the circle generated by the projection transformation of the high latitude region may be 1/2 of the width or height of the middle latitude region.
  • the diameter of the circle generated by converting the high-latitude region may be set to m / 2.
  • the rectangle corresponding to the mid-latitude region may be divided into faces of mxm size, while the face corresponding to the high latitude region may be set to have a size of (m / 2) x (m / 2).
  • FIG. 26 is a diagram illustrating an example in which a face of a high latitude region is set smaller than a face of a middle latitude region.
  • a rectangle obtained by projection-transforming the mid-latitude region may be divided into four mxm sized faces.
  • the face of the high latitude region may have a size of (m / 2) x (m / 2).
  • the size of the face of the high latitude region may be set to 1/4 of the size of the face of the mid-latitude region.
  • the high latitude regional faces may be disposed in the horizontal direction, while the mid latitude regional faces may be disposed in the vertical direction. Accordingly, a 360 degree projection image of mx (4m + m / 2) size may be generated.
  • the rectangular frame shown in FIG. 26 may be rotated 90 degrees to place high latitude regional faces in the vertical direction and mid latitude regional faces in the horizontal direction.
  • At least one of re-sizing or warping may be applied to at least one of a face of the high latitude region or a face of the mid-latitude region.
  • the projection transformation method in which the width or height of the face in the high latitude region is smaller than the face in the mid latitude region can be defined as the modified SSP projection transformation technique.
  • Padding may be performed at the high latitude face even under the modified SSP projection transformation technique.
  • FIG. 27 is a diagram illustrating an example in which a padding region is added to a high latitude region face in the modified SSP projection transformation technique.
  • the padding sample may be derived based on at least one of a sample located at an image boundary at which a high latitude region is projected and converted, a sample included in an inactive region, or a specific value.
  • padding samples may be generated through inactive weighted prediction.
  • the padding area may be a shape in which the region beyond the rectangle of (m / 2) x (m / 2) size is deleted from the circular band-shaped area surrounding the active area having a diameter of m / 2. Accordingly, padding regions may only be added to inactive regions within the high latitude region face.
  • padding can be performed at the mid-latitude region faces.
  • FIG. 28 is a diagram illustrating various examples in which padding regions are added for the mid-latitude region.
  • padding may be performed only at any one of the mid-latitude region faces that faces the high-latitude region face. Specifically, a padding area may be added at the boundary of the mid-latitude area face that faces the high-latitude area face. As a result, a 360 degree projection image of mx (4m + (m / 2) + k) size may be generated.
  • padding may be performed only on the two faces positioned at both ends of the mid-latitude region faces. Specifically, a padding area may be added to the boundaries of the mid-latitude faces that are not adjacent to each other in the 2D plane but are adjacent to each other in 3D space. As a result, a 360 degree projection image of mx (4m + (m / 2) + 2k) size can be generated.
  • the padding size of the high latitude regional face and the padding size of the middle latitude regional face may be the same.
  • the padding size of the high latitude region face may be set to 1/2 of the padding size of the middle latitude region face.
  • the face of the high latitude region may include the circle onto which the high latitude region is projected, and an inactive region of the rectangle excluding the circle (and the padding region).
  • the value of the pixel included in the inactive area may be assigned a default value or a value calculated by an operation.
  • the value of the inactive pixel may be determined by the bit depth.
  • the value of the inactive pixel may be the median of the maximum value representable by the bit depth.
  • the default value may be determined as (1 << BitDepth) >> 1.
  • the bit depth may have a value of 8 bits, 10 bits, or 12 bits.
  • For example, the median value in an 8-bit image may be 128, which is the median of the maximum value (i.e., 256) that can be expressed in an 8-bit image, and the median value in a 10-bit image may be 512, which is the median of the maximum value (i.e., 1024) that can be represented in a 10-bit image.
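  • A one-line helper makes the relationship between the bit depth and the default inactive value explicit (the function name is illustrative):

```python
def default_inactive_value(bit_depth: int) -> int:
    """Median of the range representable at the given bit depth: (1 << bit_depth) >> 1."""
    return (1 << bit_depth) >> 1

assert default_inactive_value(8) == 128
assert default_inactive_value(10) == 512
assert default_inactive_value(12) == 2048
```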
  • the value of the inactive pixel may be set to a value predefined in the encoder and the decoder.
  • the value of the inactive sample may be derived from the value of the sample included in the peripheral face.
  • the peripheral face may include at least one of a face adjacent to the face of the high latitude region in the 2D plane or a face adjacent to the face of the high latitude region in 3D space.
  • For example, when determining the peripheral face based on adjacency on the 2D plane, the peripheral face of face 0 may include at least one of face 1 or face 2. Meanwhile, when the peripheral face is determined based on adjacency in 3D space, the peripheral face of face 0 may include at least one of face 2, face 3, face 4, or face 5.
  • At least one or more samples included in the edge region of the peripheral face may be used to derive the value of the inactive sample.
  • the edge region may represent at least one sample line adjacent to the boundary of the peripheral face.
  • the position of the edge region within the peripheral face can be determined. For example, in the example illustrated in FIG. 26, when reconstructing the 2D image into a 3D image, the face boundary of the arctic region is in contact with the left boundary (or right boundary) of the mid-latitude region face, and the face boundary of the south pole region is in contact with the right boundary (or left boundary) of the mid-latitude region face.
  • the value of the inactive region of the arctic region face may be calculated based on a sample included in the edge region located at the left boundary of the mid-latitude region face.
  • the value of the inactive region of the Antarctic region face may be calculated based on the samples included in the edge region located at the right boundary of the mid-latitude region face.
  • the value of the inactive sample may be determined as an average value of the samples included in the edge region of the peripheral face.
  • the value of the inactive sample may be set equal to the value of the sample included in the edge region of the peripheral face.
  • the value of the inactive sample can be determined based on the sample adjacent to the boundary of the circle.
  • the value of the inactive sample may be determined as an average value of samples adjacent to the boundary of the circle.
  • the value of the inactive sample may be set equal to the sample adjacent to the boundary of the circle.
  • the value of the inactive sample may be determined based on the sample adjacent to the boundary of the circle and the sample adjacent to the boundary of the peripheral face.
  • the value of the inactive pixel may be determined based on an average operation or weighted sum operation of a sample adjacent to a circle boundary and a sample adjacent to a peripheral face boundary.
  • the value of the inactive sample can be derived by taking into account the amount of change of the sample from the specific position of the circle in the direction of the circle boundary.
  • the specific position may be a predetermined fixed position in the encoder and the decoder.
  • the specific position may be the center of the circle.
  • the specific position may be variably determined in consideration of the direction of the image.
  • the amount of change can be derived based on the difference value between the plurality of samples lying in a straight line from the specific position toward the circle boundary direction.
  • For example, the value of the inactive sample may be calculated based on the difference between the sample at position (m/2, m/2) and the sample at position ((m/2)-k, (m/2)-k), or based on the difference between the sample at position ((m/2)-1, (m/2)-1) and the sample at position ((m/2)-k-1, (m/2)-k-1).
  • the value of the inactive sample may be derived by adding the difference between the two samples to the value of the sample at position (m/2, m/2).
  • Alternatively, the value of the inactive sample may be derived by adding the difference between the two samples to the value of the sample at position ((m/2)-1, (m/2)-1).
  • k may be a value predefined in the encoder / decoder. Alternatively, k may be determined based on at least one of the size of the face or the location of the inactive sample.
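  • The gradient-based derivation above can be sketched as follows; using the lower-right diagonal, a single difference term, and the (m/2, m/2) anchor are choices made only for illustration, since the text also allows the ((m/2)-1, (m/2)-1) variant and other values of k.

```python
import numpy as np

def inactive_from_gradient(face: np.ndarray, k: int) -> float:
    """Illustrative sketch: extrapolate the change observed along the diagonal from
    the circle centre toward the circle boundary to obtain an inactive-sample value."""
    m = face.shape[0]                      # high-latitude face assumed square (m x m)
    c = m // 2                             # centre sample position (m/2, m/2)
    centre = float(face[c, c])
    inner = float(face[c - k, c - k])      # sample k steps back along the diagonal
    diff = centre - inner                  # amount of change toward the boundary
    return centre + diff                   # extrapolate past the circle boundary
```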
  • the inactive region may be divided into a plurality of sub inactive regions.
  • 29 is a diagram illustrating a plurality of sub inactive regions.
  • an inactive region between a circle and a square face may be divided into four sub inactive regions.
  • the inactive region at the upper left of the face is referred to as the sub inactive region 0
  • the inactive region at the upper right of the face is referred to as the sub inactive region 1
  • the inactive region at the lower left of the face is referred to as the sub inactive region 2
  • the inactive region at the lower right of the face May be defined as the sub inactive region 3.
  • the boundary of a circle in contact with each sub inactive region will be referred to as a sub circle.
  • the sub circle N may represent a boundary of a circle in contact with the sub inactive area N.
  • the value of the inactive sample may be set differently for each sub inactive region.
  • the value of the inactive sample of the at least one sub inactive region may be different from the value of at least one inactive sample of the remaining inactive regions.
  • the peripheral face for calculating the value of the inactive sample can be set differently for each position of the sub inactive region.
  • the 360-degree projection image may be configured such that each sub circle contacts a different mid-latitude region face in 3D space.
  • the boundary of the circle included in face 0 is in contact with the left boundary (or right boundary) of face 2, face 3, face 4 and face 5 in 3D space.
  • the sub circle 0 may be in contact with face 2, the sub circle 1 with face 3, the sub circle 2 with face 4, and the sub circle 3 with face 5.
  • the value of the inactive sample is calculated using the sample included in face 2 for the sub inactive region 0, and the value of the inactive sample is calculated using the sample included in face 3 for the sub inactive region 1.
  • One or more samples located at the border of the peripheral face may be used to derive the inactive sample.
  • a sample of a specific position of the peripheral face boundary may be used to derive the value of the inactive sample.
  • the specific position may be at least one of the leftmost, rightmost, topmost, bottommost, or center.
  • the position of the sample included in the peripheral face may be determined based on the position of the inactive sample.
  • the value of the inactive sample may be set equal to the value of the sample at a specific location.
  • the value of the inactive sample can be derived by adding or subtracting an offset to the value of the sample at a particular location.
  • the value of the inactive sample may be derived using a plurality of samples located at the peripheral face boundary.
  • the value of the inactive sample may be set as an average value of a plurality of samples located at the peripheral face boundary.
  • the value of the inactive samples included in the sub inactive region N may be set to an average value of the samples located at the left boundary (or right boundary) of face N + 2. As a result, the value of the inactive sample may be set differently for each sub inactive region.
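  • A sketch of this per-sub-region fill is given below; choosing the left boundary column (rather than the right), using a plain average, and representing each sub inactive region by a boolean mask are assumptions made for illustration.

```python
import numpy as np

def fill_sub_inactive(face0: np.ndarray, mid_faces: dict, masks: dict) -> np.ndarray:
    """Illustrative sketch: fill each sub inactive region N of high-latitude face 0
    with the average of the samples on the left boundary of face N+2, the
    mid-latitude face its sub circle touches in 3D space."""
    out = face0.astype(np.float64).copy()
    for n in range(4):                               # sub inactive regions 0..3
        neighbour = mid_faces[n + 2]                 # mid-latitude face N+2
        fill_value = float(neighbour[:, 0].mean())   # average of its left boundary column
        out[masks[n]] = fill_value                   # masks[n] marks sub inactive region N
    return out.astype(face0.dtype)
```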
  • the high latitude region region may be projected and transformed based on the reference latitude to generate a first circular region, and the second circular region may be generated by projecting and transforming an area between the reference latitude and the extended reference latitude.
  • Extended reference latitude indicates that the reference latitude is extended by an offset.
  • the second circle region may have a shape surrounding the first circle region.
  • FIG. 30 is a diagram illustrating an example in which a high latitude region face is generated.
  • the reference latitude is assumed to be 45 degrees. According to one embodiment of the invention, it is possible to set the reference latitude to a value greater than 45 degrees or less than 45 degrees.
  • the extended reference latitude can be derived by adding or subtracting an offset f from the reference latitude.
  • the first circular region may be generated by performing a projection transformation on an area between 45 degrees north latitude and the north pole.
  • the second circular region may be generated by performing a projection conversion on an area between the north latitude 45-f and the north latitude 45 degrees.
  • the second circle region can be cropped by using a square in contact with the first circle region.
  • the diameter of the second circle region may be equal to the diagonal length of the square.
  • a square including an area cropped from the first circle region and the second circle region may be set at the arctic region face.
  • an arctic region face having a face index of 0 may include an active region corresponding to the first circle region and an inactive region corresponding to the area cropped from the second circle region.
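  • As a small worked example of this geometry (assuming the square is the one circumscribing the first circle region, which the text describes only as being in contact with it), the side and diagonal of that square, and hence the diameter retained from the second circle region, are:

```python
import math

def extended_face_geometry(d1: float):
    """Illustrative geometry: a square circumscribing the first circle region of
    diameter d1 has side d1, so its diagonal -- the diameter of the cropped
    second circle region -- is d1 * sqrt(2)."""
    side = d1
    diagonal = side * math.sqrt(2)
    return side, diagonal

side, d2 = extended_face_geometry(256.0)
assert side == 256.0 and abs(d2 - 362.038671968) < 1e-6
```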
  • the region between 45 degrees south latitude and the south pole can be projected to generate a third circle region.
  • the fourth circular region may be generated by performing a projection transformation on an area between the south 45-f degrees and the south 45 degrees.
  • the fourth circular region can be cropped by using a square in contact with the third circular region.
  • the diameter of the fourth circled region may be equal to the diagonal length of the square.
  • a square including an area cropped from the third circle area and the fourth circle area may be set as the Antarctic area face.
  • an antarctic region face having a face index of 1 may include an active region corresponding to the third circle region and an inactive region corresponding to the area cropped from the fourth circle region.
  • the inactive area can be configured by using data continuous with the active area. Accordingly, there is an advantage of reducing image quality deterioration at a face boundary.
  • the mid-latitude region faces may be generated by projecting and converting a mid-latitude region image between 45 degrees north latitude and 45 degrees south latitude. Accordingly, the data of the inactive regions of the high latitude region faces may overlap with the data of some regions of the mid latitude region faces.
  • Frame packing may be performed to line up the high latitude regional faces and the mid latitude regional faces.
  • the arrangement can be along the transverse or longitudinal direction.
  • For example, the faces may be arranged in the order of the arctic region face, the south pole region face, and the mid-latitude region faces.
  • each of the high latitude regional faces may be arranged on both sides of the mid latitude regional faces.
  • the arrangement order may be in the order of the arctic region face, the mid-latitude region faces, and the south pole region face.
  • At least one of shape conversion (warping) of the face, rotation of the face, or resizing of the face may be used when the frame is packed.
  • the high latitude region face may be resized according to the size of the mid latitude region face.
  • Information about the position of the first circle region boundary in contact with the square or the position of the second circle region boundary in contact with the vertex of the square may be defined. Based on the information, the crop area of the second circle area may be determined. The information may be encoded and transmitted to the content reproducing apparatus. The content reproducing apparatus may render the decoded image on the 3D space based on the information.
  • information related to the extended reference latitude may be encoded. Based on the information, an offset can be determined.
  • padding may be performed on the same principle as the described embodiments even in a projection conversion method other than the illustrated projection conversion method.
  • padding may be performed at a face boundary or an image boundary in a 360 degree projection image based on CMP, OHP, ECP, RSP, TPP, and the like.
  • padding related information may be signaled through the bitstream.
  • the padding related information may include whether padding is performed, a location of a padding area, or a padding size.
  • the padding related information may be signaled in picture, slice, or face units. For example, information indicating whether padding is performed on the upper boundary, the lower boundary, the left boundary, or the right boundary, and the padding size, may be signaled in units of faces.
  • each component (for example, a unit, a module, etc.) constituting the block diagram may be implemented as a hardware device or software, and a plurality of components may be combined and implemented as one hardware device or software.
  • the above-described embodiments may be implemented in the form of program instructions that may be executed by various computer components, and may be recorded in a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory, and the like.
  • the hardware device may be configured to operate as one or more software modules to perform the process according to the invention, and vice versa.
  • the present invention can be applied to an electronic device capable of encoding / decoding an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image encoding method comprising: a step of projection-transforming a 360-degree image onto a two-dimensional plane on the basis of an SSP projection transformation technique; and a step of encoding the two-dimensional image projected onto the two-dimensional plane. The two-dimensional image projected onto the two-dimensional plane comprises high-latitude region faces and mid-latitude region faces, the high-latitude region faces comprising an active area corresponding to a first circular area generated by projection-transforming the reference-latitude area of the 360-degree image, and an inactive area corresponding to an area at which a second circular area, generated by projection-transforming an area between the reference-latitude area and an extended reference latitude, is cropped using a square corresponding to the first circular area.
PCT/KR2019/003583 2018-03-27 2019-03-27 Procédé et appareil de traitement de signal vidéo WO2019190197A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20180035302 2018-03-27
KR10-2018-0035302 2018-03-27

Publications (1)

Publication Number Publication Date
WO2019190197A1 true WO2019190197A1 (fr) 2019-10-03

Family

ID=68060248

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/003583 WO2019190197A1 (fr) 2018-03-27 2019-03-27 Procédé et appareil de traitement de signal vidéo

Country Status (2)

Country Link
KR (1) KR20190113651A (fr)
WO (1) WO2019190197A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022111349A1 (fr) * 2020-11-25 2022-06-02 腾讯科技(深圳)有限公司 Procédé de traitement d'images, dispositif, support de stockage et produit-programme d'ordinateur

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140112909A (ko) * 2013-03-14 2014-09-24 삼성전자주식회사 파노라마 영상을 생성하는 전자 장치 및 방법
KR20170017700A (ko) * 2015-08-07 2017-02-15 삼성전자주식회사 360도 3d 입체 영상을 생성하는 전자 장치 및 이의 방법
WO2017127816A1 (fr) * 2016-01-22 2017-07-27 Ziyu Wen Codage et diffusion en continu de vidéo omnidirectionnelle
KR20170096975A (ko) * 2016-02-17 2017-08-25 삼성전자주식회사 전방향성 영상의 메타데이터를 송수신하는 기법
WO2018009746A1 (fr) * 2016-07-08 2018-01-11 Vid Scale, Inc. Codage vidéo à 360 degrés à l'aide d'une projection géométrique

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140112909A (ko) * 2013-03-14 2014-09-24 삼성전자주식회사 파노라마 영상을 생성하는 전자 장치 및 방법
KR20170017700A (ko) * 2015-08-07 2017-02-15 삼성전자주식회사 360도 3d 입체 영상을 생성하는 전자 장치 및 이의 방법
WO2017127816A1 (fr) * 2016-01-22 2017-07-27 Ziyu Wen Codage et diffusion en continu de vidéo omnidirectionnelle
KR20170096975A (ko) * 2016-02-17 2017-08-25 삼성전자주식회사 전방향성 영상의 메타데이터를 송수신하는 기법
WO2018009746A1 (fr) * 2016-07-08 2018-01-11 Vid Scale, Inc. Codage vidéo à 360 degrés à l'aide d'une projection géométrique

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022111349A1 (fr) * 2020-11-25 2022-06-02 腾讯科技(深圳)有限公司 Procédé de traitement d'images, dispositif, support de stockage et produit-programme d'ordinateur

Also Published As

Publication number Publication date
KR20190113651A (ko) 2019-10-08

Similar Documents

Publication Publication Date Title
WO2018106047A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018117706A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2020218793A1 (fr) Procédé de codage basé sur une bdpcm et dispositif associé
WO2018236028A1 (fr) Procédé de traitement d'image basé sur un mode d'intra-prédiction et appareil associé
WO2016153146A1 (fr) Procédé de traitement d'image sur la base d'un mode de prédiction intra et appareil correspondant
WO2019132577A1 (fr) Procédé et dispositif d'encodage et de décodage d'image, et support d'enregistrement avec un train de bits stocké dedans
WO2018044089A1 (fr) Procédé et dispositif pour traiter un signal vidéo
WO2018105759A1 (fr) Procédé de codage/décodage d'image et appareil associé
WO2018124819A1 (fr) Procédé et appareil pour traiter des signaux vidéo
WO2020246805A1 (fr) Dispositif et procédé de prédiction intra basée sur une matrice
WO2020246803A1 (fr) Dispositif et procédé de prédiction intra sur la base d'une matrice
WO2020180119A1 (fr) Procédé de décodage d'image fondé sur une prédiction de cclm et dispositif associé
WO2020197274A1 (fr) Procédé de codage d'image basé sur des transformations et dispositif associé
WO2018221946A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2016200235A1 (fr) Procédé de traitement d'image basé sur un mode de prédiction intra et appareil associé
WO2018131830A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018174531A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2020149616A1 (fr) Procédé et dispositif de décodage d'image sur la base d'une prédiction cclm dans un système de codage d'image
WO2020055208A1 (fr) Procédé et appareil de prédiction d'image pour prédiction intra
WO2019190197A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2019182293A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018174542A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018124822A1 (fr) Procédé et appareil pour traiter des signaux vidéo
WO2019190203A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2019182294A1 (fr) Procédé et appareil de traitement de signal vidéo

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19776525

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19776525

Country of ref document: EP

Kind code of ref document: A1
