WO2014014276A1 - In-loop filtering method and associated apparatus


Info

Publication number
WO2014014276A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
prediction
current block
boundary
filtering
Prior art date
Application number
PCT/KR2013/006401
Other languages
English (en)
Korean (ko)
Inventor
방건
정원식
허남호
김경용
박광훈
Original Assignee
Electronics and Telecommunications Research Institute (ETRI)
Kyung Hee University Industry-Academic Cooperation Foundation
Priority date
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute (ETRI) and Kyung Hee University Industry-Academic Cooperation Foundation
Priority to US14/399,823 (US20150146779A1)
Priority claimed from KR1020130084336A (KR20140019221A)
Publication of WO2014014276A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • The present invention relates to an encoding and decoding process of an image, and more particularly, to an in-loop filtering method for an image and an apparatus using the same.
  • Video compression may use an inter prediction technique that predicts pixel values in the current picture from temporally preceding and/or following pictures, an intra prediction technique that predicts pixel values in the current picture using pixel information within the current picture, and an entropy encoding technique that allocates short codes to symbols with a high frequency of appearance and long codes to symbols with a low frequency of appearance.
  • Conventional video compression technology assumes a constant network bandwidth under a limited hardware operating environment and does not account for a fluctuating network environment.
  • A new compression technique is therefore required for image data transmitted over a network whose bandwidth changes frequently, and a scalable video encoding/decoding method may be used for this purpose.
  • 3D video vividly provides the user, through a three-dimensional stereoscopic display device, with the same sense of depth that is seen and felt in the real world.
  • Standardization of three-dimensional video is underway in MPEG, the video standardization group of ISO/IEC.
  • The 3D video standard covers an advanced data format and related technologies that can support not only stereoscopic images but also autostereoscopic images, using real images and their depth maps.
  • FIG. 1 is a diagram illustrating the basic structure and data format of a 3D video system, and shows an example of the system currently considered in the 3D video standard.
  • On the 3D content production side, a stereo camera, a depth camera, a multi-camera setup, and 2D-to-3D conversion of two-dimensional images are used to acquire image content for N viewpoints (N ≥ 2).
  • The acquired image content may include video of the N viewpoints (N x Video), its depth maps, and camera-related supplementary information.
  • The N-viewpoint video content is compressed using a multi-view video encoding method, and the compressed bitstream is transmitted to the terminal over a network, for example through digital video broadcasting (DVB).
  • The receiving side decodes the received bitstream using a multi-view video decoding method to reconstruct the images of the N viewpoints.
  • From the reconstructed N-viewpoint images, virtual viewpoint images at N or more viewpoints are generated by a depth-image-based rendering (DIBR) process.
  • The generated virtual viewpoint images are reproduced on various stereoscopic display devices (for example, a 2D display, an M-view 3D display, or a head-tracked stereo display) to provide the user with a stereoscopic image.
  • The depth map used to generate the virtual viewpoint images expresses, in a fixed number of bits, the distance between the camera and a real-world object (the depth corresponding to each pixel, at the same resolution as the texture image).
  • FIG. 2 is a diagram illustrating the depth map of the “balloons” sequence used in the 3D video coding standard of MPEG, the international standardization organization.
  • FIG. 2(a) is the actual “balloons” image, and FIG. 2(b) is its depth map, which represents the depth with 8 bits per pixel.
  • FIG. 3 is a diagram illustrating an example of an encoding structure diagram of H.264.
  • H.264 is generally known to have the highest coding efficiency among the video coding standards developed to date, so the encoding structure of H.264 may be used for encoding the depth map.
  • The unit for processing data in the H.264 encoding structure is a macroblock of 16x16 pixels; the encoder receives an image, performs encoding in intra mode or inter mode, and outputs a bitstream.
  • In intra mode the switch is set to intra, and in inter mode the switch is set to inter.
  • The main flow of the encoding process is to first generate a prediction block for the input block, then obtain the difference between the input block and the prediction block and encode that difference.
  • the generation of the prediction block is performed according to the intra mode and the inter mode.
  • In intra mode, a prediction block is generated by spatial prediction using already-encoded pixel values neighboring the current block.
  • In inter mode, a motion vector is found by searching the reference picture stored in the reference picture buffer for the region that best matches the input block, and the prediction block is generated by performing motion compensation with the obtained motion vector.
  • A residual block is then generated as the difference between the input block and the prediction block, and the residual block is encoded.
  • the method of encoding a block is largely divided into an intra mode and an inter mode.
  • the intra mode is divided into 16x16, 8x8, and 4x4 intra modes
  • the inter mode is divided into 16x16, 16x8, 8x16, and 8x8 inter modes.
  • In the case of the 8x8 inter mode, it is further divided into 8x8, 8x4, 4x8, and 4x4 sub-inter modes.
  • Encoding of the residual block is performed in the order of transform, quantization, and entropy encoding.
  • A block encoded in the 16x16 intra mode transforms the residual block and outputs transform coefficients; the DC coefficients are then collected from the output transform coefficients and a Hadamard transform is applied again, outputting Hadamard-transformed DC coefficients.
  • the transform process receives the input residual block, performs transform, and outputs a transform coefficient.
  • the quantization process outputs a quantized coefficient obtained by performing quantization on the input transform coefficient according to the quantization parameter.
  • the input quantized coefficients are subjected to entropy encoding according to a probability distribution, and are output as a bitstream.
  • Because H.264 performs inter-picture predictive encoding, the currently encoded image must be decoded and stored for use as a reference picture for subsequently input images. Therefore, the quantized coefficients are inverse quantized and inverse transformed, a reconstructed block is generated through the prediction image and the adder, the blocking artifacts produced during encoding are removed by a deblocking filter, and the result is stored in the reference picture buffer.
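
As a rough illustration of this predict–subtract–quantize–reconstruct loop, the short C sketch below traces a single pixel value through a scalar quantizer and the decoder-side reconstruction. The scalar quantizer merely stands in for the transform and quantization stages of H.264, and QSTEP and all names here are illustrative assumptions rather than the reference software API.

```c
#include <stdio.h>

#define QSTEP 8   /* assumed quantizer step; H.264 derives it from QP */

int main(void)
{
    int input = 143;   /* one pixel of the current input block */
    int pred  = 130;   /* from intra or inter prediction       */

    int resid = input - pred;                          /* residual         */
    int level = (resid >= 0 ? resid + QSTEP / 2
                            : resid - QSTEP / 2) / QSTEP; /* quantize      */
    int deq   = level * QSTEP;                         /* inverse quantize */
    int recon = pred + deq; /* reconstruction, kept in the reference
                               picture buffer after deblocking             */

    printf("input=%d pred=%d level=%d recon=%d\n", input, pred, level, recon);
    return 0;
}
```

The same inverse quantization, inverse transform, and addition are what the decoder performs, which is why the encoder must reproduce them to keep its reference pictures identical to the decoder's.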
  • FIG. 4 is a diagram illustrating an example of a decoding structure diagram of H.264.
  • The unit for processing data in the H.264 decoding structure is a macroblock of 16x16 pixels; the decoder receives a bitstream, performs decoding in intra mode or inter mode, and outputs a reconstructed image.
  • In intra mode the switch is set to intra, and in inter mode the switch is set to inter.
  • The main flow of the decoding process is to first generate a prediction block, then decode the residual block from the input bitstream and add it to the prediction block to generate a reconstructed block.
  • the generation of the prediction block is performed according to the intra mode and the inter mode.
  • In intra mode, the prediction block is generated by spatial prediction using the neighboring pixel values of the current block; in inter mode, the prediction block is generated by using the motion vector to find the matching region in the reference picture stored in the reference picture buffer and performing motion compensation.
  • quantized coefficients are output by performing entropy decoding on the input bitstream according to a probability distribution.
  • The quantized coefficients are inverse quantized and inverse transformed, a reconstructed block is generated through the prediction image and the adder, blocking artifacts are removed through the deblocking filter, and the result is stored in the reference picture buffer.
  • High Efficiency Video Coding (HEVC) is being standardized jointly by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG), with the goal of encoding at twice the compression efficiency of H.264/AVC.
  • This would allow 3D broadcasting and mobile communication networks to provide high-quality video at lower bandwidths than currently possible.
  • An object of the present invention is to provide a method, and an apparatus using the same, that sets the boundary filtering strength (bS) to 0 for a boundary adjacent to a block coded in intra skip mode when determining the boundary filtering strength of a deblocking filter among in-loop filtering methods.
  • Another object of the present invention is to reduce the complexity of video encoding and decoding and to improve the quality of the virtual image generated from the decoded depth image.
  • Another object of the present invention is to provide an image encoding method, a sampling method, and a filtering method for the depth map.
  • An image decoding method may include, in intra prediction of a depth map, generating predicted pixel values of the current block from the pixel values of a neighboring block adjacent to the current block.
  • The constructing of the prediction image may use at least one of: copying (padding) neighboring pixels adjacent to the current block to construct the intra prediction image; determining the pixels to be copied in consideration of the characteristics of the neighboring pixels adjacent to the current block and then constructing the current block from the determined pixels; and constructing the prediction block image using a weighted sum according to the respective methods or an average over a mixture of a plurality of prediction methods.
  • An image decoding method for depth information includes generating a prediction block for a current block of the depth information; Generating a reconstruction block of the current block based on the prediction block; And performing filtering on the reconstructed block, and whether to perform the filtering may be determined according to block information of the current block and encoding information of the current block.
  • The encoding information includes information about the regions of equal depth, the regions belonging to the background, and the regions corresponding to the inside of an object in the reconstructed image.
  • The filtering may be skipped for at least one of the equal-depth regions, background regions, and object-interior regions of the reconstruction block.
  • At least one of a deblocking filter, a sample adaptive offset (SAO) filter, an adaptive loop filter (ALF), and in-loop joint inter-view depth filtering (JVDF) may be skipped for at least one of the equal-depth regions, background regions, and object-interior regions of the reconstruction block.
  • Alternatively, the encoding information includes information about the equal-depth regions, background regions, and object-interior regions of the reconstruction block, and weak filtering may be performed on at least one of those regions.
  • The method may include performing upsampling on the reconstruction block, and the upsampling may pad one sample value into a predetermined number of sample values.
  • The upsampling may be skipped in at least one of the equal-depth regions, background regions, and object-interior regions of the reconstruction block.
  • The filtering of the reconstruction block may include determining the boundary filtering strength of two adjacent blocks and applying filtering to the pixel values of the two blocks according to that strength. Determining the boundary filtering strength may include: determining whether at least one of the two adjacent blocks is intra skip coded; if neither block is intra skip coded, determining whether at least one of the two blocks is intra coded; if neither block is intra coded, determining whether at least one of the two blocks has orthogonal transform coefficients; if neither block has orthogonal transform coefficients, determining whether the absolute difference of the x-axis or y-axis components of the motion vectors of the two blocks is greater than or equal to 1 (or 4), or whether motion compensation is performed from a different reference frame; and determining whether the absolute differences of the motion vector components are all less than 1 (or 4) and motion compensation is performed from the same reference frame.
  • When at least one of the two adjacent blocks is intra skip coded, the boundary filtering strength bS may be determined to be 0.
  • The boundary filtering strength may be determined as one of 0, 1, 2, and 3.
  • The generating of the prediction block may include inferring a prediction direction for the current block from a neighboring block adjacent to the current block.
  • The filtering of the reconstruction block may include determining the boundary filtering strength of two adjacent blocks and applying filtering to the pixel values of the two blocks according to that strength; when the prediction direction of the current block and that of a neighboring block adjacent to the current block are the same, the boundary filtering strength bS may be determined to be 0.
  • The filtering of the reconstruction block may include determining the boundary filtering strength of two adjacent blocks and applying filtering to the pixel values of the two blocks according to that strength. In determining the boundary filtering strength, when the prediction mode of the current block is intra skip, in which no residual information exists, and the intra prediction direction for the current block is the horizontal direction, the boundary filtering strength for the vertical boundary of the current block is set to '0'; when the prediction mode of the current block is intra skip and the intra prediction direction for the current block is the vertical direction, the boundary filtering strength for the horizontal boundary of the current block is set to '0'.
  • The filtering of the reconstruction block may include determining the boundary filtering strength of two adjacent blocks and applying filtering to the pixel values of the two blocks according to that strength; in determining the boundary filtering strength, if the boundary being filtered matches the boundary between the current block and a neighboring block adjacent to the current block, the boundary filtering strength may be set to '0'.
  • An image decoding apparatus for depth information includes: a prediction image generator that generates a prediction block for a current block of the depth information; an adder that generates a reconstruction block of the current block based on the prediction block; and a filter unit that performs filtering on the reconstruction block.
  • The filter unit includes a boundary filtering strength determiner that determines the boundary filtering strength of two adjacent blocks, and a filtering applier that applies filtering to the pixel values of the two blocks according to the boundary filtering strength.
  • The boundary filtering strength determiner may determine the boundary filtering strength bS to be 0 when at least one of two adjacent blocks is intra skip coded.
  • When one of two adjacent blocks is in intra skip coding mode and the other is in a general coding mode (intra or inter) with at least one orthogonal transform coefficient, the boundary filtering strength determiner may determine bS as one of 1, 2, 3, and 4.
  • According to the present invention, there are provided a method, and an apparatus using the same, that set the boundary filtering strength (bS) of a boundary adjacent to a block coded in intra skip mode to 0 when determining the boundary filtering strength of a deblocking filter in in-loop filtering.
  • The complexity of video encoding and decoding is thereby reduced, and the quality of the virtual image generated from the decoded depth image is improved.
  • an image encoding method, a sampling method, and a filtering method for a depth information map are provided.
  • FIG. 1 is a diagram illustrating the basic structure and data format of a 3D video system.
  • FIG. 2 is a diagram illustrating a depth map of the “balloons” image.
  • FIG. 3 is a diagram illustrating an example of an encoding structure diagram of H.264.
  • FIG. 4 is a diagram illustrating an example of a decoding structure diagram of H.264.
  • FIG. 5 is a control block diagram illustrating a configuration of a video encoding apparatus according to an embodiment of the present invention.
  • FIG. 6 is a control block diagram illustrating a configuration of a video decoding apparatus according to an embodiment of the present invention.
  • FIG. 7A is a diagram illustrating the depth map of a “kendo” image.
  • FIG. 7B is a 2D graph showing the pixel values in the horizontal direction at an arbitrary position of the depth map of the “kendo” image.
  • FIG. 7C is a 2D graph showing the pixel values in the vertical direction at an arbitrary position of the depth map of the “kendo” image.
  • FIG. 8 is a diagram illustrating a plane-based partitioning intra prediction method.
  • FIG. 9 illustrates neighboring blocks used to infer a prediction direction for a current block according to an embodiment of the present invention.
  • FIG. 10 is a control flowchart illustrating a method of deriving an intra prediction direction with respect to a current block according to an embodiment of the present invention.
  • FIG. 11 is a control flowchart illustrating a method of deriving an intra prediction direction for a current block according to another embodiment of the present invention.
  • FIG. 12 illustrates neighboring blocks used to infer a prediction direction for a current block according to another embodiment of the present invention.
  • FIG. 13 is a diagram illustrating an example of a downsampling method for a depth map.
  • FIG. 14 is a diagram illustrating an example of an upsampling method of a depth information map.
  • FIG. 15 is a control flowchart illustrating a method of determining the boundary filtering strength bS of deblocking filtering according to an embodiment of the present invention.
  • FIG. 16 is a diagram illustrating the boundary between adjacent blocks p and q.
  • FIG. 17 is a control flowchart illustrating a method of determining the boundary filtering strength bS of deblocking filtering according to another embodiment of the present invention.
  • FIG. 18 illustrates a prediction direction and a macroblock boundary of a current block and neighboring blocks according to an embodiment of the present invention.
  • FIG. 19 is a diagram illustrating an example of an encoding mode of a current block and a current block boundary.
  • FIG. 20 is a control block diagram illustrating a configuration of a video encoding apparatus according to an embodiment of the present invention.
  • FIG. 21 is a control block diagram illustrating a configuration of a video decoding apparatus according to an embodiment of the present invention.
  • Terms such as first and second may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • The components shown in the embodiments of the present invention are shown independently to represent different characteristic functions; this does not mean that each component is implemented as separate hardware or as a single software unit.
  • The components are listed separately for convenience of description; at least two components may be combined into one, or one component may be divided into several components that together perform the function.
  • Embodiments in which the components are integrated and embodiments in which they are separated are also included in the scope of the present invention without departing from its spirit.
  • Some components may not be essential for performing the essential functions of the present invention, but may be optional components that merely improve performance.
  • The present invention can be implemented with only the components essential to its substance, excluding those used merely for improving performance, and a structure that includes only these essential components is also included in the scope of the present invention.
  • FIG. 5 is a block diagram illustrating a configuration of a video encoding apparatus according to an embodiment.
  • A scalable video encoding/decoding method or apparatus may be implemented by extending a general video encoding/decoding method or apparatus that does not provide scalability, and the block diagram of FIG. 5 illustrates an embodiment of an image encoding apparatus that may form the basis of a scalable video encoding apparatus.
  • The image encoding apparatus 100 may include a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantizer 140, an entropy encoder 150, an inverse quantizer 160, an inverse transformer 170, an adder 175, a deblocking filter unit 180, and a reference picture buffer 190.
  • the image encoding apparatus 100 may perform encoding in an intra mode or an inter mode on an input image and output a bit stream.
  • Intra prediction means intra-picture prediction, and inter prediction means inter-picture prediction.
  • In intra mode the switch 115 is set to intra, and in inter mode the switch 115 is set to inter.
  • the image encoding apparatus 100 may generate a prediction block for an input block of an input image and then encode a difference between the input block and the prediction block.
  • Whether or not to encode the generated residual may be decided according to what is better in terms of rate-distortion cost.
  • The prediction block may be generated through intra prediction or through inter prediction; the choice between intra prediction and inter prediction may be made according to which offers the better coding efficiency in terms of rate-distortion cost.
  • the intra predictor 120 may generate a prediction block by performing spatial prediction using pixel values of blocks that are already encoded around the current block.
  • the motion predictor 111 may obtain a motion vector by searching for a region that best matches an input block in the reference image stored in the reference image buffer 190 during the motion prediction process.
  • the motion compensator 112 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 190.
  • the subtractor 125 may generate a residual block by the difference between the input block and the generated prediction block.
  • the transform unit 130 may output a transform coefficient by performing transform on the residual block.
  • the quantization unit 140 may output the quantized coefficient by quantizing the input transform coefficient according to the quantization parameter.
  • The entropy encoder 150 entropy encodes symbols according to a probability distribution, based on the values calculated by the quantizer 140 or the encoding parameter values calculated during encoding, and outputs a bitstream.
  • Entropy encoding is a method of receiving symbols having various values and expressing them as a decodable binary string while removing statistical redundancy.
  • Encoding parameters are parameters necessary for encoding and decoding; they include not only information coded by the encoder and transmitted to the decoder, such as syntax elements, but also information that can be inferred during encoding or decoding. Encoding parameters may include, for example, values or statistics such as the intra/inter prediction mode, motion vector, reference picture index, coded block pattern, presence or absence of a residual signal, transform coefficients, quantized transform coefficients, quantization parameter, block size, and block partitioning information.
  • The residual signal may mean the difference between the original signal and the prediction signal, a signal obtained by transforming that difference, or a signal obtained by transforming and quantizing that difference.
  • In block units, the residual signal may be referred to as a residual block.
  • The entropy encoder 150 may store a table for performing entropy encoding, such as a variable length coding (VLC) table, and perform entropy encoding using the stored VLC table. Alternatively, the entropy encoder 150 may derive a binarization method for a target symbol and a probability model for a target symbol/bin, and then perform entropy encoding, such as context-adaptive binary arithmetic coding (CABAC), using the derived binarization method or probability model.
  • the quantized coefficients may be inversely quantized by the inverse quantizer 160 and inversely transformed by the inverse transformer 170.
  • The inverse quantized and inverse transformed coefficients are added to the prediction block through the adder 175, and a reconstruction block is generated.
  • The reconstruction block passes through the deblocking filter unit 180, which may apply at least one of a deblocking filter, sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or reconstructed picture.
  • The reconstruction block that has passed through the deblocking filter unit 180 may be stored in the reference picture buffer 190.
  • FIG. 6 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
  • As described above, a scalable video encoding/decoding method or apparatus may be implemented by extending a general video encoding/decoding method or apparatus that does not provide scalability, and the block diagram of FIG. 6 illustrates an embodiment of an image decoding apparatus that may form the basis of a scalable video decoding apparatus.
  • The image decoding apparatus 200 includes an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, a deblocking filter unit 260, and a reference picture buffer 270.
  • The image decoding apparatus 200 may receive the bitstream output from the encoder, perform decoding in intra mode or inter mode, and output a reconstructed image.
  • In intra mode the switch may be set to intra, and in inter mode the switch may be set to inter.
  • The image decoding apparatus 200 may obtain a reconstructed residual block from the received bitstream, generate a prediction block, and add the reconstructed residual block to the prediction block to generate a reconstructed block.
  • The entropy decoder 210 may entropy decode the input bitstream according to a probability distribution to generate symbols, including symbols in the form of quantized coefficients.
  • Entropy decoding is a method of receiving a binary string and generating the individual symbols.
  • the entropy decoding method is similar to the entropy coding method described above.
  • the quantized coefficients are inversely quantized by the inverse quantizer 220 and inversely transformed by the inverse transformer 230, and as a result of the inverse quantization / inverse transformation of the quantized coefficients, a reconstructed residual block may be generated.
  • the intra predictor 240 may generate a predictive block by performing spatial prediction using pixel values of an already encoded block around the current block.
  • the motion compensator 250 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 270.
  • the reconstructed residual block and the prediction block are added through the adder 255, and the added block passes through the deblocking filter unit 260.
  • the deblocking filter unit 260 may apply at least one or more of the deblocking filter, SAO, and ALF to the reconstructed block or the reconstructed picture.
  • The deblocking filter unit 260 outputs the reconstructed image.
  • the reconstructed picture may be stored in the reference picture buffer 270 to be used for inter prediction.
  • As described above, the depth map used to generate the virtual viewpoint image represents the distance between the camera and an object, so the correlation between pixels is very high; in particular, inside an object or in the background, the same depth value appears over wide areas.
  • FIG. 7A is a diagram showing the depth map of a “kendo” image, FIG. 7B is a 2D graph showing the pixel values in the horizontal direction at an arbitrary position of the depth map, and FIG. 7C is a 2D graph showing the pixel values in the vertical direction at an arbitrary position of the depth map.
  • Referring to FIGS. 7A to 7C, it can be seen that the inter-pixel correlation of the depth map is very high; in particular, the depth values are identical within the object and within the background of the depth map.
  • FIG. 7B plots the pixel values along the horizontal line II-II of FIG. 7A. As shown in FIG. 7A, the portion corresponding to line II-II is divided into two regions, and FIG. 7B shows that the depth values within each of the two regions are identical.
  • FIG. 7C plots the pixel values along the vertical line III-III of FIG. 7A. As illustrated, the portion corresponding to line III-III is divided into two regions, and FIG. 7C shows that the depth values within each of the two regions are identical.
  • Accordingly, the pixel values of the current block can largely be predicted using only the pixel values of a neighboring block, so the process of encoding and decoding the residual signal, the difference between the current block and the prediction block, is hardly needed.
  • the object boundary portion of the depth information map is a very important element in virtual image synthesis.
  • An example of a method for encoding the object boundary portions of the depth map is a plane-based partitioning intra prediction method.
  • FIG. 8 is a diagram illustrating the plane-based partitioning intra prediction method.
  • The plane-based partitioning intra prediction method splits the current block into two regions (inside the object and outside the object) based on the neighboring pixels of the current block and encodes the regions.
  • The resulting binary bitmap information describing the partition is transmitted to the decoder.
  • The plane-based partitioning intra prediction method is applied to the object boundary portions of the depth map.
  • The quality of the virtual synthesized image may be improved when the depth map is used while the characteristics of the object boundaries are maintained, that is, without the object boundaries being smoothed or crushed. Therefore, filtering that crushes the object boundaries, such as a deblocking process, should not be applied to the object boundary portions of the depth map.
  • The existing deblocking filtering method removes the blocking artifacts at block boundaries caused by the encoding modes (intra prediction or inter prediction) of two adjacent blocks, the sameness of their reference pictures, and the difference of their motion information. Accordingly, when the deblocking filter is applied to a block boundary, its strength is judged from the coding modes of the two blocks, the equality of their reference pictures, and the difference of their motion information. For example, when one of the two blocks is intra predicted and the other is encoded in inter prediction mode, a very severe blocking artifact may occur between them, and in this case the deblocking filter may be applied with strong strength.
  • In the opposite case, the deblocking filter may not be applied, or may be applied weakly.
  • the deblocking filter removes the blocking phenomenon in the image and improves the subjective quality of the image.
  • However, the depth map is used only to generate the virtual synthesized image and is not output to an actual display device. Filtering at the block boundaries of the depth map should therefore be considered from the viewpoint of improving the quality of the virtual synthesized image rather than of subjective quality.
  • Accordingly, for the deblocking filter (or other filtering methods) of the depth map, it is necessary to set whether to filter, and with what strength, not only according to the encoding modes of the two blocks (intra prediction or inter prediction), the equality of their reference pictures, and their motion information, but also according to the encoding mode of the current block.
  • For example, when the current block is encoded with the plane-based partitioning method used for object boundary portions, it may be more effective not to apply the deblocking filter at the block boundary, so that the boundary of the object is crushed as little as possible.
  • The depth map carries depth information for representing the sense of distance between objects and generally has very smooth values.
  • Inside objects and in the background, the same depth value extends over wide areas.
  • The depth information of a neighboring block can therefore be padded to construct the depth information of the current block.
  • Filtering processes applied to ordinary images may not be required in regions of equal depth.
  • In addition, the sampling process (upsampling or downsampling) can be applied in a simple form.
  • Accordingly, the present invention proposes various intra-picture encoding methods for the depth map, together with filtering and sampling methods for the depth map.
  • When a prediction block is generated by the intra prediction method, it is generally predicted from neighboring blocks of the current block for which encoding has already been performed. A method of constructing the current block using only this intra prediction block is described below as an intra skip mode.
  • A block encoded in intra prediction mode is said to be in intra skip mode when it uses the intra 16x16 mode (or the 8x8, 4x4, or NxN mode) and no residual data is present.
  • In this case, the intra prediction direction may be inferred from spatially neighboring blocks that have already been coded.
  • A block encoded in intra skip mode transmits no intra prediction information or other information to the decoding apparatus; the intra prediction information of the current block can be inferred from the information of the neighboring blocks.
  • FIG. 9 illustrates neighboring blocks used to infer a prediction direction for a current block according to an embodiment of the present invention.
  • Referring to FIG. 9, the intra prediction direction for the current block X may be inferred from the neighboring blocks A and B adjacent to the current block X.
  • Here, a neighboring block may mean any block adjacent to the current block X.
  • FIG. 10 is a control flowchart illustrating a method of deriving an intra prediction direction with respect to a current block according to an embodiment of the present invention.
  • The intra prediction directions include vertical prediction (0), horizontal prediction (1), DC prediction (2) using a predetermined average value, and plane (diagonal) prediction (3).
  • A prediction direction with a higher probability of occurrence is assigned a smaller value; vertical prediction (0) has the highest probability of occurrence.
  • The prediction direction information for the current block may be represented by IntraPredMode.
  • Step 1) The availability of block A is determined (S1000a). If block A is not available, IntraPredModeA (the prediction direction information for block A) is set to the DC prediction direction (S1001). If block A is available, step 2 is performed.
  • Step 2) If block A is in intra 16x16 coding mode (or 8x8 or 4x4 coding mode) or intra skip coding mode, IntraPredModeA is set to the IntraPredMode of block A, that is, the prediction direction information of block A (S1002).
  • Step 3) If block B is not available, for example when block B has not been coded or its coding mode cannot be used (S1000b), IntraPredModeB (the prediction direction information for block B) is set to the DC prediction direction (S1003). Otherwise, step 4 is performed.
  • Step 4) If block B is in intra 16x16 coding mode (or 8x8 or 4x4 coding mode) or intra skip coding mode, IntraPredModeB is set to the IntraPredMode of block B (S1004).
  • Step 5) The minimum of the IntraPredModeA and IntraPredModeB values is set as the IntraPredMode of the current block X (S1005), as in the sketch below.
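
The FIG. 10 derivation can be summarized in the following C sketch, assuming the direction codes listed above (0 = vertical, 1 = horizontal, 2 = DC, 3 = plane); the Block structure and its field names are hypothetical, not taken from the H.264 reference software.

```c
enum { MODE_VERTICAL = 0, MODE_HORIZONTAL = 1, MODE_DC = 2, MODE_PLANE = 3 };

typedef struct {
    int available;        /* coded and its mode usable (S1000a/S1000b)        */
    int intra16_or_skip;  /* intra 16x16 (or 8x8/4x4) mode or intra skip mode */
    int intra_pred_mode;  /* IntraPredMode of the block                       */
} Block;

int derive_intra_pred_mode(const Block *a, const Block *b)
{
    /* Steps 1-2: IntraPredModeA defaults to DC when block A is unusable. */
    int mode_a = MODE_DC;                               /* S1001 */
    if (a->available && a->intra16_or_skip)
        mode_a = a->intra_pred_mode;                    /* S1002 */

    /* Steps 3-4: likewise for IntraPredModeB. */
    int mode_b = MODE_DC;                               /* S1003 */
    if (b->available && b->intra16_or_skip)
        mode_b = b->intra_pred_mode;                    /* S1004 */

    /* Step 5: the smaller code is the more probable direction. */
    return mode_a < mode_b ? mode_a : mode_b;           /* S1005 */
}
```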
  • FIG. 11 is a control flowchart illustrating a method of deriving an intra prediction direction for a current block according to another embodiment of the present invention.
  • Step 1) If block A is not available, for example when block A has not been coded or its coding mode cannot be used (S1100a), IntraPredModeA is set to '-1' (S1101). If block A is available, step 2 is performed.
  • Step 2) If block A is in intra 16x16 coding mode (or 8x8 or 4x4 coding mode) or intra skip coding mode, IntraPredModeA is set to the IntraPredMode of block A, that is, the prediction direction information of block A (S1102).
  • Step 3) If block B is not available (S1100b), IntraPredModeB is set to '-1'. Otherwise, step 4 is performed.
  • Step 4) If block B is in intra 16x16 coding mode (or 8x8 or 4x4 coding mode) or intra skip coding mode, IntraPredModeB is set to the IntraPredMode of block B (S1104).
  • Step 5) If at least one of IntraPredModeA and IntraPredModeB is '-1' (S1105), the IntraPredMode of the current block X is set to the DC prediction direction (S1106); otherwise, the minimum of the IntraPredModeA and IntraPredModeB values is set as the IntraPredMode of the current block X (S1107), as in the sketch below.
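
The FIG. 11 variant differs only in marking unusable neighbors with a '-1' sentinel and falling back to DC at the end. A sketch, reusing the Block type and direction codes from the previous example:

```c
int derive_intra_pred_mode_v2(const Block *a, const Block *b)
{
    int mode_a = -1;                                    /* S1101 */
    if (a->available && a->intra16_or_skip)
        mode_a = a->intra_pred_mode;                    /* S1102 */

    int mode_b = -1;                                    /* unusable: -1 */
    if (b->available && b->intra16_or_skip)
        mode_b = b->intra_pred_mode;                    /* S1104 */

    if (mode_a == -1 || mode_b == -1)                   /* S1105 */
        return MODE_DC;                                 /* S1106 */
    return mode_a < mode_b ? mode_a : mode_b;           /* S1107 */
}
```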
  • FIG. 12 illustrates neighboring blocks used to infer a prediction direction for a current block according to another embodiment of the present invention.
  • In this case, the prediction direction information of block C may be used in addition to that of the blocks A and B adjacent to the current block.
  • The prediction direction for the current block may be inferred from the characteristics of the prediction directions of blocks A, B, and C.
  • For example, the minimum value among the prediction directions of blocks A, B, and C may be set as the prediction direction for the current block, or, depending on the relationships among the prediction directions of blocks A, B, and C, the prediction direction of block A or that of block B may be set as the prediction direction for the current block.
  • The pixels of a neighboring block adjacent to the current block may be copied (padded) as they are; the pixel copied (padded) into the current block may be the upper pixel or the left pixel in the adjacent neighboring block, or an average or weighted average of the pixels adjacent to the current block.
  • Information on which pixel position to use may be encoded and included in the bitstream. This method may be similar to the intra prediction method of H.264/AVC.
  • the pixel to be copied may be determined in consideration of characteristics of neighboring pixels adjacent to the current block, and then the current block may be configured through the determined pixel.
  • the prediction block for the current block may be generated through the upper pixel adjacent to the current block.
  • the prediction block for the current block may be generated through pixels on the left adjacent to the current block.
  • Alternatively, a prediction block image may be constructed by using a plurality of prediction methods and mixing them with an average value or a weighted sum according to each method.
  • the method of configuring the intra prediction block as described above may be variously changed.
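As a sketch of the padding options just described, the C functions below build a 16x16 prediction block by copying the upper neighbor pixels downward, the left neighbor pixels rightward, or their rounded average; the block size, layout, and names are assumptions made for illustration.

```c
#define N 16  /* assumed block size */

/* Copy the row of pixels above the block into every row (vertical padding). */
void pad_from_top(unsigned char dst[N][N], const unsigned char top[N])
{
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            dst[y][x] = top[x];
}

/* Copy the column of pixels left of the block into every column. */
void pad_from_left(unsigned char dst[N][N], const unsigned char left[N])
{
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            dst[y][x] = left[y];
}

/* Fill each position with the rounded average of its top and left neighbors. */
void pad_from_mean(unsigned char dst[N][N], const unsigned char top[N],
                   const unsigned char left[N])
{
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            dst[y][x] = (unsigned char)((top[x] + left[y] + 1) / 2);
}
```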
  • the intra prediction direction information for the current block is not transmitted to the decoder.
  • the intra prediction direction information may be inferred from the neighboring block or other information.
  • In inter prediction, a prediction block for the current block is generated by taking the block most similar to the current block from a previous frame that was encoded and then decoded.
  • The generated prediction block image is subtracted from the current block image to generate a residual block image.
  • The encoding is performed in one of two ways depending on whether the transform, quantization, and entropy encoding process is applied to the residual block image, and information on whether the residual block image is encoded may be included in the bitstream. The two methods are as follows.
  • In the first method, the residual between the current block image and the prediction block image is transformed, quantized, and entropy encoded to output a bitstream; before entropy encoding, the quantized coefficients are inverse quantized and inverse transformed, the prediction block image is added, and the current block image is reconstructed.
  • In the second method, the current block image is composed only of the prediction block image; the residual block image is not encoded, and only the information on whether the residual block image is encoded may be included in the bitstream.
  • information for generating a prediction block image of the current block may be configured from information of neighboring blocks.
  • In this case, the prediction block generation information and the residual block image are not encoded, and only the information on whether they are encoded may be included in the bitstream.
  • Alternatively, whether or not the residual block is encoded may be arithmetic coded probabilistically in consideration of the encoding information of the neighboring blocks of the current block.
  • The depth map may be filtered with, for example, a deblocking filter, a sample adaptive offset (SAO) filter, an adaptive loop filter (ALF), or in-loop joint inter-view depth filtering (JVDF).
  • the filtering may be weakly performed or not performed on the background of the depth map or the inside of the object.
  • the deblocking filter may be weakly performed or not performed on the depth information map.
  • a sample adaptive offset (SAO) filter may not be applied to the depth information map.
  • The adaptive loop filter (ALF) may be applied only to the object boundary portions, and/or only to the interior portions of objects, and/or to the background portions of the depth map.
  • an adaptive loop filter may not be applied to the depth information map.
  • Joint inter-view depth filtering (JVDF) may not be applied to the depth map; alternatively, no filtering at all may be applied to the depth map.
  • the decoded depth map is up-sampled.
  • This sampling process typically uses a 4- or 6-tap upsampling (or downsampling) filter.
  • Such a sampling filter has the disadvantage of high complexity and is not well suited to an image with very monotonic characteristics such as a depth map; therefore, the upsampling and downsampling filters for the depth map should be kept very simple.
  • FIG. 13 is a diagram illustrating an example of a downsampling method for a depth map.
  • One sample (pixel) out of every four samples (pixels) may be copied as it is from the depth map 1310 of the original size, thereby constructing the downsampled depth map 1320.
  • This method may be used only in the background portion of the depth map or inside an object.
  • Alternatively, when the depth values of the current block are all the same, the copy can be applied to all pixels without going through a downsampling filtering process; a sketch follows below.
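
A minimal sketch of this copy-based downsampling, assuming a 2-to-1 reduction in each direction (one pixel kept out of every 2x2 group) and even dimensions:

```c
/* src is w x h; dst is (w/2) x (h/2). One of every four samples is copied. */
void downsample_depth(const unsigned char *src, int w, int h,
                      unsigned char *dst)
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++)
            dst[y * (w / 2) + x] = src[(2 * y) * w + (2 * x)];
}
```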
  • FIG. 14 is a diagram illustrating an example of an upsampling method of a depth information map.
  • One sample (pixel) of the downsampled depth map 1410 may be copied (or padded) as it is into four samples (pixels) of the upsampled depth map 1420.
  • This method can only be used in the background part of the depth map or inside an object.
  • Alternatively, when the depth values of the current block (or an arbitrary region) are all the same, the copy (padding) can be applied to all pixels without going through an upsampling filtering process; a sketch follows below.
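
The matching upsampling pads each sample of the reduced map into the four (2x2) pixels it covers; a sketch under the same assumptions:

```c
/* src is (w/2) x (h/2); dst is w x h. Each sample is padded into 4 pixels. */
void upsample_depth(const unsigned char *src, int w, int h,
                    unsigned char *dst)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            dst[y * w + x] = src[(y / 2) * (w / 2) + (x / 2)];
}
```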
  • A depth map may be encoded by combining the intra-picture encoding methods of the depth map with the filtering and sampling methods of the depth map; an embodiment is as follows.
  • Described below is a method of constructing the current block image using only the prediction block and adjusting the filtering strength of the deblocking filter for that block, or determining whether to apply the deblocking filter at all.
  • A mode in which the intra prediction direction for the current block is inferred from the neighboring blocks adjacent to the current block, so that the current block is composed only of the intra prediction picture, may be referred to as intra skip mode.
  • In this case, the correlation with the neighboring blocks is very high, so deblocking filtering may not be performed.
  • FIG. 15 is a control flowchart illustrating a method of determining the boundary filtering strength bS of deblocking filtering according to an embodiment of the present invention.
  • FIG. 16 is a diagram showing the boundary between adjacent blocks p and q.
  • block p and block q correspond to adjacent blocks sharing boundaries with each other.
  • Block p represents the block located to the left of a vertical boundary or above a horizontal boundary.
  • Block q represents the block located to the right of a vertical boundary or below a horizontal boundary.
  • First, the coding modes of the adjacent blocks p and q may be checked.
  • Here, block p or q being intra coded may mean that it is, or belongs to, an intra coded macroblock.
  • The intra skip mode may be regarded as an intra mode (NxN prediction mode, where N is 16, 8, 4, etc.) in which no residual data is present.
  • the boundary filtering strength bS may be determined as '0' (S1502).
  • a boundary filtering strength (bS) of zero indicates that no filtering is performed in subsequent filtering application procedures.
  • Here, inter encoding means predictive encoding that uses, as a reference frame, the image of a reconstructed frame at a different time from the current frame.
  • The boundary filtering strength bS may be determined to be 4 (S1505).
  • When the boundary filtering strength bS is 4, the strongest filtering is applied in the subsequent filtering procedure.
  • the boundary filtering strength bS may be determined to be 3 (S1506).
  • Orthogonal transform coefficients may also be referred to as coded coefficients or non-zero transformed coefficients.
  • the boundary filtering strength bS is determined to be 2 (S1508).
  • Next, it is determined whether the absolute difference between the x-axis or y-axis components of the motion vectors of block p and block q is greater than or equal to 1 (or 4), and/or whether motion compensation is performed from different reference frames, and/or whether the boundary is a PU partition boundary (S1509).
  • Here, 'the reference frame is different' may cover both the case where the reference frames themselves differ and the case where the number of reference frames differs.
  • If so, the boundary filtering strength bS may be determined to be 1 (S1510).
  • Otherwise, the boundary filtering strength bS is determined to be 0 (S1502); the whole cascade is summarized in the sketch below.
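
The FIG. 15 cascade can be condensed into the following C sketch. The Blk fields, the macroblock-boundary test used to choose between 4 and 3, and the motion-vector threshold MV_THRESH (the "1 (or 4)" of the text; its units are assumed) are illustrative assumptions, not the normative H.264 derivation.

```c
#define MV_THRESH 1   /* "1 (or 4)"; e.g. 4 when vectors are in quarter-pel */

typedef struct {
    int intra_coded;  /* is, or belongs to, an intra coded macroblock     */
    int has_coeffs;   /* has orthogonal (non-zero) transform coefficients */
    int mv_x, mv_y;   /* motion vector components                         */
    int ref_frame;    /* reference frame index                            */
} Blk;

static int iabs(int v) { return v < 0 ? -v : v; }

int boundary_strength(const Blk *p, const Blk *q, int at_mb_boundary)
{
    if (p->intra_coded || q->intra_coded)            /* S1504         */
        return at_mb_boundary ? 4 : 3;               /* S1505 / S1506 */

    if (p->has_coeffs || q->has_coeffs)              /* S1507 */
        return 2;                                    /* S1508 */

    if (iabs(p->mv_x - q->mv_x) >= MV_THRESH ||      /* S1509 */
        iabs(p->mv_y - q->mv_y) >= MV_THRESH ||
        p->ref_frame != q->ref_frame)
        return 1;                                    /* S1510 */

    return 0;                                        /* S1502: no filtering */
}
```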
  • FIG. 17 is a control flowchart illustrating a method of determining the boundary filtering strength bS of deblocking filtering according to another embodiment of the present invention.
  • If at least one of block p and block q is coded in intra skip mode, the boundary filtering strength of block p and block q may be set to '0' (S1702).
  • Alternatively, the boundary filtering strength may be set weakly (or strongly); in one embodiment an arbitrary value may be set, or filtering may not be performed.
  • Otherwise, the boundary filtering strength bS may be determined to be 4 (S1705).
  • When the boundary filtering strength bS is 4, the strongest filtering is applied in the subsequent filtering procedure.
  • the boundary filtering strength bS may be determined as 3 (S1706).
  • the boundary filtering strength bS is determined to be 2 (S1708).
  • Next, it is determined whether the absolute difference between the x-axis or y-axis components of the motion vectors of block p and block q is greater than or equal to 1 (or 4), and/or whether motion compensation is performed from different reference frames, and/or whether the boundary is a PU partition boundary (S1709).
  • Here, 'the reference frame is different' may cover both the case where the reference frames themselves differ and the case where the number of reference frames differs.
  • If so, the boundary filtering strength bS may be determined to be 1 (S1710).
  • Otherwise, the boundary filtering strength bS is determined to be 0 (S1702), as in the sketch below.
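
For the depth map, the FIG. 17 variant simply prepends one test to the same cascade: an intra-skip neighbor forces bS to 0 and the boundary is left unfiltered. A sketch extending the boundary_strength() function above, with the intra-skip flags as assumed inputs:

```c
int boundary_strength_depth(const Blk *p, const Blk *q, int at_mb_boundary,
                            int p_intra_skip, int q_intra_skip)
{
    if (p_intra_skip || q_intra_skip)  /* S1702: intra skip implies high  */
        return 0;                      /* neighbor correlation: no filter */
    return boundary_strength(p, q, at_mb_boundary);
}
```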
  • In another embodiment, the boundary filtering strength between the current block and a neighboring block may be set to '0'.
  • Alternatively, the boundary filtering strength may be set weakly (or strongly); in one embodiment an arbitrary value may be set, or filtering may not be performed.
  • The boundary filtering strength between the current block and the neighboring block may instead be set to '4' (or 3, 2, 1), and otherwise to 0 (or 1, 2, 3); alternatively, the strength may be set weakly (or strongly), an arbitrary value may be set, or filtering may not be performed.
  • The boundary filtering strength is set to '0' when the intra prediction directions of the current block and the neighboring block are the same; otherwise, it may be set to '1' or another value (2, 3, 4). Alternatively, the boundary filtering strength may be set weakly (or strongly), an arbitrary value may be set, or filtering may not be performed.
  • FIG. 18 illustrates the prediction directions and the macroblock boundaries of a current block and its neighboring blocks according to an embodiment of the present invention.
  • When setting the deblocking filtering strength for the vertical macroblock boundary of the current block X, the prediction direction of the neighboring block A and the prediction direction of the current block X are the same, so the deblocking filtering strength can be set to '0'. Alternatively, the boundary filtering strength may be set weakly (or strongly), an arbitrary value may be set, or filtering may not be performed. This example may be applied equally to the horizontal boundary between the current block X and the neighboring block B.
  • the boundary filtering strength for the vertical boundary is set to '0', or no filtering is performed.
  • the boundary filtering strength for the horizontal boundary is set to '0', or no filtering is performed. In other cases, the boundary filtering strength may be weakened or filtering may be skipped.
  • the boundary filtering strength for the vertical boundary is set to '0', or filtering is not performed.
  • This example applies equally to the horizontal boundary between the current block X and the neighboring block B.
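  • A minimal sketch of the FIG. 18 rule described above, assuming integer intra prediction mode indices: when the current block X and its neighbor share the same prediction direction across a macroblock boundary, bS is set to '0'; otherwise a weaker value such as '1' is used (the text also allows 2, 3, 4, an arbitrary value, or skipping the filter).

    def direction_based_bs(dir_x: int, dir_neighbor: int, default_bs: int = 1) -> int:
        """bS for one macroblock boundary of X from the intra prediction directions."""
        return 0 if dir_x == dir_neighbor else default_bs

    # The vertical boundary of X is checked against the left neighbor A, and the
    # horizontal boundary against the upper neighbor B.
    bs_vertical = direction_based_bs(dir_x=0, dir_neighbor=0)    # same direction -> 0
    bs_horizontal = direction_based_bs(dir_x=0, dir_neighbor=1)  # different -> 1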
  • the boundary filtering strength bS may be determined in various ways.
  • FIG. 19 illustrates an example of the encoding modes of a current block and its block boundaries. The methods described above may be applied to each of the boundaries of FIG. 19 individually or in combination.
  • the above methods may be applied at a macroblock (or arbitrary block) boundary, which is the basic coding unit of the current block.
  • the methods may be applied at a block (or arbitrary block) boundary inside the current block.
  • the deblocking filter may not be applied to the boundary between blocks X and A.
  • the boundary filtering strength between blocks X and B can be set to '0'.
  • Alternatively, the boundary filtering strength may be set weak (or strong); in one embodiment an arbitrary value may be set.
  • the boundary filtering strength bS may be determined to be 3.
  • the deblocking filter removes the blocking phenomenon in the image and improves the subjective quality of the image.
  • the depth map is used only to generate the virtual composite image and is not output to an actual display device. Therefore, filtering at the block boundaries of the depth map may be needed to improve the quality of the virtual composite image rather than to improve subjective quality.
  • For the deblocking filter (or other filtering methods) of the depth map, it is necessary to set whether to perform filtering, and the filtering strength, according to the encoding mode of the current block (whether intra prediction or inter prediction), rather than according to the encoding modes of the two blocks, the equality of reference pictures between the two blocks, or the motion information between the two blocks.
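  • A minimal sketch of that policy follows, under the assumption (drawn from the surrounding discussion) that intra-coded depth-map blocks are filtered weakly or not at all to preserve object edges, while inter-coded blocks may be filtered normally; the mode labels and returned strengths are illustrative, not prescribed values.

    def depth_map_filtering_decision(current_block_mode: str) -> tuple:
        """Return (apply_filtering, bS) from the current block's mode alone."""
        if current_block_mode == "intra":
            # Preserve object boundaries in the depth map: filter weakly or not at all.
            return (False, 0)
        # Inter-coded regions: ordinary (weak) filtering may be applied.
        return (True, 1)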
  • the deblocking filter should not be performed on the block boundary, so as to keep the object boundary from being blurred as much as possible.
  • pixel values around a vertical boundary of the corresponding block may be the same. Therefore, the deblocking filter may not be performed on the block boundary.
  • pixel values around the horizontal boundary may not be the same, and a deblocking filter should be performed on that boundary. That is, if the current block is encoded as an intra skip block, filtering may be performed on the boundary according to the intra prediction direction of the current block rather than the correlation with neighboring blocks. Alternatively, whether to perform filtering may be determined according to the encoding modes and the intra prediction directions of the current block and the neighboring block.
  • When the boundary of the current block is a horizontal boundary, the following process may be applied.
  • the filtering strength bS of the deblocking filter may be set weak, for example to '0'.
  • If the block boundary is a macroblock boundary, the filtering strength of the deblocking filter may be set strong, for example to '4'. Otherwise, if the block boundary is not a macroblock boundary, the filtering strength bS of the deblocking filter may be weakened, for example set to '0'.
  • the filtering strength bS of the deblocking filter may be derived according to the process described with reference to FIG. 17.
  • When the boundary of the current block is a vertical boundary, the following process may be applied.
  • the filtering strength bS of the deblocking filter may be set weak, for example to '0'.
  • If the block boundary is a macroblock boundary, the filtering strength of the deblocking filter may be set strong, for example to '4'. Otherwise, if the block boundary is not a macroblock boundary, the filtering strength bS of the deblocking filter may be weakened, for example set to '0'.
  • the filtering strength bS of the deblocking filter may be derived according to the process described with reference to FIG. 17.
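  • Combining the horizontal- and vertical-boundary processes above for an intra-skip-coded depth-map block, a minimal sketch follows. The rule that a boundary parallel to the prediction direction gets bS = 0, while a perpendicular boundary gets '4' at macroblock boundaries and '0' inside them, follows the examples in the text; the direction names are illustrative.

    def intra_skip_bs(pred_direction: str, boundary_orientation: str,
                      on_macroblock_boundary: bool) -> int:
        """bS for one boundary of an intra-skip depth-map block."""
        if pred_direction == boundary_orientation:
            # Pixels on both sides of a boundary parallel to the prediction
            # direction are the same, so no deblocking is needed.
            return 0
        # Boundary perpendicular to the prediction direction: strong filtering
        # at macroblock boundaries, weak ('0' here) otherwise.
        return 4 if on_macroblock_boundary else 0

    # Vertical intra prediction: skip the vertical boundary, filter the
    # horizontal macroblock boundary strongly.
    assert intra_skip_bs("vertical", "vertical", True) == 0
    assert intra_skip_bs("vertical", "horizontal", True) == 4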
  • the filtering strength bS of the deblocking filter may be set weak at an object boundary of a depth map to which the plane-based partition intra prediction method of FIG. 8 is applied, and may be set to '0' as an example.
  • the filtering strength bS of the deblocking filter may be derived as follows. The following method can be applied to the boundary of the current block whether it is a horizontal boundary or a vertical boundary.
  • the filtering strength of the deblocking filter may be weakened, for example set to '1'.
  • the filtering strength bS of the deblocking filter may be weakened, for example set to '0'.
  • When only intra picture coding (I frames) is performed and the proposed methods are implemented on top of the international video standard H.264/AVC, the macroblock layer (macroblock_layer) syntax is as shown in Table 1 below.
  • "mb_intra_skip_run" and "mb_intra_skip_flag" indicate that the current depth map block consists only of the prediction image.
  • The fact that the current depth map block is composed only of the prediction image may be interpreted as an intra skip mode, or equivalently as an intra mode (an NxN prediction mode where N is 16, 8, 4, etc.) with no differential data.
  • "mb_intra_skip_run" is used when the entropy coding method is context-adaptive variable-length coding (CAVLC).
  • "mb_intra_skip_flag" is used when the entropy coding method is context-adaptive binary arithmetic coding (CABAC).
  • "moreDataFlag" indicates whether to parse the encoding information (prediction block generation information and residual signal block information) for the current block. If "moreDataFlag" is '1', the encoding information for the current block is parsed; if it is '0', decoding moves on to the next block without parsing the encoding information for the current block.
  • "mb_intra_skip_flag" indicates that the current depth map block is composed only of the prediction image. If "mb_intra_skip_flag" is '1', the differential block data is not parsed; if it is '0', the differential block data is parsed in the conventional manner. Not parsing the differential block data may be interpreted as an intra skip mode, or equivalently as an intra mode (an NxN prediction mode where N is 16, 8, 4, etc.) with no differential data.
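  • A minimal sketch of the decoder-side behaviour of this syntax: mb_intra_skip_run is read under CAVLC, mb_intra_skip_flag under CABAC, and moreDataFlag gates the parsing of the prediction information and the residual. The reader interface (read_ue, read_flag) and the two parse helpers are hypothetical stand-ins, not the H.264/AVC API.

    def parse_prediction_info(reader):
        """Placeholder: parse prediction block generation information."""

    def parse_residual(reader):
        """Placeholder: parse the differential (residual) signal block."""

    def decode_macroblock(reader, entropy_mode: str, state: dict):
        if entropy_mode == "CAVLC":
            # A run of consecutive intra-skip macroblocks is signalled once.
            if state.get("skip_run", 0) == 0:
                state["skip_run"] = reader.read_ue()   # mb_intra_skip_run
            intra_skip = state["skip_run"] > 0
            if intra_skip:
                state["skip_run"] -= 1
        else:                                          # CABAC
            intra_skip = reader.read_flag() == 1       # mb_intra_skip_flag

        more_data_flag = 0 if intra_skip else 1
        if more_data_flag:
            # Parse the encoding information for the current block.
            parse_prediction_info(reader)
            parse_residual(reader)
        # Otherwise the block is reconstructed from the prediction image alone
        # (intra skip: intra NxN prediction with no differential data) and
        # decoding moves on to the next macroblock.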
  • FIG. 20 is a control block diagram illustrating a configuration of a video encoding apparatus according to an embodiment of the present invention.
  • FIG. 20 shows an encoding apparatus for the method of constructing the current block image from prediction blocks alone, generated using neighboring blocks, when intra-coding an image with high inter-pixel correlation.
  • the prediction image generator 310 generates a prediction block through an intra prediction process or through an inter prediction process; the generation methods are described in detail above.
  • the prediction image selector 320 selects, among the prediction images generated by the prediction image generator 310, the one with the best encoding efficiency, and the prediction image selection information is included in the bitstream.
  • the subtractor 330 generates a differential block image by subtracting the prediction block image from the current block image.
  • the encoding determiner 340 determines whether to encode the generated differential block image and the prediction information of the prediction block, and outputs the encoding information.
  • the encoder 350 performs or skips encoding according to the encoding information determined by the encoding determiner 340, and outputs a compressed bitstream after transform, quantization, and entropy encoding are applied to the differential block image. A single bitstream is output by multiplexing in the prediction image selection information.
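  • A minimal sketch of this encoder flow, using plain numpy blocks. The component numbering follows the description above (310 to 350); the SAD cost and the handling of the transform/quantization stage are simplified placeholders, not the actual implementation.

    import numpy as np

    def encode_block(current: np.ndarray, candidates: list) -> dict:
        # 310/320: generate candidate prediction blocks (intra and/or inter) and
        # select the one with the best coding efficiency; the selection index is
        # written into the bitstream.
        costs = [np.abs(current.astype(np.int32) - c.astype(np.int32)).sum()
                 for c in candidates]
        best = int(np.argmin(costs))
        prediction = candidates[best]

        # 330: differential block = current block - prediction block.
        residual = current.astype(np.int32) - prediction.astype(np.int32)

        # 340: decide whether the differential image needs to be encoded; an
        # all-zero residual corresponds to the skip-style case discussed above.
        code_residual = bool(np.any(residual))

        # 350: transform, quantization and entropy coding would run here when
        # required (placeholder); the prediction selection information is
        # multiplexed into the single output bitstream.
        return {"selection": best, "coded": code_residual,
                "payload": residual if code_residual else None}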
  • FIG. 21 is a control block diagram illustrating a configuration of a video decoding apparatus according to an embodiment of the present invention.
  • FIG. 21 shows a decoding apparatus for the method of constructing the current block image from prediction blocks alone, generated using neighboring blocks, when an image with high inter-pixel correlation is intra-coded.
  • the demultiplexer 410 parses, from the bitstream, the prediction image selection information and the information indicating whether differential image information is included in the bitstream.
  • the decoding determiner 420 determines, according to the decoding information, whether the decoder 430 is to be run.
  • the decoder 430 operates only when, according to the decoding information, the differential image and the prediction block generation information are present in the bitstream.
  • the decoder 430 reconstructs the differential image through inverse quantization and inverse transformation.
  • the prediction image generator 460 generates a prediction block through an intra prediction process or generates a prediction block through an inter prediction process.
  • the prediction image determiner 450 determines the optimal prediction image for the current block from the prediction images generated by the prediction image generator 460, using the prediction image selection information.
  • the adder 440 constructs the reconstructed image by adding the generated prediction image and the reconstructed differential image. If no reconstructed differential image exists, the prediction image itself becomes the reconstructed image.
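  • A minimal sketch of the matching decoder flow (410 to 460), consuming the dictionary produced by the hypothetical encoder sketch above rather than an actual standardised bitstream syntax.

    import numpy as np

    def decode_block(bitstream: dict, candidates: list) -> np.ndarray:
        # 410: parse the prediction image selection information and whether
        # differential image information is present in the bitstream.
        selection = bitstream["selection"]
        has_residual = bitstream["coded"]

        # 460/450: regenerate the candidate predictions and pick the one
        # indicated by the selection information.
        prediction = candidates[selection].astype(np.int32)

        # 420/430: the decoder runs only when residual data exists; inverse
        # quantization and inverse transform are placeholders here.
        residual = bitstream["payload"] if has_residual else 0

        # 440: reconstruction = prediction + reconstructed differential image;
        # with no differential image, the prediction itself is the reconstruction.
        return np.clip(prediction + residual, 0, 255).astype(np.uint8)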

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image decoding method for depth information is disclosed, comprising: a step of generating a prediction block for the current block of the depth information; a step of generating a reconstruction block of the current block on the basis of the prediction block; and a step of performing filtering on the reconstruction block. Whether to perform the filtering may be determined on the basis of block information on the current block and encoding information on the current block.
PCT/KR2013/006401 2012-07-17 2013-07-17 In-loop filtering method and apparatus using same WO2014014276A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/399,823 US20150146779A1 (en) 2012-07-17 2013-07-17 In-loop filtering method and apparatus using same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2012-0077592 2012-07-17
KR20120077592 2012-07-17
KR10-2013-0084336 2013-07-17
KR1020130084336A KR20140019221A (In-loop filtering method and apparatus using same)

Publications (1)

Publication Number Publication Date
WO2014014276A1 true WO2014014276A1 (fr) 2014-01-23

Family

ID=49949043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/006401 WO2014014276A1 (fr) In-loop filtering method and apparatus using same

Country Status (1)

Country Link
WO (1) WO2014014276A1 (fr)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090078114A * 2008-01-14 2009-07-17 Gwangju Institute of Science and Technology Method and apparatus for multi-view video encoding using a variable GOP prediction structure, video decoding apparatus, and recording medium storing a program for performing the method
WO2010002214A2 * 2008-07-02 2010-01-07 Samsung Electronics Co., Ltd. Image encoding method and apparatus, and corresponding decoding method and apparatus
KR20100102516A * 2009-03-11 2010-09-24 Kyung Hee University Industry-Academic Cooperation Foundation Method and apparatus for coding a block-based depth map, and 3D video coding method using the same
KR20110018188A * 2009-08-17 2011-02-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding an image, and method and apparatus for decoding an image
KR20110093532A * 2010-02-12 2011-08-18 Samsung Electronics Co., Ltd. System and method for image encoding/decoding using graph-based pixel prediction, and depth map encoding system and method

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134609A * 2015-06-11 2022-09-30 Dolby Laboratories Licensing Corporation Method for encoding and decoding images using adaptive deblocking filtering, and apparatus therefor
CN115134607A * 2015-06-11 2022-09-30 Dolby Laboratories Licensing Corporation Method for encoding and decoding images using adaptive deblocking filtering, and apparatus therefor
CN115134611A * 2015-06-11 2022-09-30 Dolby Laboratories Licensing Corporation Method for encoding and decoding images using adaptive deblocking filtering, and apparatus therefor
CN110651478B * 2017-05-17 2023-11-21 KT Corporation Method and apparatus for video signal processing
CN110651478A * 2017-05-17 2020-01-03 KT Corporation Method and apparatus for video signal processing
WO2019059646A1 * 2017-09-20 2019-03-28 KT Corporation Method and device for processing video signal
US11689735B2 2019-09-01 2023-06-27 Beijing Bytedance Network Technology Co., Ltd. Alignment of prediction weights in video coding
CN114556915A * 2019-10-10 2022-05-27 Beijing Bytedance Network Technology Co., Ltd. Deblocking of blocks coded in geometric partition mode
US11758143B2 2019-10-10 2023-09-12 Beijing Bytedance Network Technology Co., Ltd Motion vector handling in geometry partition mode
CN114556915B * 2019-10-10 2023-11-10 Beijing Bytedance Network Technology Co., Ltd. Deblocking of blocks coded in geometric partition mode
CN114946190A * 2019-11-18 2022-08-26 LG Electronics Inc. Image coding apparatus and method for controlling loop filtering
CN114982245A * 2019-11-18 2022-08-30 LG Electronics Inc. Filtering-based image coding apparatus and method
US12081805B2 2019-11-18 2024-09-03 Lg Electronics Inc. Image coding device and method, for controlling loop filtering
CN114342384A * 2020-03-27 2022-04-12 Tencent America LLC Advanced control for deblocking operations
US11973990B2 2020-03-27 2024-04-30 Tencent America LLC Signaling for modified deblocking filter operations
CN114342384B * 2020-03-27 2024-06-14 Tencent America LLC Video decoding method and apparatus, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
WO2018030599A1 Intra-prediction-mode-based image processing method and device therefor
WO2014014276A1 In-loop filtering method and apparatus using same
WO2015122549A1 Method and apparatus for processing video
WO2014003379A1 Image decoding method and apparatus using same
WO2013157825A1 Method and device for encoding/decoding image
WO2019066524A1 Image encoding/decoding method and apparatus, and recording medium for storing a bitstream
WO2020004990A1 Method for processing image on basis of inter prediction mode, and device therefor
WO2014010943A1 Method and device for encoding/decoding image
WO2014038906A1 Image decoding method and apparatus using same
WO2020096427A1 Image signal encoding/decoding method and apparatus therefor
WO2019194514A1 Image processing method based on inter prediction mode, and device therefor
WO2019203610A1 Image processing method and device therefor
WO2021107532A1 Image encoding/decoding method and apparatus, and recording medium storing a bitstream
WO2019216714A1 Image processing method based on inter prediction mode, and apparatus therefor
WO2011129672A2 Video encoding/decoding apparatus and method
WO2021054811A1 Image encoding/decoding method and apparatus, and recording medium storing a bitstream
WO2019151795A1 Image processing method for motion information processing, image decoding and encoding method using same, and apparatus therefor
WO2020213867A1 Video or image coding based on signaling of scaling list data
WO2018034374A1 Method and apparatus for encoding and decoding a video signal using intra-prediction filtering
WO2019194463A1 Image processing method and apparatus therefor
WO2021101201A1 Image coding device and method for controlling loop filtering
WO2021025526A1 Transform-based video coding method and device therefor
WO2011129673A2 Video encoding/decoding apparatus and method
WO2019050300A1 Image encoding/decoding method and device based on efficient transmission of a differential quantization parameter
WO2019050299A1 Method and device for performing encoding/decoding according to a transform-coefficient subgroup scanning method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13819477

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14399823

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13819477

Country of ref document: EP

Kind code of ref document: A1