CN108781285B - Video signal processing method and device based on intra-frame prediction - Google Patents


Info

Publication number
CN108781285B
CN108781285B (granted from application CN201780017936.7A)
Authority
CN
China
Prior art keywords
pixel
current block
reference pixel
prediction
intra prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780017936.7A
Other languages
Chinese (zh)
Other versions
CN108781285A (en)
Inventor
李英烈
金南煜
Current Assignee
Industry Academy Cooperation Foundation of Sejong University
Original Assignee
Industry Academy Cooperation Foundation of Sejong University
Priority date
Filing date
Publication date
Application filed by Industry Academy Cooperation Foundation of Sejong University filed Critical Industry Academy Cooperation Foundation of Sejong University
Priority to CN202310868214.XA (published as CN116668682A)
Priority to CN202310868397.5A (published as CN116668683A)
Priority to CN202310866170.7A (published as CN116668681A)
Priority to CN202310869748.4A (published as CN116668684A)
Publication of CN108781285A
Application granted
Publication of CN108781285B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/105 — Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11 — Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/117 — Filters, e.g. for pre-processing or post-processing
    • H04N19/159 — Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/167 — Position within a video image, e.g. region of interest [ROI]
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N19/182 — Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/593 — Predictive coding involving spatial prediction techniques
    • H04N19/82 — Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop

Abstract

The video signal processing method of the present invention obtains the transform coefficients of a current block from a bitstream according to a preset scanning order, determines an intra prediction mode of the current block based on the transform coefficients, and performs intra prediction on the current block using the intra prediction mode and neighboring samples adjacent to the current block.

Description

Video signal processing method and device based on intra-frame prediction
Technical Field
The invention relates to a video signal processing method and device.
Background
Recently, demand for high-resolution, high-quality images such as HD (High Definition) and UHD (Ultra High Definition) images has been increasing in various application fields. The higher the resolution and quality of image data, the larger its volume compared with existing image data, so transmission and storage costs rise sharply when such data is transmitted over existing media such as wired/wireless broadband lines or stored on existing storage media. High-efficiency image compression techniques can be applied to solve these problems caused by the high resolution and high quality of image data.
Image compression techniques include various techniques such as inter-picture prediction, which predicts pixel values included in the current image from a previous or subsequent image; intra-picture prediction, which predicts pixel values included in the current image using pixel information within the current image; and entropy coding, which assigns shorter codes to values with a higher frequency of occurrence and longer codes to values with a lower frequency of occurrence. Image data can be efficiently compressed for transmission or storage using these techniques.
On the other hand, as demand for high-resolution images increases, stereoscopic image content is increasingly favored as a new image service. Video compression techniques that can efficiently provide high-resolution and ultra-high-resolution stereoscopic image content are also attracting increasing attention.
Disclosure of Invention
Technical problem
An object of the present invention is to provide a method and apparatus for high-speed intra prediction encoding when video signal encoding/decoding is performed.
Another object of the present invention is to provide a method and apparatus for performing intra prediction based on a filter when encoding/decoding a video signal.
The technical problems to be solved by the present invention are not limited to the above-described technical problems, and other problems not mentioned above will be clearly understood by those skilled in the art to which the present invention pertains from the following description.
Technical proposal
The video signal decoding method of the present invention determines an intra prediction mode of a current block, performs a first intra prediction on the current block based on the intra prediction mode and a reference pixel adjacent to the current block, and performs a second intra prediction on the current block based on a first prediction sample derived by the first intra prediction and the reference pixel.
According to the video signal decoding method of the present invention, the second intra prediction is performed by applying a filter to the reference pixel and the first prediction samples.
According to the video signal decoding method of the present invention, when the first prediction sample is located on a diagonal line of the current block, the filter is applied to the first prediction sample, a reference pixel located at an upper end of the current block, and a reference pixel located at a left side of the current block.
According to the video signal decoding method of the present invention, when the first prediction sample is located on the right side with respect to the diagonal line of the current block, the filter is applied to the first prediction sample and a reference pixel located at the upper end of the current block.
According to the video signal decoding method of the present invention, when the first prediction sample is located at the lower side with respect to the diagonal line of the current block, the filter is applied to the first prediction sample and a reference pixel located at the left side of the current block.
According to the video signal decoding method of the present invention, the coefficients of the filter are determined based on the positions of the first prediction samples.
According to the video signal decoding method of the present invention, the weighting value of the filter assigned to the first prediction sample increases as the x-axis direction coordinate or the y-axis direction coordinate of the first prediction sample increases.
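As a minimal sketch of the position-dependent second intra prediction described above: each first prediction sample is blended with the reference pixels selected by its position relative to the diagonal, and the weight given to the first prediction sample grows with its x- and y-coordinates. The linear weight formula and the blending are illustrative assumptions; this section does not give the actual filter coefficients.

```python
def second_intra_prediction(pred1, top_ref, left_ref):
    """Second-stage intra prediction sketch.

    pred1    : NxN first prediction samples (list of lists)
    top_ref  : N reference pixels along the upper edge of the current block
    left_ref : N reference pixels along the left edge of the current block
    """
    n = len(pred1)
    out = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # Weight on the first prediction sample grows with x and y
            # (assumed linear form; the text does not fix the coefficients).
            w = (x + y + 2) / (2 * n + 2)
            if x == y:            # on the diagonal: use both references
                ref = (top_ref[x] + left_ref[y]) / 2
            elif x > y:           # right of the diagonal: upper reference only
                ref = top_ref[x]
            else:                 # below the diagonal: left reference only
                ref = left_ref[y]
            out[y][x] = w * pred1[y][x] + (1 - w) * ref
    return out
```

With this weighting, samples far from the block boundary are dominated by the first prediction, while samples near the boundary are pulled toward the reconstructed reference pixels.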
According to the video signal decoding method of the present invention, when the intra prediction mode of the current block is a planar mode, the first intra prediction can be performed based on an upper reference pixel, a left reference pixel, an upper right reference pixel, and a lower left reference pixel adjacent to the current block.
According to the video signal decoding method of the present invention, the value of the upper right reference pixel is set equal to the value of the reference pixel adjacent to the upper right reference pixel, and the value of the lower left reference pixel is set equal to the value of the reference pixel adjacent to the lower left reference pixel.
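For context, the planar mode referred to above can be illustrated with the familiar HEVC-style planar formula, in which each sample averages a horizontal interpolation toward the upper-right reference and a vertical interpolation toward the lower-left reference. This standard formula is shown as background only and is not claimed here to be the patent's exact computation.

```python
def planar_prediction(top, left, top_right, bottom_left, n):
    """HEVC-style planar intra prediction sketch for an n x n block.

    top/left hold the n reference pixels above/left of the block;
    top_right and bottom_left are single reference pixel values.
    """
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            h = (n - 1 - x) * left[y] + (x + 1) * top_right    # horizontal ramp
            v = (n - 1 - y) * top[x] + (y + 1) * bottom_left   # vertical ramp
            pred[y][x] = (h + v + n) // (2 * n)                # rounded average
    return pred
```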
According to the video signal decoding method of the present invention, when the current block includes a plurality of sub-blocks, the first intra prediction and the second intra prediction are performed in sub-block units.
According to the video signal decoding method of the present invention, the order of execution of the first intra prediction and the second intra prediction is determined according to the intra prediction mode of the current block.
The video signal decoding device of the present invention includes an intra prediction unit that determines an intra prediction mode of a current block, performs a first intra prediction on the current block based on the intra prediction mode and a reference pixel adjacent to the current block, and performs a second intra prediction on the current block based on a first prediction sample derived by the first intra prediction and the reference pixel.
The features described above are merely exemplary of the detailed description of the invention, and are not intended to limit the scope of the invention.
Advantageous effects
According to the present invention, high-speed intra prediction encoding/decoding can be performed.
According to the present invention, intra prediction can be efficiently performed by means of a filter.
The effects obtainable by the present invention are not limited to the above-described effects, and other effects not mentioned above will be clearly understood by those skilled in the art to which the present invention pertains from the following description.
Drawings
Fig. 1 is a block diagram illustrating an image encoding apparatus according to one embodiment of the present invention.
Fig. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
Fig. 3 is a diagram showing an image decoding method based on intra prediction, to which the present invention is applied.
Fig. 4 is a diagram explaining the process of deriving a first predicted value of a target sample.
Fig. 5 and 6 are diagrams for explaining an example of performing the second intra prediction on the current block.
Fig. 7 is a diagram for explaining an intra prediction order according to an intra prediction mode of a current block.
Detailed Description
While the invention is susceptible to various modifications and alternative embodiments, specific embodiments thereof are shown in the drawings and will herein be described in detail. However, it is not intended to limit the present invention to the specific embodiment, and various substitutions, modifications and alterations are possible within the scope of the technical idea of the present invention, which will be apparent to those skilled in the art, and therefore, the substitutions, modifications and alterations are to be taken as a matter of course falling within the equivalent scope of the claims of the present invention. In describing the drawings, like elements are referred to by like reference numerals.
The terms first, second, and the like may be used in describing various constituent elements, but the constituent elements should not be limited to the terms. The term is used only for distinguishing the constituent elements from other constituent elements. For example, within the scope of the claims of the present invention, a first component may be named a second component, and similarly, a second component may be named a first component. The term "and/or" includes a combination of a plurality of related items or any one of a plurality of related items.
When a certain component is described as being "connected" or "connected" to another component, it is understood that the other component may be directly connected or connected, or that another component may be interposed therebetween. In contrast, when a component is described as being "directly connected" or "directly connected" to another component, it is to be understood that no other component exists therebetween.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions also include the plural unless the context of the sentence clearly indicates otherwise. The terms "comprises" and "comprising" in this application merely specify the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and are not to be construed as excluding in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The same reference numerals will be used for the same components in the drawings, and the same components will not be described repeatedly.
Fig. 1 is a block diagram illustrating an image encoding apparatus according to one embodiment of the present invention.
Referring to fig. 1, the image encoding apparatus 100 includes an image dividing unit 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a reordering unit 160, an entropy encoding unit 165, an inverse quantization unit 140, an inverse transform unit 145, a filtering unit 150, and a memory 155.
The components shown in fig. 1 are illustrated separately to indicate the different characteristic functions in the video encoding apparatus, and this does not mean that each component is constituted by separate hardware or a single software unit. That is, the components are listed separately for convenience of explanation; at least two components may be combined into one, or one component may be divided into several components that perform its functions. Embodiments integrating the components and embodiments separating them are also included in the scope of the claims of the present invention, as long as they do not depart from the essence of the invention.
Further, some of the components are not essential components that perform essential functions of the present invention but are merely optional components for improving performance. The present invention can be realized with only the components essential to its essence, excluding those used merely for performance improvement, and a configuration including only the essential components, excluding the optional performance-improving components, is also included in the scope of the claims of the present invention.
The image dividing section 110 can divide an input image into at least one processing unit. The processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The image dividing unit 110 divides one image into combinations of a plurality of coding units, prediction units, and transform units, selects one combination of a coding unit, prediction units, and transform units according to a predetermined criterion (for example, a cost function), and encodes the image.
For example, one image can be divided into a plurality of coding units. A recursive tree structure such as a quad-tree structure may be used to divide an image into coding units: a coding block, with one video frame or the largest coding unit (LCU) as the root, can be divided into other coding units so that each division produces as many child nodes as divided coding units. A coding unit that cannot be divided further due to certain restrictions becomes a leaf node. That is, assuming that only square division is possible for one coding block, one coding unit can be divided into at most four different coding units.
In the following embodiments of the present invention, the term coding unit may refer either to a unit in which encoding is performed or to a unit in which decoding is performed.
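The quad-tree division above can be sketched as a recursive split in which the encoder's cost-function decision is abstracted into a caller-supplied predicate; `should_split` is a hypothetical stand-in for that rate-distortion decision, not a function named in the text.

```python
def split_coding_units(x, y, size, min_size, should_split):
    """Recursively split a coding block into quad-tree leaf coding units.

    Returns leaf CUs as (x, y, size) tuples. should_split(x, y, size)
    models the encoder's rate-distortion decision.
    """
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):          # each split produces four child nodes
            for dx in (0, half):
                leaves += split_coding_units(x + dx, y + dy,
                                             half, min_size, should_split)
        return leaves
    return [(x, y, size)]             # leaf node: no further division
```

Starting from the largest coding unit as the root, every split yields exactly four square children, matching the constraint that one coding unit divides into at most four coding units.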
A prediction unit may be obtained by dividing one coding unit into at least one square or rectangle of the same size, or by dividing a coding unit such that one prediction unit differs from another prediction unit in shape and/or size.
When a prediction unit on which intra prediction is performed is generated based on a coding unit, and the coding unit is not a minimum coding unit, intra prediction may be performed without dividing it into a plurality of NxN prediction units.
The prediction units 120 and 125 can include an inter prediction unit 120 that performs inter prediction and an intra prediction unit 125 that performs intra prediction. Whether inter prediction or intra prediction is used for a prediction unit can be decided, and specific information for each prediction method (e.g., intra prediction mode, motion vector, reference picture, etc.) can be determined. At this time, the processing unit on which prediction is performed may differ from the processing unit for which the prediction method and its specific content are determined. For example, the prediction method, prediction mode, and the like may be determined per prediction unit, while prediction itself is performed per transform unit. The residual value (residual block) between the generated prediction block and the original block may be input to the transform unit 130. Prediction mode information, motion vector information, and the like used for prediction may be encoded together with the residual value in the entropy encoding unit 165 and transmitted to the decoder. When a specific encoding mode is used, the original block may be encoded as-is and transmitted to the decoding side without generating a prediction block by means of the prediction units 120 and 125.
The inter prediction unit 120 may predict a prediction unit based on information of at least one of the previous image and the subsequent image of the current image, or, in some cases, may predict the prediction unit based on information of a partial region within the current image in which encoding has been completed. The inter prediction unit 120 may include a reference image interpolation unit, a motion prediction unit, and a motion compensation unit.
The reference image interpolation unit may receive reference image information from the memory 155 and generate pixel information at sub-integer-pixel positions in the reference image. For luminance pixels, a DCT-based 8-tap interpolation filter (DCT-based Interpolation Filter) with varying filter coefficients may be used to generate sub-integer pixel information in units of 1/4 pixel. For chrominance signals, a DCT-based 4-tap interpolation filter (DCT-based Interpolation Filter) with varying filter coefficients may be used to generate sub-integer pixel information in units of 1/8 pixel.
The motion prediction unit can perform motion prediction based on the reference image interpolated by the reference image interpolation unit. Various methods such as FBMA (Full search-based Block Matching Algorithm), TSS (Three Step Search), and NTS (New Three-Step Search Algorithm) can be used to calculate the motion vector. The motion vector can have a motion vector value in units of 1/2 or 1/4 pixel based on the interpolated pixels. The motion prediction unit may predict the current prediction unit using different motion prediction methods. Various methods such as the Skip method, the Merge method, the AMVP (Advanced Motion Vector Prediction) method, and the Intra Block Copy method may be used as the motion prediction method.
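Of the listed methods, full-search block matching (FBMA) is the simplest to sketch: evaluate the SAD at every integer-pixel displacement inside the search window and keep the best. Sub-pixel refinement on the interpolated reference (the 1/2- and 1/4-pixel accuracy mentioned above) is omitted here for brevity.

```python
def full_search_motion(cur_block, ref_frame, bx, by, search_range):
    """Full-search block matching on integer pixels.

    cur_block : n x n block located at (bx, by) in the current frame
    ref_frame : reference frame as a 2-D list
    Returns (dx, dy, sad) of the best match within +/- search_range.
    """
    n = len(cur_block)
    h, w = len(ref_frame), len(ref_frame[0])
    best = (0, 0, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x0, y0 = bx + dx, by + dy
            if x0 < 0 or y0 < 0 or x0 + n > w or y0 + n > h:
                continue                      # candidate leaves the frame
            sad = sum(abs(cur_block[j][i] - ref_frame[y0 + j][x0 + i])
                      for j in range(n) for i in range(n))
            if sad < best[2]:
                best = (dx, dy, sad)
    return best
```

Fast searches such as TSS and NTS visit only a subset of these candidates, trading a small risk of missing the global SAD minimum for far fewer evaluations.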
The intra prediction unit 125 can generate a prediction unit based on reference pixel information around the current block, i.e., pixel information within the current image. When a neighboring block of the current prediction unit is a block on which inter prediction was performed, so that its reference pixels are pixels obtained by inter prediction, the reference pixels included in that block can be replaced with reference pixel information from a neighboring block on which intra prediction was performed. That is, when a reference pixel is not available, at least one available reference pixel can be used in place of the unavailable reference pixel information.
In intra prediction, the prediction modes may include directional prediction modes that use reference pixel information according to a prediction direction and non-directional modes that do not use directional information when performing prediction. The number of directional prediction modes may be equal to or greater than the 33 defined in the HEVC standard and may, for example, be extended to a number in the range of 60 to 70. The mode for predicting luminance information and the mode for predicting chrominance information may differ, and intra prediction mode information used in predicting the luminance information, or the predicted luminance signal information itself, may be utilized to predict the chrominance information.
If the size of the prediction unit is the same as the size of the transform unit when intra prediction is performed, intra prediction can be performed on the prediction unit based on the pixels located on its left side, the pixel at its upper-left corner, and the pixels along its upper edge. However, if the size of the prediction unit differs from the size of the transform unit when performing intra prediction, intra prediction can be performed using reference pixels based on the transform unit. Also, intra prediction using NxN division may be used only for the minimum coding unit.
In the intra prediction method, an AIS (Adaptive Intra Smoothing) filter may be applied to the reference pixels according to the prediction mode before generating a prediction block. The kind of AIS filter applied to the reference pixels may vary. To perform the intra prediction method, the intra prediction mode of the current prediction unit may be predicted from the intra prediction modes of prediction units neighboring the current prediction unit. When the prediction mode of the current prediction unit is predicted using mode information predicted from a neighboring prediction unit, if the intra prediction modes of the current and neighboring prediction units are identical, preset flag information can be used to signal that the two prediction modes are the same; if they differ, entropy encoding can be performed to encode the prediction mode information of the current block.
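The flag-based mode signaling just described can be sketched as follows. The two-entry candidate list and the returned tuple are illustrative simplifications; real codecs build a longer most-probable-mode list with defined fallback and tie-breaking rules.

```python
def code_intra_mode(cur_mode, left_mode, above_mode):
    """Signal an intra mode relative to neighbouring modes (sketch).

    If the current mode equals a neighbour's mode, send a set flag plus
    a candidate index; otherwise send a cleared flag plus the mode itself.
    """
    candidates = [left_mode, above_mode]
    if cur_mode in candidates:
        return (1, candidates.index(cur_mode))   # flag=1, candidate index
    return (0, cur_mode)                         # flag=0, explicit mode
```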
Further, a residual block containing residual (Residual) information, which is the difference between the prediction unit on which prediction was performed by the prediction units 120 and 125 and the original block of that prediction unit, may be generated. The generated residual block may be input to the transform unit 130.
The transform unit 130 can transform a residual block, which contains the residual information between the original block and the prediction unit generated by the prediction units 120 and 125, using a transform method such as DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), or KLT. Whether to apply DCT, DST, or KLT to transform the residual block can be determined based on the intra prediction mode information of the prediction unit used to generate the residual block.
The quantization unit 135 can quantize the values transformed into the frequency domain by the transform unit 130. The quantization coefficients may vary by block or according to the importance of the image. The values calculated by the quantization unit 135 may be supplied to the inverse quantization unit 140 and the reordering unit 160.
The reordering unit 160 can reorder the coefficient values of the quantized residual.
The reordering unit 160 can change the two-dimensional block-form coefficients into a one-dimensional vector form by a coefficient scanning (Coefficient Scanning) method. For example, the reordering unit 160 may scan from the DC coefficient to coefficients in the high-frequency domain using a zig-zag scan (Zig-Zag Scan) and arrange them into a one-dimensional vector form. Depending on the size of the transform unit and the intra prediction mode, a vertical scan that scans the two-dimensional block coefficients in the column direction or a horizontal scan that scans them in the row direction may be used instead of the zig-zag scan. That is, which of the zig-zag, vertical, and horizontal scans is used can be determined according to the size of the transform unit and the intra prediction mode.
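The three scan patterns can be sketched as follows; the zig-zag order here follows the common JPEG/H.264-style diagonal traversal starting from the DC coefficient, shown for illustration.

```python
def scan_coefficients(block, scan):
    """Flatten a 2-D coefficient block into 1-D by the named scan."""
    n = len(block)
    if scan == "horizontal":                 # row direction
        order = [(y, x) for y in range(n) for x in range(n)]
    elif scan == "vertical":                 # column direction
        order = [(y, x) for x in range(n) for y in range(n)]
    elif scan == "zigzag":                   # anti-diagonals from the DC term
        order = sorted(((y, x) for y in range(n) for x in range(n)),
                       key=lambda p: (p[0] + p[1],
                                      p[0] if (p[0] + p[1]) % 2 else p[1]))
    else:
        raise ValueError("unknown scan: " + scan)
    return [block[y][x] for y, x in order]
```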
The entropy encoding unit 165 can perform entropy encoding based on the values calculated by the reordering unit 160. Entropy encoding may use various encoding methods such as exponential Golomb (Exponential Golomb), CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding).
The entropy encoding unit 165 may encode various information from the reordering unit 160 and the prediction units 120 and 125, such as residual coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information and transmission unit information, motion vector information, reference frame information, block interpolation information, and filter information.
The entropy encoding unit 165 can entropy-encode the coefficient values of the coding unit input from the reordering unit 160.
The inverse quantization unit 140 and the inverse transform unit 145 inversely quantize the values quantized by the quantization unit 135 and inversely transform the values transformed by the transform unit 130. The residual values (Residual) generated by the inverse quantization unit 140 and the inverse transform unit 145 are combined with the prediction unit predicted by the motion estimation unit, the motion compensation unit, and the intra prediction unit included in the prediction units 120 and 125 to generate a reconstructed block (Reconstructed Block).
The filtering unit 150 may include at least one of a deblocking filter, an offset correction unit, and an ALF (Adaptive Loop Filter).
The deblocking filter can remove block distortion caused by boundaries between blocks in the reconstructed image. To determine whether to perform deblocking, whether the deblocking filter is applied to the current block can be determined based on the pixels contained in several columns or rows of the block. When the deblocking filter is applied to a block, a strong filter (Strong Filter) or a weak filter (Weak Filter) may be applied according to the required deblocking filtering strength. Also, when applying the deblocking filter, the horizontal filtering and the vertical filtering can be processed in parallel.
The offset correction unit corrects, in pixel units, the offset between the deblocked image and the original image. To perform offset correction for a specific image, a method of dividing the pixels contained in the image into a predetermined number of regions, determining the region to which an offset (offset) is to be applied, and applying the offset to that region may be used, or a method of applying the offset in consideration of the edge (edge) information of each pixel may be used.
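As an illustration of the region-based method, a band-offset style correction (in the spirit of HEVC's SAO band offset) might look like the sketch below; the band count, 8-bit range, and clipping are assumptions for this sketch.

```python
def band_offset(pixels, band_offsets, num_bands=32, max_val=255):
    """Group pixels by intensity band and add a per-band offset.

    band_offsets maps a band index to the signalled offset; bands not
    present in the map receive no correction. Results are clipped to
    the valid sample range.
    """
    band_size = (max_val + 1) // num_bands
    return [min(max_val, max(0, p + band_offsets.get(p // band_size, 0)))
            for p in pixels]
```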
ALF (Adaptive Loop Filtering) can be performed based on a comparison between the filtered reconstructed image and the original image. The pixels contained in the image are divided into predetermined groups (groups), one filter to be applied to each group is determined, and filtering is then performed differently for each group. Information on whether to apply ALF may be transmitted per coding unit (CU) for the luminance signal, and the shape and filter coefficients of the ALF filter to be applied may differ for each block. Alternatively, an ALF filter of the same shape (fixed shape) may be applied regardless of the characteristics of the target block.
The memory 155 can store the reconstructed block or image calculated by the filtering unit 150, and the stored reconstructed block or image can be supplied to the prediction units 120 and 125 when inter prediction is performed.
Fig. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present invention.
Referring to fig. 2, the image decoder 200 includes an entropy decoding unit 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, prediction units 230 and 235, a filtering unit 240, and a memory 245.
When a video bitstream is input from the video encoder, the input bitstream can be decoded by a procedure opposite to that of the video encoder.
The entropy decoding unit 210 can perform entropy decoding by a procedure opposite to that in which the entropy encoding unit of the image encoder performed entropy encoding. For example, various methods such as exponential Golomb (Exponential Golomb), CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding) can be applied corresponding to the method executed by the video encoder.
The entropy decoding unit 210 can decode information related to the intra prediction and inter prediction performed by the encoder.
The reordering unit 215 can reorder the bitstream entropy-decoded by the entropy decoding unit 210 based on the reordering method of the encoder. The coefficients expressed in one-dimensional vector form can be reconstructed into coefficients in two-dimensional block form and reordered. The reordering unit 215 receives information on the coefficient scanning performed by the encoder, and can then perform reordering by inversely scanning based on the scanning order performed by that encoder.
The inverse quantization unit 220 can perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient values of the reordered block.
For the quantization result produced by the video encoder, the inverse transform unit 225 can perform the inverse of the transform performed by the transform unit, that is, inverse DCT, inverse DST, or inverse KLT for DCT, DST, or KLT. The inverse transform can be performed based on the transmission unit determined by the video encoder. The inverse transform unit 225 of the video decoder can selectively apply a transform method (for example, DCT, DST, or KLT) based on multiple pieces of information such as the prediction method, the size of the current block, and the prediction direction.
The prediction units 230 and 235 can generate a prediction block based on the prediction block generation information provided by the entropy decoding unit 210 and the previously decoded block or image information provided by the memory 245.
As described above, in the same manner as the operation in the video encoder, when performing intra prediction, if the size of the prediction unit and the size of the transform unit are the same, intra prediction is performed on the prediction unit based on the pixels located on the left side, the upper left side, and the upper side of the prediction unit; but if the size of the prediction unit and the size of the transform unit differ when performing intra prediction, intra prediction can be performed using reference pixels based on the transform unit. Also, intra prediction using NxN partitioning may be used only for the minimum coding unit.
The prediction units 230 and 235 may include a prediction unit determination unit, an inter prediction unit, and an intra prediction unit. The prediction unit determination unit receives various information input from the entropy decoding unit 210, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, distinguishes the prediction unit within the current coding unit, and can determine whether the prediction unit performs inter prediction or intra prediction. The inter prediction unit 230 performs inter prediction on the current prediction unit based on information contained in at least one of the images preceding or following the current image containing the current prediction unit, using the information required for inter prediction of the current prediction unit provided by the video encoder. Alternatively, inter prediction may be performed based on information of a partially reconstructed region within the current image containing the current prediction unit.
In order to perform inter prediction, it can be determined, on a coding unit basis, which of skip mode (Skip Mode), merge mode (Merge Mode), AMVP mode (AMVP Mode), and intra block copy mode is the motion prediction method of the prediction unit contained in the corresponding coding unit.
The intra prediction unit 235 can generate a prediction block based on pixel information in the current image. If the prediction unit is a prediction unit on which intra prediction has been performed, intra prediction can be performed based on the intra prediction mode information of the prediction unit provided by the video encoder. The intra prediction unit 235 may include an AIS (Adaptive Intra Smoothing) filter, a reference pixel interpolation unit, and a DC filter. The AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter can be determined according to the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information provided by the video encoder. If the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.
If the prediction mode of the prediction unit is a prediction mode that performs intra prediction based on the pixel values of interpolated reference pixels, the reference pixel interpolation unit may interpolate the reference pixels to generate reference pixels at fractional pixel positions. If the prediction mode of the current prediction unit is a prediction mode that generates the prediction block without interpolating the reference pixels, the reference pixels may not be interpolated. The DC filter can generate the prediction block through filtering when the prediction mode of the current block is the DC mode.
The reconstructed block or image may be provided to the filtering unit 240. The filtering unit 240 may include a deblocking filter, an offset correction unit, and an ALF.
Information on whether the deblocking filter was applied to the corresponding block or image, and on whether a strong filter or a weak filter was applied if so, can be obtained from the video encoder. The deblocking filter of the video decoder can receive the deblocking filter related information provided by the video encoder and perform deblocking filtering on the corresponding block.
The offset correction unit can perform offset correction on the reconstructed image based on the type of offset correction applied to the image at encoding time, the offset value information, and the like.
The ALF can be applied to the coding unit based on the ALF application information, ALF coefficient information, and the like provided by the encoder. Such ALF information may be provided in a specific parameter set (parameter set).
The memory 245 stores the reconstructed image or block so that it can be used as a reference image or reference block, and can also provide the reconstructed image to an output unit.
As described above, the embodiments of the present invention below use the term coding unit (Coding Unit) for convenience of explanation, but it may also be a unit that performs decoding as well as encoding.
Fig. 3 is a flowchart illustrating an image decoding method based on intra prediction to which the present invention is applied.
For convenience of explanation, the embodiments below are described based on the 35 intra prediction modes defined in HEVC. However, the embodiments described later can also be applied when more than 35 intra prediction modes (i.e., extended intra prediction modes) are used. Meanwhile, in the embodiments described later, the smallest unit constituting a picture (image) will be referred to as a pixel, a sample, or the like.
Referring to fig. 3, if the current block is a block encoded in intra mode, the intra prediction mode of the current block may be determined (step S310).
The intra prediction mode of the current block may be determined with reference to the intra prediction modes of neighboring blocks adjacent to the current block. As an example, in order to determine the intra prediction mode of the current block, a candidate mode list may be generated with reference to the intra prediction modes of the neighboring blocks adjacent to the current block. Then, either an intra prediction mode indicated by an index (index) (for example, an MPM (Most Probable Mode) index) among the intra prediction modes included in the candidate mode list is determined as the intra prediction mode of the current block, or an intra prediction mode not included in the candidate mode list is determined as the intra prediction mode of the current block.
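As an illustration, a candidate-list derivation in the spirit of HEVC's three-entry MPM list can be sketched as follows; the exact construction and signalling rules here are assumptions for the sketch, not rules stated by this document.

```python
PLANAR, DC, VER = 0, 1, 26          # HEVC mode numbers

def build_candidate_list(left_mode, above_mode):
    """Build a 3-entry candidate mode list from the two neighbour modes."""
    if left_mode == above_mode:
        if left_mode < 2:           # both neighbours non-angular
            return [PLANAR, DC, VER]
        # same angular mode twice: add its two nearest angular modes
        return [left_mode,
                2 + ((left_mode + 29) % 32),
                2 + ((left_mode - 1) % 32)]
    cands = [left_mode, above_mode]
    for m in (PLANAR, DC, VER):     # fill the third entry
        if m not in cands:
            cands.append(m)
            break
    return cands

def decode_intra_mode(mpm_flag, idx_or_rem, left_mode, above_mode):
    """Pick a candidate by its signalled index, or decode a remaining mode."""
    cands = build_candidate_list(left_mode, above_mode)
    if mpm_flag:
        return cands[idx_or_rem]
    rem = idx_or_rem                # remaining mode: skip over the candidates
    for c in sorted(cands):
        if rem >= c:
            rem += 1
    return rem
```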
When the intra prediction mode of the current block is determined, the first intra prediction can be performed based on the reference pixel information around the current block (step S320). Here, at least one pixel included in a peripheral block adjacent to the current block may be used as a reference pixel for intra prediction of the current block.
The peripheral block may include at least one of the blocks adjacent to the lower left, left, upper, upper right, right, or lower side of the current block. If a reference pixel is not available, the information of the unavailable reference pixel can be replaced with the information of an available reference pixel. The availability of a reference pixel can be determined based on factors such as whether the neighboring block containing the reference pixel has been decoded before the current block, whether the neighboring block containing the reference pixel is a block encoded in inter mode, and whether the reference pixel is contained in the same slice (slice) or tile (tile) as the current block.
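A minimal sketch of this substitution, assuming the one-dimensional reference array is scanned from one end to the other and each unavailable pixel inherits the nearest previously available value (as HEVC's reference sample substitution does); the default value used when nothing is available is an assumption (mid-range for 8-bit samples).

```python
def substitute_references(refs, avail, default=128):
    """Replace unavailable reference pixels with available neighbours."""
    out = list(refs)
    first = next((i for i, a in enumerate(avail) if a), None)
    if first is None:
        return [default] * len(refs)   # no reference pixel is available
    for i in range(first):             # fill the leading unavailable run
        out[i] = out[first]
    for i in range(first + 1, len(out)):
        if not avail[i]:               # propagate the last available value
            out[i] = out[i - 1]
    return out
```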
The first prediction samples of the current block may be derived by first intra prediction. The process of deriving the first prediction samples of the current block is described below assuming that the intra prediction mode of the current block is a Planar (Planar) mode.
When the intra prediction mode of the current block is the planar (Planar) mode, the first prediction value of a target sample included in the current block (i.e., the value of the first prediction sample) may be derived using at least one of a first reference pixel, which is variably determined according to the position of the target sample, and a second reference pixel, which is fixed regardless of the position of the target sample. Here, the first reference pixel may include at least one of a reference pixel on the same horizontal line as the target sample (i.e., a reference pixel having the same y-coordinate value as the target sample) or a reference pixel on the same vertical line as the target sample (i.e., a reference pixel having the same x-coordinate value as the target sample). The second reference pixel may include at least one of the reference pixels located in the diagonal direction from the corners of the current block (e.g., the right upper reference pixel and the left lower reference pixel). Alternatively, the second reference pixel may include at least one of the rightmost pixel among the adjacent pixels adjoining the upper boundary of the current block, the bottommost pixel among the adjacent pixels adjoining the left boundary of the current block, or an adjacent pixel adjoining the right upper corner of the current block.
Fig. 4 is an explanatory diagram for explaining a first predicted value derivation process of a target sample. In the example shown in fig. 4, the current block is represented by an 8x8 block illustrated with a bold line, and samples outside the bold line are assumed to be adjacent reference pixels of the current block. For convenience of explanation, the reference pixels located in the diagonal direction of the right upper corner of the current block are referred to as right upper reference pixels, and the reference pixels located in the diagonal direction of the left lower corner of the current block are referred to as left lower reference pixels.
Referring to fig. 4, when the coordinates of the upper left hand sample of the current block are defined as (0, 0), the first prediction samples of the target samples located at the (3, 3) coordinates of the current block can be derived on the basis of the reference pixels located at the same horizontal line as the target samples (i.e., the reference pixels located at the (-1, 3) coordinates), the lower left hand reference pixels (i.e., the reference pixels located at the (-1, 8) coordinates), and the reference pixels located at the same vertical line as the target samples (i.e., the reference pixels located at the (3, -1) coordinates) and the upper right hand reference pixels (i.e., the reference pixels located at the (8, -1) coordinates).
The method of deriving the first prediction value of the target sample can be expressed as equation 1 below.
[Equation 1]
horPred(x,y) = (nT-1-x)×p(-1,y) + (x+1)×p(nT,-1)
verPred(x,y) = (nT-1-y)×p(x,-1) + (y+1)×p(-1,nT)
predSamples(x,y) = (horPred(x,y) + verPred(x,y) + nT) >> (Log2(nT)+1)
In equation 1 above, nT represents the size (width/height) of the current block, and p(-1,y) and p(x,-1) represent the values of reference pixels. As defined in equation 1, the first prediction value (predSamples(x,y)) of the target sample can be derived based on the sum of the horizontal prediction value (horPred(x,y)), which is based on the left reference pixel having the same y-axis coordinate as the target sample and the right upper reference pixel, and the vertical prediction value (verPred(x,y)), which is based on the upper reference pixel having the same x-axis coordinate as the target sample and the left lower reference pixel.
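Equation 1 can be transcribed directly. This sketch assumes the reference layout `top[x]` = p(x,-1) for x = 0..nT, where `top[nT]` is the right upper reference pixel, and `left[y]` = p(-1,y) for y = 0..nT, where `left[nT]` is the left lower reference pixel.

```python
def planar_predict(top, left, nT):
    """First intra prediction in planar mode, per equation 1."""
    shift = nT.bit_length()            # equals Log2(nT) + 1 (nT a power of two)
    pred = [[0] * nT for _ in range(nT)]
    for y in range(nT):
        for x in range(nT):
            hor = (nT - 1 - x) * left[y] + (x + 1) * top[nT]   # horPred(x,y)
            ver = (nT - 1 - y) * top[x] + (y + 1) * left[nT]   # verPred(x,y)
            pred[y][x] = (hor + ver + nT) >> shift
    return pred
```

For a flat signal (all references equal) the prediction reproduces that value exactly, which is a quick sanity check of the weights.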
At this time, the first prediction value of a boundary sample of the current block may be derived using the value of a reference pixel adjacent to the second reference pixel, instead of the value of the second reference pixel itself. Here, a boundary sample of the current block may be a sample adjoining a boundary (boundary) of the current block among the samples of the current block. For example, the boundary samples may be samples located in the rightmost column (right-most column) and/or the bottommost row (bottom-most row) of the current block.
As an example, according to equation 1 above, the horizontal prediction value of the right boundary samples of the current block (i.e., the (7, y) samples) is derived using only the right upper reference pixel, as shown in equation 2.
[Equation 2]
horPred(7,y) = (8-1-7)×p(-1,y) + (7+1)×p(8,-1) = 8×p(8,-1)
However, a boundary sample located in the rightmost column of the current block (hereinafter referred to as the "rightmost sample") is likely to be more similar to the reference pixel located on the same vertical line as the rightmost sample (i.e., (7,-1)) than to the reference pixel adjacent to the right upper corner of the current block (i.e., (8,-1)).
Thus, the horizontal prediction value of the rightmost sample may be derived using reference pixels (i.e., (7, -1)) that are on the same vertical line as the rightmost sample, instead of using the right upper reference pixel of the current block.
Still further, the vertical prediction value of the rightmost sample may be derived based on the values of reference pixels adjacent to the left lower reference pixel (i.e., reference pixels (-1, 7) having the same y-coordinate as the bottommost line of the current block) instead of the left lower reference pixel.
Similarly, the vertical prediction value of a boundary sample located in the bottommost row of the current block (hereinafter referred to as the "bottommost sample") may be derived using the reference pixel located on the same horizontal line as the bottommost sample (i.e., the reference pixel (-1,7) having the same y-coordinate as the bottommost row of the current block) instead of the left lower reference pixel.
Furthermore, the horizontal prediction value of the bottommost sample can also be derived based on the value of the reference pixel adjacent to the right upper reference pixel (i.e., the reference pixel (7,-1) having the same x-coordinate as the rightmost column of the current block) instead of the right upper reference pixel.
The first prediction values of the remaining samples of the current block, other than the boundary samples, can likewise be derived using the value of the upper reference pixel having the same x-axis coordinate as the rightmost sample or the value of the left reference pixel having the same y-axis coordinate as the bottommost sample.
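The boundary-sample variant described above can be sketched as a small modification of equation 1. The exact condition under which the corner references are swapped is an assumption here: both replacements are applied to any sample in the rightmost column or bottommost row.

```python
def planar_predict_boundary_aware(top, left, nT):
    """Planar prediction where boundary samples use the references
    adjacent to the corner references: p(nT-1,-1) replaces p(nT,-1)
    and p(-1,nT-1) replaces p(-1,nT)."""
    shift = nT.bit_length()                          # Log2(nT) + 1
    pred = [[0] * nT for _ in range(nT)]
    for y in range(nT):
        for x in range(nT):
            boundary = (x == nT - 1) or (y == nT - 1)
            tr = top[nT - 1] if boundary else top[nT]    # replaces p(nT,-1)
            bl = left[nT - 1] if boundary else left[nT]  # replaces p(-1,nT)
            hor = (nT - 1 - x) * left[y] + (x + 1) * tr
            ver = (nT - 1 - y) * top[x] + (y + 1) * bl
            pred[y][x] = (hor + ver + nT) >> shift
    return pred
```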
As another example, when the intra prediction mode of the current block is the planar mode, the first intra prediction of the current block is performed using the left lower reference pixel and the right upper reference pixel, but the values of the left lower reference pixel and the right upper reference pixel may be replaced with the values of reference pixels adjacent to them. As an example, even if the left lower reference pixel is available, its value may be set to the value of the reference sample adjacent to it (i.e., the reference sample having the same y-coordinate as the bottommost row of the current block). Likewise, even if the right upper reference pixel is available, its value may be set to the value of the reference sample adjacent to it (i.e., the reference sample having the same x-coordinate as the rightmost column of the current block).
In the above examples, the execution of the first intra prediction was exemplified for the case where the intra prediction mode of the current block is the planar mode, but the embodiments described later can also be applied when the intra prediction mode of the current block is the DC mode or a directional prediction mode.
After the first intra prediction is performed for the current block, a second filter-based intra prediction may be performed (step S330).
The second intra prediction can be performed based on reference pixels adjacent to the current block and first prediction samples derived from the first intra prediction. At this time, whether to perform the second intra prediction can be determined according to factors such as an intra prediction mode of the current block, a size of the current block, a partition mode of the current block, and the like. As an example, the second intra prediction may be performed only when the intra prediction mode of the current block is the planar mode, but the present invention is not limited thereto.
The second intra prediction of the present invention can be regarded as a process of generating a second prediction sample by applying a weighting filter (weighting filter) to a first prediction sample generated by the first intra prediction.
The weighting filter either adds or subtracts a predetermined compensation coefficient to or from the first prediction sample, or applies predetermined weighting values to the first prediction sample and the reference pixel. The compensation coefficient may be derived from the pixel value variation between the first prediction sample of the current block and the reference pixel and/or the pixel value variation between reference pixels. The preset weighting value may be a fixed constant value predefined in the decoder, or a variable derived from the spatial distance between the first prediction sample and the reference pixel.
The weighting filter may be applied to the entire region of the current block, or may be selectively applied to a part of the region according to the intra prediction mode of the current block. As an example, the application range of the weighting filter may be limited to the boundary samples of the current block. Here, the boundary samples may be samples located in the leftmost column (left-most column) and/or the uppermost row (top-most row) of the current block. As another example, the weighting filter can be applied to the boundary samples and the first prediction samples adjacent to the boundary samples. Alternatively, the weighting filter can be applied to only a part of the rows and/or a part of the columns of the current block.
The range of reference pixels utilized by the weighting filter may vary with the location of the first prediction sample or may be fixed independent of the location of the first prediction sample. It will be assumed that the range of reference pixels utilized by the weighting filter in the embodiments of fig. 5 and 6 described below varies with the position of the first prediction sample. An embodiment of deriving the second prediction samples of the current block using the weighting filter is described in detail below with reference to the accompanying drawings. At this time, it is assumed that the second intra prediction of the current block is performed on the basis of the upper-end reference pixel and the left-side reference pixel.
Fig. 5 and 6 are diagrams for explaining an example of performing the second intra prediction on the current block. Fig. 5 shows an example in which the second intra prediction is performed for a portion of the first prediction samples of the current block located at the boundary where the reference pixels are adjacent, and fig. 6 shows an example in which the second intra prediction is performed for the first prediction samples where the reference pixels are not adjacent.
With reference to a diagonal line of the current block having a preset angle, the second intra prediction may be performed based on the upper reference pixel for samples located to the right of the diagonal line, and based on the left reference pixel for samples located below the diagonal line. Samples located on the diagonal of the current block can perform the second intra prediction using both the upper reference pixel and the left reference pixel. When the preset angle is 45 degrees, the samples whose x-axis and y-axis coordinates have the same value correspond to the samples located on the diagonal of the current block. Samples located on the diagonal of the current block will hereinafter be referred to as diagonal samples.
As an example, referring to fig. 5, based on the left top sample (i.e., the diagonal sample of the (0, 0) position) of the current block, the top boundary sample located on the right side of the left top sample may perform the second intra prediction using the top reference pixel, and the left boundary sample located on the lower side of the left top sample may perform the second intra prediction using the left reference pixel. The left top sample located on the diagonal can then perform the second intra prediction using the top reference pixel and the left reference pixel.
The filter can also be applied to samples contained in rows or columns other than the leftmost column and the uppermost row of the current block. As an example, with respect to a diagonal sample of the current block, samples located to its right may perform the second intra prediction using the upper reference pixel, and samples located below it may perform the second intra prediction using the left reference pixel. The diagonal sample itself may perform the second intra prediction using both the upper reference pixel and the left reference pixel.
As an example, referring to fig. 6, with respect to the diagonal sample whose x-axis and y-axis coordinate values are both 1 (i.e., the sample at the (1,1) position), the samples included in the same row and located to the right of the diagonal sample perform the second intra prediction using the upper reference pixel, and the samples included in the same column and located below the diagonal sample perform the second intra prediction using the left reference pixel.
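The diagonal rule can be sketched as follows, assuming a 45-degree diagonal. Purely for illustration, equation-3-style coefficients are reused for every sample ([3,1] off the diagonal, [1,2,1] on it); in the document the coefficients vary with the sample position.

```python
def second_intra_prediction(first_pred, top, left):
    """Second intra prediction: re-filter each first prediction sample
    with the reference pixel(s) chosen by the 45-degree diagonal rule.
    first_pred is indexed [y][x]; top[x] = p(x,-1), left[y] = p(-1,y)."""
    n = len(first_pred)
    out = [row[:] for row in first_pred]
    for y in range(n):
        for x in range(n):
            if x > y:        # right of the diagonal: upper reference pixel
                out[y][x] = (top[x] + 3 * first_pred[y][x] + 2) >> 2
            elif y > x:      # below the diagonal: left reference pixel
                out[y][x] = (left[y] + 3 * first_pred[y][x] + 2) >> 2
            else:            # on the diagonal: both reference pixels
                out[y][x] = (top[x] + left[y] + 2 * first_pred[y][x] + 2) >> 2
    return out
```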
Although not shown in fig. 5 and 6, the weighting filter can also be applied to samples whose x-coordinate or y-coordinate is 3 or more. Alternatively, the range of samples to which the weighting filter is applied may be variably determined based on the size of the current block, the partition form of the current block, the prediction mode used for the first intra prediction, and the like.
Meanwhile, when the second intra prediction is performed with respect to the current block, the left upper end reference pixel may also be utilized. As an example, the second prediction value of the diagonal sample of the current block can be derived based on the left reference pixel, the top reference pixel, and the left top reference pixel.
In the examples shown in fig. 5 and 6, the second intra prediction is performed based on the upper reference pixel and the left reference pixel adjacent to the current block. Unlike the illustrated example, the second intra prediction may be performed using only one of the upper reference pixel and the left reference pixel adjacent to the current block.
At this time, the range of the reference pixel for the second intra prediction may be determined according to factors such as an intra prediction mode of the current block, a size of the current block, or a division form of the current block.
As an example, when the intra prediction mode of the current block is a non-directional mode (e.g., a planar mode), as illustrated in fig. 5 and 6, the second intra prediction may be performed using at least one of the left side reference pixel and the upper side reference pixel according to the position of the target sample. In contrast, when the intra prediction mode of the current block is a directional mode (e.g., a vertical direction mode or a horizontal direction mode), the second intra prediction may be performed using only the left reference pixel or the upper reference pixel without being affected by the position of the target sample.
The weighting filter for the second intra prediction assigns weighting values to the value of the first prediction sample derived from the first intra prediction and to the reference pixels adjacent to the current block. In this case, the filter coefficients applied to each sample may be fixed constants, or may be variables that vary with the position of the first prediction sample.
As an example, the coefficient of the weighting filter may be a variable that increases or decreases in proportion to the distance between the first prediction sample and the reference pixel.
Equation 3 below illustrates an equation for deriving the second prediction values (i.e., the values of the second prediction samples) of the samples included in the uppermost row (i.e., the first row (row)) and the leftmost column (i.e., the first column (column)) of the current block.
[Equation 3]
predSamples(0,0) = (p(-1,0) + p(0,-1) + 2p(0,0) + 2) >> 2
predSamples(x,0) = (p(x,-1) + 3p(x,0) + 2) >> 2 ; x > 0
predSamples(0,y) = (p(-1,y) + 3p(0,y) + 2) >> 2 ; y > 0
In equation 3 above, the filter coefficients for the sample at the upper left corner of the current block, where the uppermost row and the leftmost column overlap (i.e., the sample at the (0,0) position), are exemplified as [1,2,1]; the filter coefficients for the samples included in the leftmost column of the current block, other than the upper left sample, are exemplified as [3,1]; and the filter coefficients for the samples included in the uppermost row of the current block, other than the upper left sample, are exemplified as [3,1].
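Equation 3 transcribes directly. In this sketch, `first` is the first intra prediction output (indexed `first[y][x]`), and `top[x]` = p(x,-1) and `left[y]` = p(-1,y) are the reference pixels.

```python
def apply_weighting_filter_eq3(first, top, left):
    """Second prediction values for the first row and first column."""
    n = len(first)
    out = [row[:] for row in first]
    # upper left sample: coefficients [1,2,1]
    out[0][0] = (left[0] + top[0] + 2 * first[0][0] + 2) >> 2
    for x in range(1, n):       # rest of the uppermost row: [3,1]
        out[0][x] = (top[x] + 3 * first[0][x] + 2) >> 2
    for y in range(1, n):       # rest of the leftmost column: [3,1]
        out[y][0] = (left[y] + 3 * first[y][0] + 2) >> 2
    return out
```

Samples outside the first row and first column are left unchanged here; extending the filter to further rows and columns is what equation 4 describes.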
The following equation 4 illustrates an example of deriving the second prediction value of the samples contained in the second row or the second column of the current block.
[ formula 4 ]
predSamples(1,1)=(p(-1,0)+p(0,-1)+3p(0,0)+2)>>2
predSamples(x,1)=(p(x,-1)+4p(x,0)+2)>>2;x>1
predSamples(1,y)=(p(-1,y)+4p(0,y)+2)>>2;y>1
In equation 4 above, the filter coefficients for the diagonal sample at coordinates (1,1) are exemplified as [1,3,1], the filter coefficients for the samples in the second row of the current block located to the right of the diagonal sample are exemplified as [4,1], and the filter coefficients for the samples in the second column of the current block located below the diagonal sample are exemplified as [4,1].
As illustrated in equations 3 and 4, for a sample located to the right of (or above) the diagonal sample, the weight applied to the first prediction value of the sample increases as its distance from the upper reference pixel increases. Likewise, for a sample located to the left of (or below) the diagonal sample, the weight applied to the first prediction value of the sample may increase as its distance from the left reference pixel increases.
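Under the assumption that this trend continues beyond the second row or column, the pattern of equations 3 and 4 can be condensed into a single sketch: the coefficient on the first prediction value grows by one per unit of distance from the reference pixel, while the reference coefficient, the rounding offset, and the `>> 2` shift stay exactly as written in the equations. The extrapolation beyond distance 2 is an assumption, not a rule stated in the text, and note that for distances above 1 the weights as written are no longer normalized by the shift.

```python
def second_pred_value(first_pred, ref, distance):
    # distance 1 -> coefficient 3 on the first prediction value (equation 3);
    # distance 2 -> coefficient 4 (equation 4); larger distances extrapolate.
    # Rounding offset (+2) and shift (>> 2) are kept verbatim from the patent.
    return (ref + (2 + distance) * first_pred + 2) >> 2
```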
The range over which the filter is applied within the current block may also be limited to some of the rows or some of the columns of the current block. As an example, the filter may be applied only to the first row and the first column of the current block, only to the first through n-th rows of the current block, or only to the first through m-th columns.
At this time, the application range of the filter may be determined according to factors such as an intra prediction mode of the current block, a size of the current block, or a division form of the current block.
As an example, if the prediction mode of the current block is a non-directional mode (for example, a planar mode), the filter may be applied only to the first row and the first column of the current block, and if the prediction mode of the current block is a directional mode, the filter may be applied to the first through n-th rows of the current block, or to the first through m-th columns.
If the filter is applied to only a portion of the current block, the value of the second prediction sample (i.e., the second prediction value) to which the filter is not applied may be set to be the same as the value of the first prediction sample (i.e., the first prediction value).
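This partial application can be sketched as follows: samples inside the filtered rows or columns take the filtered (second) prediction value, while all other samples keep the first prediction value. The function and argument names are illustrative assumptions.

```python
def apply_partial_filter(pred1, pred2_filtered, n_rows, m_cols):
    """Merge filtered and unfiltered prediction values per the filter range.

    pred1: first prediction samples; pred2_filtered: fully filtered samples;
    only the first n_rows rows or first m_cols columns keep the filtered value.
    """
    h, w = len(pred1), len(pred1[0])
    return [[pred2_filtered[y][x] if (y < n_rows or x < m_cols) else pred1[y][x]
             for x in range(w)]
            for y in range(h)]
```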
When the current block includes a plurality of sub-blocks, the first intra prediction and the second intra prediction may be performed in sub-block units. In this case, the order of execution of the first intra prediction and the second intra prediction may be the same or different from each other.
When the first intra prediction and the second intra prediction are performed in sub-block units, at least one of the first intra prediction and the second intra prediction may be performed in a predetermined order.
At least one of the first intra prediction and the second intra prediction may be performed in accordance with an intra prediction mode of the current block. This is described in detail below in conjunction with fig. 7.
Fig. 7 is a diagram for explaining an intra prediction order according to an intra prediction mode of a current block. For convenience of explanation, it is assumed that the first intra prediction and the second intra prediction are performed in the same order.
If the intra prediction mode of the current block is a non-directional mode, or corresponds to any one of the horizontal direction mode through the vertical direction mode (for example, if the intra prediction mode is one of modes 10-26), the intra prediction of the current block can be performed in a "Z" order starting from the upper-left sub-block, as shown in fig. 7 (a).
If the intra prediction mode of the current block is a directional mode with a number smaller than that of the horizontal direction mode (for example, one of modes 2-9), the intra prediction of the current block can be performed in reverse (inverse) Z order starting from the lower-left sub-block, as shown in fig. 7 (b).
If the intra prediction mode of the current block is a directional mode with a number greater than that of the vertical direction mode (for example, one of modes 27-34), the intra prediction of the current block can be performed in reverse Z order starting from the right-side sub-block, as shown in fig. 7 (c).
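The three scan orders of fig. 7 can be sketched as below. HEVC-style mode numbering (10 = horizontal, 26 = vertical) and the exact traversal within each "Z" pattern are illustrative assumptions; the patent only fixes the starting corner and the mode ranges.

```python
def subblock_scan_order(mode, grid_w, grid_h):
    """Return (row, col) sub-block coordinates in assumed prediction order."""
    rows, cols = range(grid_h), range(grid_w)
    if mode < 2 or 10 <= mode <= 26:
        # (a) non-directional, or horizontal..vertical: "Z" order from upper-left
        return [(r, c) for r in rows for c in cols]
    if 2 <= mode <= 9:
        # (b) below horizontal: reverse Z starting from the lower-left sub-block
        return [(r, c) for r in reversed(rows) for c in cols]
    # (c) above vertical (modes 27-34): reverse Z from the right-side sub-block
    return [(r, c) for r in rows for c in reversed(cols)]
```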
Referring to fig. 3, if the second prediction samples of the current block are derived by the first intra prediction and the second intra prediction, the reconstructed samples (reconstruction samples) of the current block may be derived by adding residual samples to the second prediction samples (step S340).
Residual samples may be derived by selectively performing at least one of inverse quantization and inverse transformation on the transform coefficients (or residual coefficients) of the current block derived from the bitstream. In this case, the transformation form used for the inverse transformation may be DCT, DST, KLT, or the like. Any one of the above transformation forms may be selectively utilized in consideration of factors such as the prediction mode of the current block, the size of the current block (e.g., PU, TU), and luma/chroma components.
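Step S340 then reduces to adding the residual sample to the second prediction sample and clipping to the valid sample range. The clipping to the bit-depth range is standard practice in video codecs rather than something spelled out in the text above.

```python
def reconstruct(pred2, resid, bit_depth=8):
    # reconstructed sample = second prediction sample + residual sample,
    # clipped to [0, 2^bit_depth - 1]
    lo, hi = 0, (1 << bit_depth) - 1
    return [[max(lo, min(hi, p + r)) for p, r in zip(pr, rr)]
            for pr, rr in zip(pred2, resid)]
```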
A loop filter may be applied to the reconstructed samples derived by adding the second prediction samples and the residual samples (step S350). The loop filter may include at least one of a deblocking filter, an SAO (Sample Adaptive Offset) filter, and an ALF (Adaptive Loop Filter).
The components described in the foregoing embodiments of the present invention can be implemented by at least one of a DSP (Digital Signal Processor), a processor, a controller, a programmable logic element (programmable logic element) such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array), other electronic devices, and combinations thereof.
Alternatively, at least one of the functions or processes described in the above embodiments of the present invention can be implemented in software, and the software can be recorded on a recording medium. Recording media include, for example: magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. The program instructions include machine code such as that produced by a compiler, as well as high-level language code that a computer can execute using an interpreter. The hardware device may be configured to operate as one or more software modules in order to perform the processing required by the present invention, and vice versa. The components, functions, processes, and the like described in the embodiments of the present invention may be realized by a combination of hardware and software.
While the present invention has been described above with reference to specific matters, such as specific components, and limited embodiments and drawings, it is only provided to facilitate the overall understanding of the present invention, and the present invention is not limited to the above embodiments, and various modifications and changes may be made by those skilled in the art from the above description.
Therefore, the spirit of the present invention is not limited to the embodiments described above, and the scope of the present invention shall include not only the claims below but also all modifications equal or equivalent to the claims.
Industrial applicability
The present invention is applicable to the encoding/decoding of images.

Claims (9)

1. A video decoding method, comprising the steps of:
determining an intra prediction mode of the current block;
generating a prediction pixel in the current block by performing direct current intra prediction on the current block based on at least one neighboring pixel of the current block when the intra prediction mode is a direct current mode; and
applying a weighting filter to filter the prediction pixels of the current block,
wherein the predicted pixel filtered by the weighting filter is determined based on a weighted sum of an upper reference pixel, a left reference pixel, and the predicted pixel, and
the left reference pixel of the predicted pixel is included in the same row as the predicted pixel, and the upper reference pixel of the predicted pixel is included in the same column as the predicted pixel,
when at least one of the first weighted value of the upper reference pixel and the second weighted value of the left reference pixel is greater than zero, the predicted pixel is filtered by the weighting filter,
the first weighted value is determined to be zero when a distance from the upper reference pixel is greater than a preset value, and the second weighted value is determined to be zero when the distance from the left reference pixel is greater than the preset value; and
the preset value is determined according to the size of the current block.
2. The video decoding method of claim 1, wherein,
a decision is made whether to filter the prediction pixels based on the intra prediction mode of the current block.
3. The video decoding method of claim 1, wherein,
the second weight of the left reference pixel is derived based on a distance between the predicted pixel and the left reference pixel, and the first weight of the upper reference pixel is derived from a distance between the predicted pixel and the upper reference pixel.
4. The video decoding method of claim 1, wherein,
the left reference pixel and the upper reference pixel are included in the neighboring pixels.
5. A video encoding method comprising the steps of:
determining an intra prediction mode of the current block;
generating a prediction pixel in the current block by performing direct current intra prediction on the current block based on at least one neighboring pixel of the current block when the intra prediction mode is a direct current mode; and
applying a weighting filter to filter the prediction pixels of the current block,
wherein the predicted pixel filtered by the weighting filter is determined based on a weighted sum of an upper reference pixel, a left reference pixel, and the predicted pixel, and
when the intra prediction mode of the current block is a non-directional mode, the left reference pixel of the prediction pixel is included in the same row as the prediction pixel, and the upper reference pixel of the prediction pixel is included in the same column as the prediction pixel,
when at least one of the first weighted value of the upper reference pixel and the second weighted value of the left reference pixel is greater than zero, the predicted pixel is filtered by the weighting filter,
the first weighted value is determined to be zero when a distance from the upper reference pixel is greater than a preset value, and the second weighted value is determined to be zero when the distance from the left reference pixel is greater than the preset value; and
the preset value is determined according to the size of the current block.
6. The video coding method of claim 5, wherein,
a decision is made whether to filter the prediction pixels based on the intra prediction mode of the current block.
7. The video coding method of claim 5, wherein,
the second weight of the left reference pixel is derived based on a distance between the predicted pixel and the left reference pixel, and the first weight of the upper reference pixel is derived from a distance between the predicted pixel and the upper reference pixel.
8. The video coding method of claim 5, wherein,
the left reference pixel and the upper reference pixel are included in the neighboring pixels.
9. A method of transmitting a bitstream, the bitstream being generated by a video encoding method, the video encoding method comprising the steps of:
determining an intra prediction mode of the current block;
generating a prediction pixel in the current block by performing direct current intra prediction on the current block based on at least one neighboring pixel of the current block when the intra prediction mode is a direct current mode; and
applying a weighting filter to filter the prediction pixels of the current block,
wherein the predicted pixel filtered by the weighting filter is determined based on a weighted sum of an upper reference pixel, a left reference pixel, and the predicted pixel, and
when the intra prediction mode of the current block is a non-directional mode, the left reference pixel of the prediction pixel is included in the same row as the prediction pixel, and the upper reference pixel of the prediction pixel is included in the same column as the prediction pixel,
when at least one of the first weighted value of the upper reference pixel and the second weighted value of the left reference pixel is greater than zero, the predicted pixel is filtered by the weighting filter,
the first weighted value is determined to be zero when a distance from the upper reference pixel is greater than a preset value, and the second weighted value is determined to be zero when the distance from the left reference pixel is greater than the preset value; and
the preset value is determined according to the size of the current block.


Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR10-2016-0032142 2016-03-17
KR1020160032142A KR20170108367A (en) 2016-03-17 2016-03-17 Method and apparatus for processing a video signal based on intra prediction
PCT/KR2017/002899 WO2017160117A1 (en) 2016-03-17 2017-03-17 Method and apparatus for processing intra-prediction-based video signal


Publications (2)

Publication Number Publication Date
CN108781285A CN108781285A (en) 2018-11-09
CN108781285B true CN108781285B (en) 2023-08-01

Family

ID=59850445

Family Applications (5)

Application Number Title Priority Date Filing Date
CN202310868397.5A Pending CN116668683A (en) 2016-03-17 2017-03-17 Video decoding method, video encoding method and bit stream transmission method
CN202310869748.4A Pending CN116668684A (en) 2016-03-17 2017-03-17 Video decoding method, video encoding method and bit stream transmission method
CN201780017936.7A Active CN108781285B (en) 2016-03-17 2017-03-17 Video signal processing method and device based on intra-frame prediction
CN202310868214.XA Pending CN116668682A (en) 2016-03-17 2017-03-17 Video decoding method, video encoding method and bit stream transmission method
CN202310866170.7A Pending CN116668681A (en) 2016-03-17 2017-03-17 Video decoding method, video encoding method and bit stream transmission method


Country Status (4)

Country Link
US (3) US11228755B2 (en)
KR (1) KR20170108367A (en)
CN (5) CN116668683A (en)
WO (1) WO2017160117A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103597831A (en) * 2011-06-20 2014-02-19 联发科技(新加坡)私人有限公司 Method and apparatus of directional intra prediction
CN104584550A (en) * 2012-08-31 2015-04-29 高通股份有限公司 Intra prediction improvements for scalable video coding

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101411315B1 (en) * 2007-01-22 2014-06-26 삼성전자주식회사 Method and apparatus for intra/inter prediction
KR20110068792A (en) * 2009-12-16 2011-06-22 한국전자통신연구원 Adaptive image coding apparatus and method
KR20110113561A (en) 2010-04-09 2011-10-17 한국전자통신연구원 Method and apparatus for intra prediction encoding and decoding using adaptive filter
KR20110125153A (en) * 2010-05-12 2011-11-18 에스케이 텔레콤주식회사 Method and apparatus for filtering image and encoding/decoding of video data using thereof
KR20120012385A (en) * 2010-07-31 2012-02-09 오수미 Intra prediction coding apparatus
KR101263090B1 (en) 2010-11-08 2013-05-09 성균관대학교산학협력단 Methods of encoding and decoding using multi-level prediction and apparatuses for using the same
CN105959706B (en) 2011-01-12 2021-01-08 三菱电机株式会社 Image encoding device and method, and image decoding device and method
US20120218432A1 (en) * 2011-02-28 2012-08-30 Sony Corporation Recursive adaptive intra smoothing for video coding
CN102843555B (en) * 2011-06-24 2017-07-14 中兴通讯股份有限公司 A kind of intra-frame prediction method and system


Also Published As

Publication number Publication date
CN116668684A (en) 2023-08-29
US11647183B2 (en) 2023-05-09
CN116668683A (en) 2023-08-29
US11228755B2 (en) 2022-01-18
CN116668681A (en) 2023-08-29
US20230239468A1 (en) 2023-07-27
CN108781285A (en) 2018-11-09
US20190104304A1 (en) 2019-04-04
WO2017160117A1 (en) 2017-09-21
CN116668682A (en) 2023-08-29
US20220086436A1 (en) 2022-03-17
KR20170108367A (en) 2017-09-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant