KR20140129632A - Method and apparatus for processing moving image - Google Patents
Method and apparatus for processing moving image
- Publication number
- KR20140129632A (application number KR1020130048168A)
- Authority
- KR
- South Korea
- Prior art keywords
- information
- unit
- image
- parameter
- processing
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
BACKGROUND OF THE INVENTION
As the need for UHD (ultra high definition) video has arisen, it has become difficult to accommodate the size of the storage medium and the bandwidth of the transmission medium with the existing moving image compression technology. Therefore, a new compression standard technology for compressing UHD moving images was required, and standardization of HEVC (High Efficiency Video Coding) was completed in January 2013.
However, HEVC can also be used for video streams served over the Internet and over networks such as 3G and LTE. In this case, not only UHD but also FHD or HD class video can be compressed with HEVC.
UHD TV is expected to use 4K 30 fps in the short term, but the number of pixels to be processed per second is expected to keep increasing toward 4K 60 fps/120 fps and 8K 30 fps/60 fps and beyond.
In order to cost-effectively cope with various resolutions, frame rates, etc. according to such applications, it is necessary to have a video decoding apparatus that can be easily extended according to the performance and functions required in an application.
SUMMARY OF THE INVENTION It is an object of the present invention to provide a moving picture processing method and apparatus for calculating parameters for boundary processing between Multi V-Cores.
According to an aspect of the present invention, there is provided an apparatus for processing moving images, the apparatus comprising: an image central processing unit for parsing parameter information or slice header information from moving image data input from a host; and a plurality of image processing units for processing the moving image according to the parsed information, wherein the plurality of image processing units store information required for boundary processing between the image processing units in a memory while processing their respectively allocated areas, and the information stored in the memory is used together with the information parsed in the image central processing unit to calculate a filtering parameter.
According to another aspect of the present invention, there is provided a method of processing moving images in a moving image processing apparatus having an image central processing unit and a plurality of image processing units, the method comprising: parsing, by the image central processing unit, parameter information or slice header information from moving image data input from a host; processing, by the plurality of image processing units, the moving image according to the parsed information under the control of the image central processing unit; and storing information necessary for boundary processing between the image processing units in a memory while each image processing unit processes its allocated area, wherein the information stored in the memory is used together with the information parsed in the image central processing unit to calculate a filtering parameter.
The moving picture processing method may be embodied as a computer-readable recording medium having recorded thereon a program for execution on a computer.
According to various embodiments of the present invention, it is possible to provide a video processing apparatus and method capable of effectively handling the increasing number of pixels to be processed per second (4K 60 fps/120 fps, 8K 30 fps/60 fps, and beyond).
FIG. 1 is a block diagram illustrating a configuration of a moving picture encoding apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining an example of a method of dividing and processing an image into blocks.
FIG. 3 is a block diagram showing an embodiment of a configuration for performing inter prediction in an encoding apparatus.
FIG. 4 is a block diagram illustrating a configuration of a moving picture decoding apparatus according to an embodiment of the present invention.
FIG. 5 is a block diagram showing an embodiment of a configuration for performing inter prediction in a decoding apparatus.
FIGS. 6 and 7 are views showing an example of the configuration of a sequence parameter set (SPS).
FIGS. 8 and 9 are diagrams showing an example of the configuration of a picture parameter set (PPS).
FIGS. 10 to 12 are views showing an example of the configuration of a slice header (SH).
FIG. 13 is a diagram illustrating a layer structure of a moving picture decoding apparatus according to an embodiment of the present invention.
FIG. 14 is a timing diagram illustrating a moving picture decoding operation of a VPU according to an embodiment of the present invention.
FIG. 15 is a diagram illustrating a detailed operation of a V-CPU according to an embodiment of the present invention.
FIGS. 16 and 17 are diagrams for explaining boundary processing performed as a post-process in a Multi V-Core according to an embodiment of the present invention.
FIGS. 18 to 21 illustrate a method of allocating boundary portions to be boundary-processed to the Multi V-Cores according to an embodiment of the present invention.
FIGS. 22 and 23 are diagrams showing input parameters for calculating Deblock and SAO parameters.
FIGS. 24 and 25 are diagrams for explaining examples of a method of calculating Deblock and SAO parameters necessary for filtering.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.
Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed therebetween.
Throughout this specification, when a member is said to be "on" another member, this includes not only the case where the member is in contact with the other member but also the case where another member exists between the two members.
Throughout this specification, when a part is described as "including" a certain element, this means that other elements may be further included, rather than excluded, unless specifically stated otherwise. The terms "about", "substantially", and the like used throughout the specification are used at, or close to, a stated numerical value when manufacturing and material tolerances inherent in the stated meaning are presented, and are used to prevent an unconscionable infringer from unfairly exploiting a disclosure in which exact or absolute numerical values are stated to aid understanding. The term "step of (doing)" or "step of" used throughout the specification does not mean "step for".
Throughout this specification, the term "combination thereof" included in a Markush-type expression means one or more mixtures or combinations selected from the group consisting of the constituents described in the Markush-type expression, and means including one or more selected from the group consisting of those constituents.
As an example of a method of encoding an actual image and its depth information map, encoding may be performed using HEVC (High Efficiency Video Coding), which has the highest coding efficiency among the video coding standards developed so far and was jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG), but the present invention is not limited thereto.
Generally, the encoding apparatus includes an encoding process and a decoding process, and the decoding apparatus has a decoding process. The decoding process of the decoding apparatus is the same as the decoding process of the encoding apparatus. Therefore, the encoding apparatus will be mainly described below.
1 is a block diagram illustrating a configuration of a moving image encoding apparatus according to an embodiment of the present invention.
1, a moving picture encoding apparatus 100 according to the present invention includes a
The
The
The picture may be composed of a plurality of slices, and the slice may be composed of a plurality of maximum coding units (LCU).
The LCU can be divided into a plurality of coding units (CUs), and the encoder can add information indicating whether or not to be divided to a bit stream. The decoder can recognize the position of the LCU by using the address (LcuAddr).
The coding unit CU in the case where division is not allowed is regarded as a prediction unit (PU), and the decoder can recognize the position of the PU using the PU index.
The prediction unit PU may be divided into a plurality of partitions. Also, the prediction unit PU may be composed of a plurality of conversion units (TUs).
In this case, the
Referring to FIG. 2, a CTU (Coding Tree Unit) is used as a moving picture encoding unit, and the CTU is defined as various square shapes. The CTU includes a coding unit CU (coding unit).
The coding unit (CU) has a quad-tree structure. With the maximum coding unit LCU (Largest Coding Unit) of 64 × 64 size set to depth 0, a coding unit (CU) is recursively searched down to depth 3, that is, down to a CU of 8 × 8 size.
A prediction unit for performing prediction is defined as a PU (Prediction Unit). Each coding unit (CU) is divided into a plurality of blocks used as the unit of prediction, and the division into square and rectangular shapes is used to perform prediction.
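As an illustration of the recursive quad-tree traversal described above, the following minimal C sketch walks a CTU from depth 0 down to the minimum CU size; parse_split_flag() and decode_leaf_cu() are hypothetical callbacks standing in for the actual bitstream parsing and CU decoding.

```c
/* Minimal sketch of a quad-tree CU traversal (hypothetical callbacks). */
void process_cu(int x, int y, int size, int min_cu_size)
{
    /* A CU larger than the minimum size may be split into four quadrants. */
    if (size > min_cu_size && parse_split_flag(x, y, size)) {
        int half = size / 2;
        process_cu(x,        y,        half, min_cu_size);
        process_cu(x + half, y,        half, min_cu_size);
        process_cu(x,        y + half, half, min_cu_size);
        process_cu(x + half, y + half, half, min_cu_size);
    } else {
        decode_leaf_cu(x, y, size);   /* leaf: predict, transform, reconstruct */
    }
}

/* A 64x64 CTU at depth 0 recursing down to 8x8 leaves would be invoked as:
 *   process_cu(ctu_x, ctu_y, 64, 8);                                        */
```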
The transforming
The transformation unit can be transformed by two (horizontal, vertical) one-dimensional transformation matrices. For example, in the case of inter prediction, a predetermined conversion matrix is determined.
On the other hand, in the case of intra prediction, when the intra prediction mode is horizontal, the residual block is likely to have vertical directionality; therefore, a DCT-based integer matrix is applied in the vertical direction and a DST-based or KLT-based integer matrix is applied in the horizontal direction. When the intra prediction mode is vertical, a DST-based or KLT-based integer matrix is applied in the vertical direction and a DCT-based integer matrix is applied in the horizontal direction.
In case of DC mode, DCT-based integer matrix is applied in both directions. Further, in the case of intra prediction, the transformation matrix may be adaptively determined depending on the size of the conversion unit.
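A hedged sketch of the mode-dependent transform selection just described follows; the mode constants and transform identifiers are illustrative names of this sketch, not those of any particular codec library.

```c
enum intra_mode_e { INTRA_DC, INTRA_HORIZONTAL, INTRA_VERTICAL, INTRA_OTHER };
enum tx_type      { TX_DCT, TX_DST_OR_KLT };

/* Chooses the horizontal/vertical 1-D transforms for an intra residual
 * block following the rule described above: DC mode uses DCT in both
 * directions; horizontal mode uses DCT vertically and DST/KLT
 * horizontally; vertical mode uses DST/KLT vertically and DCT
 * horizontally.                                                        */
void select_intra_transforms(enum intra_mode_e intra_mode,
                             enum tx_type *hor, enum tx_type *ver)
{
    switch (intra_mode) {
    case INTRA_HORIZONTAL:
        *hor = TX_DST_OR_KLT; *ver = TX_DCT;        break;
    case INTRA_VERTICAL:
        *hor = TX_DCT;        *ver = TX_DST_OR_KLT; break;
    case INTRA_DC:
    default:
        *hor = TX_DCT;        *ver = TX_DCT;        break;
    }
}
```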
The
The predetermined size may be 8x8 or 16x16. And quantizes the coefficients of the transform block using a quantization matrix determined according to the determined quantization step size and the prediction mode.
The
The
For example, the effective first quantization step size searched in the above order can be determined as a quantization step size predictor. In addition, the average value of the two effective quantization step sizes searched in the above order may be determined as a quantization step size predictor, or when only one is effective, it may be determined as a quantization step size predictor.
When the quantization step size predictor is determined, the difference value between the quantization step size of the current encoding unit and the quantization step size predictor is transmitted to the
On the other hand, there is a possibility that the left coding unit, the upper coding unit, and the upper left coding unit of the current coding unit do not exist. On the other hand, there may be coding units that were previously present on the coding order in the maximum coding unit.
Therefore, the quantization step sizes of the quantization units adjacent to the current coding unit and the quantization unit immediately before the coding order in the maximum coding unit can be candidates.
In this case, the candidates may be searched in the order of 1) the left quantization unit of the current coding unit, 2) the upper quantization unit of the current coding unit, 3) the upper-left quantization unit of the current coding unit, and 4) the quantization unit immediately preceding in coding order. The order may be changed, and the upper-left quantization unit may be omitted.
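The candidate search can be expressed compactly; the sketch below returns the first valid candidate (or, optionally, the average of the first two) as the quantization step size predictor. Marking an unavailable candidate with 0 is an assumption of this sketch.

```c
/* cand[] holds candidate quantization step sizes in search order:
 * left, upper, (optionally upper-left), previous in coding order.
 * 0 marks an unavailable candidate (assumption of this sketch).     */
int predict_qstep(const int *cand, int num_cand, int use_average_of_two)
{
    int first = 0, second = 0;
    for (int i = 0; i < num_cand; i++) {
        if (cand[i] == 0)
            continue;                      /* skip invalid candidates   */
        if (first == 0) {
            first = cand[i];               /* first valid candidate     */
        } else {
            second = cand[i];              /* second valid candidate    */
            break;
        }
    }
    if (use_average_of_two && second != 0)
        return (first + second + 1) / 2;   /* average of two valid ones */
    return first;                          /* first valid, or 0 if none */
}
```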
The quantized transform block is provided to the
The
The coefficient scanning method may be determined depending on the size of the conversion unit. The scan pattern may vary according to the directional intra prediction mode. The scan order of the quantization coefficients is scanned in the reverse direction.
When the quantized coefficients are divided into a plurality of subsets, the same scan pattern is applied to the quantization coefficients in each subset. The scan pattern between subsets applies a zigzag scan or a diagonal scan. The scan pattern preferably proceeds in the forward direction from the main subset containing the DC coefficient to the remaining subsets, but scanning in the reverse direction is also possible.
In addition, a scan pattern between subsets can be set in the same manner as a scan pattern of quantized coefficients in a subset. In this case, the scan pattern between the sub-sets is determined according to the intra-prediction mode. On the other hand, the encoder transmits to the decoder information indicating the position of the last non-zero quantization coefficient in the transform unit.
Information that can indicate the position of the last non-zero quantization coefficient in each subset can also be transmitted to the decoder.
The
The
The deblocking filtering process is preferably applied to the boundaries of prediction units and transform units having a size larger than a predetermined size. The size may be 8x8. The deblocking filtering process may include determining a boundary to be filtered, determining a boundary filtering strength to be applied to the boundary, determining whether to apply a deblocking filter, and, if it is determined that the deblocking filter is to be applied, selecting a filter to be applied to the boundary.
Whether or not the deblocking filter is applied is determined based on i) whether the boundary filtering strength is greater than 0, and ii) whether a value representing the variation of the pixel values at the boundary between the two blocks (P block and Q block) adjacent to the boundary to be filtered is smaller than a first reference value determined by the quantization parameter.
At least two filters are preferably available. If the absolute value of the difference between two pixels located at the block boundary is greater than or equal to a second reference value, a filter that performs relatively weak filtering is selected.
And the second reference value is determined by the quantization parameter and the boundary filtering strength.
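The two-part decision above can be sketched as follows; beta and tc play the roles of the first and second reference values derived from the quantization parameter, and the variation measure d is a simplified stand-in for the actual per-line activity check, so this is an outline rather than a normative implementation.

```c
#include <stdlib.h>

/* Simplified deblocking decision for one boundary segment.
 * bs     : boundary filtering strength (0 means no filtering)
 * d      : measured pixel-value variation across the P/Q boundary
 * beta   : first reference value, derived from the quantization parameter
 * tc     : second reference value, derived from QP and boundary strength
 * p0, q0 : the two pixels located at the block boundary                  */
enum db_filter { DB_OFF, DB_WEAK, DB_STRONG };

enum db_filter deblock_decision(int bs, int d, int beta, int tc,
                                int p0, int q0)
{
    if (bs == 0 || d >= beta)
        return DB_OFF;          /* conditions i) or ii) not satisfied      */
    if (abs(p0 - q0) >= tc)
        return DB_WEAK;         /* relatively weak filter is selected      */
    return DB_STRONG;
}
```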
The adaptive offset application process is to reduce a distortion between a pixel in the image to which the deblocking filter is applied and the original pixel. It may be determined whether to perform the adaptive offset applying process in units of pictures or slices.
The picture or slice may be divided into a plurality of offset regions, and an offset type may be determined for each offset region. The offset type may include a predetermined number (e.g., four) of edge offset types and two band offset types.
If the offset type is an edge offset type, the edge type to which each pixel belongs is determined and the corresponding offset is applied. The edge type is determined based on the distribution of two pixel values adjacent to the current pixel.
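For the edge offset type, the category of each pixel follows from comparing it with its two neighbours along the chosen direction; a minimal sketch of this classification, following the usual HEVC-style sign rule, is shown below.

```c
static int sign3(int v) { return (v > 0) - (v < 0); }

/* Edge-offset category of pixel c given its two neighbours a and b
 * along the selected direction (horizontal, vertical or diagonal).  */
int sao_edge_category(int c, int a, int b)
{
    int s = sign3(c - a) + sign3(c - b);
    switch (s) {
    case -2: return 1;   /* local minimum            */
    case -1: return 2;   /* concave corner           */
    case  1: return 3;   /* convex corner            */
    case  2: return 4;   /* local maximum            */
    default: return 0;   /* flat: no offset applied  */
    }
}
```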
The adaptive loop filtering process can perform filtering based on a value obtained by comparing a reconstructed image and an original image through a deblocking filtering process or an adaptive offset applying process. The adaptive loop filtering can be applied to the entire pixels included in the 4x4 block or the 8x8 block.
Whether or not the adaptive loop filter is applied can be determined for each coding unit. The size and the coefficient of the loop filter to be applied may vary depending on each coding unit. Information indicating whether or not the adaptive loop filter is applied to each coding unit may be included in each slice header.
In the case of the color difference signal, it is possible to determine whether or not the adaptive loop filter is applied in units of pictures. The shape of the loop filter may have a rectangular shape unlike the luminance.
Adaptive loop filtering can be applied on a slice-by-slice basis. Therefore, information indicating whether or not adaptive loop filtering is applied to the current slice is included in the slice header or the picture header.
If the current slice indicates that adaptive loop filtering is applied, the slice header or picture header additionally includes information indicating the horizontal and / or vertical direction filter length of the luminance component used in the adaptive loop filtering process.
The slice header or picture header may include information indicating the number of filter sets. At this time, if the number of filter sets is two or more, the filter coefficients can be encoded using the prediction method. Accordingly, the slice header or the picture header may include information indicating whether or not the filter coefficients are encoded in the prediction method, and may include predicted filter coefficients when the prediction method is used.
On the other hand, not only luminance but also chrominance components can be adaptively filtered. Accordingly, the slice header or the picture header may include information indicating whether or not each of the color difference components is filtered. In this case, in order to reduce the number of bits, information indicating whether or not to filter Cr and Cb can be joint-coded (i.e., multiplexed coding).
In the case of the chrominance components, the case where neither Cr nor Cb is filtered is likely to occur most frequently, since filtering is often skipped to reduce complexity. Therefore, when neither Cr nor Cb is filtered, the smallest index is allocated and entropy encoding is performed.
When both Cr and Cb are filtered, the largest index is allocated and entropy encoding is performed.
The
The
Based on the determined reference picture index and motion vector, a prediction block corresponding to a prediction unit to be coded is extracted from a reference picture used for motion estimation among a plurality of reference pictures stored in the
The
The
The
The
FIG. 3 is a block diagram of an embodiment of a configuration for performing inter-prediction in the encoding apparatus. The illustrated inter-prediction encoding apparatus includes a motion
Referring to FIG. 3, the motion
When the current block is unidirectionally inter-predictive-coded, the reference picture index indicates one of the reference pictures belonging to list 0 (L0). On the other hand, when the current block is bi-directionally predictive-coded, the motion information may include a reference picture index indicating one of the reference pictures of list 0 (L0) and a reference picture index indicating one of the reference pictures of list 1 (L1).
In addition, when the current block is bi-directionally predictive-coded, the motion information may include an index indicating one or two pictures among the reference pictures of the combined list (LC) generated by combining list 0 and list 1.
The motion vector indicates the position of the prediction block in the picture indicated by each reference picture index. The motion vector may be a pixel unit (integer unit) or a sub-pixel unit.
For example, it may have a resolution of 1/2, 1/4, 1/8 or 1/16 pixels. When the motion vector is not an integer unit, the prediction block is generated from the pixels of the integer unit.
The motion information encoding
The skip mode is applied when there is a skip candidate having the same motion information as the current block motion information, and the residual signal is zero. The skip mode is also applied when the current block is the same size as the coding unit. The current block can be viewed as a prediction unit.
The merge mode is applied when there is a merge candidate having the same motion information as the current block motion information. The merge mode is applied when a residual signal exists, whether the current block has the same size as the coding unit or a different size. The merge candidate and the skip candidate can be the same.
AMVP mode is applied when skip mode and merge mode are not applied. The AMVP candidate having the motion vector most similar to the motion vector of the current block is selected as the AMVP predictor.
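The mode selection described in the last three paragraphs can be summarised in code; the flag arguments are hypothetical helpers standing for the actual candidate comparisons.

```c
enum mi_coding_mode { MODE_SKIP, MODE_MERGE, MODE_AMVP };

/* Chooses the motion information coding mode following the rules above.
 * has_skip_cand / has_merge_cand: a candidate with the same motion
 * information as the current block exists (hypothetical flags).          */
enum mi_coding_mode select_mi_mode(int has_skip_cand, int residual_is_zero,
                                   int has_merge_cand)
{
    if (has_skip_cand && residual_is_zero)
        return MODE_SKIP;    /* motion identical to candidate, no residual */
    if (has_merge_cand)
        return MODE_MERGE;   /* motion identical to candidate, residual present */
    return MODE_AMVP;        /* otherwise signal predictor index and MVD   */
}
```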
The motion
The prediction
However, when the motion vector is not an integer unit, the pixels of the prediction block are generated from the pixels in the integer unit in the picture indicated by the reference picture index.
In this case, in the case of a luminance pixel, a prediction pixel can be generated using an 8-tap interpolation filter. In the case of a chrominance pixel, a 4-tap interpolation filter can be used to generate a predictive pixel.
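As an illustration of the sub-pixel interpolation, the following sketch applies an 8-tap filter horizontally for a luma half-sample position; the coefficients shown are the well-known HEVC half-sample luma taps, and the rounding and clipping are simplified to 8-bit output.

```c
/* 8-tap horizontal interpolation for a luma half-sample position.
 * src points at the integer-position pixel; the caller must guarantee
 * that src[-3] .. src[4] are valid.                                    */
int interp_luma_half(const unsigned char *src)
{
    static const int taps[8] = { -1, 4, -11, 40, 40, -11, 4, -1 };
    int sum = 0;
    for (int i = 0; i < 8; i++)
        sum += taps[i] * src[i - 3];    /* window centred between src[0], src[1] */
    sum = (sum + 32) >> 6;              /* filter gain is 64                      */
    if (sum < 0)   sum = 0;             /* clip to the 8-bit sample range         */
    if (sum > 255) sum = 255;
    return sum;
}
```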
The residual
However, if the current block size used for prediction is 2NxN or Nx2N, a prediction block is obtained for each of the two 2NxN blocks constituting the 2Nx2N block, and the final 2Nx2N prediction block can be generated using the two 2NxN prediction blocks.
The 2Nx2N residual block may be generated using the 2Nx2N prediction block. The pixels of the boundary portion can be overlap-smoothed to resolve the discontinuity at the boundary between the two 2NxN prediction blocks.
The residual
The residual
The residual
The quantization parameter is determined for each coding unit equal to or larger than a predetermined size. The predetermined size may be 8x8 or 16x16. Therefore, when the current coding unit is smaller than the predetermined size, only the quantization parameter of the first coding unit in coding order among the plurality of coding units within the predetermined size is encoded, and the quantization parameters of the remaining coding units, which are the same as that parameter, need not be encoded.
The coefficients of the transform block are quantized using a quantization matrix determined according to the determined quantization parameter and the prediction mode.
The quantization parameter determined for each coding unit equal to or larger than the predetermined size is predictively encoded using a quantization parameter of a coding unit adjacent to the current coding unit. The left coding unit and the upper coding unit of the current coding unit may be searched in that order, and a quantization parameter predictor of the current coding unit can be generated using one or two valid quantization parameters.
For example, the first valid quantization parameter retrieved in the above order may be determined as the quantization parameter predictor. Alternatively, the coding unit immediately preceding in coding order may also be searched, and the first valid quantization parameter may be determined as the quantization parameter predictor.
The coefficients of the quantized transform block are scanned and converted into one-dimensional quantization coefficients. The scanning scheme can be set differently according to the entropy encoding mode. For example, in the case of CABAC encoding, the inter prediction encoded quantized coefficients can be scanned in a predetermined manner (zigzag or raster scan in the diagonal direction). On the other hand, when encoded by CAVLC, it can be scanned in a different manner from the above method.
For example, in the case of intra coding, the scanning method may be determined according to the intra prediction mode. The coefficient scanning method may also be determined depending on the size of the transform unit.
The scan pattern may vary according to the directional intra prediction mode. The scan order of the quantization coefficients is scanned in the reverse direction.
The
That is, in the case of skipping or merge, only the index indicating the predictor is included. However, in the case of AMVP, the reference picture index, the difference motion vector, and the AMVP index of the current block are included.
Hereinafter, an operation of the
First, the prediction mode information and the size of the prediction block are received by the
Next, the reference pixel is read from the
It is determined whether or not the reference pixel is generated by examining whether or not the unavailable reference pixel exists. The reference pixels are used to determine the intra prediction mode of the current block.
If the current block is located at the upper boundary of the current picture, pixels adjacent to the upper side of the current block are not defined. In addition, when the current block is located at the left boundary of the current picture, pixels adjacent to the left side of the current block are not defined.
It is determined that these pixels are not usable pixels. In addition, it is determined that the pixels are not usable even if the current block is located at the slice boundary and pixels adjacent to the upper or left side of the slice are not encoded and reconstructed.
As described above, if there are no pixels adjacent to the left or upper side of the current block, or if there are no pixels that have been previously coded and reconstructed, the intra prediction mode of the current block may be determined using only available pixels.
However, it is also possible to use the available reference pixels of the current block to generate reference pixels of unusable positions. For example, if the pixels of the upper block are not available, the upper pixels may be created using some or all of the left pixels, or vice versa.
That is, available reference pixels at positions closest to the predetermined direction from the reference pixels at unavailable positions can be copied and generated as reference pixels. When there is no usable reference pixel in a predetermined direction, the usable reference pixel at the closest position in the opposite direction can be copied and generated as a reference pixel.
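The padding rule just described (copy the nearest available reference pixel, searching first in the predetermined direction and then in the opposite direction) might look like the sketch below; ref[] and avail[] are assumed arrays over a one-dimensional ordering of the reference samples, not part of the original disclosure.

```c
/* ref[]   : reference samples in a 1-D scan order (left column + top row)
 * avail[] : 1 if the corresponding reference sample is available
 * n       : number of reference sample positions
 * Fills unavailable positions by copying the nearest available sample,
 * searching forward first and then backward (sketch, not normative).     */
void pad_reference_pixels(int *ref, const int *avail, int n)
{
    for (int i = 0; i < n; i++) {
        if (avail[i])
            continue;
        int j;
        for (j = i + 1; j < n && !avail[j]; j++)   /* predetermined direction */
            ;
        if (j < n) { ref[i] = ref[j]; continue; }
        for (j = i - 1; j >= 0 && !avail[j]; j--)  /* opposite direction      */
            ;
        if (j >= 0)
            ref[i] = ref[j];
    }
}
```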
On the other hand, even if the upper or left pixels of the current block exist, the reference pixel may be determined as an unavailable reference pixel according to the encoding mode of the block to which the pixels belong.
For example, if the block to which the reference pixel adjacent to the upper side of the current block belongs is inter-coded and the reconstructed block, the pixels can be determined as unavailable pixels.
In this case, it is possible to generate usable reference pixels by using pixels belonging to the restored block by intra-coded blocks adjacent to the current block. In this case, information indicating that the encoder determines available reference pixels according to the encoding mode must be transmitted to the decoder.
Next, an intra prediction mode of the current block is determined using the reference pixels. The number of intra prediction modes that can be allowed in the current block may vary depending on the size of the block. For example, if the current block size is 8x8, 16x16, or 32x32, there may be 34 intra prediction modes. If the current block size is 4x4, 17 intra prediction modes may exist.
The 34 or 17 intra prediction modes may include at least one non-directional mode and a plurality of directional modes.
The one or more non-directional modes may be a DC mode and / or a planar mode. When the DC mode and the planar mode are included in the non-directional mode, there may be 35 intra-prediction modes regardless of the size of the current block.
At this time, it may include two non-directional modes (DC mode and planar mode) and 33 directional modes.
The planar mode generates a prediction block of the current block using at least one pixel value located at the bottom-right of the current block (or a predicted value of that pixel value, hereinafter referred to as a first reference value) and the reference pixels.
As described above, the configuration of the moving picture decoding apparatus according to an embodiment of the present invention can be derived from the configuration of the moving picture coding apparatus described with reference to FIGS. 1 to 3. For example, the image can be decoded by performing the inverse of the encoding process described above.
4 is a block diagram illustrating a configuration of a moving picture decoding apparatus according to an embodiment of the present invention.
4, the moving picture decoding apparatus according to the present invention includes an
The
The
The inverse quantization /
The intraprediction mode is received from an intraprediction unit or an entropy decoding unit.
The inverse quantization /
Then, the reconstructed quantized coefficient is inversely transformed to reconstruct the residual block.
The
The
The
The
The motion
The intra /
FIG. 5 is a block diagram of an embodiment for performing inter prediction in a decoding apparatus. The inter prediction decoding apparatus includes a
Referring to FIG. 5, the
The motion information encoding
The motion information encoding
When the skip_flag of the received bitstream has a value of 0 and the motion information received from the
The merge mode motion
The AMVP mode motion
The
If the motion vector is an integer unit, the block corresponding to the position indicated by the motion vector in the picture indicated by the reference picture index is copied to generate a prediction block of the current block.
However, when the motion vector is not an integer unit, the pixels of the prediction block are generated from the integer unit pixels in the picture indicated by the reference picture index. In this case, in the case of a luminance pixel, a prediction pixel can be generated using an 8-tap interpolation filter. In the case of a chrominance pixel, a 4-tap interpolation filter can be used to generate a predictive pixel.
The residual
That is, the inverse scanning method for the inter-prediction residual signal may differ between CABAC-based decoding and CAVLC-based decoding. For example, a diagonal raster inverse scan may be applied in the case of CABAC-based decoding, and a zigzag inverse scan in the case of CAVLC-based decoding.
In addition, the inverse scanning method may be determined depending on the size of the prediction block.
The residual
The predetermined size may be 8x8 or 16x16. Accordingly, when the current coding unit is smaller than the predetermined size, only the quantization parameter of the first coding unit in coding order among the plurality of coding units within the predetermined size is restored, and the quantization parameters of the remaining coding units, which are the same as that parameter, need not be restored.
The quantization parameter of a coding unit adjacent to the current coding unit is used to recover the quantization parameter determined for each coding unit equal to or larger than the predetermined size. The left coding unit and the upper coding unit of the current coding unit may be searched in that order, and a valid first quantization parameter may be determined as the quantization parameter predictor of the current coding unit.
In addition, the coding unit immediately preceding in coding order may be searched, and the first valid quantization parameter may be determined as the quantization parameter predictor. The quantization parameter of the current prediction unit is then restored using the determined quantization parameter predictor and the difference quantization parameter.
The residual
The reconstruction
Hereinafter, a process of restoring a current block through intraprediction will be described with reference to FIG.
First, the intra prediction mode of the current block is decoded from the received bitstream. For this, the
The plurality of intra prediction mode tables are tables shared by the encoder and the decoder, and may be any one selected according to the distribution of intra prediction modes of a plurality of blocks adjacent to the current block.
For example, if the intra prediction mode of the left block of the current block and the intra prediction mode of the upper block of the current block are the same, the first intra prediction mode index of the current block may be restored by applying the first intra prediction mode table; otherwise, the first intra prediction mode index of the current block may be restored by applying the second intra prediction mode table.
As another example, when the intra prediction modes of the upper block and the left block of the current block are both directional intra prediction modes, the first intra prediction mode index of the current block may be restored by applying the first intra prediction mode table if the direction of the intra prediction mode of the upper block and the direction of the intra prediction mode of the left block are within a predetermined angle of each other, and by applying the second intra prediction mode table if they are outside the predetermined angle.
The
The
However, if the index has a value other than 0, the first intra prediction mode index is compared with the index indicating the maximum possible mode of the current block. If the first intra prediction mode index is not smaller than the index indicated by the maximum possible mode of the current block, the intra prediction mode of the current block is determined as the intra prediction mode corresponding to a second intra prediction mode index obtained by adding 1 to the first intra prediction mode index; otherwise, the intra prediction mode of the current block is determined as the intra prediction mode corresponding to the first intra prediction mode index.
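The comparison just described reduces to a small helper; mpm_flag, parsed_index and mpm_mode are names assumed by this sketch, standing for the decoded syntax elements.

```c
/* Restores the intra prediction mode of the current block.
 * mpm_flag     : 1 if the index indicated "use the maximum possible mode"
 * parsed_index : the first intra prediction mode index from the bitstream
 * mpm_mode     : index indicated by the maximum possible mode             */
int restore_intra_mode(int mpm_flag, int parsed_index, int mpm_mode)
{
    if (mpm_flag)
        return mpm_mode;
    /* Skip over the MPM position: add 1 when the parsed index is not
     * smaller than the MPM index (the second intra prediction mode index). */
    return (parsed_index >= mpm_mode) ? parsed_index + 1 : parsed_index;
}
```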
The intra prediction mode acceptable for the current block may be composed of at least one non-directional mode and a plurality of directional modes.
The one or more non-directional modes may be a DC mode and / or a planar mode. In addition, either the DC mode or the planar mode may be adaptively included in the allowable intra prediction mode set.
To this end, information specifying the non-directional mode included in the allowable intra prediction mode set may be included in the picture header or slice header.
Next, in order to generate an intra prediction block, the
The determination may be made according to the presence or absence of the reference pixels used to generate the intra prediction block by applying the decoded intra prediction mode of the current block.
Next, when it is necessary to generate a reference pixel, the
The definition of a reference pixel that is not available and the method of generating a reference pixel are the same as those in the
Next, the
Since the problem of blocking artifacts increases as the size of the block increases, the larger the size of the block, the larger the number of prediction modes for filtering reference pixels. However, when the block is larger than a predetermined size, it can be regarded as a flat area, so that reference pixels may not be filtered to reduce the complexity.
If it is determined that the filter needs to be applied to the reference pixel, the reference pixels are filtered using a filter.
At least two or more filters may be adaptively applied according to the difference in level difference between the reference pixels. The filter coefficient of the filter is preferably symmetrical.
In addition, the above two or more filters may be adaptively applied according to the size of the current block. That is, when a filter is applied, a filter having a narrow bandwidth may be applied to a block having a small size, and a filter having a wide bandwidth may be applied to a block having a large size.
In the case of the DC mode, since a prediction block is generated with an average value of reference pixels, there is no need to apply a filter. That is, when the filter is applied, only unnecessary calculation amount is increased.
In addition, it is not necessary to apply the filter to the reference pixel in the vertical mode in which the image has vertical correlation. It is not necessary to apply the filter to the reference pixel even in the horizontal mode in which the image is related to the horizontal direction.
Since the filtering is applied to the intra-prediction mode of the current block, the reference pixel can be adaptively filtered based on the intra-prediction mode of the current block and the size of the prediction block.
Next, according to the reconstructed intra prediction mode, a prediction block is generated using the reference pixel or the filtered reference pixels. Since the generation of the prediction block is the same as the operation in the encoder, it is omitted. Even in the planar mode, the operation is the same as that in the encoder, so it is omitted.
Next, it is determined whether to filter the generated prediction block. The determination as to whether to perform the filtering may use information included in the slice header or the encoding unit header. It may also be determined according to the intra prediction mode of the current block.
If it is determined that the generated prediction block is to be filtered, the generated prediction block is filtered. Specifically, a new pixel is generated by filtering pixels at a specific position of a prediction block generated using available reference pixels adjacent to the current block.
This may be applied together at the time of generating the prediction block. For example, in the DC mode, a prediction pixel in contact with reference pixels among prediction pixels is filtered using a reference pixel in contact with the prediction pixel.
Therefore, the predictive pixel is filtered using one or two reference pixels according to the position of the predictive pixel. The filtering of the prediction pixel in the DC mode can be applied to the prediction block of all sizes. In the vertical mode, the prediction pixels adjacent to the left reference pixel among the prediction pixels of the prediction block may be changed using reference pixels other than the upper pixel used to generate the prediction block.
Likewise, in the horizontal mode, the prediction pixels adjacent to the upper reference pixel among the generated prediction pixels may be changed using reference pixels other than the left pixel used to generate the prediction block.
The current block is reconstructed using the predicted block of the current block restored in this manner and the residual block of the decoded current block.
The moving picture bitstream according to an embodiment of the present invention may include PS (parameter sets) and slice data as a unit used to store coded data in one picture.
A PS (parameter set) is divided into a picture parameter set (hereinafter, simply referred to as PPS) and a sequence parameter set (hereinafter simply referred to as SPS) which are data corresponding to the heads of each picture. The PPS and the SPS may include initialization information required to initialize each encoding.
The SPS is common reference information for decoding all pictures coded in a random access unit (RAU), and includes a profile, a maximum number of pictures usable for reference, a picture size, and the like, as shown in Figs. 6 and 7 .
The PPS includes, as reference information for decoding each picture coded in the random access unit (RAU), the kind of variable length coding method, the initial value of the quantization step, a plurality of reference pictures, and the like, and can be configured as shown in FIGS. 8 and 9.
On the other hand, the slice header SH includes information on the corresponding slice when coding in units of slices, and can be configured as shown in FIGS. 10 to 12.
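A compact sketch of the three parameter levels discussed above is given below; the field lists are reduced to the items named in this description and do not reproduce the full syntax of FIGS. 6 to 12, so the structure and field names are illustrative only.

```c
/* Reduced parameter-set structures (only fields mentioned in this text). */
struct sps_info {                 /* sequence parameter set                 */
    int profile;
    int max_num_ref_pics;         /* maximum pictures usable for reference  */
    int pic_width, pic_height;    /* picture size                           */
};

struct pps_info {                 /* picture parameter set                  */
    int entropy_coding_mode;      /* kind of variable length coding method  */
    int init_qp;                  /* initial value of the quantization step */
    int num_ref_idx_active;       /* reference picture configuration        */
};

struct slice_header_info {        /* slice header (SH)                      */
    int slice_type;
    int first_ctu_addr;           /* where the slice starts                 */
    int slice_qp_delta;
};
```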
Hereinafter, a configuration for scalably processing the above-described moving image encoding and decoding processing using a plurality of processing units will be described in detail.
An apparatus for processing moving images according to an exemplary embodiment of the present invention includes an image central processing unit for parsing parameter information or slice header information from moving image data input from a host, and a plurality of image processing units for processing the moving image according to the parsed information under the control of the image central processing unit. The plurality of image processing units store information necessary for boundary processing between the image processing units in a memory while processing their respectively allocated areas, and the information stored in the memory is used together with the information parsed in the image central processing unit to calculate the filtering parameters.
The memory may store pixel values, motion vector information, quantization parameters, coefficient presence information, and SAO parameters of a boundary portion.
In addition, the information parsed in the image central processing unit includes picture level parameter information, boundary level parameter information, CTU level parameter information, and 8 * 8 level parameter information .
The filtering parameter may include at least one of a Deblock parameter and a SAO parameter.
Each of the plurality of image processing units may include a first processing unit that communicates with the image central processing unit to perform entropy coding on the moving image data, and a second processing unit that processes the entropy-coded moving image data in units of coding.
A method of processing a moving image in a moving image processing apparatus having an image central processing unit and a plurality of image processing units according to an embodiment of the present invention includes parsing, by the image central processing unit, parameter information or slice header information from moving image data input from a host; processing, by the plurality of image processing units, the moving image according to the parsed information under the control of the image central processing unit; and storing information necessary for boundary processing between the image processing units in a memory while the plurality of image processing units process their respectively allocated areas, wherein the information stored in the memory is used to calculate a filtering parameter together with the information parsed in the image central processing unit.
The storing step may store pixel values, motion vector information, quantization parameters, coefficient presence information, and SAO parameters of a boundary part in the memory.
In addition, the information parsed in the image central processing unit includes picture level parameter information, boundary level parameter information, CTU level parameter information, and 8 * 8 level parameter information .
The filtering parameter may include at least one of a Deblock parameter and a SAO parameter.
In addition, the plurality of image processing units may each include a first processing unit and a second processing unit; the first processing unit may communicate with the image central processing unit to perform entropy coding on the moving image data, and the second processing unit may process the entropy-coded moving picture data in units of coding.
Here, the video processing unit may refer to a
Here, the moving picture processing apparatus may include both a moving picture coding apparatus and a moving picture decoding apparatus. The moving picture decoding apparatus and the moving picture encoding apparatus may be implemented as apparatuses performing mutually inverse processes, as described above with reference to FIGS. 1 to 4. Hereinafter, a moving picture decoding apparatus will be described as an example of the moving picture processing apparatus. However, the present invention is not limited to this, and the moving picture processing apparatus may be embodied as a moving picture coding apparatus which performs the inverse process of the moving picture decoding apparatus to be described later.
13 is a diagram illustrating a layer structure of a moving picture decoding apparatus according to an embodiment of the present invention. Referring to FIG. 13, the moving picture decoding apparatus may include a video processing unit (VPU) 300 that performs a moving picture decoding function. The
Here, the
The V-
For example, the V-
Also, the V-
Also, the V-
The V-
The V-
Here, the V-
The
Where the
The
Here, the
FIG. 14 is a timing diagram illustrating a moving picture decoding operation of a VPU according to an embodiment of the present invention. Referring to FIG. 14, as described above, the V-
Hereinafter, the detailed operation of the V-
Specifically, the V-
Also, the V-
In addition, the V-
The 'Picture parameter data structure' may include the following information.
For example, the information contained in the sequence/picture header (e.g., picture size, scaling list, CTU size, min/max CU size, min/max TU size, etc.) may be included.
This Picture parameter data structure can be set once during decoding of one picture.
The 'Slice control data structure' may include the following information.
For example, the information included in the Slice header (eg, slice type, slice / tile area information, reference picture list, weighted prediction parameter, etc.) may be included.
This slice control data structure can be set when the slice changes. The inter-processor communication registers of the V-
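Under the assumptions stated in the preceding paragraphs, the two inter-processor data structures might be sketched as plain C structs; the field names, types and array sizes are illustrative, not the actual register layout of the apparatus.

```c
/* 'Picture parameter data structure': set once per decoded picture. */
struct picture_param_ds {
    int pic_width, pic_height;    /* picture size                      */
    int scaling_list_enabled;     /* scaling list information          */
    int ctu_size;                 /* CTU size                          */
    int min_cu_size, max_cu_size; /* min/max CU size                   */
    int min_tu_size, max_tu_size; /* min/max TU size                   */
};

/* 'Slice control data structure': set whenever the slice changes. */
struct slice_control_ds {
    int slice_type;
    int slice_region[4];          /* slice/tile area information       */
    int ref_pic_list[2][16];      /* reference picture list            */
    int weighted_pred_param[8];   /* weighted prediction parameters    */
};
```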
Here, the information transferred from the V-
Meanwhile, even if the number of slice control data structures that can be stored in the V-
Meanwhile, when a plurality of tiles are included in one slice and are processed in parallel by the multi V-
In addition, the V-
Also, the V-
In addition, the V-
In addition, the V-
Also, the V-
Also, the V-
Hereinafter, the detailed operation of the
The
In addition, the
The
CUU / CU / PU / TU parameters and coefficients required for decode processing excluding the information (picture size, segment offset / size, ...) common to each block and source / destination address in DMAC and reference pixel data The
In addition, the
When the
Also, the
In addition, it can report to the V-
The
Here, the
According to the various embodiments of the present invention described above, header parsing and the data processing process can be separated, the separated data processing processes can be pipelined, and a structure in which the V-CPU controls this pipeline can be provided.
Hereinafter, the boundary process performed in the Multi V-
Referring to FIG. 16, the V-
Here, in the column buffer and line buffer, information such as the pixel values corresponding to the boundary portion and the motion vector, qp, coeff, and SAO parameter for obtaining Deblock and SAO parameters can be stored.
Here, as shown in FIG. 16, the boundary part may be a boundary part with an adjacent area generated by decoding the area allocated to each of the Multi V-
The area that the single core processes may include multiple tiles, in which case the boundaries between the tiles may be processed while sequentially decoding the tiles. For example, if
However, when the
Thus, according to an embodiment of the present invention, information for boundary portion processing can be stored for the post-processing (boundary processing) of the boundary portion.
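Based only on what the paragraphs above state is kept in the column and line buffers, the per-boundary-position record might look like the following; grouping the items into one struct, the field types and the array sizes are assumptions of this sketch.

```c
/* Information stored per boundary position (e.g. per 4-pixel segment)
 * so that Deblock and SAO parameters can be computed later, together
 * with the picture/slice/CTU/8x8 level parameters parsed by the V-CPU. */
struct boundary_info {
    unsigned char pixels[4];      /* reconstructed pixel values at the boundary */
    short         mv[2];          /* motion vector of the adjacent block        */
    unsigned char qp;             /* quantization parameter                     */
    unsigned char has_coeff;      /* non-zero coefficient presence flag         */
    unsigned char sao_type;       /* SAO parameters of the adjacent region      */
    signed char   sao_offset[4];
};

/* Column buffer: vertical boundaries between areas assigned to different
 * V-Cores; line buffer: horizontal boundaries (assumption of this sketch). */
struct boundary_info *column_buffer;
struct boundary_info *line_buffer;
```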
Hereinafter, a method of allocating the Multi V-
Referring to FIGS. 16 through 17, the boundary processing method according to an embodiment of the present invention can perform filtering using parameters required for the Deblock and SAO calculated by the Multi V-
However, in this case, a method of allocating boundary portions to be bounded to each of the Multi V-
For example, when the number of V-cores is two or four, as shown in FIG. 18, the V-
Alternatively, when the number of V-cores is not two or four, the V-
That is, the allocation for the boundary portion can be allocated to the V-
In this case, each of the Multi V-
On the other hand, when the boundary portions allocated to the V-
Further, the processing for the next frame can not be performed before the boundary processing is completed. This is because the next frame can refer to the previous frame.
As described above, when the boundary portion to be boundary processed by the V-
22 to 23 are diagrams showing input parameters for calculating Deblock and SAO parameters. Here, the input parameter may be a parameter according to the VPS, SPS, PPS, SH parsing of the V-
Meanwhile, the V-
24 to 25 are diagrams for explaining examples of a method for calculating Deblock and SAO parameters necessary for filtering. Referring to FIGS. 24 to 25, the V-
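Putting the two information sources together: the picture/slice/CTU/8×8 level parameters parsed by the V-CPU and the boundary records stored by each V-Core (the boundary_info sketch above) are both needed to derive the Deblock and SAO parameters for the 8×8 blocks lying on a core boundary. The following is a hypothetical outline only; parsed_params, the derive_* functions and the apply_* helpers are illustrative names standing for the calculations of FIGS. 22 to 25, not functions of any real decoder.

```c
/* Illustrative field sets; the real inputs follow FIGS. 22 and 23. */
struct parsed_params  { int qp, beta_offset, tc_offset, sao_enabled; };
struct deblock_params { int bs, beta, tc; };
struct sao_params     { int type, offset[4]; };

void derive_deblock_params(const struct parsed_params *p,
                           const struct boundary_info *b,
                           struct deblock_params *out);
void derive_sao_params(const struct parsed_params *p,
                       const struct boundary_info *b,
                       struct sao_params *out);
void apply_deblock_8x8(const struct boundary_info *b,
                       const struct deblock_params *dp);
void apply_sao_8x8(const struct boundary_info *b,
                   const struct sao_params *sp);

/* Filters the boundary between two V-Core areas, 8x8 block by 8x8 block. */
void filter_core_boundary(const struct parsed_params *parsed,
                          const struct boundary_info *stored,
                          int num_blocks_8x8)
{
    for (int i = 0; i < num_blocks_8x8; i++) {
        struct deblock_params dp;
        struct sao_params     sp;

        /* Neither the parsed headers nor the stored boundary data alone
         * is sufficient; both are combined to obtain the filter parameters. */
        derive_deblock_params(parsed, &stored[i], &dp);
        derive_sao_params(parsed, &stored[i], &sp);

        apply_deblock_8x8(&stored[i], &dp);
        apply_sao_8x8(&stored[i], &sp);
    }
}
```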
The method according to the present invention may be implemented as a program for execution on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like, and the method may also be implemented in the form of a carrier wave (for example, transmission over the Internet).
The computer readable recording medium may be distributed over a networked computer system so that computer readable code can be stored and executed in a distributed manner. And, functional programs, codes and code segments for implementing the above method can be easily inferred by programmers of the technical field to which the present invention belongs.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, and that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.
Claims (10)
An image central processing unit for parsing parameter information or slice header information from moving picture data input from the host; And
And a plurality of image processing units under the control of the image central processing unit and processing moving images in accordance with the parsed information,
Wherein the plurality of image processing units store information necessary for boundary processing between the image processing units in a memory while processing the respective allocated areas,
Wherein the information stored in the memory is used to calculate a filtering parameter for the boundary processing together with information parsed in the image central processing unit.
Wherein the memory stores pixel values of a boundary portion, motion vector information, quantization parameters, coefficient presence information, and SAO parameters.
The information parsed in the image central processing unit is,
A picture level parameter information, a boundary level parameter information, an encoding unit level (CTU level) parameter information, and an 8 * 8 level parameter information.
Wherein the filtering parameter includes at least one of a Deblock parameter and a SAO parameter.
Wherein each of the plurality of image processing units comprises:
A first processing unit communicating with the image central processing unit to perform entropy coding on the moving image data; And
And a second processing unit for processing the entropy-coded moving picture data in units of coding.
Parsing parameter information or slice header information from moving picture data input from the host;
Processing the moving picture according to the parsed information; And
Further comprising the step of storing information necessary for boundary processing between the image processing units in the memory while the areas allocated by the plurality of image processing units are being processed,
Wherein the information stored in the memory is used to calculate a filtering parameter together with information parsed in the image central processing unit.
Wherein the storing step comprises:
Wherein motion vector information, quantization parameters, coefficient presence information, and SAO parameters are stored in the memory.
The information parsed in the image central processing unit is,
A picture level parameter information, a boundary level parameter information, an encoding unit level (CTU level) parameter information, and an 8 * 8 level parameter information.
Wherein the filtering parameter includes at least one of a Deblock parameter and a SAO parameter.
Each of the plurality of image processing units includes a first processing unit and a second processing unit,
The first processing unit communicating with the image central processing unit to perform entropy coding on the moving picture data; And
And the second processing unit processes the entropy-coded moving picture data in a coding unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130048168A KR20140129632A (en) | 2013-04-30 | 2013-04-30 | Method and apparatus for processing moving image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130048168A KR20140129632A (en) | 2013-04-30 | 2013-04-30 | Method and apparatus for processing moving image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140129632A true KR20140129632A (en) | 2014-11-07 |
Family
ID=52454855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130048168A KR20140129632A (en) | 2013-04-30 | 2013-04-30 | Method and apparatus for processing moving image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20140129632A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113473120A (en) * | 2015-06-11 | 2021-10-01 | 英迪股份有限公司 | Method for encoding and decoding image using adaptive deblocking filtering and apparatus therefor |
CN115104318A (en) * | 2019-12-23 | 2022-09-23 | Lg电子株式会社 | Sprite-based image encoding apparatus and method |
-
2013
- 2013-04-30 KR KR1020130048168A patent/KR20140129632A/en not_active Application Discontinuation
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113473120A (en) * | 2015-06-11 | 2021-10-01 | 英迪股份有限公司 | Method for encoding and decoding image using adaptive deblocking filtering and apparatus therefor |
US11849152B2 (en) | 2015-06-11 | 2023-12-19 | Dolby Laboratories Licensing Corporation | Method for encoding and decoding image using adaptive deblocking filtering, and apparatus therefor |
CN115104318A (en) * | 2019-12-23 | 2022-09-23 | Lg电子株式会社 | Sprite-based image encoding apparatus and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101847899B1 (en) | Method and apparatus for processing video | |
KR101895295B1 (en) | Method and apparatus for processing video | |
KR20140129607A (en) | Method and apparatus for processing moving image | |
KR101569912B1 (en) | Method and apparatus for encoding/decoding video | |
KR102354628B1 (en) | A method of video processing for processing coding tree units and coding units, a method and appratus for decoding and encoding video using the processing. | |
KR101586125B1 (en) | Method and apparatus for encoding/decoding video | |
KR102657392B1 (en) | A method of video processing providing independent properties between coding tree units and coding units, a method and appratus for decoding and encoding video using the processing | |
KR20170132038A (en) | Method of Adaptive Loof Filtering based on block for processing video, video encoding and decoding thereof | |
KR101659343B1 (en) | Method and apparatus for processing moving image | |
KR20140129632A (en) | Method and apparatus for processing moving image | |
KR101914667B1 (en) | Method and apparatus for processing moving image | |
KR20170132036A (en) | Method for coding unit partitioning, video encoding and decoding thereof | |
KR101609427B1 (en) | Method and apparatus for encoding/decoding video | |
KR20140130274A (en) | Method and apparatus for processing moving image | |
KR20140130269A (en) | Method and apparatus for processing moving image | |
KR102610188B1 (en) | Method of video processing providing high-throughput arithmetic coding and method and appratus for decoding and encoding video using the processing | |
KR20140129629A (en) | Method and apparatus for processing moving image | |
KR20140130572A (en) | Method and apparatus for processing moving image | |
KR20140130266A (en) | Method and apparatus for processing moving image | |
KR20140130268A (en) | Method and apparatus for processing moving image | |
KR20140130573A (en) | Method and apparatus for processing moving image | |
KR20140130571A (en) | Method and apparatus for processing moving image | |
KR20140130574A (en) | Method and apparatus for processing moving image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |