KR20140124443A - Method for encoding and decoding video using intra prediction, and apparatus thereof - Google Patents


Publication number
KR20140124443A
Authority
KR
South Korea
Prior art keywords
mode
block
unit
prediction
decoding
Prior art date
Application number
KR1020130041297A
Other languages
Korean (ko)
Inventor
문주희
임성원
한종기
Original Assignee
인텔렉추얼디스커버리 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인텔렉추얼디스커버리 주식회사 filed Critical 인텔렉추얼디스커버리 주식회사
Priority to KR1020130041297A priority Critical patent/KR20140124443A/en
Publication of KR20140124443A publication Critical patent/KR20140124443A/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to an aspect of the present invention, there is provided a method of decoding a video signal, the method comprising: receiving a bitstream including the video signal; determining a prediction mode for a block of the video signal to be decoded; determining, if the determined prediction mode is an intra mode, a plurality of most probable mode (MPM) candidates according to a plurality of reference pixel lines corresponding to the block; reconstructing mode information from the bitstream; and predicting and decoding the block based on the mode information and the plurality of MPM candidates.


Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a video encoding/decoding method and apparatus using intra prediction.

The present invention relates to a video codec, and more particularly, to a method and apparatus for video encoding and decoding in which information is inherited from neighboring blocks of a coding block during encoding and decoding.

In general video coding and decoding, when the current coding block is an inter coded block, a motion vector predictor (MVP) is derived from the motion vector (MV) and reference picture index of a neighboring block, or a merge/skip mode is considered. When the current coding block is an intra coded block, the most probable mode (MPM) for intra coding the current block is derived from the intra prediction modes of the neighboring blocks. However, this conventional method limits the types of information that can be inherited from neighboring blocks.
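The MPM derivation mentioned above can be sketched as follows. This mirrors the well-known HEVC-style rule that builds three candidates from the left and above neighbours (an illustrative assumption, not necessarily the exact rule of this patent; mode 0 = planar, 1 = DC, 2 to 34 = angular):

```python
def mpm_candidates(left_mode, above_mode):
    """Derive three most probable intra modes from the left and above
    neighbours (HEVC-style sketch; 0 = planar, 1 = DC, 2..34 = angular)."""
    if left_mode == above_mode:
        if left_mode < 2:  # both neighbours non-directional
            return [0, 1, 26]  # planar, DC, vertical
        # one angular mode plus its two nearest angular neighbours
        return [left_mode,
                2 + ((left_mode + 29) % 32),
                2 + ((left_mode - 2 + 1) % 32)]
    mpm = [left_mode, above_mode]
    for extra in (0, 1, 26):  # planar, DC, vertical, in that order
        if extra not in mpm:
            mpm.append(extra)
            break
    return mpm
```

For example, two identical angular neighbours yield the shared mode plus its two angular neighbours, while two non-directional neighbours fall back to planar, DC, and vertical.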

Embodiments of the present invention provide a method for inheriting more information from neighboring blocks than is currently possible in a video codec.

It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may be present.

According to a first aspect of the present invention, there is provided a video encoding method that increases the number and types of information that can be inherited from neighboring blocks: information is extracted from a plurality of previously encoded and decoded pictures and used for encoding.

According to another aspect of the present invention, there is provided a method of decoding a video signal, the method comprising: receiving a bitstream including the video signal; determining a prediction mode for a block of the video signal to be decoded; determining, if the determined prediction mode is an intra mode, a plurality of most probable mode (MPM) candidates according to a plurality of reference pixel lines corresponding to the block; reconstructing mode information from the bitstream; and predicting and decoding the block based on the mode information and the plurality of MPM candidates.

According to the present invention, more diverse information can be inherited from neighboring blocks, improving coding efficiency.

FIG. 1 is a block diagram showing an example of a configuration of a video encoding apparatus.
FIG. 2 is a block diagram showing an example of a structure of a video decoding apparatus.
FIG. 3 is a diagram showing an example of intra prediction modes.
FIG. 4 is a diagram illustrating a method of setting two lines according to an intra prediction mode according to an embodiment of the present invention.
FIG. 5 is a flowchart of an intra prediction method using multiple reference pixel lines according to an exemplary embodiment of the present invention.
FIG. 6 is a flowchart of an MPM method using multiple reference pixel lines according to an exemplary embodiment of the present invention.
FIG. 7 shows a bitstream extracting structure for the proposed technique in the prediction block according to an embodiment of the present invention.
FIG. 8 shows a bitstream extracting structure in which the proposed technique and the conventional technique are mixed in the prediction block according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.

Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed therebetween.

Throughout this specification, when a member is "on" another member, this includes not only the case where the member is in contact with the other member but also the case where another member is present between the two members.

Throughout this specification, when an element is referred to as "including" a component, this means that it may further include other components, not that it excludes them, unless specifically stated otherwise. The terms "about", "substantially", and the like are used to indicate a value at or close to the stated figure, allowing for inherent manufacturing and material tolerances, and to prevent an unscrupulous party from unfairly exploiting a disclosure in which exact or absolute figures are stated. The term "step of (doing)" or "step" as used throughout the specification does not mean "step for".

Throughout this specification, the term "combination thereof" included in a Markush-type expression means one or more mixtures or combinations selected from the group consisting of the constituents recited in the Markush-type expression.

As an example of a method of encoding an actual image and its depth information map, encoding can be performed using High Efficiency Video Coding (HEVC), which was jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) and has the highest coding efficiency among the video coding standards developed so far.

FIG. 1 is a block diagram illustrating an example of a configuration of a video encoding apparatus. The encoding apparatus includes a coding mode determination unit 110, an intra prediction unit 120, a motion compensation unit 130, a motion estimation unit 131, a transform coding/quantization unit 140, an entropy encoding unit 150, an inverse quantization/transform decoding unit 160, a deblocking filtering unit 170, a picture storage unit 180, a subtraction unit 190, and an addition unit 200.

Referring to FIG. 1, the encoding mode determination unit 110 analyzes an input video signal, divides a picture into encoding blocks of a predetermined size, and determines a coding mode for each encoding block. The coding mode includes intra prediction encoding and inter prediction encoding.

A picture is composed of a plurality of slices, and a slice is composed of a plurality of largest coding units (LCUs). An LCU can be divided into a plurality of coding units (CUs), and the encoder can add information indicating whether or not to divide to the bitstream. The decoder can recognize the position of an LCU by using its address (LcuAddr). A coding unit (CU) that is not further divided is regarded as a prediction unit (PU), and the decoder can recognize the position of a PU using a PU index.
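The recursive LCU-to-CU division with per-node split flags can be sketched as follows. The `read_flag` callable is a hypothetical stand-in for reading the next split flag from the bitstream; sizes follow common HEVC values (64 down to 8) as an assumption:

```python
def parse_cu_tree(read_flag, x, y, size, min_size=8):
    """Recursively parse a CU quadtree from split flags.
    Returns the list of leaf CU positions and sizes."""
    if size > min_size and read_flag():
        half = size // 2
        leaves = []
        for dy in (0, half):       # visit the four quadrants
            for dx in (0, half):
                leaves += parse_cu_tree(read_flag, x + dx, y + dy,
                                        half, min_size)
        return leaves
    return [(x, y, size)]  # leaf CU: no further division
```

For instance, the flag sequence `1, 0, 0, 0, 0` on a 64x64 LCU yields four 32x32 leaf CUs.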

The prediction unit (PU) may be divided into a plurality of partitions. Also, the prediction unit (PU) may be composed of a plurality of transform units (TUs).

The encoding mode determination unit 110 sends the image data to the subtraction unit 190 in units of blocks of a predetermined size (for example, in units of PU or TU) according to the determined encoding mode.

The transform coding/quantization unit 140 transforms the residual block calculated by the subtraction unit 190 from the spatial domain to the frequency domain.

For example, a two-dimensional discrete cosine transform (DCT) or discrete sine transform (DST) based transform is performed on the residual block. In addition, the transform coding/quantization unit 140 determines a quantization step size for quantizing the transform coefficients and quantizes them using the determined step size. The quantization matrix can be determined according to the determined quantization step size and the encoding mode.
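The spatial-to-frequency transform described here can be illustrated with a naive 2-D DCT-II (a sketch for clarity only; real codecs use fast integer approximations of this transform):

```python
import math

def dct_2d(block):
    """Naive 2-D DCT-II of an NxN residual block (O(N^4), illustrative)."""
    n = len(block)

    def c(k):  # orthonormal scaling factor
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out
```

A flat residual block concentrates all energy into the DC coefficient `out[0][0]`, which is why the subsequent quantization and scanning steps are effective on prediction residuals.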

The quantized two-dimensional transform coefficients are transformed into one-dimensional quantized transform coefficients by one of the predetermined scanning methods.
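One example of such a predetermined scanning method is the classic zig-zag scan (an illustrative choice; HEVC itself selects among diagonal, horizontal, and vertical scans per block):

```python
def zigzag_scan(coeffs):
    """Reorder a square 2-D coefficient array into a 1-D sequence by
    zig-zag scanning along anti-diagonals."""
    n = len(coeffs)
    order = sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda p: (p[0] + p[1],               # anti-diagonal index
                       p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [coeffs[r][c] for r, c in order]
```

This places the low-frequency (top-left) coefficients first, so the long runs of zeros produced by quantization end up at the tail of the 1-D sequence, which helps entropy coding.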

The resulting one-dimensional sequence of quantized transform coefficients is supplied to the entropy encoding unit 150.

The inverse quantization/transform decoding unit 160 dequantizes the coefficients quantized by the transform coding/quantization unit 140 and inversely transforms the resulting dequantized coefficients. Accordingly, the residual block transformed into the frequency domain can be restored into a residual block in the spatial domain.

The deblocking filtering unit 170 receives the dequantized and inversely transformed image data from the inverse quantization/transform decoding unit 160 and performs filtering to remove blocking artifacts.

The picture storage unit 180 receives the filtered image data from the deblocking filtering unit 170 and reconstructs the image in picture units. A picture may be a frame-based image or a field-based image.

The picture storage unit 180 has a buffer (not shown) capable of storing a plurality of pictures. A plurality of pictures stored in the buffer are provided for intra prediction and motion estimation. The pictures provided for intra prediction or motion estimation are referred to as reference pictures.

The motion estimation unit 131 receives at least one reference picture stored in the picture storage unit 180 and performs motion estimation to output motion data including a motion vector, an index indicating a reference picture, and a block mode.

In order to optimize prediction precision, the motion vector is determined with fractional pixel precision, for example 1/2 or 1/4 pixel accuracy. Since the motion vector can have fractional pixel precision, the motion compensation unit 130 applies an interpolation filter to the reference picture so that pixel values at fractional pixel positions can be calculated.
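The idea of generating half-pel samples by interpolation can be sketched with a simple 2-tap bilinear filter (purely illustrative; HEVC actually uses longer filters, e.g. 8-tap for luma):

```python
def half_pel_interpolate(row):
    """Insert half-pel samples between the integer pixels of one row
    using a rounded 2-tap average (bilinear) filter."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)                  # integer-position sample
        out.append((a + b + 1) // 2)   # half-pel sample between a and b
    out.append(row[-1])                # last integer-position sample
    return out
```

A motion vector with a half-pel component then simply indexes into this upsampled row instead of the original integer grid.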

According to the motion data input from the motion estimation unit 131, the motion compensation unit 130 extracts a prediction block corresponding to the block to be coded from the reference picture used for motion estimation, among the plurality of reference pictures stored in the picture storage unit 180, and outputs the extracted prediction block.

The motion compensation unit 130 determines the filter characteristics of the adaptive interpolation filter necessary for motion compensation with fractional precision. The filter characteristics are, for example, information indicating the filter type of the adaptive interpolation filter and information indicating its size. The size of the filter is, for example, the number of taps, i.e., the number of filter coefficients of the adaptive interpolation filter.

Specifically, the motion compensation unit 130 may determine either a separable or a non-separable adaptive filter as the adaptive interpolation filter. The number of taps of the determined adaptive interpolation filter and the value of each filter coefficient are then determined. The value of a filter coefficient can be determined differently for each fractional pixel position relative to the integer pixel. The motion compensation unit 130 may also use a plurality of non-adaptive interpolation filters with fixed filter coefficients.

The motion compensation unit 130 can set the characteristics of the interpolation filter per predetermined processing unit. For example, they can be set per fractional pixel, per coding unit, per slice, per picture, or per sequence. Alternatively, one set of characteristics may apply to the entire video data. Since the same filter characteristics are used within a predetermined processing unit, the motion compensation unit 130 has a memory that temporarily holds them; this memory maintains filter characteristics, filter coefficients, and the like as needed. For example, the motion compensation unit 130 can determine the filter characteristics for each I picture and the filter coefficients for each slice.

The motion compensation unit 130 receives a reference picture from the picture storage unit 180 and applies filtering using the determined adaptive interpolation filter to generate a fractional-precision prediction reference picture.

Then, based on the generated reference picture and the motion vector determined by the motion estimation unit 131, motion compensation is performed with fractional pixel precision to generate a prediction block.

When inter picture prediction coding is performed on the input block to be coded, the subtraction unit 190 receives the block in the reference picture corresponding to the input block from the motion compensation unit 130, performs a difference operation with the input block, and outputs a residual signal.

The intra prediction unit 120 performs intra prediction encoding using reconstructed pixel values within the picture being predicted. It receives the current block to be predictively encoded and performs intra prediction by selecting one of a plurality of intra prediction modes preset according to the size of the current block. The intra prediction unit 120 determines the intra prediction mode of the current block using previously coded pixels adjacent to the current block and generates a prediction block corresponding to the determined mode.

The previously encoded region of the current picture is decoded again for use by the intra prediction unit 120 and stored in the picture storage unit 180. The intra prediction unit 120 generates a prediction block of the current block using pixels neighboring the current block, or non-adjacent but applicable pixels, in the previously coded area of the current picture stored in the picture storage unit 180.

The intra prediction unit 120 may adaptively filter adjacent pixels to predict an intra block. For the same operation in the decoder, the encoder may transmit information indicating whether or not filtering is performed. Alternatively, whether to filter may be determined from the intra prediction mode and the size of the current block.

The prediction type used by the image coding apparatus depends on whether the input block is coded in the intra mode or the inter mode by the coding mode determination unit.

The switching between the intra mode and the inter mode is controlled by the intra / inter selector switch.

The entropy encoding unit 150 entropy-codes the coefficients quantized by the transform coding/quantization unit 140 and the motion information generated by the motion estimation unit 131. The intra prediction mode, control data (e.g., quantization step size), and the like can also be coded. The filter coefficients determined by the motion compensation unit 130 are likewise encoded, and the result is output as a bitstream.

FIG. 2 is a block diagram illustrating an example of a configuration of a video decoding apparatus. The illustrated decoding apparatus includes an entropy decoding unit 210, an inverse quantization/inverse transform unit 220, an adder 270, a deblocking filter unit 250, a picture storage unit 260, an intra prediction unit 230, a motion compensation prediction unit 240, and an intra/inter changeover switch 280.

Referring to FIG. 2, the entropy decoding unit 210 decodes the encoded bitstream transmitted from the moving picture encoding apparatus into an intra prediction mode index, motion information, and a quantized coefficient sequence. The entropy decoding unit 210 supplies the decoded motion information to the motion compensation prediction unit 240, supplies the intra prediction mode index to the intra prediction unit 230 and the inverse quantization/inverse transform unit 220, and supplies the quantized coefficient sequence to the inverse quantization/inverse transform unit 220.

The inverse quantization/inverse transform unit 220 converts the quantized coefficient sequence into a two-dimensional array of dequantized coefficients. One of a plurality of scanning patterns is selected for this conversion, based on the prediction mode of the current block (i.e., intra or inter prediction), the intra prediction mode, and the size of the transform block.

The intra prediction mode is received from the intra prediction unit 230 or the entropy decoding unit 210.

The inverse quantization/inverse transform unit 220 restores the quantized coefficients to a two-dimensional array of dequantized coefficients using a quantization matrix selected from among a plurality of quantization matrices. The quantization matrix may be determined using information received from the encoder.

Different quantization matrices may be applied depending on the size of the current block (transform block) to be restored, and a quantization matrix may be selected based on at least one of a prediction mode and an intra prediction mode of the current block with respect to blocks of the same size. Then, the reconstructed quantized coefficient is inversely transformed to reconstruct the residual block.
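A minimal sketch of the inverse quantization just described, assuming a single scalar quantization step size and an optional per-position scaling matrix (the names and the simple multiplicative form are illustrative assumptions, not the codec's exact integer arithmetic):

```python
def dequantize(levels, qstep, scaling=None):
    """Rebuild transform coefficients from quantized levels: each level
    is scaled by the quantization step and an optional per-position
    weight from a quantization (scaling) matrix."""
    n = len(levels)
    if scaling is None:
        scaling = [[1] * n for _ in range(n)]  # flat matrix by default
    return [[levels[i][j] * scaling[i][j] * qstep for j in range(n)]
            for i in range(n)]
```

The per-position `scaling` matrix is what lets a codec quantize high-frequency positions more coarsely than the DC position within the same block.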

The adder 270 reconstructs the image block by adding the residual block reconstructed by the inverse quantization/inverse transform unit 220 to the prediction block generated by the intra prediction unit 230 or the motion compensation prediction unit 240.

The deblocking filter unit 250 performs deblocking filter processing on the reconstructed image generated by the adder 270. Accordingly, blocking artifacts caused by video loss in the quantization process can be reduced.

The picture storage unit 260 is a frame memory that holds the locally decoded picture after deblocking filter processing by the deblocking filter unit 250.

The intra prediction unit 230 restores the intra prediction mode of the current block based on the intra prediction mode index received from the entropy decoding unit 210, and generates a prediction block according to the restored intra prediction mode.

The motion compensation prediction unit 240 generates a prediction block for the current block from the picture stored in the picture storage unit 260 based on the motion vector information. When motion compensation with fractional precision is applied, the prediction block is generated by applying the selected interpolation filter.

The intra/inter selector switch 280 provides the adder 270 with the prediction block generated by either the intra prediction unit 230 or the motion compensation prediction unit 240, depending on the encoding mode.

In the standardized video codec techniques to date, pixel values within one picture are coded on a block basis. If pixel values of a current block to be coded are similar to neighboring blocks in the same image, intra coding can be performed using the similarity.

On the other hand, when the current coding block is an intra coded block, the current block is predicted by referring to the pixel values of the blocks already encoded in the surroundings, and then the prediction residual signal is encoded. HEVC performs spatial prediction coding using 35 prediction modes.

FIG. 3 shows an example of intra-prediction modes, and shows prediction modes and prediction directions of intra prediction considered by the HEVC.

Referring to FIG. 3, the number of intra prediction modes may vary according to the size of a block. For example, if the current block size is 8x8, 16x16, or 32x32, there may be 34 intra prediction modes. If the current block size is 4x4, 17 intra prediction modes may exist. The 34 or 17 intra prediction modes may include at least one non-directional mode and a plurality of directional modes.

The one or more non-directional modes may be a DC mode and/or a planar mode. When both the DC mode and the planar mode are included as non-directional modes, there may be 35 intra prediction modes regardless of the size of the current block: two non-directional modes (DC and planar) and 33 directional modes.

The planar mode generates a prediction block of the current block using at least one pixel value located at the bottom-right of the current block (or a predicted value thereof, hereinafter referred to as a first reference value) and the reference pixels adjacent to the current block.

As shown in FIG. 3, the directional intra prediction modes are arranged at equal angular intervals from 45 degrees in the lower-left direction to 45 degrees in the upper-right direction, and predictive coding is performed accordingly.

However, the number of intra prediction modes used for encoding is now about four times that of H.264, which has only nine modes, with the disadvantage that the complexity of the encoder increases rapidly.

Intra prediction method using multiple reference pixel lines (coding part)

According to an embodiment of the present invention, a prediction block can be generated using the current block and a plurality of reference pixel lines around it. Hereinafter, for convenience of description, only two reference lines around the current block are used, but three, four, or more reference lines are also usable.

FIG. 4 shows how two reference lines are set according to the intra prediction mode. In the left figure, the initial state is set to handle modes 18 to 34; in the right figure, the initial state is set to handle modes 2 to 17. The following process is described with reference to the flowchart of FIG. 5. First, the surrounding area is divided into regions according to the mode (S501). For example, in the case of dividing into four regions, the division can be into modes 2 to 10, 11 to 18, 19 to 26, and 27 to 34. The variable i indicating the mode is set to 2; in HEVC, modes 0 and 1 are omitted because they are the non-directional modes (DC and planar). Thereafter, interpolation between the pixels of the A line and the B line is performed in units of 1/32 pixel (S502), and the sum SAD[i] of the differences between the B line and the A line is obtained while shifting the B line along the angle of mode i (S503). Then, the variable i is incremented (S504). If i is not 36 (S505), steps S503 to S505 are repeated until it is. The SAD[i] values obtained for the individual modes are then accumulated per region; for example, the sum for the region of modes 11 to 18 is SAD[11] + SAD[12] + ... + SAD[18]. The region with the smallest sum is selected (S506), and only the modes belonging to it are used (S507). As information for the decoder, a number identifying the selected region is transmitted; this number can be signalled at various positions such as the CTU, the slice header, or the picture parameter set.
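The region-selection step (S506 to S507) above can be sketched as follows, taking the per-mode SAD values as given (how each SAD[i] is computed from the two shifted reference lines is left outside the sketch; the four region boundaries follow the example in the text):

```python
def select_mode_region(sad, regions=((2, 10), (11, 18), (19, 26), (27, 34))):
    """Pick the directional-mode region whose accumulated line-matching
    cost is smallest.  `sad` maps each directional mode (2..34) to the
    SAD between the two reference lines shifted along that mode's angle;
    `regions` holds inclusive (first_mode, last_mode) boundaries."""
    best = min(regions,
               key=lambda r: sum(sad[m] for m in range(r[0], r[1] + 1)))
    return best  # only modes best[0]..best[1] are then used/signalled
```

The encoder then signals only the region number, and both sides restrict the usable intra modes to that region.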

MPM method using multiple reference pixel lines (coding section)

According to an embodiment of the present invention, improved MPM candidates can be generated using the current block and its surrounding reference lines. Hereinafter, for convenience of description, only two reference lines around the current block are used, but three, four, or more reference lines are also usable.

FIG. 4 shows how two reference lines are set according to the intra prediction mode. In the left figure, the initial state is set to handle modes 18 to 34; in the right figure, the initial state is set to handle modes 2 to 17. The subsequent process is described with reference to the flowchart of FIG. 6. Interpolation between the pixels of the A line and the B line is performed in units of 1/32 pixel (S601), and the variable i indicating the mode is set to 2; in HEVC, modes 0 and 1 are omitted because they are the non-directional modes (DC and planar). The sum SAD[i] of the differences between the B line and the A line is obtained while shifting the B line along the angle of mode i (S602). Then, the variable i is incremented (S603). If i is not 36 (S604), steps S602 to S604 are repeated until it is. The three modes with the lowest SAD[i] values are then selected and determined as the MPM candidates.
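The final candidate-selection step above reduces to picking the three cheapest directional modes, which can be sketched directly (again taking the per-mode SAD values as given):

```python
def mpm_from_reference_lines(sad):
    """Choose the three directional modes with the lowest SAD between
    the two surrounding reference pixel lines as the MPM candidates
    (`sad` maps mode index 2..34 to that mode's line-matching cost)."""
    return sorted(sad, key=sad.get)[:3]
```

Unlike the conventional neighbour-based MPM list, these candidates are derived from the reference pixel lines themselves, so the decoder can rebuild the identical list without extra signalling of the candidates.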

Hereinafter, embodiments of the present invention are described with respect to the decoder; the bitstream-generating structure of the encoder mirrors the bitstream-extracting structure of the decoder.

Intra prediction method using multiple reference pixel lines (decoding unit)

FIG. 7 shows the bitstream extracting structure in the prediction block for this embodiment. If it is determined that intra coding has been performed on the prediction block, a plurality of reference pixel lines are set in the same manner as in the encoder. Thereafter, a bit representing the region is extracted to determine which region's intra prediction modes are to be used, and the existing prediction information (MPM, etc.) is extracted and decoded. In this case, since a bit indicating the region is added, if the mode is determined not to be among the MPM candidates, the remaining mode must be limited to the modes of the corresponding region.

MPM method using multiple reference pixel lines (decoding unit)

The bitstream extracting structure in the prediction block for this embodiment is the same as in the conventional technique. If it is determined that intra coding has been performed on the prediction block, three MPM candidates are set by configuring a plurality of reference pixel lines in the same manner as in the encoder. Thereafter, the mode information is restored according to the extracted bits, and a prediction block is generated to decode the current block.

Alternatively, the proposed technique may be used in combination with the conventional technique. FIG. 8 shows the bitstream extracting structure in the prediction block for this embodiment. If the current block is determined to be intra coded, a bit indicating the MPM scheme is extracted (S801). In step S802, it is determined from the extracted bit which MPM scheme is used. If the proposed MPM derivation is used, a plurality of reference pixel lines are set around the current block, and the MPM candidates are set in the same manner as in the encoder (S803). If the existing MPM derivation is used, the MPM candidates are set according to the existing method (S804). Then, the existing prediction information is extracted (S805) to decode the current block.
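The mixed decoding flow of FIG. 8 can be sketched as follows; all four callables are hypothetical stand-ins for the bitstream-reading and candidate-derivation routines (their names are illustrative, not from the patent):

```python
def decode_intra_mode(read_bit, derive_proposed_mpm, derive_legacy_mpm,
                      parse_mode):
    """Mixed MPM scheme of FIG. 8: a flag selects between the proposed
    multi-line MPM derivation and the conventional one, then the intra
    mode is parsed against the chosen candidate list."""
    use_proposed = read_bit()                  # S801/S802: scheme flag
    if use_proposed:
        candidates = derive_proposed_mpm()     # S803: multi-line MPMs
    else:
        candidates = derive_legacy_mpm()       # S804: conventional MPMs
    return parse_mode(candidates)              # S805: decode final mode
```

Because the scheme flag is the only new syntax element, the rest of the parsing path is shared with the conventional decoder.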

The method according to the present invention may be implemented as a program for execution on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device; the method may also be implemented in the form of a carrier wave (for example, transmission over the Internet).

The computer-readable recording medium may be distributed over networked computer systems so that computer-readable code can be stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the above method can be readily inferred by programmers skilled in the art to which the present invention belongs.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; it should be understood that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.

Claims (8)

A method of decoding a video signal, the method comprising:
receiving a bitstream including the video signal;
determining a prediction mode for a block to be decoded of the video signal;
determining a plurality of most probable mode candidates according to a plurality of reference pixel lines corresponding to the block when the determined prediction mode is an intra mode;
reconstructing mode information from the bitstream; and
predicting and decoding the block based on the mode information and the plurality of most probable mode candidates.
The method according to claim 1,
wherein the mode information includes bit information indicating which most probable mode is used.
The method according to claim 1,
wherein the plurality of reference pixel lines include at least two reference pixel lines located around the block to be decoded.
The method according to claim 1,
wherein the mode information includes information on the number of most probable modes and is extracted from at least one of a CTU, a slice header, and a picture parameter set included in the bitstream.
An apparatus for decoding a video signal, the apparatus comprising:
a receiver for receiving a bitstream including the video signal;
a mode determination unit for determining a prediction mode for a block to be decoded of the video signal;
a candidate determination unit for determining a plurality of most probable mode candidates according to a plurality of reference pixel lines corresponding to the block when the determined prediction mode is an intra mode;
a reconstruction unit for reconstructing mode information from the bitstream; and
a decoding unit that predictively decodes the block based on the mode information and the plurality of most probable mode candidates.
The apparatus according to claim 5,
wherein the mode information includes bit information indicating which most probable mode is used.
The apparatus according to claim 5,
wherein the plurality of reference pixel lines include at least two reference pixel lines located around the block to be decoded.
The apparatus according to claim 5,
wherein the mode information includes information on the number of most probable modes and is extracted from at least one of a CTU, a slice header, and a picture parameter set included in the bitstream.
KR1020130041297A 2013-04-15 2013-04-15 Method for encoding and decoding video using intra prediction, and apparatus thereof KR20140124443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130041297A KR20140124443A (en) 2013-04-15 2013-04-15 Method for encoding and decoding video using intra prediction, and apparatus thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130041297A KR20140124443A (en) 2013-04-15 2013-04-15 Method for encoding and decoding video using intra prediction, and apparatus thereof

Publications (1)

Publication Number Publication Date
KR20140124443A true KR20140124443A (en) 2014-10-27

Family

ID=51994653

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130041297A KR20140124443A (en) 2013-04-15 2013-04-15 Method for encoding and decoding video using intra prediction, and apparatus thereof

Country Status (1)

Country Link
KR (1) KR20140124443A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017090993A1 (en) * 2015-11-24 2017-06-01 삼성전자 주식회사 Method and device for video decoding and method and device for video encoding
CN108293116A (en) * 2015-11-24 2018-07-17 三星电子株式会社 Video encoding/decoding method and equipment and method for video coding and equipment
US10659812B2 (en) 2015-11-24 2020-05-19 Samsung Electronics Co., Ltd. Method and device for video decoding and method and device for video encoding
WO2017176030A1 (en) * 2016-04-06 2017-10-12 주식회사 케이티 Method and apparatus for processing video signal
US11785257B2 (en) 2016-04-06 2023-10-10 Kt Corporation Method and apparatus for processing video signal
KR20180123674A (en) * 2016-04-06 2018-11-19 주식회사 케이티 Method and apparatus for video signal processing
US11259047B2 (en) 2016-04-06 2022-02-22 Kt Corporation Method and apparatus for processing video signal
WO2017190288A1 (en) * 2016-05-04 2017-11-09 Microsoft Technology Licensing, Llc Intra-picture prediction using non-adjacent reference lines of sample values
US10764576B2 (en) 2016-05-04 2020-09-01 Microsoft Technology Licensing, Llc Intra-picture prediction using non-adjacent reference lines of sample values
CN113873240A (en) * 2016-06-24 2021-12-31 株式会社Kt Method and apparatus for processing video signal
US11128862B2 (en) 2016-09-13 2021-09-21 Research and Telecommunications Research Insitute Video encoding/decoding method and device, and recording medium having bitstream stored therein
US11805247B2 (en) 2016-09-13 2023-10-31 Electronics And Telecommunications Research Institute Video encoding/decoding method and device, and recording medium having bitstream stored therein
WO2018124332A1 (en) * 2016-12-28 2018-07-05 엘지전자(주) Intra prediction mode-based image processing method, and apparatus therefor
CN113840140A (en) * 2017-05-09 2021-12-24 华为技术有限公司 Intra prediction using multiple reference lines
CN113852812A (en) * 2017-05-09 2021-12-28 华为技术有限公司 Intra prediction using multiple reference lines
CN113852812B (en) * 2017-05-09 2023-10-20 华为技术有限公司 Intra prediction using multiple reference lines
CN113840140B (en) * 2017-05-09 2023-10-20 华为技术有限公司 Intra prediction using multiple reference lines
US11800094B2 (en) 2017-05-09 2023-10-24 Futurewei Technologies, Inc. Intra-prediction with multiple reference lines
WO2020013609A1 (en) * 2018-07-11 2020-01-16 인텔렉추얼디스커버리 주식회사 Intra-frame prediction-based video coding method and device
US11509932B2 (en) 2018-07-11 2022-11-22 Intellectual Discovery Co., Ltd. Intra-frame prediction-based video coding method and device
WO2020209671A1 (en) * 2019-04-10 2020-10-15 한국전자통신연구원 Method and device for signaling prediction mode-related signal in intra prediction

Similar Documents

Publication Publication Date Title
US11057618B2 (en) Method and apparatus for processing a video signal
US11533470B2 (en) Method and apparatus for encoding/decoding an image signal
KR20140124443A (en) Method for encoding and decoding video using intra prediction, and apparatus thereof
KR20180059443A (en) Motion vector refinement based inter prediction method and apparatus in video coding system
KR20180058224A (en) Modeling-based image decoding method and apparatus in video coding system
US10602155B2 (en) Intra prediction method and apparatus
KR102548881B1 (en) Methods and apparatus for video transform encoding/decoding
KR20230110701A (en) Method and apparatus for encoding/decoding a video signal
US11909990B2 (en) Method and apparatus for encoding/decoding image signals using weight prediction parameter sets based on neighboring regions
KR20140124448A (en) Method for encoding and decoding video using intra prediction, and apparatus thereof
KR20140129423A (en) Method for intra-prediction, and apparatus thereof
KR101569912B1 (en) Method and apparatus for encoding/decoding video
CN109479142B (en) Method and apparatus for encoding/decoding image signal
KR20140124442A (en) Method for encoding and decoding video using intra prediction, and apparatus thereof
KR20140129417A (en) Method for encoding and decoding image using a plurality of transforms, and apparatus thereof
CN110495175B (en) Image decoding method and image encoding method
KR20150105347A (en) Intra prediction method and intra prediction apparatus using residual transform
KR20150105348A (en) Method and apparatus for encoding/decoding images using transform
KR20210156256A (en) Method for intra-prediction, and apparatus thereof
KR101659343B1 (en) Method and apparatus for processing moving image
KR102217225B1 (en) Method for intra-prediction, and apparatus thereof
KR20140129421A (en) Method for intra-prediction, and apparatus thereof
KR102557740B1 (en) Method and apparatus for encoding/decoding a video signal
KR102557797B1 (en) Method and apparatus for encoding/decoding a video signal
KR102585178B1 (en) Image encoding/decoding method and apparatus using efficient motion information prediction

Legal Events

Date Code Title Description
N231 Notification of change of applicant
WITN Withdrawal due to no request for examination