CN101217669A - An intra-frame prediction method and device - Google Patents
An intra-frame prediction method and device
- Publication number
- CN101217669A
- Authority
- CN
- China
- Prior art keywords
- pix
- predicted
- piece
- value
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses an intra-frame prediction method and a device therefor. When vertical-mode prediction is to be performed on a block to be predicted, and the left block, upper block and upper-left block adjacent to that block have been reconstructed, the vertical-mode prediction is performed according to the pixels in the left, upper and upper-left blocks that are adjacent to the block to be predicted. Likewise, when horizontal-mode prediction is to be performed on the block, and the adjacent left, upper and upper-left blocks have been reconstructed, the horizontal-mode prediction is performed according to the pixels in those blocks that are adjacent to the block to be predicted. Because the distance relations among the pixels are fully exploited, the accuracy of intra-frame prediction is improved, and with it the coding efficiency.
Description
Technical field
The present invention relates to the technical field of video processing, and in particular to an intra-frame prediction method and device.
Background technology
Intra-frame prediction is an important step in H.264 video coding. The H.264 standard adopts two block prediction modes, 16×16 and 4×4: the 16×16 block mode uses 4 directional prediction modes, and the 4×4 block mode uses 9 directional prediction modes. When performing intra-frame prediction, the rate-distortion cost under every prediction mode is calculated in turn. The names of the 9 directional prediction modes of the 4×4 block mode and their corresponding mode numbers are listed in Table 1 below.
| Mode number | Prediction mode |
| --- | --- |
| 0 | Vertical prediction mode |
| 1 | Horizontal prediction mode |
| 2 | DC prediction mode |
| 3 | Diagonal down-left prediction mode |
| 4 | Diagonal down-right prediction mode |
| 5 | Vertical-right prediction mode |
| 6 | Horizontal-down prediction mode |
| 7 | Vertical-left prediction mode |
| 8 | Horizontal-up prediction mode |

Table 1. Names and mode numbers of the 9 directional prediction modes
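As a rough illustration of the mode decision described above, the Python sketch below (written for this article; the function name and cost values are hypothetical, not from the patent or the H.264 reference software) simply picks the directional mode with the lowest rate-distortion cost:

```python
def choose_intra_mode(costs):
    """Return the mode number with the lowest rate-distortion cost.

    `costs` maps a mode number (0..8 for 4x4 blocks) to its RD cost,
    as computed by the encoder for the current block.
    """
    return min(costs, key=costs.get)

# Illustrative costs only: here mode 2 (DC) happens to be cheapest.
best = choose_intra_mode({0: 410.5, 1: 398.0, 2: 355.2, 3: 402.7})
```

In a real encoder the cost of each mode combines the distortion of the reconstructed block and the bits needed to signal the mode and residual; the dictionary here stands in for that computation.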
Fig. 1 is a schematic diagram of the 9 directional prediction modes of the 4×4 block mode; each number denotes one directional prediction mode. The DC prediction mode uses the mean of the reference pixels and has no direction, so it is not marked. Fig. 2 is a schematic diagram of the reference pixels used in the existing standard to predict the pixels in a 4×4 block. As shown in Fig. 2, the pixels of a 4×4 block are a~p, and the 13 pixels A~L and Q in the adjacent left, upper, upper-left and upper-right blocks serve as reference pixels; P_A, P_B, P_C, P_D, ..., P_L, P_Q denote the values of these reference pixels after reconstruction. The pixels a~p are obtained from these reference pixels by prediction formulas.
In the vertical prediction mode, the predicted value of a pixel to be predicted is calculated by the formula Pre_Pix[i][j]=(&P_A)[j], with i, j=0, 1, 2, 3, where Pre_Pix[i][j] is the predicted value of the pixel in the current 4×4 block, and (&P_A)[j] is the reference pixel value selected along the horizontal direction according to j: for j=0 the value P_A is taken, for j=1 the value P_B, for j=2 the value P_C, and so on. Here i is the vertical coordinate of the pixel to be predicted and j is its horizontal coordinate. For example, pixel a is at position (0, 0) in the 4×4 block, so i and j are both 0 and its predicted value is Pre_Pix[0][0]=P_A; pixel b is at (1, 0), so i=0, j=1 and its predicted value is Pre_Pix[0][1]=P_B; pixel e is at (0, 1), so i=1, j=0 and its predicted value is Pre_Pix[1][0]=P_A; and so on.
Similarly, in the horizontal prediction mode, the predicted value of a pixel to be predicted is calculated by the formula Pre_Pix[j][i]=(&P_I)[j], with i, j=0, 1, 2, 3, where j is the vertical coordinate of the pixel to be predicted and i is its horizontal coordinate, and (&P_I)[j] selects P_I, P_J, P_K or P_L along the vertical direction according to j. For example, pixel a is at (0, 0), so i and j are both 0 and its predicted value is Pre_Pix[0][0]=P_I; pixel b is at (1, 0), so j=0, i=1 and its predicted value is Pre_Pix[0][1]=P_I; pixel e is at (0, 1), so j=1, i=0 and its predicted value is Pre_Pix[1][0]=P_J; and so on.
As these examples show, in the existing vertical mode the pixels a, e, i, m all receive the same predicted value, P_A, and the pixels b, f, j, n likewise all receive P_B. In the same way, in the horizontal mode the predicted values of a, b, c, d are all P_I and those of e, f, g, h are all P_J. The same situation occurs in the diagonal down-left, vertical-left, horizontal-up and other prediction modes: many pixels share an identical predicted value.
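The repetition described above can be seen in a minimal Python sketch of the standard 4×4 vertical and horizontal predictors (the list-based interface is an assumption made for illustration: `top` holds the reconstructed values P_A..P_D and `left` holds P_I..P_L):

```python
def predict_vertical(top):
    # Every row copies the pixels above the block, so each column is
    # constant: a, e, i, m all receive P_A, and so on.
    return [list(top) for _ in range(4)]

def predict_horizontal(left):
    # Every column copies the pixels left of the block, so each row is
    # constant: a, b, c, d all receive P_I, and so on.
    return [[left[row]] * 4 for row in range(4)]

pred_v = predict_vertical([100, 110, 120, 130])   # 4 identical rows
pred_h = predict_horizontal([100, 110, 120, 130])  # 4 constant rows
```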
However, if the current block happens to lie on the edge of a moving object or on the boundary between two objects, the correlation between the pixels along that boundary is very low. The intra-frame prediction method of the existing standard then cannot predict the pixel values of the current block accurately: the predicted values differ considerably from the actual values, the resulting prediction residual is large, the bit rate increases, and the coding efficiency drops.
Summary of the invention
The embodiment of the invention provides an intra-frame prediction method that can improve the accuracy of intra-frame prediction and thereby improve the coding efficiency.
The embodiment of the invention also provides an intra-frame prediction device that can improve the accuracy of intra-frame prediction and thereby improve the coding efficiency.
To achieve the above objects, the technical scheme of the invention is realized as follows:
An intra-frame prediction method, the method comprising:
when vertical-mode prediction is to be performed on a block to be predicted, if the left block, upper block and upper-left block adjacent to the block to be predicted have been reconstructed, performing vertical-mode prediction on the block to be predicted according to the pixels in the left, upper and upper-left blocks that are adjacent to it.
Preferably, performing vertical-mode prediction on the block to be predicted according to the pixels in the left, upper and upper-left blocks adjacent to it comprises:
performing vertical-mode prediction on each pixel of the block to be predicted according to the formula Pre_Pix[i][j]=(&P_A)[j]+Weight_v[i][j], with i, j=0, 1, 2, 3;
where Pre_Pix[i][j] is the predicted value of the pixel to be predicted, (&P_A)[j] is the reference pixel value in the upper block selected along the horizontal direction according to j, and Weight_v[i][j] is the distance matrix of the pixel to be predicted in the vertical direction; i is the vertical coordinate of the pixel to be predicted and j is its horizontal coordinate;
Weight_v[i][j] is calculated by the formula Weight_v[i][j]=((&P_I)[i]-P_Q)>>(j+1), where (&P_I)[i] is the reference pixel value in the left block selected along the vertical direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
Preferably, this method further comprises:
when horizontal-mode prediction is to be performed on the block to be predicted, if the left block, upper block and upper-left block adjacent to the block to be predicted have been reconstructed, performing horizontal-mode prediction on the block to be predicted according to the pixels in the left, upper and upper-left blocks that are adjacent to it.
Preferably, performing horizontal-mode prediction on the block to be predicted according to the pixels in the left, upper and upper-left blocks adjacent to it comprises:
performing horizontal-mode prediction on each pixel of the block to be predicted according to the formula Pre_Pix[j][i]=(&P_I)[j]+Weight_h[j][i], with i, j=0, 1, 2, 3;
where Pre_Pix[j][i] is the predicted value of the pixel to be predicted, (&P_I)[j] is the reference pixel value in the left block selected along the vertical direction according to j, and Weight_h[j][i] is the distance matrix of the pixel to be predicted in the horizontal direction; j is the vertical coordinate of the pixel to be predicted and i is its horizontal coordinate;
Weight_h[j][i] is calculated by the formula Weight_h[j][i]=((&P_A)[i]-P_Q)>>(j+1), where (&P_A)[i] is the reference pixel value in the upper block selected along the horizontal direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
Preferably, this method further comprises:
when diagonal down-left mode prediction is to be performed on the block to be predicted, if the left block, upper block and upper-right block adjacent to the block to be predicted have been reconstructed, performing diagonal down-left mode prediction on the block to be predicted using the following prediction formulas:
Pre_Pix[0][0]=(P_B+P_J+1)/2;
And/or Pre_Pix[0][1]=(4P_C+2P_K+3)/6;
And/or Pre_Pix[1][0]=(2P_C+4P_K+3)/6;
And/or Pre_Pix[0][2]=(3P_D+P_L+2)/4;
And/or Pre_Pix[2][0]=(P_D+3P_L+2)/4;
And/or Pre_Pix[1][1]=(P_D+P_L+1)/2;
And/or Pre_Pix[0][3]=(8P_E+2P_L+5)/10;
And/or Pre_Pix[3][0]=(2P_E+8P_L+5)/10;
And/or Pre_Pix[2][3]=(8P_G+6P_L+7)/14;
And/or Pre_Pix[3][2]=(6P_G+8P_L+7)/14;
And/or Pre_Pix[3][3]=(P_G+P_L+1)/2;
And/or Pre_Pix[2][2]=(P_F+P_L+1)/2;
And/or Pre_Pix[2][1]=(4P_E+6P_L+5)/10;
And/or Pre_Pix[1][2]=(6P_E+4P_L+5)/10;
And/or Pre_Pix[1][3]=(4P_F+2P_L+3)/6;
And/or Pre_Pix[3][1]=(2P_F+4P_L+3)/6;
where P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted; P_E, P_F, P_G are the reconstructed values of the pixels in the upper-right block adjacent to it; and P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to it.
Preferably, this method further comprises:
when vertical-left mode prediction is to be performed on the block to be predicted, if the upper block and upper-right block adjacent to the block to be predicted have been reconstructed, performing vertical-left mode prediction on the block to be predicted using the following prediction formulas:
Pre_Pix[0][0]=(P_A+P_B+1)/2;
And/or Pre_Pix[0][1]=(P_B+P_C+1)/2;
And/or Pre_Pix[0][2]=(P_C+P_D+1)/2;
And/or Pre_Pix[0][3]=(P_D+P_E+1)/2;
And/or Pre_Pix[1][0]=(P_A+2P_B+P_C+2)/4;
And/or Pre_Pix[1][1]=(P_B+2P_C+P_D+2)/4;
And/or Pre_Pix[1][2]=(P_C+2P_D+P_E+2)/4;
And/or Pre_Pix[1][3]=(P_D+2P_E+P_F+2)/4;
And/or Pre_Pix[2][0]=(P_A+3P_B+3P_C+P_D+4)/8;
And/or Pre_Pix[2][1]=(P_B+3P_C+3P_D+P_E+4)/8;
And/or Pre_Pix[2][2]=(P_C+3P_D+3P_E+P_F+4)/8;
And/or Pre_Pix[2][3]=(P_D+3P_E+3P_F+P_G+4)/8;
And/or Pre_Pix[3][0]=(P_A+4P_B+6P_C+4P_D+P_E+8)/16;
And/or Pre_Pix[3][1]=(P_B+4P_C+6P_D+4P_E+P_F+8)/16;
And/or Pre_Pix[3][2]=(P_C+4P_D+6P_E+4P_F+P_G+8)/16;
And/or Pre_Pix[3][3]=(P_D+4P_E+6P_F+4P_G+P_H+8)/16;
where P_A, P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted, and P_E, P_F, P_G, P_H are the reconstructed values of the pixels in the upper-right block adjacent to it.
Preferably, this method further comprises:
when horizontal-up mode prediction is to be performed on the block to be predicted, if the left block adjacent to the block to be predicted has been reconstructed, performing horizontal-up mode prediction on the block to be predicted using the following prediction formulas:
Pre_Pix[0][0]=(P_I+P_J+1)/2;
And/or Pre_Pix[2][0]=(P_K+P_L+1)/2;
And/or Pre_Pix[0][1]=(P_I+2P_J+P_K+2)/4;
And/or Pre_Pix[2][1]=(P_K+2P_L+P_L+2)/4;
And/or Pre_Pix[0][2]=(P_I+3P_J+3P_K+P_L+4)/8;
And/or Pre_Pix[1][2]=(P_J+3P_K+4P_L+4)/8;
And/or Pre_Pix[2][2]=(P_K+7P_L+4)/8;
And/or Pre_Pix[0][3]=(P_I+4P_J+6P_K+5P_L+8)/16;
And/or Pre_Pix[1][3]=(P_J+4P_K+11P_L+8)/16;
And/or Pre_Pix[2][3]=(P_K+15P_L+8)/16;
And/or Pre_Pix[1][0]=(P_J+P_K+1)/2;
And/or Pre_Pix[1][1]=(P_J+2P_K+P_L+2)/4;
And/or Pre_Pix[3][2]=Pre_Pix[3][3]=Pre_Pix[3][0]=Pre_Pix[3][1]=P_L;
where P_I, P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to the block to be predicted.
An intra-frame prediction device, the device comprising:
a prediction mode judging module, configured to judge that vertical-mode prediction is to be performed on a block to be predicted and, if the left block, upper block and upper-left block adjacent to the block to be predicted have been reconstructed, send a vertical prediction instruction to the vertical prediction module;
a vertical prediction module, configured to receive the vertical prediction instruction sent by the prediction mode judging module and perform vertical-mode prediction on the block to be predicted according to the pixels in the left, upper and upper-left blocks that are adjacent to it.
Preferably, the vertical prediction module performs vertical-mode prediction on the pixels of the block to be predicted according to the formula Pre_Pix[i][j]=(&P_A)[j]+Weight_v[i][j], with i, j=0, 1, 2, 3;
where Pre_Pix[i][j] is the predicted value of the pixel to be predicted, (&P_A)[j] is the reference pixel value in the upper block selected along the horizontal direction according to j, and Weight_v[i][j] is the distance matrix of the pixel to be predicted in the vertical direction; i is the vertical coordinate of the pixel to be predicted and j is its horizontal coordinate;
Weight_v[i][j] is calculated by the formula Weight_v[i][j]=((&P_I)[i]-P_Q)>>(j+1), where (&P_I)[i] is the reference pixel value in the left block selected along the vertical direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
Preferably, this device also comprises:
a horizontal prediction module, configured to receive the horizontal prediction instruction sent by the prediction mode judging module and perform horizontal-mode prediction on the block to be predicted according to the pixels in the left, upper and upper-left blocks that are adjacent to it;
the prediction mode judging module judges that horizontal-mode prediction is to be performed on the block to be predicted and, if the left, upper and upper-left blocks adjacent to it have been reconstructed, sends a horizontal prediction instruction to the horizontal prediction module.
Preferably, the horizontal prediction module performs horizontal-mode prediction on the pixels of the block to be predicted according to the formula Pre_Pix[j][i]=(&P_I)[j]+Weight_h[j][i], with i, j=0, 1, 2, 3;
where Pre_Pix[j][i] is the predicted value of the pixel to be predicted, (&P_I)[j] is the reference pixel value in the left block selected along the vertical direction according to j, and Weight_h[j][i] is the distance matrix of the pixel to be predicted in the horizontal direction; j is the vertical coordinate of the pixel to be predicted and i is its horizontal coordinate;
Weight_h[j][i] is calculated by the formula Weight_h[j][i]=((&P_A)[i]-P_Q)>>(j+1), where (&P_A)[i] is the reference pixel value in the upper block selected along the horizontal direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
Preferably, this device also comprises:
a diagonal down-left prediction module, configured to receive the diagonal down-left prediction instruction sent by the prediction mode judging module and perform diagonal down-left mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_B+P_J+1)/2;
And/or Pre_Pix[0][1]=(4P_C+2P_K+3)/6;
And/or Pre_Pix[1][0]=(2P_C+4P_K+3)/6;
And/or Pre_Pix[0][2]=(3P_D+P_L+2)/4;
And/or Pre_Pix[2][0]=(P_D+3P_L+2)/4;
And/or Pre_Pix[1][1]=(P_D+P_L+1)/2;
And/or Pre_Pix[0][3]=(8P_E+2P_L+5)/10;
And/or Pre_Pix[3][0]=(2P_E+8P_L+5)/10;
And/or Pre_Pix[2][3]=(8P_G+6P_L+7)/14;
And/or Pre_Pix[3][2]=(6P_G+8P_L+7)/14;
And/or Pre_Pix[3][3]=(P_G+P_L+1)/2;
And/or Pre_Pix[2][2]=(P_F+P_L+1)/2;
And/or Pre_Pix[2][1]=(4P_E+6P_L+5)/10;
And/or Pre_Pix[1][2]=(6P_E+4P_L+5)/10;
And/or Pre_Pix[1][3]=(4P_F+2P_L+3)/6;
And/or Pre_Pix[3][1]=(2P_F+4P_L+3)/6;
where P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted; P_E, P_F, P_G are the reconstructed values of the pixels in the upper-right block adjacent to it; and P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to it;
the prediction mode judging module judges that diagonal down-left mode prediction is to be performed on the block to be predicted and, if the left, upper and upper-right blocks adjacent to it have been reconstructed, sends a diagonal down-left prediction instruction to the diagonal down-left prediction module.
Preferably, this device also comprises:
a vertical-left prediction module, configured to receive the vertical-left prediction instruction sent by the prediction mode judging module and perform vertical-left mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_A+P_B+1)/2;
And/or Pre_Pix[0][1]=(P_B+P_C+1)/2;
And/or Pre_Pix[0][2]=(P_C+P_D+1)/2;
And/or Pre_Pix[0][3]=(P_D+P_E+1)/2;
And/or Pre_Pix[1][0]=(P_A+2P_B+P_C+2)/4;
And/or Pre_Pix[1][1]=(P_B+2P_C+P_D+2)/4;
And/or Pre_Pix[1][2]=(P_C+2P_D+P_E+2)/4;
And/or Pre_Pix[1][3]=(P_D+2P_E+P_F+2)/4;
And/or Pre_Pix[2][0]=(P_A+3P_B+3P_C+P_D+4)/8;
And/or Pre_Pix[2][1]=(P_B+3P_C+3P_D+P_E+4)/8;
And/or Pre_Pix[2][2]=(P_C+3P_D+3P_E+P_F+4)/8;
And/or Pre_Pix[2][3]=(P_D+3P_E+3P_F+P_G+4)/8;
And/or Pre_Pix[3][0]=(P_A+4P_B+6P_C+4P_D+P_E+8)/16;
And/or Pre_Pix[3][1]=(P_B+4P_C+6P_D+4P_E+P_F+8)/16;
And/or Pre_Pix[3][2]=(P_C+4P_D+6P_E+4P_F+P_G+8)/16;
And/or Pre_Pix[3][3]=(P_D+4P_E+6P_F+4P_G+P_H+8)/16;
where P_A, P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted, and P_E, P_F, P_G, P_H are the reconstructed values of the pixels in the upper-right block adjacent to it;
the prediction mode judging module judges that vertical-left mode prediction is to be performed on the block to be predicted and, if the upper and upper-right blocks adjacent to it have been reconstructed, sends a vertical-left prediction instruction to the vertical-left prediction module.
Preferably, this device also comprises:
a horizontal-up prediction module, configured to receive the horizontal-up prediction instruction sent by the prediction mode judging module and perform horizontal-up mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_I+P_J+1)/2;
And/or Pre_Pix[2][0]=(P_K+P_L+1)/2;
And/or Pre_Pix[0][1]=(P_I+2P_J+P_K+2)/4;
And/or Pre_Pix[2][1]=(P_K+2P_L+P_L+2)/4;
And/or Pre_Pix[0][2]=(P_I+3P_J+3P_K+P_L+4)/8;
And/or Pre_Pix[1][2]=(P_J+3P_K+4P_L+4)/8;
And/or Pre_Pix[2][2]=(P_K+7P_L+4)/8;
And/or Pre_Pix[0][3]=(P_I+4P_J+6P_K+5P_L+8)/16;
And/or Pre_Pix[1][3]=(P_J+4P_K+11P_L+8)/16;
And/or Pre_Pix[2][3]=(P_K+15P_L+8)/16;
And/or Pre_Pix[1][0]=(P_J+P_K+1)/2;
And/or Pre_Pix[1][1]=(P_J+2P_K+P_L+2)/4;
And/or Pre_Pix[3][2]=Pre_Pix[3][3]=Pre_Pix[3][0]=Pre_Pix[3][1]=P_L;
where P_I, P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to the block to be predicted;
the prediction mode judging module judges that horizontal-up mode prediction is to be performed on the block to be predicted and, if the left block adjacent to it has been reconstructed, sends a horizontal-up prediction instruction to the horizontal-up prediction module.
As can be seen from the above technical scheme, when vertical-mode prediction is to be performed on a block to be predicted and the left, upper and upper-left blocks adjacent to it have been reconstructed, the method and device of the invention predict the block according to the pixels in those blocks that are adjacent to it. Because the distance relations between the pixels are fully exploited, the accuracy of intra-frame prediction is improved, and with it the coding efficiency.
Description of drawings
Fig. 1 is a schematic diagram of the 9 directional prediction modes of the 4×4 block mode;
Fig. 2 is a schematic diagram of the reference pixels used in the existing standard to predict the pixels in a 4×4 block;
Fig. 3 is a schematic diagram of the prediction method under the vertical prediction mode in the intra-frame prediction method of an embodiment of the invention;
Fig. 4 is a schematic diagram of the prediction method under the horizontal prediction mode in the intra-frame prediction method of an embodiment of the invention;
Fig. 5 is a structural diagram of the intra-frame prediction device of an embodiment of the invention.
Embodiment
To make the objects, technical scheme and advantages of the invention clearer, the invention is described below in more detail with reference to the accompanying drawings and embodiments.
In the embodiments of the invention, when prediction in a given mode is to be performed on a block to be predicted, and the conditions that the prediction requires are satisfied, the spatial relations between the pixels are used to predict the pixels of the block to be predicted, thereby improving the accuracy of intra-frame prediction and, in turn, the coding efficiency.
Taking the intra-frame prediction under the 4×4 block mode as an example, the intra-frame prediction method of an embodiment of the invention is introduced in detail below.
For the vertical mode (mode 0), as shown in Fig. 3, when the left, upper and upper-left blocks adjacent to the block to be predicted have been reconstructed, the vertical-mode prediction is performed on the block to be predicted according to the pixels in those blocks that are adjacent to it.
The concrete formula is: Pre_Pix[i][j]=(&P_A)[j]+Weight_v[i][j], i, j=0, 1, 2, 3.
where Pre_Pix[i][j] is the predicted value of the pixel to be predicted, (&P_A)[j] is the reference pixel value in the upper block selected along the horizontal direction according to j, and Weight_v[i][j] is the distance matrix of the pixel to be predicted in the vertical direction; i is the vertical coordinate of the pixel to be predicted and j is its horizontal coordinate.
Weight_v[i][j] is calculated by the formula Weight_v[i][j]=((&P_I)[i]-P_Q)>>(j+1), where (&P_I)[i] is the reference pixel value in the left block selected along the vertical direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
For example, pixel a has coordinates (0, 0), so its predicted value is Pre_Pix[0][0]=(&P_A)[0]+Weight_v[0][0], i.e. Pre_Pix[0][0]=P_A+((P_I-P_Q)>>1).
Pixel c has coordinates (2, 0), so its predicted value is Pre_Pix[0][2]=(&P_A)[2]+Weight_v[0][2], i.e. Pre_Pix[0][2]=P_C+((P_I-P_Q)>>3).
Pixel e has coordinates (0, 1), so its predicted value is Pre_Pix[1][0]=(&P_A)[0]+Weight_v[1][0], i.e. Pre_Pix[1][0]=P_A+((P_J-P_Q)>>1).
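Collecting the vertical-mode formula and worked examples above, a minimal Python sketch might look as follows (the function name and list-based interface are assumptions of this sketch: `top` holds [P_A..P_D] from the upper block, `left` holds [P_I..P_L] from the left block, and `p_q` is P_Q, all reconstructed values):

```python
def predict_vertical_weighted(top, left, p_q):
    # Pre_Pix[i][j] = (&P_A)[j] + Weight_v[i][j],
    # with Weight_v[i][j] = ((&P_I)[i] - P_Q) >> (j + 1).
    # i is the row (vertical coordinate), j the column (horizontal).
    # Python's >> is an arithmetic shift, so negative differences floor.
    return [[top[j] + ((left[i] - p_q) >> (j + 1)) for j in range(4)]
            for i in range(4)]

# Pixel a (i = j = 0): P_A + ((P_I - P_Q) >> 1) = 100 + (4 >> 1) = 102
pred = predict_vertical_weighted([100, 110, 120, 130],
                                 [104, 108, 112, 116], 100)
```

Note how the correction term shrinks as j grows: columns far from the left reference receive a smaller share of the vertical gradient.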
For the horizontal mode (mode 1), as shown in Fig. 4, the procedure is similar to the vertical mode: when horizontal-mode prediction is to be performed on the block to be predicted, if the left, upper and upper-left blocks adjacent to it have been reconstructed, the horizontal-mode prediction is performed on the block to be predicted according to the pixels in those blocks that are adjacent to it.
The concrete formula is: Pre_Pix[j][i]=(&P_I)[j]+Weight_h[j][i], i, j=0, 1, 2, 3.
where Pre_Pix[j][i] is the predicted value of the pixel to be predicted, (&P_I)[j] is the reference pixel value in the left block selected along the vertical direction according to j, and Weight_h[j][i] is the distance matrix of the pixel to be predicted in the horizontal direction; j is the vertical coordinate of the pixel to be predicted and i is its horizontal coordinate.
Weight_h[j][i] is calculated by the formula Weight_h[j][i]=((&P_A)[i]-P_Q)>>(j+1), where (&P_A)[i] is the reference pixel value in the upper block selected along the horizontal direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
For example, pixel a has coordinates (0, 0), so its predicted value is Pre_Pix[0][0]=(&P_I)[0]+Weight_h[0][0], i.e. Pre_Pix[0][0]=P_I+((P_A-P_Q)>>1).
Pixel c has coordinates (2, 0), so its predicted value is Pre_Pix[0][2]=(&P_I)[0]+Weight_h[0][2], i.e. Pre_Pix[0][2]=P_I+((P_C-P_Q)>>1).
Pixel e has coordinates (0, 1), so its predicted value is Pre_Pix[1][0]=(&P_I)[1]+Weight_h[1][0], i.e. Pre_Pix[1][0]=P_J+((P_A-P_Q)>>2).
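The horizontal-mode formula is the transpose of the vertical one; a minimal Python sketch under the same illustrative interface (an assumption of this sketch: `top` holds [P_A..P_D], `left` holds [P_I..P_L], `p_q` is P_Q):

```python
def predict_horizontal_weighted(top, left, p_q):
    # Pre_Pix[j][i] = (&P_I)[j] + Weight_h[j][i],
    # with Weight_h[j][i] = ((&P_A)[i] - P_Q) >> (j + 1).
    # j is the row (vertical coordinate), i the column (horizontal).
    return [[left[j] + ((top[i] - p_q) >> (j + 1)) for i in range(4)]
            for j in range(4)]

# Pixel a (i = j = 0): P_I + ((P_A - P_Q) >> 1) = 104 + (0 >> 1) = 104
pred = predict_horizontal_weighted([100, 110, 120, 130],
                                   [104, 108, 112, 116], 100)
```

Here the base value comes from the left column and the correction carries the horizontal gradient of the upper row, attenuated as the row index grows.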
For the diagonal down-left mode, when the left, upper and upper-right blocks adjacent to the block to be predicted have been reconstructed, the following prediction formulas can be used to perform diagonal down-left mode prediction on the block to be predicted:
Pre_Pix[0][0]=(P_B+P_J+1)/2;
Pre_Pix[0][1]=(4P_C+2P_K+3)/6;
Pre_Pix[1][0]=(2P_C+4P_K+3)/6;
Pre_Pix[0][2]=(3P_D+P_L+2)/4;
Pre_Pix[2][0]=(P_D+3P_L+2)/4;
Pre_Pix[1][1]=(P_D+P_L+1)/2;
Pre_Pix[0][3]=(8P_E+2P_L+5)/10;
Pre_Pix[3][0]=(2P_E+8P_L+5)/10;
Pre_Pix[2][3]=(8P_G+6P_L+7)/14;
Pre_Pix[3][2]=(6P_G+8P_L+7)/14;
Pre_Pix[3][3]=(P_G+P_L+1)/2;
Pre_Pix[2][2]=(P_F+P_L+1)/2;
Pre_Pix[2][1]=(4P_E+6P_L+5)/10;
Pre_Pix[1][2]=(6P_E+4P_L+5)/10;
Pre_Pix[1][3]=(4P_F+2P_L+3)/6;
Pre_Pix[3][1]=(2P_F+4P_L+3)/6;
where P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted; P_E, P_F, P_G are the reconstructed values of the pixels in the upper-right block adjacent to it; and P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to it.
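The diagonal down-left formulas above translate directly into code. A minimal Python sketch, assuming the reconstructed reference values are passed in a dict keyed by the letters of the text (a convention of this sketch, not of the patent) and assuming the divisions are integer divisions with the rounding offsets shown:

```python
def predict_diag_down_left(p):
    B, C, D = p["B"], p["C"], p["D"]        # upper block
    E, F, G = p["E"], p["F"], p["G"]        # upper-right block
    J, K, L = p["J"], p["K"], p["L"]        # left block
    pred = [[0] * 4 for _ in range(4)]
    pred[0][0] = (B + J + 1) // 2
    pred[0][1] = (4*C + 2*K + 3) // 6
    pred[1][0] = (2*C + 4*K + 3) // 6
    pred[0][2] = (3*D + L + 2) // 4
    pred[2][0] = (D + 3*L + 2) // 4
    pred[1][1] = (D + L + 1) // 2
    pred[0][3] = (8*E + 2*L + 5) // 10
    pred[3][0] = (2*E + 8*L + 5) // 10
    pred[2][3] = (8*G + 6*L + 7) // 14
    pred[3][2] = (6*G + 8*L + 7) // 14
    pred[3][3] = (G + L + 1) // 2
    pred[2][2] = (F + L + 1) // 2
    pred[2][1] = (4*E + 6*L + 5) // 10
    pred[1][2] = (6*E + 4*L + 5) // 10
    pred[1][3] = (4*F + 2*L + 3) // 6
    pred[3][1] = (2*F + 4*L + 3) // 6
    return pred
```

Each pixel mixes one upper/upper-right reference with one left reference, with weights that grow toward whichever side is closer.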
For the vertical-left mode, when the upper and upper-right blocks adjacent to the block to be predicted have been reconstructed, the following prediction formulas are used to perform vertical-left mode prediction on the block to be predicted:
Pre_Pix[0][0]=(P_A+P_B+1)/2;
Pre_Pix[0][1]=(P_B+P_C+1)/2;
Pre_Pix[0][2]=(P_C+P_D+1)/2;
Pre_Pix[0][3]=(P_D+P_E+1)/2;
Pre_Pix[1][0]=(P_A+2P_B+P_C+2)/4;
Pre_Pix[1][1]=(P_B+2P_C+P_D+2)/4;
Pre_Pix[1][2]=(P_C+2P_D+P_E+2)/4;
Pre_Pix[1][3]=(P_D+2P_E+P_F+2)/4;
Pre_Pix[2][0]=(P_A+3P_B+3P_C+P_D+4)/8;
Pre_Pix[2][1]=(P_B+3P_C+3P_D+P_E+4)/8;
Pre_Pix[2][2]=(P_C+3P_D+3P_E+P_F+4)/8;
Pre_Pix[2][3]=(P_D+3P_E+3P_F+P_G+4)/8;
Pre_Pix[3][0]=(P_A+4P_B+6P_C+4P_D+P_E+8)/16;
Pre_Pix[3][1]=(P_B+4P_C+6P_D+4P_E+P_F+8)/16;
Pre_Pix[3][2]=(P_C+4P_D+6P_E+4P_F+P_G+8)/16;
Pre_Pix[3][3]=(P_D+4P_E+6P_F+4P_G+P_H+8)/16;
where P_A, P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted, and P_E, P_F, P_G, P_H are the reconstructed values of the pixels in the upper-right block adjacent to it.
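The vertical-left formulas above are 2-, 4-, 8- and 16-tap binomial filters over the reference row P_A..P_H, sliding one reference pixel to the right per column. A minimal Python sketch, assuming dict input keyed by letter and integer division (both assumptions of this sketch, not of the patent):

```python
def predict_vertical_left(p):
    # Reference row from the upper and upper-right blocks.
    top = [p[k] for k in "ABCDEFGH"]
    pred = [[0] * 4 for _ in range(4)]
    for j in range(4):
        # Row 0: 2-tap; row 1: 4-tap; row 2: 8-tap; row 3: 16-tap
        # binomial filter, shifted right by the column index j.
        pred[0][j] = (top[j] + top[j+1] + 1) // 2
        pred[1][j] = (top[j] + 2*top[j+1] + top[j+2] + 2) // 4
        pred[2][j] = (top[j] + 3*top[j+1] + 3*top[j+2] + top[j+3] + 4) // 8
        pred[3][j] = (top[j] + 4*top[j+1] + 6*top[j+2]
                      + 4*top[j+3] + top[j+4] + 8) // 16
    return pred
```

Writing the rows as sliding filters makes the structure of the sixteen formulas visible: lower rows average over wider windows, smoothing more strongly the further the pixel is from the reference row.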
For the horizontal-up mode, when the left block adjacent to the block to be predicted has been reconstructed, the following prediction formulas are used to perform horizontal-up mode prediction on the block to be predicted:
Pre_Pix[0][0]=(P_I+P_J+1)/2;
Pre_Pix[2][0]=(P_K+P_L+1)/2;
Pre_Pix[0][1]=(P_I+2P_J+P_K+2)/4;
Pre_Pix[2][1]=(P_K+2P_L+P_L+2)/4;
Pre_Pix[0][2]=(P_I+3P_J+3P_K+P_L+4)/8;
Pre_Pix[1][2]=(P_J+3P_K+4P_L+4)/8;
Pre_Pix[2][2]=(P_K+7P_L+4)/8;
Pre_Pix[0][3]=(P_I+4P_J+6P_K+5P_L+8)/16;
Pre_Pix[1][3]=(P_J+4P_K+11P_L+8)/16;
Pre_Pix[2][3]=(P_K+15P_L+8)/16;
Pre_Pix[1][0]=(P_J+P_K+1)/2;
Pre_Pix[1][1]=(P_J+2P_K+P_L+2)/4;
Pre_Pix[3][2]=Pre_Pix[3][3]=Pre_Pix[3][0]=Pre_Pix[3][1]=P_L;
where P_I, P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to the block to be predicted.
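The horizontal-up formulas above use only the left-block pixels P_I..P_L. A minimal Python sketch, assuming dict input keyed by letter and integer division (both assumptions of this sketch, not of the patent):

```python
def predict_horizontal_up(p):
    I, J, K, L = p["I"], p["J"], p["K"], p["L"]  # left block, top to bottom
    pred = [[0] * 4 for _ in range(4)]
    pred[0][0] = (I + J + 1) // 2
    pred[1][0] = (J + K + 1) // 2
    pred[2][0] = (K + L + 1) // 2
    pred[0][1] = (I + 2*J + K + 2) // 4
    pred[1][1] = (J + 2*K + L + 2) // 4
    pred[2][1] = (K + 2*L + L + 2) // 4
    pred[0][2] = (I + 3*J + 3*K + L + 4) // 8
    pred[1][2] = (J + 3*K + 4*L + 4) // 8
    pred[2][2] = (K + 7*L + 4) // 8
    pred[0][3] = (I + 4*J + 6*K + 5*L + 8) // 16
    pred[1][3] = (J + 4*K + 11*L + 8) // 16
    pred[2][3] = (K + 15*L + 8) // 16
    # The bottom row has no reference below P_L, so it is held at P_L.
    pred[3] = [L, L, L, L]
    return pred
```

Where a filter tap would fall below P_L, its weight is folded onto P_L (e.g. the 7·P_L and 15·P_L terms), which is why the weights pile up on P_L toward the bottom-right of the block.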
Of course, among the prediction formulas introduced above, either a subset of the formulas or all of them may be adopted, as actual needs dictate.
The intra-frame prediction method of the embodiment of the invention has been introduced above; the intra-frame prediction device of the embodiment of the invention is described in detail below.
Fig. 5 is a structural diagram of the intra-frame prediction device of the embodiment of the invention. As shown in Fig. 5, the device comprises a prediction mode judging module 501 and a vertical prediction module 502.
The prediction mode judging module 501 judges that vertical-mode prediction is to be performed on a block to be predicted and, if the left, upper and upper-left blocks adjacent to the block to be predicted have been reconstructed, sends a vertical prediction instruction to the vertical prediction module.
Specifically, the vertical prediction module 502 performs vertical-mode prediction on the pixels of the block to be predicted according to the formula Pre_Pix[i][j]=(&P_A)[j]+Weight_v[i][j], with i, j=0, 1, 2, 3.
where Pre_Pix[i][j] is the predicted value of the pixel to be predicted, (&P_A)[j] is the reference pixel value in the upper block selected along the horizontal direction according to j, and Weight_v[i][j] is the distance matrix of the pixel to be predicted in the vertical direction; i is the vertical coordinate of the pixel to be predicted and j is its horizontal coordinate.
Weight_v[i][j] is calculated by the formula Weight_v[i][j]=((&P_I)[i]-P_Q)>>(j+1), where (&P_I)[i] is the reference pixel value in the left block selected along the vertical direction according to i, and P_Q is the reconstructed value of the bottom-right pixel of the upper-left block adjacent to the block to be predicted.
Preferably, the device may further comprise a horizontal prediction module 503.
Correspondingly, if the prediction mode decision module 501 determines that the block to be predicted requires horizontal mode prediction, and the left block, the upper block and the upper-left block adjacent to the block to be predicted have been reconstructed, it sends a horizontal prediction instruction to the horizontal prediction module 503.
Specifically, the horizontal prediction module 503 performs horizontal mode prediction on each pixel to be predicted according to the formula Pre_Pix[j][i] = (&P_I)[j] + Weight_h[j][i], for i, j = 0, 1, 2, 3.
Here Pre_Pix[j][i] is the predicted value of the pixel to be predicted, (&P_I)[j] is the reference pixel in the left block selected in the vertical direction according to j, and Weight_h[j][i] is the distance matrix of the pixel to be predicted in the horizontal direction; j is the vertical coordinate of the pixel to be predicted and i its horizontal coordinate.
Weight_h[j][i] is computed by the formula Weight_h[j][i] = ((&P_A)[i] - P_Q) >> (j+1), where (&P_A)[i] is the reference pixel in the upper block selected in the horizontal direction according to i, and P_Q is the pixel in the bottom-right corner of the upper-left block adjacent to the block to be predicted.
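The horizontal-mode predictor mirrors the vertical one with the roles of the row and column references swapped. The following sketch uses the same assumed names as before (`top[i]` for (&P_A)[i], `left[j]` for (&P_I)[j], `pq` for P_Q); none of these identifiers come from the patent itself.

```c
/* Sketch of the horizontal-mode predictor:
 * Pre_Pix[j][i] = (&P_I)[j] + Weight_h[j][i],
 * Weight_h[j][i] = ((&P_A)[i] - P_Q) >> (j + 1).
 * Assumes top[i] >= pq so the shifted difference stays non-negative. */
void predict_horizontal(const int top[4], const int left[4], int pq,
                        int pred[4][4])
{
    for (int j = 0; j < 4; j++) {        /* j: vertical coordinate   */
        for (int i = 0; i < 4; i++) {    /* i: horizontal coordinate */
            int weight = (top[i] - pq) >> (j + 1);   /* Weight_h[j][i] */
            pred[j][i] = left[j] + weight;           /* Pre_Pix[j][i]  */
        }
    }
}
```

Note that the attenuation `>> (j + 1)` grows with the row index j, so the correction taken from the upper row is strongest near the top of the block.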
Preferably, the device may further comprise a down-left diagonal prediction module 504, which receives the down-left diagonal prediction instruction sent by the prediction mode decision module 501 and performs down-left diagonal mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_B+P_J+1)/2;
Pre_Pix[0][1]=(4P_C+2P_K+3)/6;
Pre_Pix[1][0]=(2P_C+4P_K+3)/6;
Pre_Pix[0][2]=(3P_D+P_L+2)/4;
Pre_Pix[2][0]=(P_D+3P_L+2)/4;
Pre_Pix[1][1]=(P_D+P_L+1)/2;
Pre_Pix[0][3]=(8P_E+2P_L+5)/10;
Pre_Pix[3][0]=(2P_E+8P_L+5)/10;
Pre_Pix[2][3]=(8P_G+6P_L+7)/14;
Pre_Pix[3][2]=(6P_G+8P_L+7)/14;
Pre_Pix[3][3]=(P_G+P_L+1)/2;
Pre_Pix[2][2]=(P_F+P_L+1)/2;
Pre_Pix[2][1]=(4P_E+6P_L+5)/10;
Pre_Pix[1][2]=(6P_E+4P_L+5)/10;
Pre_Pix[1][3]=(4P_F+2P_L+3)/6;
Pre_Pix[3][1]=(2P_F+4P_L+3)/6;
where P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted; P_E, P_F, P_G are the reconstructed values of the pixels in the upper-right block adjacent to it; and P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to it.
Correspondingly, when the prediction mode decision module 501 determines that the block to be predicted requires down-left diagonal mode prediction, if the left block, the upper block and the upper-right block adjacent to the block to be predicted have been reconstructed, it sends a down-left diagonal prediction instruction to the down-left diagonal prediction module 504.
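The left-diagonal (down-left) formulas above are fixed weighted averages and can be transcribed directly. In this illustrative sketch (not from the patent), `b`..`g` stand for P_B..P_G of the upper and upper-right blocks and `pj`, `pk`, `pl` for P_J, P_K, P_L of the left block; the divisions are the plain integer divisions written in the formulas.

```c
/* Sketch of the down-left diagonal predictor: each of the 16 positions is a
 * fixed weighted average of upper/upper-right pixels (P_B..P_G) and left
 * pixels (P_J..P_L), transcribed from the formulas above. */
void predict_diag_down_left(int b, int c, int d, int e, int f, int g,
                            int pj, int pk, int pl, int pred[4][4])
{
    pred[0][0] = (b + pj + 1) / 2;
    pred[0][1] = (4*c + 2*pk + 3) / 6;
    pred[1][0] = (2*c + 4*pk + 3) / 6;
    pred[0][2] = (3*d + pl + 2) / 4;
    pred[2][0] = (d + 3*pl + 2) / 4;
    pred[1][1] = (d + pl + 1) / 2;
    pred[0][3] = (8*e + 2*pl + 5) / 10;
    pred[3][0] = (2*e + 8*pl + 5) / 10;
    pred[2][1] = (4*e + 6*pl + 5) / 10;
    pred[1][2] = (6*e + 4*pl + 5) / 10;
    pred[1][3] = (4*f + 2*pl + 3) / 6;
    pred[3][1] = (2*f + 4*pl + 3) / 6;
    pred[2][2] = (f + pl + 1) / 2;
    pred[2][3] = (8*g + 6*pl + 7) / 14;
    pred[3][2] = (6*g + 8*pl + 7) / 14;
    pred[3][3] = (g + pl + 1) / 2;
}
```

Each pair of mirrored positions (for example [0][1] and [1][0]) swaps the weights on the upper-row and left-column references, matching the symmetry of the formulas.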
Preferably, the device may further comprise a vertical-left prediction module 505.
The vertical-left prediction module 505 receives the vertical-left prediction instruction sent by the prediction mode decision module 501 and performs vertical-left mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_A+P_B+1)/2;
Pre_Pix[0][1]=(P_B+P_C+1)/2;
Pre_Pix[0][2]=(P_C+P_D+1)/2;
Pre_Pix[0][3]=(P_D+P_E+1)/2;
Pre_Pix[1][0]=(P_A+2P_B+P_C+2)/4;
Pre_Pix[1][1]=(P_B+2P_C+P_D+2)/4;
Pre_Pix[1][2]=(P_C+2P_D+P_E+2)/4;
Pre_Pix[1][3]=(P_D+2P_E+P_F+2)/4;
Pre_Pix[2][0]=(P_A+3P_B+3P_C+P_D+4)/8;
Pre_Pix[2][1]=(P_B+3P_C+3P_D+P_E+4)/8;
Pre_Pix[2][2]=(P_C+3P_D+3P_E+P_F+4)/8;
Pre_Pix[2][3]=(P_D+3P_E+3P_F+P_G+4)/8;
Pre_Pix[3][0]=(P_A+4P_B+6P_C+4P_D+P_E+8)/16;
Pre_Pix[3][1]=(P_B+4P_C+6P_D+4P_E+P_F+8)/16;
Pre_Pix[3][2]=(P_C+4P_D+6P_E+4P_F+P_G+8)/16;
Pre_Pix[3][3]=(P_D+4P_E+6P_F+4P_G+P_H+8)/16;
where P_A, P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted, and P_E, P_F, P_G, P_H are the reconstructed values of the pixels in the upper-right block adjacent to it.
Correspondingly, when the prediction mode decision module 501 determines that the block to be predicted requires vertical-left mode prediction, if the upper block and the upper-right block adjacent to the block to be predicted have been reconstructed, it sends a vertical-left prediction instruction to the vertical-left prediction module 505.
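The sixteen vertical-left formulas above follow a regular pattern: row r of the block applies a binomial-style filter of length r+2 to the eight reference pixels P_A..P_H, sliding one position per column. That lets them collapse into a loop, as in this sketch (names assumed, not from the patent; `t[0..7]` stands for P_A..P_H):

```c
/* Sketch of the vertical-left predictor: row r is filtered from the 8
 * reference pixels t[0..7] = P_A..P_H (upper and upper-right blocks)
 * with taps (1 1), (1 2 1), (1 3 3 1), (1 4 6 4 1), one per row. */
void predict_vertical_left(const int t[8], int pred[4][4])
{
    for (int j = 0; j < 4; j++) {
        pred[0][j] = (t[j] + t[j+1] + 1) / 2;
        pred[1][j] = (t[j] + 2*t[j+1] + t[j+2] + 2) / 4;
        pred[2][j] = (t[j] + 3*t[j+1] + 3*t[j+2] + t[j+3] + 4) / 8;
        pred[3][j] = (t[j] + 4*t[j+1] + 6*t[j+2] + 4*t[j+3] + t[j+4] + 8) / 16;
    }
}
```

Expanding the loop for each j reproduces the sixteen formulas term by term; for instance j = 3 in the last row gives (P_D + 4P_E + 6P_F + 4P_G + P_H + 8)/16.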
Preferably, the device may further comprise a horizontal-up prediction module 506.
The horizontal-up prediction module 506 receives the horizontal-up prediction instruction sent by the prediction mode decision module 501 and performs horizontal-up mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_I+P_J+1)/2;
Pre_Pix[2][0]=(P_K+P_L+1)/2;
Pre_Pix[0][1]=(P_I+2P_J+P_K+2)/4;
Pre_Pix[2][1]=(P_K+2P_L+P_L+2)/4;
Pre_Pix[0][2]=(P_I+3P_J+3P_K+P_L+4)/8;
Pre_Pix[1][2]=(P_J+3P_K+4P_L+4)/8;
Pre_Pix[2][2]=(P_K+7P_L+4)/8;
Pre_Pix[0][3]=(P_I+4P_J+6P_K+5P_L+8)/16;
Pre_Pix[1][3]=(P_J+4P_K+11P_L+8)/16;
Pre_Pix[2][3]=(P_K+15P_L+8)/16;
Pre_Pix[1][0]=(P_J+P_K+1)/2;
Pre_Pix[1][1]=(P_J+2P_K+P_L+2)/4;
Pre_Pix[3][2]=Pre_Pix[3][3]=Pre_Pix[3][0]=Pre_Pix[3][1]=P_L;
where P_I, P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to the block to be predicted.
Correspondingly, when the prediction mode decision module 501 determines that the block to be predicted requires horizontal-up mode prediction, if the left block adjacent to the block to be predicted has been reconstructed, it sends a horizontal-up prediction instruction to the horizontal-up prediction module 506.
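The horizontal-up formulas above use only the left-block references P_I..P_L, with the bottom row held constant at P_L. A direct transcription (illustrative sketch, names assumed, not from the patent):

```c
/* Sketch of the horizontal-up predictor: all 16 positions are derived from
 * the left-block reference pixels P_I..P_L only; the bottom row is held
 * at P_L, matching the formulas above (including the (P_K+2P_L+P_L+2)/4
 * term, which is kept exactly as written). */
void predict_horizontal_up(int pi, int pj, int pk, int pl, int pred[4][4])
{
    pred[0][0] = (pi + pj + 1) / 2;
    pred[1][0] = (pj + pk + 1) / 2;
    pred[2][0] = (pk + pl + 1) / 2;
    pred[0][1] = (pi + 2*pj + pk + 2) / 4;
    pred[1][1] = (pj + 2*pk + pl + 2) / 4;
    pred[2][1] = (pk + 2*pl + pl + 2) / 4;
    pred[0][2] = (pi + 3*pj + 3*pk + pl + 4) / 8;
    pred[1][2] = (pj + 3*pk + 4*pl + 4) / 8;
    pred[2][2] = (pk + 7*pl + 4) / 8;
    pred[0][3] = (pi + 4*pj + 6*pk + 5*pl + 8) / 16;
    pred[1][3] = (pj + 4*pk + 11*pl + 8) / 16;
    pred[2][3] = (pk + 15*pl + 8) / 16;
    pred[3][0] = pred[3][1] = pred[3][2] = pred[3][3] = pl;
}
```

Moving right along a row, the filter weight shifts toward P_L because below-left references are unavailable, until the weight on P_L reaches one and the bottom row simply repeats it.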
As can be seen from the above embodiments, in the present invention, when the block to be predicted requires vertical mode prediction, if the left block, the upper block and the upper-left block adjacent to the block to be predicted have been reconstructed, vertical mode prediction is performed on the block to be predicted according to the pixels in those blocks adjacent to it; when the block to be predicted requires horizontal mode prediction, if the left block, the upper block and the upper-left block adjacent to it have been reconstructed, horizontal mode prediction is performed likewise. Because the distance relations between pixels are fully exploited, the accuracy of intra-frame prediction is improved, and coding efficiency is improved in turn.
It should be understood that the above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (14)
1. An intra-frame prediction method, characterized in that the method comprises:
when a block to be predicted requires vertical mode prediction, if a left block, an upper block and an upper-left block adjacent to the block to be predicted have been reconstructed, performing vertical mode prediction on the block to be predicted according to the pixels in the left block, the upper block and the upper-left block that are adjacent to the block to be predicted.
2. The intra-frame prediction method of claim 1, characterized in that performing vertical mode prediction on the block to be predicted according to the pixels in the left block, the upper block and the upper-left block adjacent to it comprises:
performing vertical mode prediction on each pixel to be predicted according to the formula Pre_Pix[i][j] = (&P_A)[j] + Weight_v[i][j], for i, j = 0, 1, 2, 3;
where Pre_Pix[i][j] is the predicted value of the pixel to be predicted, (&P_A)[j] is the reference pixel in the upper block selected in the horizontal direction according to j, and Weight_v[i][j] is the distance matrix of the pixel to be predicted in the vertical direction; i is the vertical coordinate of the pixel to be predicted and j its horizontal coordinate;
Weight_v[i][j] is computed by the formula Weight_v[i][j] = ((&P_I)[i] - P_Q) >> (j+1), where (&P_I)[i] is the reference pixel in the left block selected in the vertical direction according to i, and P_Q is the pixel in the bottom-right corner of the upper-left block adjacent to the block to be predicted.
3. The intra-frame prediction method of claim 1, characterized in that the method further comprises:
when the block to be predicted requires horizontal mode prediction, if the left block, the upper block and the upper-left block adjacent to the block to be predicted have been reconstructed, performing horizontal mode prediction on the block to be predicted according to the pixels in the left block, the upper block and the upper-left block that are adjacent to the block to be predicted.
4. The intra-frame prediction method of claim 3, characterized in that performing horizontal mode prediction on the block to be predicted according to the pixels in the left block, the upper block and the upper-left block adjacent to it comprises:
performing horizontal mode prediction on each pixel to be predicted according to the formula Pre_Pix[j][i] = (&P_I)[j] + Weight_h[j][i], for i, j = 0, 1, 2, 3;
where Pre_Pix[j][i] is the predicted value of the pixel to be predicted, (&P_I)[j] is the reference pixel in the left block selected in the vertical direction according to j, and Weight_h[j][i] is the distance matrix of the pixel to be predicted in the horizontal direction; j is the vertical coordinate of the pixel to be predicted and i its horizontal coordinate;
Weight_h[j][i] is computed by the formula Weight_h[j][i] = ((&P_A)[i] - P_Q) >> (j+1), where (&P_A)[i] is the reference pixel in the upper block selected in the horizontal direction according to i, and P_Q is the pixel in the bottom-right corner of the upper-left block adjacent to the block to be predicted.
5. The intra-frame prediction method of claim 1, characterized in that the method further comprises:
when the block to be predicted requires down-left diagonal mode prediction, if the left block, the upper block and the upper-right block adjacent to the block to be predicted have been reconstructed, performing down-left diagonal mode prediction on the block to be predicted using the following prediction formulas:
Pre_Pix[0][0]=(P_B+P_J+1)/2;
And/or Pre_Pix[0][1]=(4P_C+2P_K+3)/6;
And/or Pre_Pix[1][0]=(2P_C+4P_K+3)/6;
And/or Pre_Pix[0][2]=(3P_D+P_L+2)/4;
And/or Pre_Pix[2][0]=(P_D+3P_L+2)/4;
And/or Pre_Pix[1][1]=(P_D+P_L+1)/2;
And/or Pre_Pix[0][3]=(8P_E+2P_L+5)/10;
And/or Pre_Pix[3][0]=(2P_E+8P_L+5)/10;
And/or Pre_Pix[2][3]=(8P_G+6P_L+7)/14;
And/or Pre_Pix[3][2]=(6P_G+8P_L+7)/14;
And/or Pre_Pix[3][3]=(P_G+P_L+1)/2;
And/or Pre_Pix[2][2]=(P_F+P_L+1)/2;
And/or Pre_Pix[2][1]=(4P_E+6P_L+5)/10;
And/or Pre_Pix[1][2]=(6P_E+4P_L+5)/10;
And/or Pre_Pix[1][3]=(4P_F+2P_L+3)/6;
And/or Pre_Pix[3][1]=(2P_F+4P_L+3)/6;
where P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted; P_E, P_F, P_G are the reconstructed values of the pixels in the upper-right block adjacent to it; and P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to it.
6. The intra-frame prediction method of claim 1, characterized in that the method further comprises:
when the block to be predicted requires vertical-left mode prediction, if the upper block and the upper-right block adjacent to the block to be predicted have been reconstructed, performing vertical-left mode prediction on the block to be predicted using the following prediction formulas:
Pre_Pix[0][0]=(P_A+P_B+1)/2;
And/or Pre_Pix[0][1]=(P_B+P_C+1)/2;
And/or Pre_Pix[0][2]=(P_C+P_D+1)/2;
And/or Pre_Pix[0][3]=(P_D+P_E+1)/2;
And/or Pre_Pix[1][0]=(P_A+2P_B+P_C+2)/4;
And/or Pre_Pix[1][1]=(P_B+2P_C+P_D+2)/4;
And/or Pre_Pix[1][2]=(P_C+2P_D+P_E+2)/4;
And/or Pre_Pix[1][3]=(P_D+2P_E+P_F+2)/4;
And/or Pre_Pix[2][0]=(P_A+3P_B+3P_C+P_D+4)/8;
And/or Pre_Pix[2][1]=(P_B+3P_C+3P_D+P_E+4)/8;
And/or Pre_Pix[2][2]=(P_C+3P_D+3P_E+P_F+4)/8;
And/or Pre_Pix[2][3]=(P_D+3P_E+3P_F+P_G+4)/8;
And/or Pre_Pix[3][0]=(P_A+4P_B+6P_C+4P_D+P_E+8)/16;
And/or Pre_Pix[3][1]=(P_B+4P_C+6P_D+4P_E+P_F+8)/16;
And/or Pre_Pix[3][2]=(P_C+4P_D+6P_E+4P_F+P_G+8)/16;
And/or Pre_Pix[3][3]=(P_D+4P_E+6P_F+4P_G+P_H+8)/16;
where P_A, P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted, and P_E, P_F, P_G, P_H are the reconstructed values of the pixels in the upper-right block adjacent to it.
7. The intra-frame prediction method of claim 1, characterized in that the method further comprises:
when the block to be predicted requires horizontal-up mode prediction, if the left block adjacent to the block to be predicted has been reconstructed, performing horizontal-up mode prediction on the block to be predicted using the following prediction formulas:
Pre_Pix[0][0]=(P_I+P_J+1)/2;
And/or Pre_Pix[2][0]=(P_K+P_L+1)/2;
And/or Pre_Pix[0][1]=(P_I+2P_J+P_K+2)/4;
And/or Pre_Pix[2][1]=(P_K+2P_L+P_L+2)/4;
And/or Pre_Pix[0][2]=(P_I+3P_J+3P_K+P_L+4)/8;
And/or Pre_Pix[1][2]=(P_J+3P_K+4P_L+4)/8;
And/or Pre_Pix[2][2]=(P_K+7P_L+4)/8;
And/or Pre_Pix[0][3]=(P_I+4P_J+6P_K+5P_L+8)/16;
And/or Pre_Pix[1][3]=(P_J+4P_K+11P_L+8)/16;
And/or Pre_Pix[2][3]=(P_K+15P_L+8)/16;
And/or Pre_Pix[1][0]=(P_J+P_K+1)/2;
And/or Pre_Pix[1][1]=(P_J+2P_K+P_L+2)/4;
And/or Pre_Pix[3][2]=Pre_Pix[3][3]=Pre_Pix[3][0]=Pre_Pix[3][1]=P_L;
where P_I, P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to the block to be predicted.
8. An intra-frame prediction device, characterized in that the device comprises:
a prediction mode decision module, configured to determine when a block to be predicted requires vertical mode prediction and, if a left block, an upper block and an upper-left block adjacent to the block to be predicted have been reconstructed, send a vertical prediction instruction to a vertical prediction module;
the vertical prediction module, configured to receive the vertical prediction instruction sent by the prediction mode decision module and perform vertical mode prediction on the block to be predicted according to the pixels in the left block, the upper block and the upper-left block that are adjacent to the block to be predicted.
9. The intra-frame prediction device of claim 8, characterized in that the vertical prediction module performs vertical mode prediction on each pixel to be predicted according to the formula Pre_Pix[i][j] = (&P_A)[j] + Weight_v[i][j], for i, j = 0, 1, 2, 3;
where Pre_Pix[i][j] is the predicted value of the pixel to be predicted, (&P_A)[j] is the reference pixel in the upper block selected in the horizontal direction according to j, and Weight_v[i][j] is the distance matrix of the pixel to be predicted in the vertical direction; i is the vertical coordinate of the pixel to be predicted and j its horizontal coordinate;
Weight_v[i][j] is computed by the formula Weight_v[i][j] = ((&P_I)[i] - P_Q) >> (j+1), where (&P_I)[i] is the reference pixel in the left block selected in the vertical direction according to i, and P_Q is the pixel in the bottom-right corner of the upper-left block adjacent to the block to be predicted.
10. The intra-frame prediction device of claim 8, characterized in that the device further comprises:
a horizontal prediction module, configured to receive a horizontal prediction instruction sent by the prediction mode decision module and perform horizontal mode prediction on the block to be predicted according to the pixels in the left block, the upper block and the upper-left block that are adjacent to the block to be predicted;
wherein the prediction mode decision module determines when the block to be predicted requires horizontal mode prediction and, if the left block, the upper block and the upper-left block adjacent to it have been reconstructed, sends the horizontal prediction instruction to the horizontal prediction module.
11. The intra-frame prediction device of claim 10, characterized in that the horizontal prediction module performs horizontal mode prediction on each pixel to be predicted according to the formula Pre_Pix[j][i] = (&P_I)[j] + Weight_h[j][i], for i, j = 0, 1, 2, 3;
where Pre_Pix[j][i] is the predicted value of the pixel to be predicted, (&P_I)[j] is the reference pixel in the left block selected in the vertical direction according to j, and Weight_h[j][i] is the distance matrix of the pixel to be predicted in the horizontal direction; j is the vertical coordinate of the pixel to be predicted and i its horizontal coordinate;
Weight_h[j][i] is computed by the formula Weight_h[j][i] = ((&P_A)[i] - P_Q) >> (j+1), where (&P_A)[i] is the reference pixel in the upper block selected in the horizontal direction according to i, and P_Q is the pixel in the bottom-right corner of the upper-left block adjacent to the block to be predicted.
12. The intra-frame prediction device of claim 8, characterized in that the device further comprises:
a down-left diagonal prediction module, configured to receive a down-left diagonal prediction instruction sent by the prediction mode decision module and perform down-left diagonal mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_B+P_J+1)/2;
And/or Pre_Pix[0][1]=(4P_C+2P_K+3)/6;
And/or Pre_Pix[1][0]=(2P_C+4P_K+3)/6;
And/or Pre_Pix[0][2]=(3P_D+P_L+2)/4;
And/or Pre_Pix[2][0]=(P_D+3P_L+2)/4;
And/or Pre_Pix[1][1]=(P_D+P_L+1)/2;
And/or Pre_Pix[0][3]=(8P_E+2P_L+5)/10;
And/or Pre_Pix[3][0]=(2P_E+8P_L+5)/10;
And/or Pre_Pix[2][3]=(8P_G+6P_L+7)/14;
And/or Pre_Pix[3][2]=(6P_G+8P_L+7)/14;
And/or Pre_Pix[3][3]=(P_G+P_L+1)/2;
And/or Pre_Pix[2][2]=(P_F+P_L+1)/2;
And/or Pre_Pix[2][1]=(4P_E+6P_L+5)/10;
And/or Pre_Pix[1][2]=(6P_E+4P_L+5)/10;
And/or Pre_Pix[1][3]=(4P_F+2P_L+3)/6;
And/or Pre_Pix[3][1]=(2P_F+4P_L+3)/6;
where P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted; P_E, P_F, P_G are the reconstructed values of the pixels in the upper-right block adjacent to it; and P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to it;
wherein the prediction mode decision module determines when the block to be predicted requires down-left diagonal mode prediction and, if the left block, the upper block and the upper-right block adjacent to it have been reconstructed, sends the down-left diagonal prediction instruction to the down-left diagonal prediction module.
13. The intra-frame prediction device of claim 8, characterized in that the device further comprises:
a vertical-left prediction module, configured to receive a vertical-left prediction instruction sent by the prediction mode decision module and perform vertical-left mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_A+P_B+1)/2;
And/or Pre_Pix[0][1]=(P_B+P_C+1)/2;
And/or Pre_Pix[0][2]=(P_C+P_D+1)/2;
And/or Pre_Pix[0][3]=(P_D+P_E+1)/2;
And/or Pre_Pix[1][0]=(P_A+2P_B+P_C+2)/4;
And/or Pre_Pix[1][1]=(P_B+2P_C+P_D+2)/4;
And/or Pre_Pix[1][2]=(P_C+2P_D+P_E+2)/4;
And/or Pre_Pix[1][3]=(P_D+2P_E+P_F+2)/4;
And/or Pre_Pix[2][0]=(P_A+3P_B+3P_C+P_D+4)/8;
And/or Pre_Pix[2][1]=(P_B+3P_C+3P_D+P_E+4)/8;
And/or Pre_Pix[2][2]=(P_C+3P_D+3P_E+P_F+4)/8;
And/or Pre_Pix[2][3]=(P_D+3P_E+3P_F+P_G+4)/8;
And/or Pre_Pix[3][0]=(P_A+4P_B+6P_C+4P_D+P_E+8)/16;
And/or Pre_Pix[3][1]=(P_B+4P_C+6P_D+4P_E+P_F+8)/16;
And/or Pre_Pix[3][2]=(P_C+4P_D+6P_E+4P_F+P_G+8)/16;
And/or Pre_Pix[3][3]=(P_D+4P_E+6P_F+4P_G+P_H+8)/16;
where P_A, P_B, P_C, P_D are the reconstructed values of the pixels in the upper block adjacent to the block to be predicted, and P_E, P_F, P_G, P_H are the reconstructed values of the pixels in the upper-right block adjacent to it;
wherein the prediction mode decision module determines when the block to be predicted requires vertical-left mode prediction and, if the upper block and the upper-right block adjacent to it have been reconstructed, sends the vertical-left prediction instruction to the vertical-left prediction module.
14. The intra-frame prediction device of claim 8, characterized in that the device further comprises:
a horizontal-up prediction module, configured to receive a horizontal-up prediction instruction sent by the prediction mode decision module and perform horizontal-up mode prediction on the block to be predicted according to the following prediction formulas:
Pre_Pix[0][0]=(P_I+P_J+1)/2;
And/or Pre_Pix[2][0]=(P_K+P_L+1)/2;
And/or Pre_Pix[0][1]=(P_I+2P_J+P_K+2)/4;
And/or Pre_Pix[2][1]=(P_K+2P_L+P_L+2)/4;
And/or Pre_Pix[0][2]=(P_I+3P_J+3P_K+P_L+4)/8;
And/or Pre_Pix[1][2]=(P_J+3P_K+4P_L+4)/8;
And/or Pre_Pix[2][2]=(P_K+7P_L+4)/8;
And/or Pre_Pix[0][3]=(P_I+4P_J+6P_K+5P_L+8)/16;
And/or Pre_Pix[1][3]=(P_J+4P_K+11P_L+8)/16;
And/or Pre_Pix[2][3]=(P_K+15P_L+8)/16;
And/or Pre_Pix[1][0]=(P_J+P_K+1)/2;
And/or Pre_Pix[1][1]=(P_J+2P_K+P_L+2)/4;
And/or Pre_Pix[3][2]=Pre_Pix[3][3]=Pre_Pix[3][0]=Pre_Pix[3][1]=P_L;
where P_I, P_J, P_K, P_L are the reconstructed values of the pixels in the left block adjacent to the block to be predicted;
wherein the prediction mode decision module determines when the block to be predicted requires horizontal-up mode prediction and, if the left block adjacent to it has been reconstructed, sends the horizontal-up prediction instruction to the horizontal-up prediction module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200810056207 CN101217669A (en) | 2008-01-15 | 2008-01-15 | An intra-frame predictor method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200810056207 CN101217669A (en) | 2008-01-15 | 2008-01-15 | An intra-frame predictor method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101217669A true CN101217669A (en) | 2008-07-09 |
Family
ID=39624022
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200810056207 Pending CN101217669A (en) | 2008-01-15 | 2008-01-15 | An intra-frame predictor method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101217669A (en) |
Worldwide applications: 2008-01-15, CN application 200810056207 (publication CN101217669A), status: Pending
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101350927B (en) * | 2008-07-29 | 2011-07-13 | 北京中星微电子有限公司 | Method and apparatus for forecasting and selecting optimum estimation mode in a frame |
CN101867814A (en) * | 2009-04-14 | 2010-10-20 | 索尼公司 | Image encoding apparatus, image encoding method, and computer program |
CN101867814B (en) * | 2009-04-14 | 2013-03-27 | 索尼公司 | Image encoding apparatus, image encoding method, and computer program |
US9313503B2 (en) | 2009-08-17 | 2016-04-12 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US9313502B2 (en) | 2009-08-17 | 2016-04-12 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US9392283B2 (en) | 2009-08-17 | 2016-07-12 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
CN103152577A (en) * | 2009-08-17 | 2013-06-12 | 三星电子株式会社 | Method and apparatus for encoding video, and method and apparatus for decoding video |
US9369715B2 (en) | 2009-08-17 | 2016-06-14 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
CN103152577B (en) * | 2009-08-17 | 2016-05-25 | 三星电子株式会社 | Method and apparatus to Video coding and the method and apparatus to video decode |
US9319686B2 (en) | 2009-08-17 | 2016-04-19 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US9277224B2 (en) | 2009-08-17 | 2016-03-01 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
CN105430402A (en) * | 2010-04-12 | 2016-03-23 | 松下电器(美国)知识产权公司 | Image coding method and image coding device |
CN105430402B (en) * | 2010-04-12 | 2018-08-03 | 太阳专利托管公司 | Image encoding method and picture coding device |
CN101877792A (en) * | 2010-06-17 | 2010-11-03 | 北京中星微电子有限公司 | Intra mode prediction method and device and coder |
CN101895751A (en) * | 2010-07-06 | 2010-11-24 | 北京大学 | Method and device for intra-frame prediction and intra-frame prediction-based encoding/decoding method and system |
CN102186086A (en) * | 2011-06-22 | 2011-09-14 | 武汉大学 | Audio-video-coding-standard (AVS)-based intra-frame prediction method |
CN106658013A (en) * | 2011-06-24 | 2017-05-10 | 三菱电机株式会社 | Image encoding device and method, image decoding device and method and recording medium |
CN106658013B (en) * | 2011-06-24 | 2019-07-19 | 三菱电机株式会社 | Picture coding device and method, picture decoding apparatus and method and recording medium |
US10075730B2 (en) | 2011-06-28 | 2018-09-11 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
US10085037B2 (en) | 2011-06-28 | 2018-09-25 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
US9813727B2 (en) | 2011-06-28 | 2017-11-07 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
CN104954805A (en) * | 2011-06-28 | 2015-09-30 | 三星电子株式会社 | Method and apparatus for image encoding and decoding using intra prediction |
US10045043B2 (en) | 2011-06-28 | 2018-08-07 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
US10045042B2 (en) | 2011-06-28 | 2018-08-07 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
US10506250B2 (en) | 2011-06-28 | 2019-12-10 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
US9788006B2 (en) | 2011-06-28 | 2017-10-10 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using intra prediction |
CN104954805B (en) * | 2011-06-28 | 2019-01-04 | 三星电子株式会社 | Method and apparatus for using intra prediction to carry out image coding and decoding |
CN104104959B (en) * | 2013-04-10 | 2018-11-20 | 乐金电子(中国)研究开发中心有限公司 | Depth image intra-frame prediction method and device |
CN104104959A (en) * | 2013-04-10 | 2014-10-15 | 乐金电子(中国)研究开发中心有限公司 | Depth image intraframe prediction method and apparatus |
CN103826134A (en) * | 2014-03-21 | 2014-05-28 | 华为技术有限公司 | Image intra-frame prediction method and apparatus |
CN110248194A (en) * | 2019-06-28 | 2019-09-17 | 广东中星微电子有限公司 | Angle prediction technique and equipment in a kind of frame |
CN110248194B (en) * | 2019-06-28 | 2022-07-29 | 广东中星微电子有限公司 | Intra-frame angle prediction method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20080709 |