CN103391443A - Intra-frame prediction encoding and decoding method and system for the luminance transform domain of large-size blocks - Google Patents


Info

Publication number
CN103391443A
CN103391443A · CN2013103373945A · CN201310337394A
Authority
CN
China
Prior art keywords
block
row
sub-block
current coding block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103373945A
Other languages
Chinese (zh)
Inventor
舒倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Original Assignee
SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd filed Critical SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Priority to CN2013103373945A priority Critical patent/CN103391443A/en
Publication of CN103391443A publication Critical patent/CN103391443A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention provides an intra-frame prediction encoding and decoding method and system for the luminance transform domain of large-size blocks. The method and system exploit the high energy compaction of the transform domain and the correlation between the current coding block and adjacent encoded blocks to eliminate intra-frame redundancy. The encoding flow divides roughly into prediction, transform, quantization, reordering, entropy-coding and similar modules. The transform used by the method is exactly the transform algorithm of the transform module in the subsequent encoding flow, so no extra computation is added; moreover, because the transform domain exhibits higher energy compaction than the spatial domain, the method improves intra-frame prediction performance.

Description

A luminance transform domain intra-frame prediction encoding and decoding method and system for large-size blocks
Technical field
The present invention relates to the field of video encoding and decoding, and in particular to a luminance transform domain intra-frame prediction encoding and decoding method and system for large-size blocks.
Background art
Because an intra-predicted frame serves as the reference for subsequent inter-predicted frames, the compression quality of an intra-coded frame affects the compression quality of the inter-coded frames that follow it, so intra prediction occupies a very important position in the overall coding technology. On the other hand, intra-frame redundancy is much smaller than inter-frame redundancy, which makes the encoder's demands on intra prediction performance correspondingly higher.
At present, conventional video coding eliminates intra-frame redundancy through spatial-domain intra prediction. A complete intra prediction scheme usually comprises: luminance intra prediction for large-size blocks (e.g. 16x16) suited to flat regions, luminance intra prediction for small-size blocks (e.g. 4x4) suited to complex regions, and chroma intra prediction for chrominance information. All such spatial-domain methods predict from the reconstruction values of already-encoded neighboring pixels around the current coding block. However, because the energy compaction of spatial-domain information is poor, intra prediction based on a small number of prediction modes struggles to reach optimal rate-distortion performance. Optimized algorithms that add prediction modes can improve prediction, but finding the optimal mode then typically requires traversing and comparing the rate-distortion cost of every coding mode, so the rate-distortion gain of such refinements looks unsatisfactory next to the extra computation they bring. On resource-constrained platforms, this further restricts the practical deployment of such algorithms.
Summary of the invention
The first purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction encoding method for large-size blocks, intended to solve the problem that conventional prior-art video coding either fails to reach optimal rate-distortion performance, or improves prediction only at huge computational cost.
The embodiments of the present invention are achieved as follows. A luminance transform domain intra-frame prediction encoding method for large-size blocks comprises the following steps:
Partition the spatial-domain luminance block of the current coding block into a block matrix according to the transform matrix size of the transform module;
Transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;
Perform transform-domain intra prediction on each transform-domain sub-block of the current coding block;
Under each prediction mode combination, accumulate the errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that mode combination;
Perform conventional RDO (rate-distortion optimization) on the current coding block to obtain the optimal intra prediction mode, completing the transform-domain intra prediction of the current coding block.
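The steps above can be sketched end to end as follows. This is an illustrative outline only, under assumed helper signatures (`transform`, `predict`), with one mode per block instead of per-sub-block mode combinations and a simple error minimum standing in for full RDO:

```python
import numpy as np
from itertools import product

def encode_block_search(y, k, transform, predict, modes):
    """Outline of the encoding method: partition (step 1), transform
    each sub-block (step 2), transform-domain intra prediction (step 3),
    per-mode error accumulation (step 4), best-mode pick (step 5)."""
    m = y.shape[0]
    subs = y.reshape(m // k, k, m // k, k).swapaxes(1, 2)   # step 1
    best_mode, best_err = None, float("inf")
    for mode in modes:
        err = 0.0
        for i, j in product(range(m // k), repeat=2):
            yt = transform(subs[i, j])                      # step 2
            p_yt = predict(mode, i, j)                      # step 3
            err += np.abs(yt - p_yt).sum()                  # step 4
        if err < best_err:                                  # step 5 (stand-in)
            best_mode, best_err = mode, err
    return best_mode, best_err

# Toy usage: identity "transform", all-zero predictor, two dummy modes
y = np.ones((8, 8))
best, err = encode_block_search(
    y, 4,
    transform=lambda s: s,
    predict=lambda mode, i, j: np.zeros((4, 4)),
    modes=["left", "up"])
# err accumulates |1 - 0| over all 64 samples of the chosen mode
```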
The second purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction decoding method for large-size blocks, the method comprising the following steps:
Entropy-decode the bitstream of the current decoding block, reorder it, and then inverse-quantize it;
According to the intra prediction mode of the current decoding block, perform transform-domain intra prediction using the four prediction modes of the second prediction mode group described below, obtaining the transform-domain intra prediction value of the current decoding block;
Add the transform-domain intra prediction value of the current decoding block to the inverse-quantized data of the current decoding block, obtaining the transform-domain reconstruction value of the current decoding block;
Apply the (k) × (k) inverse transform to the transform-domain reconstruction value of the current decoding block, obtaining the spatial-domain reconstruction value of the current decoding block;
Filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
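A minimal sketch of the prediction-addition and inverse-transform steps of this decoding method for a single sub-block, assuming an orthonormal transform matrix C so that the (k) × (k) inverse transform is Cᵀ·Yt·C (the DCT matrix, the zero predictor and the variable names are illustrative; entropy decoding, reordering, inverse quantization and filtering are omitted):

```python
import numpy as np

def decode_subblock(residual_dequant, p_yt, C):
    """Add the transform-domain intra prediction value to the dequantized
    residual, then apply the (k) x (k) inverse transform to obtain the
    spatial-domain reconstruction value."""
    yt_rec = p_yt + residual_dequant   # transform-domain reconstruction value
    return C.T @ yt_rec @ C            # spatial-domain reconstruction value

k = 4
# Orthonormal DCT-II matrix as an example transform matrix C
i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
C = np.sqrt(2.0 / k) * np.cos((2 * j + 1) * i * np.pi / (2 * k))
C[0, :] = np.sqrt(1.0 / k)

y_orig = np.full((k, k), 100.0)        # the block the encoder saw
yt = C @ y_orig @ C.T                  # its transform-domain sub-block
p_yt = np.zeros((k, k))                # illustrative trivial predictor
y_rec = decode_subblock(yt - p_yt, p_yt, C)
# With lossless residuals the spatial-domain reconstruction matches y_orig
```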
The third purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction encoding system for large-size blocks, the system comprising: a spatial-domain luminance block partitioning module, a transform-domain sub-block acquisition module, a transform-domain intra prediction module, an intra prediction error calculation module, and an optimal intra prediction mode acquisition module.
The spatial-domain luminance block partitioning module partitions the spatial-domain luminance block (original block) of the current coding block into a block matrix according to the transform matrix size of the transform module;
The transform-domain sub-block acquisition module transforms each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;
The transform-domain intra prediction module performs transform-domain intra prediction on each transform-domain sub-block of the current coding block;
The intra prediction error calculation module accumulates, under each prediction mode combination, the errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that mode combination;
The optimal intra prediction mode acquisition module performs conventional RDO to obtain the optimal intra prediction mode, completing the transform-domain intra prediction of the current coding block.
The fourth purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction decoding system for large-size blocks, the system comprising: an entropy decoding module, a reordering module, an inverse quantization module, a decoding-block transform-domain intra prediction value acquisition module, a decoding-block transform-domain reconstruction value acquisition module, a decoding-block spatial-domain reconstruction value acquisition module, and a filtering module.
The entropy decoding module entropy-decodes the bitstream of the current decoding block;
The reordering module reorders the entropy-decoded bitstream of the current decoding block;
The inverse quantization module inverse-quantizes the reordered bitstream of the current decoding block;
The decoding-block transform-domain intra prediction value acquisition module performs transform-domain intra prediction according to the intra prediction mode of the current decoding block, using the four prediction modes of the second prediction mode group, to obtain the transform-domain intra prediction value of the current decoding block;
The decoding-block transform-domain reconstruction value acquisition module adds the transform-domain intra prediction value of the current decoding block to the data of the current decoding block inverse-quantized by the inverse quantization module, obtaining the transform-domain reconstruction value of the current decoding block;
The decoding-block spatial-domain reconstruction value acquisition module applies the inverse transform to the transform-domain reconstruction value of the current decoding block, obtaining the spatial-domain reconstruction value of the current decoding block;
The filtering module filters the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
The fifth purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction encoding method for large-size blocks, the method comprising:
Partition the spatial-domain luminance block (original block) of the current coding block into a block matrix according to the transform matrix size of the transform module;
Transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;
Quantize the transform-domain information block of the current coding block;
Perform transform-domain intra prediction on each transform-domain sub-block of the current coding block;
Under each prediction mode combination, accumulate the errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that mode combination;
Perform conventional RDO on the current coding block to obtain the optimal intra prediction mode.
The sixth purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction decoding method for large-size blocks, the method comprising the following steps:
Entropy-decode and reorder the bitstream of the current decoding block;
According to the intra prediction mode of the current decoding block, perform transform-domain intra prediction using the four prediction modes of the fourth prediction mode group, obtaining the transform-domain intra prediction value of the current decoding block;
Add the transform-domain intra prediction value of the current decoding block to the reordered data, obtaining the transform-domain reconstruction value of the current decoding block;
Inverse-quantize the transform-domain reconstruction value of the current decoding block and then apply the (k) × (k) inverse transform, obtaining the spatial-domain reconstruction value of the current decoding block;
Filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
The seventh purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction encoding system for large-size blocks, the system comprising: a spatial-domain luminance block partitioning module, a transform-domain sub-block acquisition module, a quantization module, a second transform-domain intra prediction module, an intra prediction error calculation module, and an optimal intra prediction mode acquisition module.
The spatial-domain luminance block partitioning module partitions the spatial-domain luminance block (original block) of the current coding block into a block matrix according to the transform matrix size of the transform module;
The transform-domain sub-block acquisition module transforms each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;
The quantization module quantizes the transform-domain information block of the current coding block;
The second transform-domain intra prediction module performs transform-domain intra prediction on each transform-domain sub-block of the current coding block;
The intra prediction error calculation module accumulates, under each prediction mode combination, the errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that mode combination;
The optimal intra prediction mode acquisition module performs conventional RDO on the current coding block to obtain the optimal intra prediction mode.
The eighth purpose of the embodiments of the present invention is to propose a luminance transform domain intra-frame prediction decoding system for large-size blocks, the system comprising: an entropy decoding module, a reordering module, a second decoding-block transform-domain intra prediction value acquisition module, a second decoding-block transform-domain reconstruction value acquisition module, a second decoding-block spatial-domain reconstruction value acquisition module, and a filtering module.
The entropy decoding module entropy-decodes the bitstream of the current decoding block;
The reordering module reorders the entropy-decoded bitstream of the current decoding block;
The second decoding-block transform-domain intra prediction value acquisition module performs transform-domain intra prediction according to the intra prediction mode of the current decoding block, using the four prediction modes of the fourth prediction mode group, to obtain the transform-domain intra prediction value of the current decoding block;
The second decoding-block transform-domain reconstruction value acquisition module adds the transform-domain intra prediction value of the current decoding block to the reordered data of the current decoding block obtained by the reordering module, obtaining the transform-domain reconstruction value of the current decoding block;
The second decoding-block spatial-domain reconstruction value acquisition module inverse-quantizes the transform-domain reconstruction value of the current decoding block and then applies the (k) × (k) inverse transform, obtaining the spatial-domain reconstruction value of the current decoding block;
The filtering module filters the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
Beneficial effects of the present invention
The present invention proposes a luminance transform domain intra-frame prediction encoding and decoding method and system for large-size blocks. The method of the embodiments exploits the high energy compaction of the transform domain and the correlation between the current coding block and adjacent encoded blocks to eliminate intra-frame redundancy. The encoding flow divides roughly into modules such as "prediction, transform, quantization, reordering and entropy coding"; the transform used by the method is exactly the transform algorithm of the "transform module" in the subsequent encoding flow, so no additional computation is introduced. In addition, because the transform domain has higher energy compaction than the spatial domain, the method achieves improved intra prediction performance.
Brief description of the drawings
Fig. 1 is a flow chart of the luminance transform domain intra-frame prediction encoding method for large-size blocks of preferred embodiment 1 of the present invention;
Fig. 2 is a flow chart of step S103 of Fig. 1;
Fig. 3 is a diagram of the positional relationship between the current coding block and its prediction blocks;
Fig. 4 is a flow chart of the luminance transform domain intra-frame prediction decoding method for large-size blocks of preferred embodiment 2 of the present invention;
Fig. 5 is a structural diagram of the luminance transform domain intra-frame prediction encoding system for large-size blocks of preferred embodiment 3 of the present invention;
Fig. 6 is a detailed structural diagram of the transform-domain intra prediction module of the encoding system of Fig. 5;
Fig. 7 is a structural diagram of the luminance transform domain intra-frame prediction decoding system for large-size blocks of preferred embodiment 4 of the present invention;
Fig. 8 is a flow chart of the luminance transform domain intra-frame prediction encoding method for large-size blocks of preferred embodiment 5 of the present invention;
Fig. 9 is a flow chart of step S304 of Fig. 8;
Fig. 10 is a flow chart of the luminance transform domain intra-frame prediction decoding method for large-size blocks of preferred embodiment 6 of the present invention;
Fig. 11 is a structural diagram of the luminance transform domain intra-frame prediction encoding system for large-size blocks of preferred embodiment 7 of the present invention;
Fig. 12 is a detailed structural diagram of the second transform-domain intra prediction module of the encoding system of Fig. 11;
Fig. 13 is a structural diagram of the luminance transform domain intra-frame prediction decoding system for large-size blocks of preferred embodiment 8 of the present invention.
Embodiment
To make the purpose, technical solution and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments; for convenience of explanation, only the parts relevant to the embodiments of the present invention are shown. It should be understood that the specific embodiments described here are used only to explain the present invention, not to limit it.
The present invention proposes a luminance transform domain intra-frame prediction encoding and decoding method and system for large-size blocks. The method of the embodiments exploits the high energy compaction of the transform domain and the correlation between the current coding block and adjacent encoded blocks to eliminate intra-frame redundancy. The encoding flow divides roughly into modules such as "prediction, transform, quantization, reordering and entropy coding"; the transform used by the method is exactly the transform algorithm of the "transform module" in the subsequent encoding flow, so no additional computation is introduced. In addition, because the transform domain has higher energy compaction than the spatial domain, the method achieves improved intra prediction performance.
Embodiment one (encoding method 1)
Fig. 1 is a flow chart of the luminance transform domain intra-frame prediction encoding method for large-size blocks of a preferred embodiment of the present invention. The method comprises the following steps:
S101: Partition the spatial-domain luminance block (original block) of the current coding block into a block matrix according to the transform matrix size of the transform module, specifically as follows:
$$\begin{bmatrix} Y(1,1) & Y(1,2) & \cdots & Y(1,m') \\ Y(2,1) & Y(2,2) & \cdots & Y(2,m') \\ \vdots & \vdots & \ddots & \vdots \\ Y(m',1) & Y(m',2) & \cdots & Y(m',m') \end{bmatrix} = \begin{bmatrix} y(1,1) & y(1,2) & \cdots & y(1,m) \\ y(2,1) & y(2,2) & \cdots & y(2,m) \\ \vdots & \vdots & \ddots & \vdots \\ y(m,1) & y(m,2) & \cdots & y(m,m) \end{bmatrix}$$
where $Y(i,j) = \begin{bmatrix} y(I+1,J+1) & y(I+1,J+2) & \cdots & y(I+1,J+k) \\ y(I+2,J+1) & y(I+2,J+2) & \cdots & y(I+2,J+k) \\ \vdots & \vdots & \ddots & \vdots \\ y(I+k,J+1) & y(I+k,J+2) & \cdots & y(I+k,J+k) \end{bmatrix}$, I = (i-1)*k, J = (j-1)*k, m' = m/k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
The right-hand matrix $[y(i_1,j_1)]$ is the original matrix, i.e. the spatial-domain luminance value matrix of the current coding block (referred to as the spatial-domain information block of the current coding block); it is an (m) × (m) matrix, where m denotes the number of rows and columns of the spatial-domain luminance value matrix of the current coding block, m ≥ 16;
y(i1, j1) denotes the luminance value at row i1, column j1 of the spatial-domain information block of the current coding block, 1 ≤ i1 ≤ m, 1 ≤ j1 ≤ m;
The left-hand matrix $[Y(i,j)]$ is the block matrix; it is an (m/k) × (m/k) block matrix;
Y(i, j) denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block (referred to as the row-i column-j spatial-domain sub-block of the current coding block); it is a (k) × (k) matrix, where k denotes the number of rows and columns of the transform matrix in the transform module, k ≤ 16;
I, J and m' are three intermediate variables introduced to simplify the formulas;
y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
In effect, step S101 divides the (m) × (m) original matrix into a block matrix formed of (m/k) × (m/k) sub-blocks, each of size (k) × (k), and then represents the original matrix by that block matrix.
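Step S101 can be sketched with NumPy as follows; the function name and array layout are illustrative, not from the patent:

```python
import numpy as np

def partition_into_subblocks(y, k):
    """Split the (m) x (m) spatial-domain information block into the
    (m/k) x (m/k) block matrix of (k) x (k) sub-blocks Y(i, j)."""
    m = y.shape[0]
    assert y.shape == (m, m) and m % k == 0, "m must be a multiple of k"
    # Result axes: (block row i, block column j, row inside, col inside)
    return y.reshape(m // k, k, m // k, k).swapaxes(1, 2)

# Example: a 16x16 block (m = 16) split by a 4x4 transform size (k = 4)
y = np.arange(256, dtype=float).reshape(16, 16)
subs = partition_into_subblocks(y, 4)
# subs[0, 1] is Y(1, 2) in the patent's 1-based notation: y[0:4, 4:8]
```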
S102: Transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block:
$$\begin{bmatrix} yt(I+1,J+1) & yt(I+1,J+2) & \cdots & yt(I+1,J+k) \\ yt(I+2,J+1) & yt(I+2,J+2) & \cdots & yt(I+2,J+k) \\ \vdots & \vdots & \ddots & \vdots \\ yt(I+k,J+1) & yt(I+k,J+2) & \cdots & yt(I+k,J+k) \end{bmatrix} = C \cdot \begin{bmatrix} y(I+1,J+1) & y(I+1,J+2) & \cdots & y(I+1,J+k) \\ y(I+2,J+1) & y(I+2,J+2) & \cdots & y(I+2,J+k) \\ \vdots & \vdots & \ddots & \vdots \\ y(I+k,J+1) & y(I+k,J+2) & \cdots & y(I+k,J+k) \end{bmatrix} \cdot C^{T}$$
where I = (i-1)*k, J = (j-1)*k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
Here $C = \begin{bmatrix} c(1,1) & c(1,2) & \cdots & c(1,k) \\ c(2,1) & c(2,2) & \cdots & c(2,k) \\ \vdots & \vdots & \ddots & \vdots \\ c(k,1) & c(k,2) & \cdots & c(k,k) \end{bmatrix}$ denotes the transform matrix, c(ic, jc) is the value at row ic, column jc of the transform matrix, 1 ≤ ic ≤ k, 1 ≤ jc ≤ k, and C^T denotes the transpose of the transform matrix. The transform matrix can be chosen according to the transform module of the particular encoder, for example the DCT transform matrix in H.264.
Throughout this document, $[y(I+i_2, J+j_2)]$ denotes the row-i column-j sub-block of the spatial-domain information block of the current coding block (referred to as the row-i column-j spatial-domain sub-block of the current coding block); y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
$[yt(I+i_2, J+j_2)]$ denotes the row-i column-j sub-block of the transform-domain information block of the current coding block (referred to as the row-i column-j transform-domain sub-block of the current coding block); yt(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the transform-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k; yt(I+i2, J+j2) is the value obtained at row I+i2, column J+j2 of the transform-domain information block of the current coding block after y(I+i2, J+j2) is transformed.
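Step S102, Yt = C·Y·Cᵀ per sub-block, can be sketched as follows. An orthonormal DCT-II matrix is used as an example C (the patent allows any transform matrix of the encoder's transform module; the helper names are illustrative):

```python
import numpy as np

def dct_matrix(k):
    """Orthonormal k x k DCT-II matrix, one common choice for the
    transform matrix C."""
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    C = np.sqrt(2.0 / k) * np.cos((2 * j + 1) * i * np.pi / (2 * k))
    C[0, :] = np.sqrt(1.0 / k)
    return C

def transform_subblock(Y, C):
    """Step S102 for one sub-block: Yt = C * Y * C^T."""
    return C @ Y @ C.T

k = 4
C = dct_matrix(k)
Y = np.full((k, k), 128.0)      # a flat spatial-domain sub-block
Yt = transform_subblock(Y, C)
# Energy compaction: for a flat block all energy lands on the DC
# coefficient, Yt[0, 0] == 128 * k, every other coefficient ~ 0
```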
S103: Perform transform-domain intra prediction on each transform-domain sub-block of the current coding block.
"Performing transform-domain intra prediction on each transform-domain sub-block of the current coding block" specifically comprises the following steps (Fig. 2 is the flow chart of step S103):
S1030: Initialize number.
number denotes the maximum count of coefficients over which a transform-domain sub-block performs intra prediction optimization; it is set by the encoder itself. The larger the initial value of number, the greater the computation and the greater the corresponding performance gain; in general 0 ≤ number ≤ k*k/2.
S1031: Find the number largest-valued data in the current transform-domain sub-block of the current coding block.
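Step S1031 can be sketched as follows, assuming "largest-valued" means largest in magnitude, the usual reading for transform coefficients (the function name is illustrative):

```python
import numpy as np

def top_number_positions(yt, number):
    """Return the (row, col) positions of the `number` largest-magnitude
    coefficients of a k x k transform-domain sub-block (step S1031)."""
    flat = np.abs(yt).ravel()
    idx = np.argsort(flat)[::-1][:number]   # descending by magnitude
    return [tuple(int(v) for v in np.unravel_index(i, yt.shape))
            for i in idx]

yt = np.array([[512.0, 3.0],
               [-40.0, 1.0]])
positions = top_number_positions(yt, 2)     # → [(0, 0), (1, 0)]
```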
S1032: For each of those number data, perform an intermediate intra prediction separately with each of the four prediction modes of the first prediction mode group; then, for all remaining data in the current transform-domain sub-block of the current coding block, perform the intermediate intra prediction uniformly with the four prediction modes of the first prediction mode group.
Fig. 3 shows the positional relationship between the current coding block and its prediction blocks. In Fig. 3, the thick-line box denotes the (m) × (m) current coding block; the double-solid-line box denotes the row-i column-j sub-block of the current coding block, of size (k) × (k); every other box is a sub-block of a prediction block, also of size (k) × (k).
Throughout this document, A denotes the row-m/k column-m/k sub-block of the upper-left prediction block of the current coding block; B1, B2, …, Bj, …, Bm' denote respectively the row-m/k column-1 sub-block, the row-m/k column-2 sub-block, …, the row-m/k column-j sub-block, …, the row-m/k column-m/k sub-block of the upper prediction block of the current coding block; E denotes the row-m/k column-1 sub-block of the upper-right prediction block of the current coding block; D1, D2, …, Di, …, Dm' denote respectively the row-1 column-m/k sub-block, the row-2 column-m/k sub-block, …, the row-i column-m/k sub-block, …, the row-m/k column-m/k sub-block of the left prediction block of the current coding block.
The first prediction mode group specifically comprises the following four prediction modes:

Mode one: transform-domain left prediction mode
If Di has been coded:
p_yt(I+i2, J+j2) = yg_left(I+i2, m-k+j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode two: transform-domain upper prediction mode
If Bj has been coded:
p_yt(I+i2, J+j2) = yg_up(m-k+i2, J+j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode three: transform-domain upper-left prediction mode
If A has been coded:
p_yt(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode four: transform-domain upper-right prediction mode
If E has been coded:
p_yt(I+i2, J+j2) = yg_right_up(m-k+i2, j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Here p_yt(I+i2, J+j2) denotes the transform-domain predicted value of yt(I+i2, J+j2);

yg_left(I+i2, m-k+j2) denotes the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the left prediction block of the current coding block (within its sub-block at row i, column m/k);

yg_up(m-k+i2, J+j2) denotes the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the upper prediction block of the current coding block (within its sub-block at row m/k, column j);

yg_left_up(m-k+i2, m-k+j2) denotes the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the upper-left prediction block of the current coding block (within its sub-block at row m/k, column m/k);

yg_right_up(m-k+i2, j2) denotes the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the upper-right prediction block of the current coding block (within its sub-block at row m/k, column 1);

the k×k matrix whose entries all equal 128 is called the basic spatial-domain information block; the k×k matrix [yt128(i2, j2)] is called the basic transform-domain information block, obtained as [yt128] = C * [128]k×k * C^T, where yt128(i2, j2) denotes the value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k.
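For concreteness, the basic transform-domain information block yt128 = C * M128 * C^T can be computed directly once C is fixed. The sketch below is a minimal illustration assuming an orthonormal DCT-II transform matrix (the patent leaves C encoder-dependent); for a constant 128-valued spatial block only the DC coefficient yt128(1,1) = 128*k is nonzero, which is what gives the transform domain its energy-compaction property.

```python
import math

def matmul(A, B):
    # Plain-Python matrix product for small k x k blocks.
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dct_matrix(k):
    # Orthonormal DCT-II basis; row 0 is the constant (DC) basis vector.
    C = []
    for u in range(k):
        s = math.sqrt(1.0 / k) if u == 0 else math.sqrt(2.0 / k)
        C.append([s * math.cos(math.pi * (2 * x + 1) * u / (2 * k))
                  for x in range(k)])
    return C

def basic_transform_block(k):
    # yt128 = C * M128 * C^T, where M128 is the all-128 spatial block.
    C = dct_matrix(k)
    M128 = [[128.0] * k for _ in range(k)]
    Ct = [list(row) for row in zip(*C)]
    return matmul(matmul(C, M128), Ct)

k = 4
yt128 = basic_transform_block(k)
# For a constant block only the DC term survives: yt128[0][0] == 128 * k.
```

In an integer-transform codec the actual yt128 values differ by the codec's scaling conventions; the structure (a single dominant DC term) is the point of the example.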
The "intermediate intra prediction" is implemented as follows: for the current processing object, compute the four errors predicted under mode one through mode four.

"Performing the intermediate intra prediction separately, with each of the four prediction modes of the first prediction mode group, on the number largest-magnitude data of the current transform-domain sub-block" means that each of the number data undergoes one "intermediate intra prediction" in turn; that datum is then the processing object, and the four corresponding errors are obtained for each of the number data.

"Then performing the intermediate intra prediction uniformly, with the four prediction modes of the first prediction mode group, on all remaining data of the current transform-domain sub-block of the current coding block" means that the remaining data undergo one "intermediate intra prediction" together; the remaining data as a whole are then the processing object, and the four errors corresponding to the remaining data are obtained.

S1033: judge whether every transform-domain sub-block of the current coding block has undergone transform-domain intra prediction; if so, go to S104; if not, return to S1031 and perform transform-domain intra prediction on the next transform-domain sub-block of the current coding block.

S104: under each prediction mode combination, accumulate the errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that prediction mode combination.

Each prediction mode combination comprises "one assignment of prediction modes to the number data" and "one prediction mode for the remaining data".

In an assignment of prediction modes to the number data, each of the number data may adopt any one of the four modes of the first prediction mode group, and the modes of the individual data may be identical or different; the remaining data may likewise adopt any one of the four modes of the first prediction mode group.

For example, suppose number = 3, the number data are a, b and c, and the remaining data are d. One prediction mode combination is then "a in mode one + b in mode three + c in mode four + d in mode two"; each of a, b and c may adopt any of the four modes, and d may also adopt any of the four modes.
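The combination space described above can be enumerated mechanically: with number individually-moded data plus one joint mode choice for the remainder, there are 4^(number+1) combinations in total. A minimal sketch (the mode-name strings are illustrative labels, not identifiers from the patent):

```python
from itertools import product

MODES = ("left", "up", "left_up", "right_up")  # modes one..four

def mode_combinations(number):
    # Each of the `number` largest data picks one of the four modes,
    # and the remaining data jointly pick one more mode, giving
    # 4 ** (number + 1) combinations to evaluate.
    return list(product(MODES, repeat=number + 1))

combos = mode_combinations(3)  # number = 3, as in the example above
# ('left', 'left_up', 'right_up', 'up') encodes
# "a in mode one + b in mode three + c in mode four + d in mode two".
```

The exponential growth in number is why the patent bounds it (generally 0 ≤ number ≤ k*k/2) and lets the encoder trade computation against performance.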
S105: perform conventional RDO (Rate-Distortion Optimization) on the current coding block to obtain the optimal intra prediction mode, completing the transform-domain intra prediction of the current coding block.

Embodiment two (decoding method 1)

Fig. 4 is a flowchart of the large-block luminance transform-domain intra prediction decoding method of the preferred embodiment of the present invention (embodiment two proposes, on the basis of the large-block luminance transform-domain intra prediction encoding method of embodiment one, the corresponding large-block luminance transform-domain intra prediction decoding method). The decoding method comprises the following steps:

S201: entropy-decode the bitstream of the current decoding block, reorder it, and then perform inverse quantization.

S202: according to the intra prediction mode of the current decoding block, perform transform-domain intra prediction with the four prediction modes of the following second prediction mode group to obtain the transform-domain intra prediction value of the current decoding block.
The second prediction mode group specifically comprises the following four prediction modes:

Mode one: transform-domain left prediction mode
If Di' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left(I+i2, m-k+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode two: transform-domain upper prediction mode
If Bj' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_up(m-k+i2, J+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode three: transform-domain upper-left prediction mode
If A' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode four: transform-domain upper-right prediction mode
If E' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_right_up(m-k+i2, j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Throughout this document, A' denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; B1', B2', …, Bj', …, Bm'' denote, respectively, the sub-blocks of the upper prediction block of the current decoding block at row m/k, column 1; row m/k, column 2; …; row m/k, column j; …; row m/k, column m/k; E' denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; D1', D2', …, Di', …, Dm'' denote, respectively, the sub-blocks of the left prediction block of the current decoding block at row 1, column m/k; row 2, column m/k; …; row i, column m/k; …; row m/k, column m/k.

Here p_yt_dec(I+i2, J+j2) denotes the transform-domain predicted value of the luminance value at row I+i2, column J+j2 of the transform-domain information block of the current decoding block; yg_left(I+i2, m-k+j2)' denotes the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the left prediction block of the current decoding block (within its sub-block at row i, column m/k);

yg_up(m-k+i2, J+j2)' denotes the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the upper prediction block of the current decoding block (within its sub-block at row m/k, column j); yg_left_up(m-k+i2, m-k+j2)' denotes the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the upper-left prediction block of the current decoding block (within its sub-block at row m/k, column m/k); yg_right_up(m-k+i2, j2)' denotes the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the upper-right prediction block of the current decoding block (within its sub-block at row m/k, column 1); I=(i-1)*k, J=(j-1)*k, 1≤i≤m/k, 1≤j≤m/k.

The k×k matrix whose entries all equal 128 is called the basic spatial-domain information block; the k×k matrix [yt128(i2, j2)] is called the basic transform-domain information block, obtained as [yt128] = C * [128]k×k * C^T, where yt128(i2, j2) denotes the value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k.
S203: accumulate the transform-domain intra prediction value of the current decoding block with the inverse-quantized data of the current decoding block obtained in S201 to obtain the transform-domain reconstruction value of the current decoding block.

S204: apply the k×k inverse transform to the transform-domain reconstruction value of the current decoding block to obtain the spatial-domain reconstruction value of the current decoding block.

S205: filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
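Steps S203 and S204 can be sketched as follows, again assuming an orthonormal DCT-II for C so that the inverse transform is simply y = C^T * yt * C; the DC-only prediction and all-zero residual in the demo are illustrative values, not data from the patent:

```python
import math

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dct_matrix(k):
    return [[(math.sqrt(1.0 / k) if u == 0 else math.sqrt(2.0 / k))
             * math.cos(math.pi * (2 * x + 1) * u / (2 * k)) for x in range(k)]
            for u in range(k)]

def decode_block(pred_t, resid_t):
    # S203: transform-domain reconstruction = prediction + dequantized residual.
    k = len(pred_t)
    rec_t = [[pred_t[i][j] + resid_t[i][j] for j in range(k)] for i in range(k)]
    # S204: k x k inverse transform, y = C^T * yt * C for an orthonormal C.
    C = dct_matrix(k)
    Ct = [list(r) for r in zip(*C)]
    return matmul(matmul(Ct, rec_t), C)

k = 4
pred = [[0.0] * k for _ in range(k)]
pred[0][0] = 512.0                   # DC-only prediction of an all-128 block
resid = [[0.0] * k for _ in range(k)]
spatial = decode_block(pred, resid)  # every sample reconstructs to 128
```

Note that because prediction happens in the transform domain, only one inverse transform per sub-block is needed, at reconstruction time.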
Embodiment three (system corresponding to encoding method 1)

Fig. 5 is a structural diagram of the large-block luminance transform-domain intra prediction encoding system of embodiment three of the preferred embodiment of the present invention. The system comprises: a spatial-domain luminance information block dividing module, a transform-domain sub-block acquisition module, a transform-domain intra prediction module, an intra prediction error calculation module and an optimal intra prediction mode acquisition module.

The spatial-domain luminance information block dividing module is used to divide the spatial-domain luminance information block (original block) of the current coding block into a block matrix according to the transform-matrix size of the transform module;

The transform-domain sub-block acquisition module is used to transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;

The transform-domain intra prediction module is used to perform transform-domain intra prediction on each transform-domain sub-block of the current coding block;

The intra prediction error calculation module is used to accumulate, under each prediction mode combination, the errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that prediction mode combination;

The optimal intra prediction mode acquisition module is used to perform conventional RDO to obtain the optimal intra prediction mode, completing the transform-domain intra prediction of the current coding block.

Further, "divide the spatial-domain luminance information block (original block) of the current coding block into a block matrix according to the transform-matrix size of the transform module" is specified as follows:
[Y(1,1) Y(1,2) … Y(1,m'); Y(2,1) Y(2,2) … Y(2,m'); …; Y(m',1) Y(m',2) … Y(m',m')] = [y(1,1) y(1,2) … y(1,m); y(2,1) y(2,2) … y(2,m); …; y(m,1) y(m,2) … y(m,m)],

where Y(i,j) = [y(I+1,J+1) y(I+1,J+2) … y(I+1,J+k); y(I+2,J+1) y(I+2,J+2) … y(I+2,J+k); …; y(I+k,J+1) y(I+k,J+2) … y(I+k,J+k)], I=(i-1)*k, J=(j-1)*k, m'=m/k, 1≤i≤m/k, 1≤j≤m/k.

The m×m matrix [y(i1, j1)] is the original matrix, the spatial-domain luminance value matrix of the current coding block (referred to as the spatial-domain information block of the current coding block); m denotes its number of rows and of columns, m≥16;

y(i1, j1) denotes the luminance value at row i1, column j1 of the spatial-domain information block of the current coding block, 1≤i1≤m, 1≤j1≤m;

[Y(i,j)] is the block matrix, of size (m/k)×(m/k);

the k×k matrix [y(I+1,J+1) … y(I+k,J+k)] denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block (referred to as the spatial-domain sub-block at row i, column j of the current coding block) and is denoted Y(i,j); k denotes the number of rows and of columns of the transform matrix in the transform module, k≤16;

I, J and m' are three intermediate variables introduced to simplify the formulas;

y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k.
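The partitioning just described can be sketched in a few lines. The function below is a hypothetical helper (not from the patent) that splits an m×m matrix, stored as nested lists, into the (m/k)×(m/k) block matrix of k×k sub-blocks, using the same I = (i-1)*k, J = (j-1)*k indexing (0-based here):

```python
def split_into_subblocks(y, k):
    # Partition the m x m spatial-domain information block into
    # (m/k) x (m/k) sub-blocks Y(i, j) of size k x k.
    m = len(y)
    assert m % k == 0, "block size must be a multiple of the transform size"
    return [[[row[J:J + k] for row in y[I:I + k]]
             for J in range(0, m, k)]
            for I in range(0, m, k)]

m, k = 16, 4
y = [[r * m + c for c in range(m)] for r in range(m)]  # toy luminance values
Y = split_into_subblocks(y, k)
# Y[i][j][i2][j2] == y[i*k + i2][j*k + j2]
```

This is pure bookkeeping: no samples are changed, the original matrix is merely re-indexed so the k×k transform can be applied per sub-block.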
Further, "transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block" is specified as:

[yt(I+1,J+1) yt(I+1,J+2) … yt(I+1,J+k); yt(I+2,J+1) yt(I+2,J+2) … yt(I+2,J+k); …; yt(I+k,J+1) yt(I+k,J+2) … yt(I+k,J+k)] = C * [y(I+1,J+1) y(I+1,J+2) … y(I+1,J+k); y(I+2,J+1) y(I+2,J+2) … y(I+2,J+k); …; y(I+k,J+1) y(I+k,J+2) … y(I+k,J+k)] * C^T,
I=(i-1)*k, J=(j-1)*k, 1≤i≤m/k, 1≤j≤m/k.

Here C = [c(1,1) c(1,2) … c(1,k); c(2,1) c(2,2) … c(2,k); …; c(k,1) c(k,2) … c(k,k)] denotes the transform matrix; c(ic, jc) is the value at row ic, column jc of the transform matrix, 1≤ic≤k, 1≤jc≤k; C^T denotes the transpose of the transform matrix. The transform matrix may be chosen according to the transform module of the particular encoder, for example the DCT transform matrix in H.264.

Throughout this document, [y(I+1,J+1) … y(I+k,J+k)] denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block, referred to as the spatial-domain sub-block at row i, column j of the current coding block; y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k;

[yt(I+1,J+1) … yt(I+k,J+k)] denotes the sub-block at row i, column j of the transform-domain information block of the current coding block, referred to as the transform-domain sub-block at row i, column j of the current coding block; yt(I+i2, J+j2) denotes the value at row I+i2, column J+j2 of the transform-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k, i.e. the value obtained at that position after y(I+i2, J+j2) has been transformed.
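The per-sub-block transform yt = C * Y * C^T can be exercised directly. A minimal sketch assuming an orthonormal DCT-II for C (any orthonormal choice makes C^T an exact inverse, which the round-trip below demonstrates; an integer codec transform would add scaling factors):

```python
import math

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dct_matrix(k):
    return [[(math.sqrt(1.0 / k) if u == 0 else math.sqrt(2.0 / k))
             * math.cos(math.pi * (2 * x + 1) * u / (2 * k)) for x in range(k)]
            for u in range(k)]

def forward_transform(Y):
    # yt = C * Y * C^T  (the per-sub-block transform of this step).
    C = dct_matrix(len(Y))
    Ct = [list(r) for r in zip(*C)]
    return matmul(matmul(C, Y), Ct)

def inverse_transform(yt):
    # y = C^T * yt * C, exact for orthonormal C.
    C = dct_matrix(len(yt))
    Ct = [list(r) for r in zip(*C)]
    return matmul(matmul(Ct, yt), C)

Y = [[16.0 * i + j for j in range(4)] for i in range(4)]  # toy 4x4 sub-block
yt = forward_transform(Y)
back = inverse_transform(yt)  # round-trips to Y
```

Since this is the same transform the subsequent encoding stages use, the transform-domain prediction adds no extra transform computation, as the abstract notes.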
Further, the transform-domain intra prediction module comprises a number-value initialization module, a search module, an intermediate intra prediction module and a judging module (Fig. 6 is a detailed structural diagram of the transform-domain intra prediction module).

The number-value initialization module is used to initialize number.

number denotes the maximum count of intra prediction optimization passes performed per transform-domain sub-block and is set freely by the encoder; the larger the initial value of number, the greater the computational load and the greater the corresponding performance gain, generally 0≤number≤k*k/2.

The search module is used to find the number largest-magnitude data in the current transform-domain sub-block of the current coding block;

The intermediate intra prediction module is used to perform the intermediate intra prediction on each of the number data with each of the four prediction modes of the first prediction mode group, and then to perform the intermediate intra prediction uniformly, with the four prediction modes of the first prediction mode group, on all remaining data in the current transform-domain sub-block of the current coding block;
The first prediction mode group specifically comprises the following four prediction modes:

Mode one: transform-domain left prediction mode
If Di has been coded:
p_yt(I+i2, J+j2) = yg_left(I+i2, m-k+j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode two: transform-domain upper prediction mode
If Bj has been coded:
p_yt(I+i2, J+j2) = yg_up(m-k+i2, J+j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode three: transform-domain upper-left prediction mode
If A has been coded:
p_yt(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode four: transform-domain upper-right prediction mode
If E has been coded:
p_yt(I+i2, J+j2) = yg_right_up(m-k+i2, j2), 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Here p_yt(I+i2, J+j2) denotes the transform-domain predicted value of yt(I+i2, J+j2);

yg_left(I+i2, m-k+j2) denotes the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the left prediction block of the current coding block (within its sub-block at row i, column m/k);

yg_up(m-k+i2, J+j2) denotes the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the upper prediction block of the current coding block (within its sub-block at row m/k, column j);

yg_left_up(m-k+i2, m-k+j2) denotes the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the upper-left prediction block of the current coding block (within its sub-block at row m/k, column m/k);

yg_right_up(m-k+i2, j2) denotes the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the upper-right prediction block of the current coding block (within its sub-block at row m/k, column 1);

the k×k matrix whose entries all equal 128 is called the basic spatial-domain information block; the k×k matrix [yt128(i2, j2)] is called the basic transform-domain information block, obtained as [yt128] = C * [128]k×k * C^T, where yt128(i2, j2) denotes the value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k.
The "intermediate intra prediction" is implemented as follows: for the current processing object, compute the four errors predicted under mode one through mode four of the first prediction mode group.

"Performing the intermediate intra prediction separately, with each of the four prediction modes of the first prediction mode group, on the number largest-magnitude data of the current transform-domain sub-block" means that each of the number data undergoes one "intermediate intra prediction" in turn; that datum is then the processing object, and the four corresponding errors are obtained for each of the number data.

"Then performing the intermediate intra prediction uniformly, with the four prediction modes of the first prediction mode group, on all remaining data of the current transform-domain sub-block of the current coding block" means that the remaining data undergo one "intermediate intra prediction" together; the remaining data as a whole are then the processing object, and the four errors corresponding to the remaining data are obtained.
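As a sketch of the four-error computation for a single processing object, assuming an absolute-difference error metric (the excerpt does not fix the metric) and hypothetical neighbor values; `None` marks a neighbor that has not been coded, triggering the yt128 fallback:

```python
def intermediate_errors(value, neighbor_values, yt128_value):
    # For one processing object, return the errors of modes one..four.
    # neighbor_values maps a mode label to the co-located transform-domain
    # reconstruction value of that neighbor, or None if it is not coded.
    errors = {}
    for mode in ("left", "up", "left_up", "right_up"):
        pred = neighbor_values.get(mode)
        if pred is None:          # neighbor not coded:
            pred = yt128_value    # fall back to yt128(i2, j2)
        errors[mode] = abs(value - pred)
    return errors

errs = intermediate_errors(
    40.0,
    {"left": 37.0, "up": 45.0, "left_up": None, "right_up": 44.0},
    32.0)
# errs == {'left': 3.0, 'up': 5.0, 'left_up': 8.0, 'right_up': 4.0}
```

The per-object errors computed this way are what S104 later accumulates over all sub-blocks for each prediction mode combination.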
The judging module is used to judge whether every transform-domain sub-block of the current coding block has undergone transform-domain intra prediction; if so, control passes to the intra prediction error calculation module; if not, it returns to the search module.

Embodiment four (system corresponding to decoding method 1)

Fig. 7 is a structural diagram of the large-block luminance transform-domain intra prediction decoding system of embodiment four of the preferred embodiment of the present invention. The system comprises: an entropy decoding module, a reordering module, an inverse quantization module, a decoding-block transform-domain intra prediction value acquisition module, a decoding-block transform-domain reconstruction value acquisition module, a decoding-block spatial-domain reconstruction value acquisition module and a filtering module.

The entropy decoding module is used to entropy-decode the bitstream of the current decoding block;

The reordering module is used to reorder the entropy-decoded bitstream of the current decoding block;

The inverse quantization module is used to inverse-quantize the reordered bitstream of the current decoding block;

The decoding-block transform-domain intra prediction value acquisition module is used to perform, according to the intra prediction mode of the current decoding block, transform-domain intra prediction with the four prediction modes of the second prediction mode group to obtain the transform-domain intra prediction value of the current decoding block;

The decoding-block transform-domain reconstruction value acquisition module is used to accumulate the transform-domain intra prediction value of the current decoding block with the bitstream data of the current decoding block inverse-quantized by the inverse quantization module to obtain the transform-domain reconstruction value of the current decoding block;

The decoding-block spatial-domain reconstruction value acquisition module is used to apply the inverse transform to the transform-domain reconstruction value of the current decoding block to obtain the spatial-domain reconstruction value of the current decoding block;

The filtering module is used to filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.

Further, in the decoding-block transform-domain intra prediction value acquisition module, the second prediction mode group comprises the following four modes:
Mode one: transform-domain left prediction mode
If Di' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left(I+i2, m-k+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode two: transform-domain upper prediction mode
If Bj' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_up(m-k+i2, J+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode three: transform-domain upper-left prediction mode
If A' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Mode four: transform-domain upper-right prediction mode
If E' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_right_up(m-k+i2, j2)', 1≤i2≤k, 1≤j2≤k;
otherwise:
p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k.

Here p_yt_dec(I+i2, J+j2) denotes the transform-domain predicted value of the luminance value at row I+i2, column J+j2 of the transform-domain information block of the current decoding block; yg_left(I+i2, m-k+j2)' denotes the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the left prediction block of the current decoding block (within its sub-block at row i, column m/k);

yg_up(m-k+i2, J+j2)' denotes the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the upper prediction block of the current decoding block (within its sub-block at row m/k, column j); yg_left_up(m-k+i2, m-k+j2)' denotes the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the upper-left prediction block of the current decoding block (within its sub-block at row m/k, column m/k); yg_right_up(m-k+i2, j2)' denotes the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the upper-right prediction block of the current decoding block (within its sub-block at row m/k, column 1); I=(i-1)*k, J=(j-1)*k, 1≤i≤m/k, 1≤j≤m/k.

The k×k matrix whose entries all equal 128 is called the basic spatial-domain information block; the k×k matrix [yt128(i2, j2)] is called the basic transform-domain information block, obtained as [yt128] = C * [128]k×k * C^T, where yt128(i2, j2) denotes the value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k.
Embodiment five (encoding method 2)

Fig. 8 is a flowchart of the large-block luminance transform-domain intra prediction encoding method of embodiment five of the preferred embodiment of the present invention. The encoding method comprises the following steps.

S301: divide the spatial-domain luminance information block (original block) of the current coding block into a block matrix according to the transform-matrix size of the transform module, as follows:
[Y(1,1) Y(1,2) … Y(1,m'); Y(2,1) Y(2,2) … Y(2,m'); …; Y(m',1) Y(m',2) … Y(m',m')] = [y(1,1) y(1,2) … y(1,m); y(2,1) y(2,2) … y(2,m); …; y(m,1) y(m,2) … y(m,m)],

where Y(i,j) = [y(I+1,J+1) y(I+1,J+2) … y(I+1,J+k); y(I+2,J+1) y(I+2,J+2) … y(I+2,J+k); …; y(I+k,J+1) y(I+k,J+2) … y(I+k,J+k)], I=(i-1)*k, J=(j-1)*k, m'=m/k, 1≤i≤m/k, 1≤j≤m/k.

The m×m matrix [y(i1, j1)] is the original matrix, the spatial-domain luminance value matrix of the current coding block (referred to as the spatial-domain information block of the current coding block); m denotes its number of rows and of columns, m≥16;

y(i1, j1) denotes the luminance value at row i1, column j1 of the spatial-domain information block of the current coding block, 1≤i1≤m, 1≤j1≤m;

[Y(i,j)] is the block matrix, of size (m/k)×(m/k);

the k×k matrix [y(I+1,J+1) … y(I+k,J+k)] denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block (referred to as the spatial-domain sub-block at row i, column j of the current coding block) and is denoted Y(i,j); k denotes the number of rows and of columns of the transform matrix in the transform module, k≤16;

I, J and m' are three intermediate variables introduced to simplify the formulas;

y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k.

In effect, step S301 divides the m×m original matrix [y(i1, j1)] into a block matrix composed of (m/k)×(m/k) sub-blocks of size k×k, and then represents the original matrix by that block matrix.
S302: each the spatial domain sub-block to the present encoding piece is carried out conversion, obtains the transform domain sub-block of corresponding present encoding piece.
$$\begin{pmatrix} yt(I+1,J+1) & \cdots & yt(I+1,J+k)\\ \vdots & \ddots & \vdots\\ yt(I+k,J+1) & \cdots & yt(I+k,J+k) \end{pmatrix} = C \begin{pmatrix} y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k) \end{pmatrix} C^{T},$$
where I = (i−1)·k, J = (j−1)·k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
Here $$C=\begin{pmatrix} c(1,1) & c(1,2) & \cdots & c(1,k)\\ c(2,1) & c(2,2) & \cdots & c(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ c(k,1) & c(k,2) & \cdots & c(k,k) \end{pmatrix}$$ denotes the transform matrix, c(i_c, j_c) is the entry at row i_c, column j_c of the transform matrix, 1 ≤ i_c ≤ k, 1 ≤ j_c ≤ k, and C^T denotes the transpose of the transform matrix. The transform matrix may be chosen according to the transform module of the encoder at hand, for example the DCT transform matrix in H.264.
Throughout this document, $$\begin{pmatrix} y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k) \end{pmatrix}$$ denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block, referred to as the row-i, column-j spatial-domain sub-block of the current coding block; y(I+i₂, J+j₂) denotes the luminance value at row I+i₂, column J+j₂ of the spatial-domain information block of the current coding block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
$$\begin{pmatrix} yt(I+1,J+1) & \cdots & yt(I+1,J+k)\\ \vdots & \ddots & \vdots\\ yt(I+k,J+1) & \cdots & yt(I+k,J+k) \end{pmatrix}$$ denotes the sub-block at row i, column j of the transform-domain information block of the current coding block, referred to as the row-i, column-j transform-domain sub-block of the current coding block; yt(I+i₂, J+j₂) denotes the luminance value at row I+i₂, column J+j₂ of the transform-domain information block of the current coding block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k; yt(I+i₂, J+j₂) is the value at row I+i₂, column J+j₂ of the transform-domain information block of the current coding block obtained by transforming y(I+i₂, J+j₂).
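The per-sub-block transform YT = C·Y·Cᵀ of step S302 can be sketched as follows, using an orthonormal DCT-II matrix as a hypothetical stand-in for the encoder's transform matrix (the patent only requires that C match the encoder's transform module, e.g. the H.264 DCT):

```python
import numpy as np

def dct_matrix(k):
    """Orthonormal k x k DCT-II matrix, a stand-in transform matrix C."""
    C = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            scale = np.sqrt(1.0 / k) if i == 0 else np.sqrt(2.0 / k)
            C[i, j] = scale * np.cos(np.pi * (2 * j + 1) * i / (2 * k))
    return C

def forward_transform(Y, C):
    """YT = C * Y * C^T for one k x k spatial-domain sub-block (step S302)."""
    return C @ Y @ C.T

k = 4
C = dct_matrix(k)
Y = np.full((k, k), 128.0)        # a constant luminance sub-block
YT = forward_transform(Y, C)
# For a constant block all energy gathers in the DC coefficient:
print(round(YT[0, 0]))  # 512
```

This concentration of energy in few coefficients is exactly the "high energy gathering" property the method exploits.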
S303: quantize the transform-domain information block of the current coding block.
S304: perform transform-domain intra-frame prediction on each transform-domain sub-block of the current coding block. This specifically comprises the following steps; Fig. 9 is the flow chart of step S304 of the flow chart of Fig. 8:
S3040: initialize number.
number denotes the maximum count of intra-frame prediction optimization passes performed per transform-domain sub-block and is set freely by the encoder; the larger the initial value of number, the larger the computational cost and the greater the corresponding performance gain; typically 0 ≤ number ≤ k·k/2.
S3041: find the number data with the largest values in the current transform-domain sub-block of the current coding block.
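Step S3041 can be sketched as below; we assume "largest values" means largest absolute magnitude, which is one plausible reading of the original:

```python
import numpy as np

def largest_coefficients(yt_subblock, number):
    """Return the (row, col) positions of the `number` largest-magnitude
    coefficients of one transform-domain sub-block (step S3041)."""
    flat = np.abs(yt_subblock).ravel()
    idx = np.argsort(flat)[::-1][:number]   # descending by magnitude
    return [tuple(int(v) for v in np.unravel_index(i, yt_subblock.shape))
            for i in idx]

yt = np.array([[512., 3., -7., 1.],
               [  2., 0.,  0., 0.],
               [ -9., 0.,  0., 0.],
               [  1., 0.,  0., 0.]])
print(largest_coefficients(yt, 3))  # [(0, 0), (2, 0), (0, 2)]
```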
S3042: for each of the number data, perform intermediate intra-frame prediction separately under each of the four prediction modes of the third prediction mode group; then, for all remaining data in the current transform-domain sub-block of the current coding block, perform intermediate intra-frame prediction uniformly under the four prediction modes of the third prediction mode group.
The four prediction modes of the third prediction mode group are as follows:
Mode one: transform-domain left-side prediction mode
If Di has been encoded,
p_yt(I+i₂, J+j₂) = yg_left(I+i₂, m−k+j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode two: transform-domain upper-side prediction mode
If Bj has been encoded,
p_yt(I+i₂, J+j₂) = yg_up(m−k+i₂, J+j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode three: transform-domain upper-left prediction mode
If A has been encoded,
p_yt(I+i₂, J+j₂) = yg_left_up(m−k+i₂, m−k+j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode four: transform-domain upper-right prediction mode
If E has been encoded,
p_yt(I+i₂, J+j₂) = yg_right_up(m−k+i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Here p_yt(I+i₂, J+j₂) denotes the transform-domain predicted value of yt(I+i₂, J+j₂);
yg_left(I+i₂, m−k+j₂): the luminance reconstruction value at row I+i₂, column m−k+j₂ of the transform-domain information block of the row-i, column-m/k sub-block of the left-side prediction block of the current coding block;
yg_up(m−k+i₂, J+j₂): the luminance reconstruction value at row m−k+i₂, column J+j₂ of the transform-domain information block of the row-m/k, column-j sub-block of the upper-side prediction block of the current coding block;
yg_left_up(m−k+i₂, m−k+j₂): the luminance reconstruction value at row m−k+i₂, column m−k+j₂ of the transform-domain information block of the row-m/k, column-m/k sub-block of the upper-left prediction block of the current coding block;
yg_right_up(m−k+i₂, j₂): the luminance reconstruction value at row m−k+i₂, column j₂ of the transform-domain information block of the row-m/k, column-1 sub-block of the upper-right prediction block of the current coding block;
$$\begin{pmatrix} 128 & 128 & \cdots & 128\\ 128 & 128 & \cdots & 128\\ \vdots & \vdots & \ddots & \vdots\\ 128 & 128 & \cdots & 128 \end{pmatrix}_{k\times k}$$ denotes the k×k matrix whose entries all equal 128, called the basic spatial-domain information block;
$$\begin{pmatrix} yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k) \end{pmatrix}$$ denotes the basic transform-domain information block, with
$$\begin{pmatrix} yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k) \end{pmatrix} = C \begin{pmatrix} 128 & \cdots & 128\\ \vdots & \ddots & \vdots\\ 128 & \cdots & 128 \end{pmatrix}_{k\times k} C^{T};$$ yt_128(i₂, j₂) denotes the luminance value at row i₂, column j₂ of the basic transform-domain information block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
$$\begin{pmatrix} yg_{128}(1,1) & \cdots & yg_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yg_{128}(k,1) & \cdots & yg_{128}(k,k) \end{pmatrix}$$ denotes the reconstructed block of the basic transform-domain information block;
that is, the reconstructed block of the basic transform-domain information block is obtained by applying the quantization operation to the basic transform-domain information block; yg_128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantizing yt_128(i, j).
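The construction of the basic blocks can be sketched as follows; the uniform quantize/dequantize round trip and the step size `qstep` are hypothetical stand-ins for the codec's actual quantization module, and the DCT matrix stands in for C:

```python
import numpy as np

def basic_blocks(C, quantize):
    """Build the basic spatial-domain block (all 128s), the basic
    transform-domain block yt128 = C * 128s * C^T, and the reconstructed
    block yg128 obtained by quantizing yt128."""
    k = C.shape[0]
    base = np.full((k, k), 128.0)   # basic spatial-domain information block
    yt128 = C @ base @ C.T          # basic transform-domain information block
    yg128 = quantize(yt128)         # its reconstructed block
    return base, yt128, yg128

qstep = 8.0
quantize = lambda x: np.round(x / qstep) * qstep  # toy uniform quantizer

k = 4  # orthonormal DCT-II as a stand-in transform matrix
C = np.array([[np.sqrt((1 if i == 0 else 2) / k)
               * np.cos(np.pi * (2 * j + 1) * i / (2 * k))
               for j in range(k)] for i in range(k)])
base, yt128, yg128 = basic_blocks(C, quantize)
print(round(yt128[0, 0]))  # 512  (all energy in the DC coefficient)
print(round(yg128[0, 0]))  # 512
```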
The specific implementation of the "intermediate intra-frame prediction" is: compute the four errors obtained when the processing object is predicted under modes one to four of the third prediction mode group.
"For each of the number data with the largest values in the current transform-domain sub-block, perform intermediate intra-frame prediction separately under each of the four prediction modes of the third prediction mode group" means that each of the number data undergoes one "intermediate intra-frame prediction" in turn; each datum is then the processing object of one "intermediate intra-frame prediction", and the four corresponding errors are computed for each of the number data.
"Then, for all remaining data in the current transform-domain sub-block of the current coding block, perform intermediate intra-frame prediction uniformly under the four prediction modes of the third prediction mode group" means that the remaining data undergo one "intermediate intra-frame prediction" together; the remaining data as a whole are then the processing object, and the four errors corresponding to the remaining data are computed.
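The neighbour-or-fallback dispatch shared by the four prediction modes can be sketched as below; the boolean-style argument condenses the Di/Bj/A/E availability bookkeeping, and all names are ours:

```python
import numpy as np

def predict_subblock(neighbor_subblock, yg128):
    """Transform-domain prediction for one k x k sub-block under any mode of
    the third prediction mode group: if the chosen neighbour (left, upper,
    upper-left or upper-right) has been encoded, pass in its co-located
    reconstructed transform-domain sub-block; otherwise pass None and the
    prediction falls back to the reconstructed basic block yg128."""
    if neighbor_subblock is not None:
        return neighbor_subblock.copy()
    return yg128.copy()

k = 4
yg128 = np.zeros((k, k))
left = np.full((k, k), 7.0)  # pretend reconstructed left-neighbour sub-block
print(np.array_equal(predict_subblock(left, yg128), left))    # True
print(np.array_equal(predict_subblock(None, yg128), yg128))   # True
```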
S3043: judge whether every transform-domain sub-block of the current coding block has undergone transform-domain intra-frame prediction; if so, proceed to S305; if not, return to S3041 and perform transform-domain intra-frame prediction on the next transform-domain sub-block of the current coding block.
S305: under each prediction mode combination, accumulate the errors of all transform-domain sub-blocks of the current coding block as the intra-frame prediction error of the current coding block under that prediction mode combination.
Each prediction mode combination comprises "one set of prediction modes for the number data" and "one prediction mode for the remaining data".
In "one set of prediction modes for the number data", each of the number data may adopt any one of the four modes of the third prediction mode group, and the modes of the individual data may be identical or different; the remaining data may likewise adopt any one of the four modes of the third prediction mode group.
For example, suppose number = 3, the number data are a, b and c, and the remaining data are d. One possible prediction mode combination is then: "a under mode one + b under mode three + c under mode four" + d under mode two. Each of a, b and c may adopt any one of the four modes of the third prediction mode group, and d may also adopt any one of the four modes.
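The space of prediction mode combinations enumerated in S305 can be sketched as below: one independent mode per selected datum plus one shared mode for the remainder, giving 4^(number+1) combinations.

```python
from itertools import product

def mode_combinations(number, modes=("one", "two", "three", "four")):
    """Enumerate every prediction mode combination: a tuple of `number`
    per-datum modes plus one mode shared by all remaining data."""
    return list(product(product(modes, repeat=number), modes))

combos = mode_combinations(3)       # number = 3 as in the example above
print(len(combos))                  # 256 = 4**3 per-datum choices x 4
print(combos[0])                    # (('one', 'one', 'one'), 'one')
```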
S306: perform conventional RDO (Rate Distortion Optimisation) on the current coding block to obtain the optimal intra-frame prediction mode, completing the transform-domain intra-frame prediction of the current coding block.
Embodiment six (decoding method 2)
Figure 10 is the flow chart of a luminance transform-domain intra-frame prediction decoding method for large-size blocks according to preferred embodiment six of the present invention (embodiment six builds on the luminance transform-domain intra-frame predictive encoding method for large-size blocks of embodiment five and proposes the corresponding luminance transform-domain intra-frame prediction decoding method for large-size blocks). The method comprises the following steps:
S401: perform entropy decoding on the code stream of the current decoding block and reorder the result.
S402: according to the intra-frame prediction mode of the current decoding block, perform transform-domain intra-frame prediction using the four prediction modes of the fourth prediction mode group to obtain the transform-domain intra prediction value of the current decoding block.
The fourth prediction mode group specifically comprises the following four prediction modes:
Mode one: transform-domain left-side prediction mode
If Di' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_left(I+i₂, m−k+j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode two: transform-domain upper-side prediction mode
If Bj' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_up(m−k+i₂, J+j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode three: transform-domain upper-left prediction mode
If A' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_left_up(m−k+i₂, m−k+j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode four: transform-domain upper-right prediction mode
If E' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_right_up(m−k+i₂, j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Here p_yt_dec(I+i₂, J+j₂) denotes the transform-domain predicted value of the luminance value at row I+i₂, column J+j₂ of the transform-domain information block of the current decoding block; yg_left(I+i₂, m−k+j₂)': the luminance reconstruction value at row I+i₂, column m−k+j₂ of the transform-domain information block of the row-i, column-m/k sub-block of the left-side prediction block of the current decoding block;
yg_up(m−k+i₂, J+j₂)': the luminance reconstruction value at row m−k+i₂, column J+j₂ of the transform-domain information block of the row-m/k, column-j sub-block of the upper-side prediction block of the current decoding block; yg_left_up(m−k+i₂, m−k+j₂)': the luminance reconstruction value at row m−k+i₂, column m−k+j₂ of the transform-domain information block of the row-m/k, column-m/k sub-block of the upper-left prediction block of the current decoding block; yg_right_up(m−k+i₂, j₂)': the luminance reconstruction value at row m−k+i₂, column j₂ of the transform-domain information block of the row-m/k, column-1 sub-block of the upper-right prediction block of the current decoding block; I = (i−1)·k, J = (j−1)·k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
$$\begin{pmatrix} 128 & 128 & \cdots & 128\\ 128 & 128 & \cdots & 128\\ \vdots & \vdots & \ddots & \vdots\\ 128 & 128 & \cdots & 128 \end{pmatrix}_{k\times k}$$ denotes the k×k matrix whose entries all equal 128, called the basic spatial-domain information block;
$$\begin{pmatrix} yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k) \end{pmatrix}$$ denotes the basic transform-domain information block, with
$$\begin{pmatrix} yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k) \end{pmatrix} = C \begin{pmatrix} 128 & \cdots & 128\\ \vdots & \ddots & \vdots\\ 128 & \cdots & 128 \end{pmatrix}_{k\times k} C^{T};$$ yt_128(i₂, j₂) denotes the luminance value at row i₂, column j₂ of the basic transform-domain information block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
$$\begin{pmatrix} yg_{128}(1,1) & \cdots & yg_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yg_{128}(k,1) & \cdots & yg_{128}(k,k) \end{pmatrix}$$ denotes the reconstructed block of the basic transform-domain information block;
that is, the reconstructed block of the basic transform-domain information block is obtained by applying the quantization operation to the basic transform-domain information block; yg_128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantizing yt_128(i, j).
S403: accumulate the transform-domain intra prediction value of the current decoding block with the reordered data of the current decoding block obtained in S401 to obtain the transform-domain reconstruction value of the current decoding block.
S404: perform inverse quantization on the transform-domain reconstruction value of the current decoding block and then apply the k×k inverse transform to obtain the spatial-domain reconstruction value of the current decoding block.
S405: filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
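Steps S403 and S404 for a single sub-block can be sketched as below. The dequantizer is a hypothetical stand-in, and the demo skips quantization so the round trip is exact; for an orthonormal C the inverse transform is Y = Cᵀ·YT·C:

```python
import numpy as np

def decode_subblock(residual_t, prediction_t, C, dequantize):
    """Sketch of steps S403-S404 for one k x k sub-block."""
    yt_rec = residual_t + prediction_t   # S403: transform-domain reconstruction
    yt_deq = dequantize(yt_rec)          # S404: inverse quantization
    return C.T @ yt_deq @ C              # S404: inverse k x k transform

k = 4  # orthonormal DCT-II as a stand-in transform matrix
C = np.array([[np.sqrt((1 if i == 0 else 2) / k)
               * np.cos(np.pi * (2 * j + 1) * i / (2 * k))
               for j in range(k)] for i in range(k)])
identity = lambda x: x                   # lossless round trip for the demo
y = np.full((k, k), 128.0)
yt = C @ y @ C.T                         # transform coefficients the encoder would send
y_rec = decode_subblock(yt, np.zeros((k, k)), C, identity)
print(np.allclose(y_rec, y))  # True
```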
Embodiment seven (system corresponding to encoding method 2)
Figure 11 is the structural diagram of a luminance transform-domain intra-frame predictive encoding system for large-size blocks according to preferred embodiment seven of the present invention. The system comprises: a spatial-domain luminance information block partitioning module, a transform-domain sub-block acquisition module, a quantization module, a second transform-domain intra-frame prediction module, an intra-frame prediction error calculation module and an optimal intra-frame prediction mode acquisition module.
The spatial-domain luminance information block partitioning module is used to partition the spatial-domain luminance information block (original block) of the current coding block into a block matrix according to the size of the transform matrix of the transform module.
The transform-domain sub-block acquisition module is used to transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block.
The quantization module is used to quantize the transform-domain information block of the current coding block.
The second transform-domain intra-frame prediction module is used to perform transform-domain intra-frame prediction on each transform-domain sub-block of the current coding block.
The intra-frame prediction error calculation module is used to accumulate, under each prediction mode combination, the errors of all transform-domain sub-blocks of the current coding block as the intra-frame prediction error of the current coding block under that prediction mode combination.
The optimal intra-frame prediction mode acquisition module performs conventional RDO (Rate Distortion Optimisation) on the current coding block to obtain the optimal intra-frame prediction mode, completing the transform-domain intra-frame prediction of the current coding block.
Further, partitioning the spatial-domain luminance information block (original block) of the current coding block into a block matrix according to the size of the transform matrix of the transform module is specifically as follows:
$$\begin{pmatrix} Y(1,1) & Y(1,2) & \cdots & Y(1,m')\\ Y(2,1) & Y(2,2) & \cdots & Y(2,m')\\ \vdots & \vdots & \ddots & \vdots\\ Y(m',1) & Y(m',2) & \cdots & Y(m',m') \end{pmatrix} = \begin{pmatrix} y(1,1) & y(1,2) & \cdots & y(1,m)\\ y(2,1) & y(2,2) & \cdots & y(2,m)\\ \vdots & \vdots & \ddots & \vdots\\ y(m,1) & y(m,2) & \cdots & y(m,m) \end{pmatrix},$$
where $$Y(i,j)=\begin{pmatrix} y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k) \end{pmatrix},$$ I = (i−1)·k, J = (j−1)·k, m' = m/k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
$$\begin{pmatrix} y(1,1) & \cdots & y(1,m)\\ \vdots & \ddots & \vdots\\ y(m,1) & \cdots & y(m,m) \end{pmatrix}$$ is the original matrix, the spatial-domain luminance value matrix of the current coding block (referred to as the spatial-domain information block of the current coding block); it is an m×m matrix, where m denotes the number of rows and columns of the spatial-domain luminance value matrix of the current coding block, m ≥ 16;
y(i₁, j₁) denotes the luminance value at row i₁, column j₁ of the spatial-domain information block of the current coding block, 1 ≤ i₁ ≤ m, 1 ≤ j₁ ≤ m;
$$\begin{pmatrix} Y(1,1) & \cdots & Y(1,m')\\ \vdots & \ddots & \vdots\\ Y(m',1) & \cdots & Y(m',m') \end{pmatrix}$$ is the block matrix; it is an (m/k)×(m/k) block matrix;
Y(i, j) denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block (referred to as the row-i, column-j spatial-domain sub-block of the current coding block); it is a k×k matrix, where k denotes the number of rows and columns of the transform matrix of the transform module, k ≤ 16;
I, J and m' are three intermediate variables introduced to simplify the formulas;
y(I+i₂, J+j₂) denotes the luminance value at row I+i₂, column J+j₂ of the spatial-domain information block of the current coding block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Further, transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block is specifically:
$$\begin{pmatrix} yt(I+1,J+1) & \cdots & yt(I+1,J+k)\\ \vdots & \ddots & \vdots\\ yt(I+k,J+1) & \cdots & yt(I+k,J+k) \end{pmatrix} = C \begin{pmatrix} y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k) \end{pmatrix} C^{T},$$
where I = (i−1)·k, J = (j−1)·k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
Here $$C=\begin{pmatrix} c(1,1) & c(1,2) & \cdots & c(1,k)\\ c(2,1) & c(2,2) & \cdots & c(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ c(k,1) & c(k,2) & \cdots & c(k,k) \end{pmatrix}$$ denotes the transform matrix, c(i_c, j_c) is the entry at row i_c, column j_c of the transform matrix, 1 ≤ i_c ≤ k, 1 ≤ j_c ≤ k, and C^T denotes the transpose of the transform matrix. The transform matrix may be chosen according to the transform module of the encoder at hand, for example the DCT transform matrix in H.264.
Throughout this document, $$\begin{pmatrix} y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k) \end{pmatrix}$$ denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block, referred to as the row-i, column-j spatial-domain sub-block of the current coding block; y(I+i₂, J+j₂) denotes the luminance value at row I+i₂, column J+j₂ of the spatial-domain information block of the current coding block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
$$\begin{pmatrix} yt(I+1,J+1) & \cdots & yt(I+1,J+k)\\ \vdots & \ddots & \vdots\\ yt(I+k,J+1) & \cdots & yt(I+k,J+k) \end{pmatrix}$$ denotes the sub-block at row i, column j of the transform-domain information block of the current coding block, referred to as the row-i, column-j transform-domain sub-block of the current coding block; yt(I+i₂, J+j₂) denotes the luminance value at row I+i₂, column J+j₂ of the transform-domain information block of the current coding block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k; yt(I+i₂, J+j₂) is the value at row I+i₂, column J+j₂ of the transform-domain information block of the current coding block obtained by transforming y(I+i₂, J+j₂).
Further, the second transform-domain intra-frame prediction module further comprises: a number initialization module, a search module, a second intermediate intra-frame prediction module and a judgment module (Figure 12 is the detailed structural diagram of the second transform-domain intra-frame prediction module of the encoding system of Figure 11). The number initialization module is used to initialize number.
number denotes the maximum count of intra-frame prediction optimization passes performed per transform-domain sub-block and is set freely by the encoder; the larger the initial value of number, the larger the computational cost and the greater the corresponding performance gain; typically 0 ≤ number ≤ k·k/2.
The search module is used to find the number data with the largest values in the current transform-domain sub-block of the current coding block.
The second intermediate intra-frame prediction module is used to perform intermediate intra-frame prediction separately on each of the number data under each of the four prediction modes of the third prediction mode group, and then to perform intermediate intra-frame prediction uniformly, under the four prediction modes of the third prediction mode group, on all remaining data in the current transform-domain sub-block of the current coding block.
The judgment module is used to judge whether every transform-domain sub-block of the current coding block has undergone transform-domain intra-frame prediction; if so, control passes to the intra-frame prediction error calculation module; if not, control returns to the search module.
Further, the four prediction modes of the third prediction mode group are as follows:
Mode one: transform-domain left-side prediction mode
If Di has been encoded,
p_yt(I+i₂, J+j₂) = yg_left(I+i₂, m−k+j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode two: transform-domain upper-side prediction mode
If Bj has been encoded,
p_yt(I+i₂, J+j₂) = yg_up(m−k+i₂, J+j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode three: transform-domain upper-left prediction mode
If A has been encoded,
p_yt(I+i₂, J+j₂) = yg_left_up(m−k+i₂, m−k+j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode four: transform-domain upper-right prediction mode
If E has been encoded,
p_yt(I+i₂, J+j₂) = yg_right_up(m−k+i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Here p_yt(I+i₂, J+j₂) denotes the transform-domain predicted value of yt(I+i₂, J+j₂);
yg_left(I+i₂, m−k+j₂): the luminance reconstruction value at row I+i₂, column m−k+j₂ of the transform-domain information block of the row-i, column-m/k sub-block of the left-side prediction block of the current coding block;
yg_up(m−k+i₂, J+j₂): the luminance reconstruction value at row m−k+i₂, column J+j₂ of the transform-domain information block of the row-m/k, column-j sub-block of the upper-side prediction block of the current coding block;
yg_left_up(m−k+i₂, m−k+j₂): the luminance reconstruction value at row m−k+i₂, column m−k+j₂ of the transform-domain information block of the row-m/k, column-m/k sub-block of the upper-left prediction block of the current coding block;
yg_right_up(m−k+i₂, j₂): the luminance reconstruction value at row m−k+i₂, column j₂ of the transform-domain information block of the row-m/k, column-1 sub-block of the upper-right prediction block of the current coding block;
$$\begin{pmatrix} 128 & 128 & \cdots & 128\\ 128 & 128 & \cdots & 128\\ \vdots & \vdots & \ddots & \vdots\\ 128 & 128 & \cdots & 128 \end{pmatrix}_{k\times k}$$ denotes the k×k matrix whose entries all equal 128, called the basic spatial-domain information block;
$$\begin{pmatrix} yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k) \end{pmatrix}$$ denotes the basic transform-domain information block, with
$$\begin{pmatrix} yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k) \end{pmatrix} = C \begin{pmatrix} 128 & \cdots & 128\\ \vdots & \ddots & \vdots\\ 128 & \cdots & 128 \end{pmatrix}_{k\times k} C^{T};$$ yt_128(i₂, j₂) denotes the luminance value at row i₂, column j₂ of the basic transform-domain information block, 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
$$\begin{pmatrix} yg_{128}(1,1) & \cdots & yg_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yg_{128}(k,1) & \cdots & yg_{128}(k,k) \end{pmatrix}$$ denotes the reconstructed block of the basic transform-domain information block;
that is, the reconstructed block of the basic transform-domain information block is obtained by applying the quantization operation to the basic transform-domain information block; yg_128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantizing yt_128(i, j).
The specific implementation of the "intermediate intra-frame prediction" is: compute the four errors obtained when the processing object is predicted under modes one to four.
"For each of the number data with the largest values in the current transform-domain sub-block, perform intermediate intra-frame prediction separately under each of the four prediction modes of the third prediction mode group" means that each of the number data undergoes one "intermediate intra-frame prediction" in turn; each datum is then the processing object of one "intermediate intra-frame prediction", and the four corresponding errors are computed for each of the number data.
"Then, for all remaining data in the current transform-domain sub-block of the current coding block, perform intermediate intra-frame prediction uniformly under the four prediction modes of the third prediction mode group" means that the remaining data undergo one "intermediate intra-frame prediction" together; the remaining data as a whole are then the processing object, and the four errors corresponding to the remaining data are computed.
Further, in the "intra-frame prediction error calculation module", each prediction mode combination comprises "one set of prediction modes for the number data" and "one prediction mode for the remaining data".
In "one set of prediction modes for the number data", each of the number data may adopt any one of the four modes of the third prediction mode group, and the modes of the individual data may be identical or different; the remaining data may likewise adopt any one of the four modes of the third prediction mode group.
Embodiment eight (system corresponding to decoding method 2)
Figure 13 is the structural diagram of a luminance transform-domain intra-frame prediction decoding system for large-size blocks according to preferred embodiment eight of the present invention. The system comprises: an entropy decoding module, a reordering module, a second decoding-block transform-domain intra prediction value acquisition module, a second decoding-block transform-domain reconstruction value acquisition module, a second decoding-block spatial-domain reconstruction value acquisition module and a filtering module.
The entropy decoding module is used to perform entropy decoding on the code stream of the current decoding block.
The reordering module is used to reorder the entropy-decoded code stream of the current decoding block.
The second decoding-block transform-domain intra prediction value acquisition module is used to perform transform-domain intra-frame prediction, according to the intra-frame prediction mode of the current decoding block, using the four prediction modes of the fourth prediction mode group, obtaining the transform-domain intra prediction value of the current decoding block.
The second decoding-block transform-domain reconstruction value acquisition module is used to accumulate the transform-domain intra prediction value of the current decoding block with the reordered data of the current decoding block obtained by the reordering module, obtaining the transform-domain reconstruction value of the current decoding block.
The second decoding-block spatial-domain reconstruction value acquisition module is used to perform inverse quantization on the transform-domain reconstruction value of the current decoding block and then apply the k×k inverse transform, obtaining the spatial-domain reconstruction value of the current decoding block.
The filtering module is used to filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
Further, the fourth prediction mode group specifically comprises the following four prediction modes:
Mode one: transform-domain left-side prediction mode
If Di' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_left(I+i₂, m−k+j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode two: transform-domain upper-side prediction mode
If Bj' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_up(m−k+i₂, J+j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode three: transform-domain upper-left prediction mode
If A' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_left_up(m−k+i₂, m−k+j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Mode four: transform-domain upper-right prediction mode
If E' has been decoded,
p_yt_dec(I+i₂, J+j₂) = yg_right_up(m−k+i₂, j₂)', 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k;
otherwise, p_yt_dec(I+i₂, J+j₂) = yg_128(i₂, j₂), 1 ≤ i₂ ≤ k, 1 ≤ j₂ ≤ k.
Here p_yt_dec(I+i₂, J+j₂) denotes the transform-domain predicted value of the luminance value at row I+i₂, column J+j₂ of the transform-domain information block of the current decoding block; yg_left(I+i₂, m−k+j₂)': the luminance reconstruction value at row I+i₂, column m−k+j₂ of the transform-domain information block of the row-i, column-m/k sub-block of the left-side prediction block of the current decoding block;
yg_up(m−k+i₂, J+j₂)': the luminance reconstruction value at row m−k+i₂, column J+j₂ of the transform-domain information block of the row-m/k, column-j sub-block of the upper-side prediction block of the current decoding block; yg_left_up(m−k+i₂, m−k+j₂)': the luminance reconstruction value at row m−k+i₂, column m−k+j₂ of the transform-domain information block of the row-m/k, column-m/k sub-block of the upper-left prediction block of the current decoding block; yg_right_up(m−k+i₂, j₂)': the luminance reconstruction value at row m−k+i₂, column j₂ of the transform-domain information block of the row-m/k, column-1 sub-block of the upper-right prediction block of the current decoding block; I = (i−1)·k, J = (j−1)·k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
\[
\begin{pmatrix}128 & 128 & \cdots & 128\\ 128 & 128 & \cdots & 128\\ \vdots & \vdots & \ddots & \vdots\\ 128 & 128 & \cdots & 128\end{pmatrix}_{k\times k}
\]
denotes the k×k matrix whose (k)×(k) entries all equal 128, called the basic spatial-domain information block;
\[
\begin{pmatrix}yt_{128}(1,1) & yt_{128}(1,2) & \cdots & yt_{128}(1,k)\\ yt_{128}(2,1) & yt_{128}(2,2) & \cdots & yt_{128}(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ yt_{128}(k,1) & yt_{128}(k,2) & \cdots & yt_{128}(k,k)\end{pmatrix}
\]
denotes the basic transform-domain information block;
\[
\begin{pmatrix}yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k)\end{pmatrix} = C\begin{pmatrix}128 & \cdots & 128\\ \vdots & \ddots & \vdots\\ 128 & \cdots & 128\end{pmatrix}_{k\times k}C^{T};
\]
yt128(i2, j2) denotes the luma value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k;
\[
\begin{pmatrix}yg_{128}(1,1) & \cdots & yg_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yg_{128}(k,1) & \cdots & yg_{128}(k,k)\end{pmatrix}
\]
denotes the reconstructed block of the basic transform-domain information block;
(Equation rendered as an image in the source: the reconstructed block of the basic transform-domain information block is obtained by quantizing the basic transform-domain information block.)
That is, performing the quantization operation on the basic transform-domain information block yields the reconstructed block of the basic transform-domain information block; yg128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantizing yt128(i, j).
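The basic blocks above serve as fallback predictors when a neighbouring block is unavailable. The following is a minimal sketch of how they could be computed, assuming an orthonormal DCT-II matrix for C and a uniform quantizer with step q; both choices, and k, are illustrative assumptions, not the patent's exact definitions.

```python
import numpy as np

k = 4

# Basic spatial-domain information block: a k x k matrix of 128s.
basic_spatial = np.full((k, k), 128.0)

# Orthonormal DCT-II matrix as an illustrative stand-in for the transform matrix C.
n = np.arange(k)
C = np.sqrt(2.0 / k) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
C[0, :] = 1.0 / np.sqrt(k)

# Basic transform-domain information block: yt128 = C * basic * C^T.
# For a constant block only the DC coefficient is nonzero (here 128 * k).
yt128 = C @ basic_spatial @ C.T

# Reconstructed basic block yg128: quantize then dequantize yt128
# (uniform step q, purely illustrative).
q = 8.0
yg128 = np.round(yt128 / q) * q
```

Because the input is constant, the transform concentrates all its energy in the single DC coefficient, which survives quantization unchanged when the step divides it evenly.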
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiment methods can be completed by a program instructing the related hardware; the program can be stored in a computer-readable storage medium, and the storage medium can be a ROM, RAM, magnetic disk, optical disc, etc.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention; any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (35)

1. A luma transform-domain intra-prediction encoding method for a large-size block, characterized in that the method comprises the following steps:
According to the transform-matrix size of the transform module, partitioning the spatial-domain luma information block of the current coding block into a block matrix;
Transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;
Performing transform-domain intra prediction on each transform-domain sub-block of the current coding block;
Under each prediction-mode combination, accumulating the errors of all transform-domain sub-blocks of the current coding block as the intra-prediction error of the current coding block under that prediction-mode combination;
Performing conventional RDO on the current coding block to obtain the optimal intra-prediction mode, completing the transform-domain intra prediction of the current coding block.
2. The luma transform-domain intra-prediction encoding method for a large-size block as claimed in claim 1, characterized in that "according to the transform-matrix size of the transform module, partitioning the spatial-domain luma information block of the current coding block into a block matrix" is specifically:
\[
\begin{pmatrix}Y(1,1) & Y(1,2) & \cdots & Y(1,m')\\ Y(2,1) & Y(2,2) & \cdots & Y(2,m')\\ \vdots & \vdots & \ddots & \vdots\\ Y(m',1) & Y(m',2) & \cdots & Y(m',m')\end{pmatrix} = \begin{pmatrix}y(1,1) & y(1,2) & \cdots & y(1,m)\\ y(2,1) & y(2,2) & \cdots & y(2,m)\\ \vdots & \vdots & \ddots & \vdots\\ y(m,1) & y(m,2) & \cdots & y(m,m)\end{pmatrix}
\]
wherein
\[
Y(i,j) = \begin{pmatrix}y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k)\end{pmatrix},
\]
I=(i-1)*k, J=(j-1)*k, m'=m/k, 1≤i≤m/k, 1≤j≤m/k;
the matrix on the right-hand side is the original matrix, denoting the spatial-domain luma value matrix of the current coding block; it is an (m)×(m) matrix, where m denotes the number of rows and columns of the spatial-domain luma value matrix of the current coding block, m≥16;
y(i1, j1) denotes the luma value at row i1, column j1 of the spatial-domain information block of the current coding block, 1≤i1≤m, 1≤j1≤m;
the matrix on the left-hand side is the block matrix; it is an (m/k)×(m/k) block matrix;
Y(i, j) denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block; it is a (k)×(k) matrix, where k denotes the number of rows and columns of the transform matrix of the transform module, k≤16;
I, J, m' are three intermediate variable symbols;
y(I+i2, J+j2) denotes the luma value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k.
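The partitioning in claim 2 can be sketched with numpy as follows; m, k, the sample values, and the variable names are illustrative assumptions (the claims use 1-based indices with I=(i-1)*k, J=(j-1)*k, whereas this sketch is 0-based).

```python
import numpy as np

# Partition an m x m luma block into an (m/k) x (m/k) block matrix of
# k x k sub-blocks, matching the transform size of the transform module.
m, k = 16, 4
y = np.arange(m * m, dtype=np.float64).reshape(m, m)  # stand-in luma samples

# Y[i, j] is the k x k sub-block at block-row i, block-column j:
# reshape splits each axis into (block index, offset), transpose groups
# the two block indices first.
Y = y.reshape(m // k, k, m // k, k).transpose(0, 2, 1, 3)

# Sub-block (i, j) covers samples y[i*k:(i+1)*k, j*k:(j+1)*k].
```

The reshape/transpose trick creates a view-like 4-D array without copying sub-blocks one by one.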
3. The luma transform-domain intra-prediction encoding method for a large-size block as claimed in claim 1, characterized in that "transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block" is specifically:
\[
\begin{pmatrix}yt(I+1,J+1) & \cdots & yt(I+1,J+k)\\ \vdots & \ddots & \vdots\\ yt(I+k,J+1) & \cdots & yt(I+k,J+k)\end{pmatrix} = C\begin{pmatrix}y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k)\end{pmatrix}C^{T},
\]
I=(i-1)*k, J=(j-1)*k, 1≤i≤m/k, 1≤j≤m/k;
wherein
\[
C = \begin{pmatrix}c(1,1) & c(1,2) & \cdots & c(1,k)\\ c(2,1) & c(2,2) & \cdots & c(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ c(k,1) & c(k,2) & \cdots & c(k,k)\end{pmatrix}
\]
denotes the transform matrix, c(ic, jc) is the value at row ic, column jc of the transform matrix, 1≤ic≤k, 1≤jc≤k, and C^T denotes the transpose of the transform matrix;
wherein the matrix on the right-hand side denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block; y(I+i2, J+j2) denotes the luma value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k;
the matrix on the left-hand side denotes the sub-block at row i, column j of the transform-domain information block of the current coding block; yt(I+i2, J+j2) denotes the luma value at row I+i2, column J+j2 of the transform-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k; yt(I+i2, J+j2) is the luma value at row I+i2, column J+j2 of the transform-domain information block of the current coding block obtained after transforming y(I+i2, J+j2).
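The per-sub-block transform yt = C · y · C^T of claim 3 can be sketched as below; the patent does not fix a particular C, so an orthonormal DCT-II matrix is used here as an illustrative choice, and the flat test block is an assumption.

```python
import numpy as np

k = 4
n = np.arange(k)
# Orthonormal DCT-II matrix as an illustrative transform matrix C.
C = np.sqrt(2.0 / k) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
C[0, :] = 1.0 / np.sqrt(k)

y_sub = np.full((k, k), 100.0)   # a flat k x k spatial-domain sub-block
yt_sub = C @ y_sub @ C.T         # transform-domain sub-block: yt = C * y * C^T

# For orthonormal C the transform is invertible: C^T * yt * C recovers y.
y_back = C.T @ yt_sub @ C
```

For a flat block all the energy lands in the DC coefficient, which is the "high energy gathering" property the abstract relies on.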
4. The luma transform-domain intra-prediction encoding method for a large-size block as claimed in claim 1, characterized in that "performing transform-domain intra prediction on each transform-domain sub-block of the current coding block" comprises the steps of:
Initializing number, where number denotes the maximum number of intra-prediction optimization passes for a transform-domain sub-block, 0≤number≤k*k/2;
Finding the number largest values in the current transform-domain sub-block of the current coding block;
Performing intermediate intra prediction on each of the number data separately, using the four prediction modes of the first predictive mode group; then performing intermediate intra prediction uniformly on all the remaining data in the current transform-domain sub-block of the current coding block, using the four prediction modes of the first predictive mode group;
Judging whether every transform-domain sub-block of the current coding block has undergone transform-domain intra prediction; if so, entering the step "under each prediction-mode combination, accumulating the errors of all transform-domain sub-blocks of the current coding block as the intra-prediction error of the current coding block under that prediction-mode combination"; if not, entering the step "finding the number largest values in the current transform-domain sub-block of the current coding block".
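The "find the number largest values" step of claim 4 can be sketched as follows. Reading "largest" as largest in magnitude (since transform coefficients are signed) is our interpretation, and number and the coefficient values are illustrative assumptions.

```python
import numpy as np

k, number = 4, 3
# Stand-in transform-domain sub-block of the current coding block.
yt_sub = np.array([[400., -3.,  7., 1.],
                   [  5.,  2., -9., 0.],
                   [  1.,  0.,  0., 6.],
                   [  0.,  4.,  0., 0.]])

# Indices of the `number` largest-magnitude coefficients, largest first.
flat = np.abs(yt_sub).ravel()
top_idx = np.argsort(flat)[::-1][:number]
top_positions = [tuple(int(v) for v in np.unravel_index(i, yt_sub.shape))
                 for i in top_idx]
```

These positions are the data that then receive the per-datum "intermediate intra prediction"; all remaining coefficients are predicted uniformly.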
5. The luma transform-domain intra-prediction encoding method for a large-size block as claimed in claim 4, characterized in that the first predictive mode group specifically comprises the following four prediction modes:
Mode 1: transform-domain left prediction mode
If Di has been encoded:
p_yt(I+i2, J+j2) = yg_left(I+i2, m-k+j2), 1≤i2≤k, 1≤j2≤k;
otherwise p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Mode 2: transform-domain upper prediction mode
If Bj has been encoded:
p_yt(I+i2, J+j2) = yg_up(m-k+i2, J+j2), 1≤i2≤k, 1≤j2≤k;
otherwise p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Mode 3: transform-domain upper-left prediction mode
If A has been encoded:
p_yt(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2), 1≤i2≤k, 1≤j2≤k;
otherwise p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Mode 4: transform-domain upper-right prediction mode
If E has been encoded:
p_yt(I+i2, J+j2) = yg_right_up(m-k+i2, j2), 1≤i2≤k, 1≤j2≤k;
otherwise p_yt(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Wherein A denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block; B1, B2, ..., Bj, ..., Bm' respectively denote the sub-blocks at row m/k, column 1; row m/k, column 2; ...; row m/k, column j; ...; row m/k, column m/k of the upper prediction block of the current coding block; E denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block; D1, D2, ..., Di, ..., Dm' respectively denote the sub-blocks at row 1, column m/k; row 2, column m/k; ...; row i, column m/k; ...; row m/k, column m/k of the left prediction block of the current coding block;
p_yt(I+i2, J+j2) denotes the transform-domain predicted value of yt(I+i2, J+j2);
yg_left(I+i2, m-k+j2) denotes the luma reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the sub-block at row i, column m/k of the left prediction block of the current coding block;
yg_up(m-k+i2, J+j2) denotes the luma reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the sub-block at row m/k, column j of the upper prediction block of the current coding block;
yg_left_up(m-k+i2, m-k+j2) denotes the luma reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block;
yg_right_up(m-k+i2, j2) denotes the luma reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block;
\[
\begin{pmatrix}128 & 128 & \cdots & 128\\ 128 & 128 & \cdots & 128\\ \vdots & \vdots & \ddots & \vdots\\ 128 & 128 & \cdots & 128\end{pmatrix}_{k\times k}
\]
denotes the k×k matrix whose (k)×(k) entries all equal 128, called the basic spatial-domain information block;
\[
\begin{pmatrix}yt_{128}(1,1) & yt_{128}(1,2) & \cdots & yt_{128}(1,k)\\ yt_{128}(2,1) & yt_{128}(2,2) & \cdots & yt_{128}(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ yt_{128}(k,1) & yt_{128}(k,2) & \cdots & yt_{128}(k,k)\end{pmatrix}
\]
denotes the basic transform-domain information block;
\[
\begin{pmatrix}yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k)\end{pmatrix} = C\begin{pmatrix}128 & \cdots & 128\\ \vdots & \ddots & \vdots\\ 128 & \cdots & 128\end{pmatrix}_{k\times k}C^{T};
\]
yt128(i2, j2) denotes the luma value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k.
6. The luma transform-domain intra-prediction encoding method for a large-size block as claimed in claim 5 or 4, characterized in that:
The concrete implementation of the "intermediate intra prediction" is: computing the four errors obtained by predicting the processed object with mode 1 through mode 4;
"Performing intermediate intra prediction on each of the number largest values of the current transform-domain sub-block separately, using the four prediction modes of the first predictive mode group" means performing "intermediate intra prediction" once on each of the number data in turn; at this point each datum is the processed object of one "intermediate intra prediction", and the four corresponding errors of each of the number data are computed;
"Then performing intermediate intra prediction uniformly on all the remaining data in the current transform-domain sub-block of the current coding block, using the four prediction modes of the first predictive mode group" means performing "intermediate intra prediction" once on all the remaining data; at this point all the remaining data together are the processed object of the "intermediate intra prediction", and the four errors corresponding to the remaining data are computed.
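The four-error computation of the "intermediate intra prediction" can be sketched as follows; the patent does not specify the error measure, so a sum of absolute differences is assumed here, and all block contents are illustrative stand-ins.

```python
import numpy as np

k = 4
yt_sub = np.full((k, k), 10.0)      # current transform-domain sub-block (stand-in)
predictions = {                     # stand-in predictors, one per mode 1..4
    "left":     np.full((k, k), 9.0),
    "up":       np.full((k, k), 12.0),
    "left_up":  np.full((k, k), 10.0),
    "right_up": np.zeros((k, k)),
}

# Error of the processed object under each of the four modes
# (sum of absolute differences, an illustrative choice).
errors = {mode: float(np.abs(yt_sub - p).sum()) for mode, p in predictions.items()}
best_mode = min(errors, key=errors.get)
```

These per-mode errors are what get accumulated over all sub-blocks to score each prediction-mode combination before the RDO step.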
7. The luma transform-domain intra-prediction encoding method for a large-size block as claimed in claim 6, characterized in that:
Each prediction-mode combination comprises "the prediction modes of the number data" and "one prediction mode of the remaining data";
In "the prediction modes of the number data", each of the number data may adopt any one of the four modes of the first predictive mode group, and the modes of the individual data may be the same or different;
The remaining data may likewise adopt any one of the four modes of the first predictive mode group.
8. A luma transform-domain intra-prediction decoding method for a large-size block, characterized in that the method comprises the following steps:
First entropy-decoding and reordering the bitstream of the current decoding block, then performing inverse quantization;
According to the intra-prediction mode of the current decoding block, performing transform-domain intra prediction by the four prediction modes of the following second predictive mode group to obtain the transform-domain intra-prediction value of the current decoding block;
Accumulating the transform-domain intra-prediction value of the current decoding block with the inverse-quantized data of the current decoding block to obtain the transform-domain reconstruction value of the current decoding block;
Performing the (k)×(k) inverse transform on the transform-domain reconstruction value of the current decoding block to obtain the spatial-domain reconstruction value of the current decoding block;
Filtering the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
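The reconstruction steps of claim 8 (prediction plus dequantized residual, then the k×k inverse transform) can be sketched as below. Entropy decoding, reordering, and filtering are outside the sketch; the orthonormal DCT-II choice for C and all data values are illustrative assumptions.

```python
import numpy as np

k = 4
n = np.arange(k)
C = np.sqrt(2.0 / k) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
C[0, :] = 1.0 / np.sqrt(k)   # orthonormal DCT-II as a stand-in transform

p_yt = np.zeros((k, k)); p_yt[0, 0] = 480.0         # transform-domain prediction
residual = np.zeros((k, k)); residual[0, 0] = 32.0  # dequantized residual data

yt_rec = p_yt + residual     # transform-domain reconstruction value
y_rec = C.T @ yt_rec @ C     # k x k inverse transform -> spatial-domain value
```

With a DC-only reconstruction of 512 and this orthonormal C, the inverse transform yields a flat spatial block of 128s, i.e. the decoder recovers a uniform mid-grey sub-block.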
9. The luma transform-domain intra-prediction decoding method for a large-size block as claimed in claim 8, characterized in that:
The second predictive mode group specifically comprises the following four prediction modes:
Mode 1: transform-domain left prediction mode
If Di' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left(I+i2, m-k+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Mode 2: transform-domain upper prediction mode
If Bj' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_up(m-k+i2, J+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Mode 3: transform-domain upper-left prediction mode
If A' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2)', 1≤i2≤k, 1≤j2≤k;
otherwise p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Mode 4: transform-domain upper-right prediction mode
If E' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_right_up(m-k+i2, j2)', 1≤i2≤k, 1≤j2≤k;
otherwise p_yt_dec(I+i2, J+j2) = yt128(i2, j2), 1≤i2≤k, 1≤j2≤k;
Wherein A' denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; B1', B2', ..., Bj', ..., Bm'' respectively denote the sub-blocks at row m/k, column 1; row m/k, column 2; ...; row m/k, column j; ...; row m/k, column m/k of the upper prediction block of the current decoding block; E' denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; D1', D2', ..., Di', ..., Dm'' respectively denote the sub-blocks at row 1, column m/k; row 2, column m/k; ...; row i, column m/k; ...; row m/k, column m/k of the left prediction block of the current decoding block;
p_yt_dec(I+i2, J+j2) denotes the transform-domain predicted value of the luma value at row I+i2, column J+j2 of the transform-domain information block of the current decoding block; yg_left(I+i2, m-k+j2)' denotes the luma reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the sub-block at row i, column m/k of the left prediction block of the current decoding block;
yg_up(m-k+i2, J+j2)' denotes the luma reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the sub-block at row m/k, column j of the upper prediction block of the current decoding block; yg_left_up(m-k+i2, m-k+j2)' denotes the luma reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; yg_right_up(m-k+i2, j2)' denotes the luma reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; I=(i-1)*k, J=(j-1)*k, 1≤i≤m/k, 1≤j≤m/k;
\[
\begin{pmatrix}128 & 128 & \cdots & 128\\ 128 & 128 & \cdots & 128\\ \vdots & \vdots & \ddots & \vdots\\ 128 & 128 & \cdots & 128\end{pmatrix}_{k\times k}
\]
denotes the k×k matrix whose (k)×(k) entries all equal 128, called the basic spatial-domain information block;
\[
\begin{pmatrix}yt_{128}(1,1) & yt_{128}(1,2) & \cdots & yt_{128}(1,k)\\ yt_{128}(2,1) & yt_{128}(2,2) & \cdots & yt_{128}(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ yt_{128}(k,1) & yt_{128}(k,2) & \cdots & yt_{128}(k,k)\end{pmatrix}
\]
denotes the basic transform-domain information block;
\[
\begin{pmatrix}yt_{128}(1,1) & \cdots & yt_{128}(1,k)\\ \vdots & \ddots & \vdots\\ yt_{128}(k,1) & \cdots & yt_{128}(k,k)\end{pmatrix} = C\begin{pmatrix}128 & \cdots & 128\\ \vdots & \ddots & \vdots\\ 128 & \cdots & 128\end{pmatrix}_{k\times k}C^{T};
\]
yt128(i2, j2) denotes the luma value at row i2, column j2 of the basic transform-domain information block, 1≤i2≤k, 1≤j2≤k.
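The availability-with-fallback logic shared by the four modes of the second predictive mode group can be sketched as a decoder-side dispatch. The mode numbering, dict-based neighbour representation, and all block contents are illustrative assumptions, not the patent's signalling scheme.

```python
import numpy as np

k = 4
yt128 = np.zeros((k, k)); yt128[0, 0] = 512.0   # stand-in basic transform-domain block

# Mode 1..4 of the second predictive mode group (our numbering).
MODES = {1: "left", 2: "up", 3: "left_up", 4: "right_up"}

def decode_prediction(mode_id, decoded_neighbours):
    """Return the reconstructed transform-domain block of the signalled
    neighbour if it has been decoded, else fall back to yt128."""
    ref = decoded_neighbours.get(MODES[mode_id])
    return ref if ref is not None else yt128

# Only the upper neighbour has been decoded in this example.
decoded_neighbours = {"up": np.full((k, k), 3.0)}
```

The encoder-side first predictive mode group uses the same structure, with "encoded" in place of "decoded" as the availability test.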
10. A luma transform-domain intra-prediction encoding system for a large-size block, characterized in that the system comprises: a spatial-domain luma information block partitioning module, a transform-domain sub-block acquisition module, a transform-domain intra-prediction module, an intra-prediction error calculation module, and an optimal intra-prediction mode acquisition module;
The spatial-domain luma information block partitioning module is used to partition the spatial-domain luma information block (original block) of the current coding block into a block matrix according to the transform-matrix size of the transform module;
The transform-domain sub-block acquisition module is used to transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block;
The transform-domain intra-prediction module is used to perform transform-domain intra prediction on each transform-domain sub-block of the current coding block;
The intra-prediction error calculation module is used to accumulate, under each prediction-mode combination, the errors of all transform-domain sub-blocks of the current coding block as the intra-prediction error of the current coding block under that prediction-mode combination;
The optimal intra-prediction mode acquisition module is used to perform conventional RDO to obtain the optimal intra-prediction mode, completing the transform-domain intra prediction of the current coding block.
11. The luma transform-domain intra-prediction encoding system for a large-size block as claimed in claim 10, characterized in that "partitioning the spatial-domain luma information block (original block) of the current coding block into a block matrix according to the transform-matrix size of the transform module" is specifically as follows:
\[
\begin{pmatrix}Y(1,1) & Y(1,2) & \cdots & Y(1,m')\\ Y(2,1) & Y(2,2) & \cdots & Y(2,m')\\ \vdots & \vdots & \ddots & \vdots\\ Y(m',1) & Y(m',2) & \cdots & Y(m',m')\end{pmatrix} = \begin{pmatrix}y(1,1) & y(1,2) & \cdots & y(1,m)\\ y(2,1) & y(2,2) & \cdots & y(2,m)\\ \vdots & \vdots & \ddots & \vdots\\ y(m,1) & y(m,2) & \cdots & y(m,m)\end{pmatrix}
\]
wherein
\[
Y(i,j) = \begin{pmatrix}y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k)\end{pmatrix},
\]
I=(i-1)*k, J=(j-1)*k, m'=m/k, 1≤i≤m/k, 1≤j≤m/k;
the matrix on the right-hand side is the original matrix, denoting the spatial-domain luma value matrix of the current coding block; it is an (m)×(m) matrix, where m denotes the number of rows and columns of the spatial-domain luma value matrix of the current coding block, m≥16;
y(i1, j1) denotes the luma value at row i1, column j1 of the spatial-domain information block of the current coding block, 1≤i1≤m, 1≤j1≤m;
the matrix on the left-hand side is the block matrix; it is an (m/k)×(m/k) block matrix;
Y(i, j) denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block; it is a (k)×(k) matrix, where k denotes the number of rows and columns of the transform matrix of the transform module, k≤16; I, J, m' are three intermediate variable symbols;
y(I+i2, J+j2) denotes the luma value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k.
12. The luma transform-domain intra-prediction encoding system for a large-size block as claimed in claim 10, characterized in that "transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-blocks of the current coding block" is specifically:
\[
\begin{pmatrix}yt(I+1,J+1) & \cdots & yt(I+1,J+k)\\ \vdots & \ddots & \vdots\\ yt(I+k,J+1) & \cdots & yt(I+k,J+k)\end{pmatrix} = C\begin{pmatrix}y(I+1,J+1) & \cdots & y(I+1,J+k)\\ \vdots & \ddots & \vdots\\ y(I+k,J+1) & \cdots & y(I+k,J+k)\end{pmatrix}C^{T},
\]
I=(i-1)*k, J=(j-1)*k, 1≤i≤m/k, 1≤j≤m/k;
wherein
\[
C = \begin{pmatrix}c(1,1) & c(1,2) & \cdots & c(1,k)\\ c(2,1) & c(2,2) & \cdots & c(2,k)\\ \vdots & \vdots & \ddots & \vdots\\ c(k,1) & c(k,2) & \cdots & c(k,k)\end{pmatrix}
\]
denotes the transform matrix, c(ic, jc) is the value at row ic, column jc of the transform matrix, 1≤ic≤k, 1≤jc≤k, and C^T denotes the transpose of the transform matrix;
wherein the matrix on the right-hand side denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block; y(I+i2, J+j2) denotes the luma value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k;
the matrix on the left-hand side denotes the sub-block at row i, column j of the transform-domain information block of the current coding block; yt(I+i2, J+j2) denotes the luma value at row I+i2, column J+j2 of the transform-domain information block of the current coding block, 1≤i2≤k, 1≤j2≤k; yt(I+i2, J+j2) is the luma value at row I+i2, column J+j2 of the transform-domain information block of the current coding block obtained after transforming y(I+i2, J+j2).
13. The luma transform-domain intra-prediction encoding system for a large-size block as claimed in claim 10, characterized in that the transform-domain intra-prediction module further comprises a number-value initialization module, a search module, an intermediate intra-prediction module, and a judgment module;
The number-value initialization module is used to initialize number, where number denotes the maximum number of intra-prediction optimization passes for a transform-domain sub-block, 0≤number≤k*k/2;
The search module is used to find the number largest values in the current transform-domain sub-block of the current coding block;
The intermediate intra-prediction module is used to perform intermediate intra prediction on each of the number data separately, using the four prediction modes of the first predictive mode group, and then to perform intermediate intra prediction uniformly on all the remaining data in the current transform-domain sub-block of the current coding block, using the four prediction modes of the first predictive mode group;
The judgment module is used to judge whether every transform-domain sub-block of the current coding block has undergone transform-domain intra prediction; if so, entering the intra-prediction error calculation module; if not, entering the search module.
14. the luminance transformation territory intraframe predictive coding system of a kind of large scale piece as claimed in claim 13, is characterized in that, specifically comprises following four kinds of predictive modes in described the first predictive mode group:
Pattern one: transform domain left side predictive mode
(if Di encoded),
p_yt(I+i 2,J+j 2)=yg left(I+i 2,m-k+j 2),1≤i 2≤k、1≤j 2≤k;
Otherwise, p_yt (I+i 2, J+j 2)=yt 128(i 2, j 2), 1≤i 2≤ k, 1≤j 2≤ k;
Pattern two: transform domain upside predictive mode
(if Bj encoded),
p_yt(I+i 2,J+j 2)=yg up(m-k+i 2,J+j 2),1≤i 2≤k、1≤j 2≤k;
Otherwise, p_yt (I+i 2, J+j 2)=yt 128(i 2, j 2), 1≤i 2≤ k, 1≤j 2≤ k;
Pattern three: transform domain upper left side predictive mode
(if A encoded),
p_yt(I+i 2,J+j 2)=yg left_up(m-k+i 2,m-k+j 2),1≤i 2≤k、1≤j 2≤k;
Otherwise, p_yt (I+i 2, J+j 2)=yt 128(i 2, j 2), 1≤i 2≤ k, 1≤j 2≤ k;
Pattern four: transform domain upper right side predictive mode
(if E encoded),
p_yt(I+i 2,J+j 2)=yg right_up(m-k+i 2,j 2),1≤i 2≤k、1≤j 2≤k;
Otherwise, p_yt (I+i 2, J+j 2)=yt 128(i 2, j 2), 1≤i 2≤ k, 1≤j 2≤ k;
Here, A denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block; B1, B2, ..., Bj, ..., Bm' denote, respectively, the sub-blocks at row m/k, columns 1, 2, ..., j, ..., m/k of the upper prediction block of the current coding block; E denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block; D1, D2, ..., Di, ..., Dm' denote, respectively, the sub-blocks at rows 1, 2, ..., i, ..., m/k, column m/k of the left prediction block of the current coding block;
p_yt(I+i2, J+j2) denotes the transform-domain predicted value of yt(I+i2, J+j2);
yg_left(I+i2, m-k+j2): the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the sub-block at row i, column m/k of the left prediction block of the current coding block;
yg_up(m-k+i2, J+j2): the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the sub-block at row m/k, column j of the upper prediction block of the current coding block;
yg_left_up(m-k+i2, m-k+j2): the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block;
yg_right_up(m-k+i2, j2): the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block;
The (k)×(k) matrix all of whose entries equal 128 is called the basic spatial-domain information block;
the (k)×(k) matrix with entries yt_128(i2, j2) is called the basic transform-domain information block, obtained as
[yt_128] = C * [128]_(k)×(k) * C^T;
yt_128(i2, j2): the luminance value at row i2, column j2 of the basic transform-domain information block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
15. The luma transform-domain intra prediction coding system for a large-size block according to claim 13 or 14, wherein:
the "intermediate intra prediction" is implemented by computing, for the processing object, the four errors predicted under modes one through four of the first prediction mode group;
"performing the intermediate intra prediction separately, with each of the four prediction modes of the first prediction mode group, on the number largest-valued data of the current transform-domain sub-block" means performing one "intermediate intra prediction" on each of the number data in turn, that data being the processing object of that pass, so as to obtain the four errors corresponding to each of the number data;
"then performing a uniform intermediate intra prediction, with the four prediction modes of the first prediction mode group, on all remaining data of the current transform-domain sub-block of the current coding block" means performing one "intermediate intra prediction" on all the remaining data together, those data being the processing object of that pass, so as to obtain the four errors corresponding to the remaining data.
16. A luma transform-domain intra prediction decoding system for a large-size block, the system comprising: an entropy decoding module, a reordering module, an inverse quantization module, a decoding-block transform-domain intra prediction value acquisition module, a decoding-block transform-domain reconstruction value acquisition module, a decoding-block spatial-domain reconstruction value acquisition module, and a filtering module;
the entropy decoding module is configured to entropy-decode the bitstream of the current decoding block;
the reordering module is configured to reorder the entropy-decoded bitstream of the current decoding block;
the inverse quantization module is configured to inverse-quantize the reordered bitstream of the current decoding block;
the decoding-block transform-domain intra prediction value acquisition module is configured to perform transform-domain intra prediction according to the intra prediction mode of the current decoding block, using the four prediction modes of the second prediction mode group, to obtain the transform-domain intra prediction value of the current decoding block;
the decoding-block transform-domain reconstruction value acquisition module is configured to add the transform-domain intra prediction value of the current decoding block to the inverse-quantized bitstream data of the current decoding block, to obtain the transform-domain reconstruction value of the current decoding block;
the decoding-block spatial-domain reconstruction value acquisition module is configured to apply the inverse transform to the transform-domain reconstruction value of the current decoding block, to obtain the spatial-domain reconstruction value of the current decoding block;
the filtering module is configured to filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
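The module chain of this decoding system can be sketched end to end. The zig-zag scan order, the uniform quantization step `qstep`, and the orthonormal transform used below are illustrative assumptions (the claims fix none of them), and entropy decoding is taken as already done.

```python
import numpy as np

def zigzag_order(k):
    # One possible reordering scan for a k x k block (illustrative choice).
    return sorted(((i, j) for i in range(k) for j in range(k)),
                  key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 == 0 else p[1]))

def decode_block(coeffs, pred, C, qstep=1.0):
    # Reorder the 1-D coefficient list into a k x k block, inverse-quantize,
    # add the transform-domain intra prediction value, then apply the
    # inverse transform C^T X C to get the spatial-domain reconstruction.
    k = C.shape[0]
    residual = np.zeros((k, k))
    for c, (i, j) in zip(coeffs, zigzag_order(k)):
        residual[i, j] = c * qstep
    recon_t = residual + pred          # transform-domain reconstruction value
    return C.T @ recon_t @ C           # spatial-domain reconstruction value
```

The filtering module would post-process the returned spatial block; it is omitted here.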
17. The luma transform-domain intra prediction decoding system for a large-size block according to claim 16, wherein, in the decoding-block transform-domain intra prediction value acquisition module, the second prediction mode group comprises the following four modes:
Mode one: transform-domain left prediction mode
If Di' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_left(I+i2, m-k+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yt_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode two: transform-domain upper prediction mode
If Bj' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_up(m-k+i2, J+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yt_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode three: transform-domain upper-left prediction mode
If A' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yt_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode four: transform-domain upper-right prediction mode
If E' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_right_up(m-k+i2, j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yt_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Here, A' denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; B1', B2', ..., Bj', ..., Bm'' denote, respectively, the sub-blocks at row m/k, columns 1, 2, ..., j, ..., m/k of the upper prediction block of the current decoding block; E' denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; D1', D2', ..., Di', ..., Dm'' denote, respectively, the sub-blocks at rows 1, 2, ..., i, ..., m/k, column m/k of the left prediction block of the current decoding block;
p_yt_dec(I+i2, J+j2) denotes the transform-domain predicted value of the luminance value at row I+i2, column J+j2 of the transform-domain information block of the current decoding block; yg_left(I+i2, m-k+j2)': the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the sub-block at row i, column m/k of the left prediction block of the current decoding block;
yg_up(m-k+i2, J+j2)': the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the sub-block at row m/k, column j of the upper prediction block of the current decoding block; yg_left_up(m-k+i2, m-k+j2)': the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; yg_right_up(m-k+i2, j2)': the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; I = (i-1)*k, J = (j-1)*k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k.
The (k)×(k) matrix all of whose entries equal 128 is called the basic spatial-domain information block;
the (k)×(k) matrix with entries yt_128(i2, j2) is called the basic transform-domain information block, obtained as
[yt_128] = C * [128]_(k)×(k) * C^T;
yt_128(i2, j2) denotes the luminance value at row i2, column j2 of the basic transform-domain information block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
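The availability test in the four modes above amounts to slicing the bottom-right-adjacent k x k region out of whichever neighbor block has been decoded, falling back to the basic block otherwise. A sketch follows; the dictionary of neighbor reconstructions and the function name are hypothetical scaffolding, not claim language.

```python
import numpy as np

def transform_domain_predictor(mode, i, j, k, m, recon, yg_base):
    # Select the k x k transform-domain predictor for sub-block (i, j)
    # (1-based) of the current block. `recon` maps 'left', 'up', 'left_up',
    # 'right_up' to that neighbor block's m x m reconstructed transform-domain
    # array, or None when the neighbor is not yet decoded.
    I, J = (i - 1) * k, (j - 1) * k
    src = recon.get(mode)
    if src is None:                       # neighbor unavailable: basic block
        return yg_base
    if mode == 'left':                    # rows I+1..I+k, last k columns
        return src[I:I + k, m - k:m]
    if mode == 'up':                      # last k rows, columns J+1..J+k
        return src[m - k:m, J:J + k]
    if mode == 'left_up':                 # bottom-right k x k corner
        return src[m - k:m, m - k:m]
    if mode == 'right_up':                # last k rows, first k columns
        return src[m - k:m, 0:k]
    raise ValueError(mode)
```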
18. A luma transform-domain intra prediction coding method for a large-size block, the method comprising:
dividing the spatial-domain luma information block (the original block) of the current coding block into a block matrix according to the size of the transform matrix of the transform module;
transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block;
quantizing the transform-domain information block of the current coding block;
performing transform-domain intra prediction on each transform-domain sub-block of the current coding block;
for each prediction mode combination, accumulating the corresponding errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that prediction mode combination;
performing conventional RDO on the current coding block to obtain the optimal intra prediction mode.
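Assuming per-sub-block predictors are available, the steps of this method can be sketched in one loop. Uniform quantization and a sum-of-absolute-differences error are illustrative choices, and mode combinations are simplified here to a single mode applied per pass.

```python
import numpy as np

def intra_prediction_errors(y, C, modes, predictor, qstep=1.0):
    # Divide the m x m luma block into k x k sub-blocks, transform each one
    # (C Y C^T), quantize, and accumulate each mode's prediction error over
    # all sub-blocks; RDO would then pick the cheapest mode (combination).
    m, k = y.shape[0], C.shape[0]
    errors = {mode: 0.0 for mode in modes}
    for i in range(m // k):
        for j in range(m // k):
            sub = y[i * k:(i + 1) * k, j * k:(j + 1) * k]
            t = np.round(C @ sub @ C.T / qstep)       # transform + quantize
            for mode in modes:
                errors[mode] += float(np.abs(t - predictor(mode, i, j)).sum())
    return errors
```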
19. The luma transform-domain intra prediction coding method for a large-size block according to claim 18, wherein "dividing the spatial-domain luma information block of the current coding block into a block matrix according to the size of the transform matrix of the transform module" is specifically:
[ Y(1,1) Y(1,2) ... Y(1,m'); Y(2,1) Y(2,2) ... Y(2,m'); ...; Y(m',1) Y(m',2) ... Y(m',m') ] = [ y(1,1) y(1,2) ... y(1,m); y(2,1) y(2,2) ... y(2,m); ...; y(m,1) y(m,2) ... y(m,m) ];
where Y(i,j) = [ y(I+1,J+1) ... y(I+1,J+k); y(I+2,J+1) ... y(I+2,J+k); ...; y(I+k,J+1) ... y(I+k,J+k) ], I = (i-1)*k, J = (j-1)*k, m' = m/k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k;
the m×m matrix [y(i1, j1)] is the original matrix, i.e. the spatial-domain luminance value matrix of the current coding block; m denotes the number of rows and columns of the spatial-domain luminance value matrix of the current coding block, m ≥ 16;
y(i1, j1) denotes the luminance value at row i1, column j1 of the spatial-domain information block of the current coding block, 1 ≤ i1 ≤ m, 1 ≤ j1 ≤ m;
the (m/k)×(m/k) matrix [Y(i, j)] is the block matrix;
Y(i, j) denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block; it is a (k)×(k) matrix, and k denotes the number of rows and columns of the transform matrix of the transform module, k ≤ 16;
I, J, and m' are three intermediate variables;
y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
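The block-matrix division in this claim is a plain tiling; a minimal sketch (1-based (i, j) keys, matching the claim's indexing):

```python
import numpy as np

def partition(y, k):
    # Divide the m x m spatial-domain luma matrix into (m/k) x (m/k)
    # sub-blocks Y(i, j) = y[I:I+k, J:J+k], with I = (i-1)*k, J = (j-1)*k.
    m = y.shape[0]
    assert m % k == 0, "m must be a multiple of the transform size k"
    return {(i + 1, j + 1): y[i * k:(i + 1) * k, j * k:(j + 1) * k]
            for i in range(m // k) for j in range(m // k)}
```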
20. The luma transform-domain intra prediction coding method for a large-size block according to claim 18, wherein "transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block" is specifically:
[ yt(I+1,J+1) ... yt(I+1,J+k); yt(I+2,J+1) ... yt(I+2,J+k); ...; yt(I+k,J+1) ... yt(I+k,J+k) ] = C * [ y(I+1,J+1) ... y(I+1,J+k); y(I+2,J+1) ... y(I+2,J+k); ...; y(I+k,J+1) ... y(I+k,J+k) ] * C^T,
I = (i-1)*k, J = (j-1)*k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k;
where C = [c(ic, jc)] denotes the transform matrix, c(ic, jc) being its value at row ic, column jc, 1 ≤ ic ≤ k, 1 ≤ jc ≤ k, and C^T denotes the transpose of the transform matrix;
the matrix [y(I+i2, J+j2)] denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block; y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
the matrix [yt(I+i2, J+j2)] denotes the sub-block at row i, column j of the transform-domain information block of the current coding block; yt(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the transform-domain information block of the current coding block, i.e. the value obtained from y(I+i2, J+j2) after the transform, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
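The separable transform yt = C * y * C^T can be exercised with any k x k transform matrix; an orthonormal DCT-II is used below purely as an illustrative C (the claim does not fix the transform).

```python
import numpy as np

def dct_matrix(k):
    # Orthonormal DCT-II matrix: one common choice for the transform matrix C.
    C = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * k)) for x in range(k)]
                  for u in range(k)])
    C[0, :] *= np.sqrt(1.0 / k)
    C[1:, :] *= np.sqrt(2.0 / k)
    return C

def forward_transform(sub, C):
    # yt = C * y * C^T for one k x k spatial-domain sub-block.
    return C @ sub @ C.T
```

Because this C is orthonormal, C^T * yt * C recovers the sub-block exactly, which is what the decoder's inverse transform relies on.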
21. The luma transform-domain intra prediction coding method for a large-size block according to claim 18, wherein "performing transform-domain intra prediction on each transform-domain sub-block of the current coding block" comprises the following steps:
initializing number, where number denotes the maximum count of intra prediction optimization passes for a transform-domain sub-block, 0 ≤ number ≤ k*k/2;
searching for the number largest-valued data in the current transform-domain sub-block of the current coding block;
performing an intermediate intra prediction separately, with each of the four prediction modes of the third prediction mode group, on each of the number data; then performing a uniform intermediate intra prediction, with the four prediction modes of the third prediction mode group, on all remaining data of the current transform-domain sub-block of the current coding block;
determining whether every transform-domain sub-block of the current coding block has undergone transform-domain intra prediction; if so, proceeding to the step "for each prediction mode combination, accumulating the corresponding errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that prediction mode combination"; if not, returning to the step "searching for the number largest-valued data in the current transform-domain sub-block of the current coding block".
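The optimization above separates each sub-block's number largest data from the rest. A sketch follows, under the assumptions that "largest" means largest in magnitude and that the per-datum error is an absolute difference; both are illustrative readings, not fixed by the claim.

```python
import numpy as np

def split_by_magnitude(t, number):
    # Positions of the `number` largest-magnitude coefficients of the k x k
    # transform-domain sub-block t, plus the remaining positions.
    k = t.shape[0]
    order = sorted(((i, j) for i in range(k) for j in range(k)),
                   key=lambda p: -abs(t[p]))
    return order[:number], order[number:]

def intermediate_errors(t, predictors, number):
    # Per-datum errors (one dict of mode errors for each of the `number`
    # data) and, per mode, the summed error over all remaining data.
    top, rest = split_by_magnitude(t, number)
    top_err = [{m: abs(t[p] - pred[p]) for m, pred in predictors.items()}
               for p in top]
    rest_err = {m: sum(abs(t[p] - pred[p]) for p in rest)
                for m, pred in predictors.items()}
    return top_err, rest_err
```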
22. The luma transform-domain intra prediction coding method for a large-size block according to claim 21, wherein the four prediction modes of the third prediction mode group are specifically as follows:
Mode one: transform-domain left prediction mode
If Di has been coded,
p_yt(I+i2, J+j2) = yg_left(I+i2, m-k+j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise, p_yt(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode two: transform-domain upper prediction mode
If Bj has been coded,
p_yt(I+i2, J+j2) = yg_up(m-k+i2, J+j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise, p_yt(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode three: transform-domain upper-left prediction mode
If A has been coded,
p_yt(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise, p_yt(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode four: transform-domain upper-right prediction mode
If E has been coded,
p_yt(I+i2, J+j2) = yg_right_up(m-k+i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise, p_yt(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Here, A denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block; B1, B2, ..., Bj, ..., Bm' denote, respectively, the sub-blocks at row m/k, columns 1, 2, ..., j, ..., m/k of the upper prediction block of the current coding block; E denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block; D1, D2, ..., Di, ..., Dm' denote, respectively, the sub-blocks at rows 1, 2, ..., i, ..., m/k, column m/k of the left prediction block of the current coding block;
p_yt(I+i2, J+j2) denotes the transform-domain predicted value of yt(I+i2, J+j2);
yg_left(I+i2, m-k+j2): the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the sub-block at row i, column m/k of the left prediction block of the current coding block;
yg_up(m-k+i2, J+j2): the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the sub-block at row m/k, column j of the upper prediction block of the current coding block;
yg_left_up(m-k+i2, m-k+j2): the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block;
yg_right_up(m-k+i2, j2): the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block;
The (k)×(k) matrix all of whose entries equal 128 is called the basic spatial-domain information block;
the (k)×(k) matrix with entries yt_128(i2, j2) is called the basic transform-domain information block, obtained as [yt_128] = C * [128]_(k)×(k) * C^T; yt_128(i2, j2): the luminance value at row i2, column j2 of the basic transform-domain information block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
the (k)×(k) matrix with entries yg_128(i2, j2) denotes the reconstructed block of the basic transform-domain information block;
that is, quantizing the basic transform-domain information block yields the reconstructed block of the basic transform-domain information block; yg_128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantizing yt_128(i, j).
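The basic blocks defined above can be computed directly. The uniform quantizer (quantize then dequantize with step `qstep`) is an illustrative stand-in for the claim's unspecified quantization operation.

```python
import numpy as np

def basic_blocks(C, qstep=1.0):
    # Basic spatial-domain information block (all entries 128), its transform
    # yt128 = C * [128] * C^T, and the reconstruction yg128 obtained by
    # quantizing then dequantizing yt128.
    k = C.shape[0]
    base = np.full((k, k), 128.0)
    yt128 = C @ base @ C.T
    yg128 = np.round(yt128 / qstep) * qstep
    return base, yt128, yg128
```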
23. The luma transform-domain intra prediction coding method for a large-size block according to claim 21 or 22, wherein:
the "intermediate intra prediction" is implemented by computing, for the processing object, the four errors predicted under modes one through four of the third prediction mode group;
"performing the intermediate intra prediction separately, with each of the four prediction modes of the third prediction mode group, on the number largest-valued data of the current transform-domain sub-block" means performing one "intermediate intra prediction" on each of the number data in turn, that data being the processing object of that pass, so as to obtain the four errors corresponding to each of the number data;
"then performing a uniform intermediate intra prediction, with the four prediction modes of the third prediction mode group, on all remaining data of the current transform-domain sub-block of the current coding block" means performing one "intermediate intra prediction" on all the remaining data together, those data being the processing object of that pass, so as to obtain the four errors corresponding to the remaining data.
24. The luma transform-domain intra prediction coding method for a large-size block according to claim 23, wherein, in "for each prediction mode combination, accumulating the corresponding errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that prediction mode combination":
each prediction mode combination comprises "a set of prediction modes for the number data" and "one prediction mode for the remaining data";
in "a set of prediction modes for the number data", each of the number data may adopt any one of the four modes of the third prediction mode group, and the modes chosen for the individual data may be the same or different; the remaining data may likewise adopt any one of the four modes of the third prediction mode group.
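On this reading, the combination space is 4^number choices for the number data times 4 choices for the remaining data; a sketch enumerating it (function and mode names are illustrative):

```python
from itertools import product

MODES = ('left', 'up', 'left_up', 'right_up')

def mode_combinations(number):
    # One mode per each of the `number` largest data, chosen independently,
    # plus one uniform mode for all remaining data.
    return [(per_data, rest)
            for per_data in product(MODES, repeat=number)
            for rest in MODES]
```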
25. A luma transform-domain intra prediction decoding method for a large-size block, the method comprising the following steps:
entropy-decoding and reordering the bitstream of the current decoding block;
performing transform-domain intra prediction according to the intra prediction mode of the current decoding block, using the four prediction modes of the fourth prediction mode group, to obtain the transform-domain intra prediction value of the current decoding block;
accumulating the transform-domain intra prediction value of the current decoding block with the reordered data of the current decoding block, to obtain the transform-domain reconstruction value of the current decoding block;
inverse-quantizing the transform-domain reconstruction value of the current decoding block and then applying the (k)×(k) inverse transform, to obtain the spatial-domain reconstruction value of the current decoding block;
filtering the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
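Once entropy decoding and reordering have produced the k x k data array, the remaining steps of this method are a few matrix operations. The uniform dequantization step and the orthonormal C used in the test are illustrative assumptions, and the final filtering step is omitted.

```python
import numpy as np

def decode_steps(levels, pred_t, C, qstep=1.0):
    # Add the transform-domain intra prediction value to the reordered data,
    # inverse-quantize the transform-domain reconstruction value, then apply
    # the (k) x (k) inverse transform C^T X C.
    recon_t = pred_t + levels
    return C.T @ (recon_t * qstep) @ C
```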
26. The luma transform-domain intra prediction decoding method for a large-size block according to claim 25, wherein the fourth prediction mode group specifically comprises the following four prediction modes:
Mode one: transform-domain left prediction mode
If Di' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_left(I+i2, m-k+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode two: transform-domain upper prediction mode
If Bj' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_up(m-k+i2, J+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode three: transform-domain upper-left prediction mode
If A' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Mode four: transform-domain upper-right prediction mode
If E' has been decoded,
p_yt_dec(I+i2, J+j2) = yg_right_up(m-k+i2, j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise p_yt_dec(I+i2, J+j2) = yg_128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
Here, A' denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; B1', B2', ..., Bj', ..., Bm'' denote, respectively, the sub-blocks at row m/k, columns 1, 2, ..., j, ..., m/k of the upper prediction block of the current decoding block; E' denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; D1', D2', ..., Di', ..., Dm'' denote, respectively, the sub-blocks at rows 1, 2, ..., i, ..., m/k, column m/k of the left prediction block of the current decoding block;
p_yt_dec(I+i2, J+j2) denotes the transform-domain predicted value of the luminance value at row I+i2, column J+j2 of the transform-domain information block of the current decoding block; yg_left(I+i2, m-k+j2)': the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the sub-block at row i, column m/k of the left prediction block of the current decoding block;
yg_up(m-k+i2, J+j2)': the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the sub-block at row m/k, column j of the upper prediction block of the current decoding block; yg_left_up(m-k+i2, m-k+j2)': the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; yg_right_up(m-k+i2, j2)': the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; I = (i-1)*k, J = (j-1)*k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k;
The (k)×(k) matrix all of whose entries equal 128 is called the basic spatial-domain information block;
the (k)×(k) matrix with entries yt_128(i2, j2) is called the basic transform-domain information block, obtained as [yt_128] = C * [128]_(k)×(k) * C^T; yt_128(i2, j2) denotes the luminance value at row i2, column j2 of the basic transform-domain information block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
the (k)×(k) matrix with entries yg_128(i2, j2) denotes the reconstructed block of the basic transform-domain information block;
that is, quantizing the basic transform-domain information block yields the reconstructed block of the basic transform-domain information block; yg_128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantizing yt_128(i, j).
27. A luma transform-domain intra prediction coding system for a large-size block, the system comprising: a spatial-domain luma information block division module, a transform-domain sub-block acquisition module, a quantization module, a second transform-domain intra prediction module, an intra prediction error calculation module, and an optimal intra prediction mode acquisition module;
the spatial-domain luma information block division module is configured to divide the spatial-domain luma information block (the original block) of the current coding block into a block matrix according to the size of the transform matrix of the transform module;
the transform-domain sub-block acquisition module is configured to transform each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block;
the quantization module is configured to quantize the transform-domain information block of the current coding block;
the second transform-domain intra prediction module is configured to perform transform-domain intra prediction on each transform-domain sub-block of the current coding block;
the intra prediction error calculation module is configured, for each prediction mode combination, to accumulate the corresponding errors of all transform-domain sub-blocks of the current coding block as the intra prediction error of the current coding block under that prediction mode combination;
the optimal intra prediction mode acquisition module is configured to perform conventional RDO on the current coding block to obtain the optimal intra prediction mode.
28. The luminance transform-domain intra-prediction coding system for large-size blocks of claim 27, wherein partitioning the spatial-domain luminance information block (original block) of the current coding block into a block matrix according to the transform matrix size of the transform module is specifically as follows:

$$\begin{pmatrix} Y(1,1) & Y(1,2) & \cdots & Y(1,m') \\ Y(2,1) & Y(2,2) & \cdots & Y(2,m') \\ \vdots & \vdots & \ddots & \vdots \\ Y(m',1) & Y(m',2) & \cdots & Y(m',m') \end{pmatrix} = \begin{pmatrix} y(1,1) & y(1,2) & \cdots & y(1,m) \\ y(2,1) & y(2,2) & \cdots & y(2,m) \\ \vdots & \vdots & \ddots & \vdots \\ y(m,1) & y(m,2) & \cdots & y(m,m) \end{pmatrix},$$

where

$$Y(i,j) = \begin{pmatrix} y(I+1,J+1) & y(I+1,J+2) & \cdots & y(I+1,J+k) \\ y(I+2,J+1) & y(I+2,J+2) & \cdots & y(I+2,J+k) \\ \vdots & \vdots & \ddots & \vdots \\ y(I+k,J+1) & y(I+k,J+2) & \cdots & y(I+k,J+k) \end{pmatrix},$$

with I = (i-1)·k, J = (j-1)·k, m' = m/k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k;
the matrix on the right-hand side is the original matrix, i.e. the spatial-domain luminance value matrix of the current coding block; it is an m×m matrix, where m denotes the number of rows and of columns of the spatial-domain luminance value matrix of the current coding block, m ≥ 16;
y(i1, j1) denotes the luminance value at row i1, column j1 of the spatial-domain information block of the current coding block, 1 ≤ i1 ≤ m, 1 ≤ j1 ≤ m;
the matrix on the left-hand side is the block matrix, of size (m/k) × (m/k);
Y(i, j) denotes the sub-block at row i, column j of the spatial-domain information block of the current coding block (referred to as the row-i, column-j spatial-domain sub-block of the current coding block); it is a k×k matrix, where k denotes the number of rows and of columns of the transform matrix in the transform module, k ≤ 16;
I, J and m' are three intermediate variables;
y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
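The partitioning rule of claim 28 can be sketched in NumPy as follows. This is an illustration only, not part of the patent; the function name and the 16/4 example sizes are chosen here for demonstration.

```python
import numpy as np

def partition_into_subblocks(y: np.ndarray, k: int) -> np.ndarray:
    """Split an m-by-m spatial-domain luminance block into an
    (m/k)-by-(m/k) grid of k-by-k sub-blocks Y(i, j)."""
    m = y.shape[0]
    assert y.shape == (m, m) and m % k == 0, "m must be a multiple of k"
    mp = m // k  # m' in the claim
    # result[i, j] is the k-by-k sub-block at block-row i, block-column j
    return y.reshape(mp, k, mp, k).swapaxes(1, 2)

# Example: a 16x16 block (m = 16) split by a 4x4 transform size (k = 4)
y = np.arange(16 * 16).reshape(16, 16)
blocks = partition_into_subblocks(y, 4)
# Sub-block Y(i, j) covers rows I+1..I+k, cols J+1..J+k with I=(i-1)k, J=(j-1)k
```

With zero-based indices, `blocks[i, j]` equals `y[i*k:(i+1)*k, j*k:(j+1)*k]`, matching the definition of Y(i, j) above.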
29. The luminance transform-domain intra-prediction coding system for large-size blocks of claim 27, wherein transforming each spatial-domain sub-block of the current coding block to obtain the corresponding transform-domain sub-block of the current coding block is specifically:

$$\begin{pmatrix} yt(I+1,J+1) & yt(I+1,J+2) & \cdots & yt(I+1,J+k) \\ yt(I+2,J+1) & yt(I+2,J+2) & \cdots & yt(I+2,J+k) \\ \vdots & \vdots & \ddots & \vdots \\ yt(I+k,J+1) & yt(I+k,J+2) & \cdots & yt(I+k,J+k) \end{pmatrix} = C \begin{pmatrix} y(I+1,J+1) & y(I+1,J+2) & \cdots & y(I+1,J+k) \\ y(I+2,J+1) & y(I+2,J+2) & \cdots & y(I+2,J+k) \\ \vdots & \vdots & \ddots & \vdots \\ y(I+k,J+1) & y(I+k,J+2) & \cdots & y(I+k,J+k) \end{pmatrix} C^{T},$$

with I = (i-1)·k, J = (j-1)·k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k;
where $C = \begin{pmatrix} c(1,1) & c(1,2) & \cdots & c(1,k) \\ c(2,1) & c(2,2) & \cdots & c(2,k) \\ \vdots & \vdots & \ddots & \vdots \\ c(k,1) & c(k,2) & \cdots & c(k,k) \end{pmatrix}$ denotes the transform matrix; c(ic, jc) is the value at row ic, column jc of the transform matrix, 1 ≤ ic ≤ k, 1 ≤ jc ≤ k; C^T denotes the transpose of the transform matrix;
the matrix being transformed is the sub-block at row i, column j of the spatial-domain information block of the current coding block, referred to as the row-i, column-j spatial-domain sub-block of the current coding block; y(I+i2, J+j2) denotes the luminance value at row I+i2, column J+j2 of the spatial-domain information block of the current coding block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
the resulting matrix is the sub-block at row i, column j of the transform-domain information block of the current coding block; yt(I+i2, J+j2) denotes the value at row I+i2, column J+j2 of the transform-domain information block of the current coding block, i.e. the value obtained from y(I+i2, J+j2) after the transform, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
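A sketch of the per-sub-block transform yt = C · y · C^T of claim 29. The patent leaves the transform matrix C generic; the orthonormal DCT-II matrix used here is one admissible choice, assumed only for illustration.

```python
import numpy as np

def dct_matrix(k: int) -> np.ndarray:
    """Orthonormal DCT-II matrix: one possible k-by-k transform matrix C
    (the patent does not fix C; this choice is an assumption)."""
    n = np.arange(k)
    c = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / k)

def transform_subblock(y_sub: np.ndarray, c: np.ndarray) -> np.ndarray:
    """yt = C * y * C^T, the separable 2-D transform of claim 29."""
    return c @ y_sub @ c.T

k = 4
C = dct_matrix(k)
y_sub = np.random.default_rng(0).integers(0, 256, (k, k)).astype(float)
yt = transform_subblock(y_sub, C)
# For an orthonormal C the transform is invertible: C^T * yt * C recovers y_sub
assert np.allclose(C.T @ yt @ C, y_sub)
```

The inverse relation C^T · yt · C is the same k×k inverse transform the decoder of claim 34 applies.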
30. The luminance transform-domain intra-prediction coding system for large-size blocks of claim 27, wherein the second transform-domain intra-prediction module further comprises: a number initialization module, a search module, a second intermediate intra-prediction module, and a judgment module:
The number initialization module is configured to initialize number, where number denotes the maximum count of data per transform-domain sub-block on which intra-prediction mode optimization is performed, 0 ≤ number ≤ k·k/2;
The search module is configured to find the number largest-valued data in the current transform-domain sub-block of the current coding block;
The second intermediate intra-prediction module is configured to perform intermediate intra prediction on each of the number data, using each of the four prediction modes in the third prediction mode group respectively, and then to perform intermediate intra prediction on all remaining data in the current transform-domain sub-block of the current coding block, applying the four prediction modes in the third prediction mode group uniformly;
The judgment module is configured to judge whether every transform-domain sub-block of the current coding block has undergone transform-domain intra prediction; if so, proceed to the intra-prediction error calculation module; if not, return to the search module.
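The search module's selection of the number largest-valued data in a k×k transform-domain sub-block can be sketched as below. Reading "largest-valued" as largest in magnitude is an assumption here (the claim text does not say whether the comparison is signed); the function name is illustrative.

```python
import numpy as np

def top_number_positions(yt_sub: np.ndarray, number: int) -> np.ndarray:
    """(row, col) indices of the `number` largest-magnitude coefficients
    of a transform-domain sub-block; magnitude ordering is an assumption."""
    flat = np.abs(yt_sub).ravel()
    idx = np.argpartition(flat, -number)[-number:]   # unordered top-number
    idx = idx[np.argsort(flat[idx])[::-1]]           # sort descending
    return np.stack(np.unravel_index(idx, yt_sub.shape), axis=1)

yt_sub = np.array([[9., 1., 2.],
                   [3., -8., 0.],
                   [4., 5., 7.]])
positions = top_number_positions(yt_sub, 2)  # rows/cols of 9 and -8
```

Each returned position then receives its own intermediate intra prediction; all other coefficients are handled together, as claim 30 describes.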
31. The luminance transform-domain intra-prediction coding system for large-size blocks of claim 30, wherein the four prediction modes in the third prediction mode group are specifically as follows:
Mode one: transform-domain left-side prediction mode.
If Di has been encoded:
p_yt(I+i2, J+j2) = yg_left(I+i2, m-k+j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Mode two: transform-domain upper-side prediction mode.
If Bj has been encoded:
p_yt(I+i2, J+j2) = yg_up(m-k+i2, J+j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Mode three: transform-domain upper-left prediction mode.
If A has been encoded:
p_yt(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Mode four: transform-domain upper-right prediction mode.
If E has been encoded:
p_yt(I+i2, J+j2) = yg_right_up(m-k+i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Here, A denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current coding block; B1, B2, …, Bj, …, Bm' denote, respectively, the sub-blocks at row m/k, columns 1, 2, …, j, …, m/k of the upper-side prediction block of the current coding block; E denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current coding block; D1, D2, …, Di, …, Dm' denote, respectively, the sub-blocks at rows 1, 2, …, i, …, m/k, column m/k of the left-side prediction block of the current coding block;
p_yt(I+i2, J+j2) denotes the transform-domain predicted value of yt(I+i2, J+j2);
yg_left(I+i2, m-k+j2): the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the left-side prediction block of the current coding block (its row-i, column-m/k sub-block);
yg_up(m-k+i2, J+j2): the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the upper-side prediction block of the current coding block (its row-m/k, column-j sub-block);
yg_left_up(m-k+i2, m-k+j2): the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the upper-left prediction block of the current coding block (its row-m/k, column-m/k sub-block);
yg_right_up(m-k+i2, j2): the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the upper-right prediction block of the current coding block (its row-m/k, column-1 sub-block);
$$\begin{pmatrix} 128 & 128 & \cdots & 128 \\ 128 & 128 & \cdots & 128 \\ \vdots & \vdots & \ddots & \vdots \\ 128 & 128 & \cdots & 128 \end{pmatrix}_{k \times k}$$
denotes the k×k matrix all of whose entries equal 128, called the basic spatial-domain information block;

$$\begin{pmatrix} yt128(1,1) & \cdots & yt128(1,k) \\ \vdots & \ddots & \vdots \\ yt128(k,1) & \cdots & yt128(k,k) \end{pmatrix}$$
denotes the basic transform-domain information block, defined by

$$\begin{pmatrix} yt128(1,1) & \cdots & yt128(1,k) \\ \vdots & \ddots & \vdots \\ yt128(k,1) & \cdots & yt128(k,k) \end{pmatrix} = C \begin{pmatrix} 128 & \cdots & 128 \\ \vdots & \ddots & \vdots \\ 128 & \cdots & 128 \end{pmatrix}_{k \times k} C^{T};$$

yt128(i2, j2) denotes the value at row i2, column j2 of the basic transform-domain information block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;

$$\begin{pmatrix} yg128(1,1) & \cdots & yg128(1,k) \\ \vdots & \ddots & \vdots \\ yg128(k,1) & \cdots & yg128(k,k) \end{pmatrix}$$
denotes the reconstructed block of the basic transform-domain information block;
[Equation image not recoverable from the source.]
That is, the basic transform-domain information block is quantized, and the result is the reconstructed block of the basic transform-domain information block; yg128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantization of yt128(i, j).
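Under an orthonormal transform (e.g. the DCT-II assumed here purely for illustration; the patent leaves C generic), the basic transform-domain information block yt128 = C · (128·1) · C^T concentrates all its energy in the single DC coefficient, which is what makes it a cheap fallback predictor. A quick check:

```python
import numpy as np

def dct_matrix(k: int) -> np.ndarray:
    # Orthonormal DCT-II: one admissible transform matrix C (assumption)
    n = np.arange(k)
    c = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k)) * np.sqrt(2 / k)
    c[0, :] /= np.sqrt(2)
    return c

k = 8
C = dct_matrix(k)
base = np.full((k, k), 128.0)   # basic spatial-domain information block
yt128 = C @ base @ C.T          # basic transform-domain information block
# Only the DC term survives: yt128[0, 0] = 128 * k, everything else is ~0,
# because every non-DC basis row of C is orthogonal to the constant row.
assert np.isclose(yt128[0, 0], 128 * k)
assert np.allclose(yt128.ravel()[1:], 0)
```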
32. The luminance transform-domain intra-prediction coding system for large-size blocks of claim 30 or 31, wherein:
The concrete implementation of "intermediate intra prediction" is: compute, for the object under processing, the four errors obtained by predicting with modes one through four;
"Performing intermediate intra prediction on each of the number largest-valued data of the current transform-domain sub-block, using each of the four prediction modes in the third prediction mode group respectively" means carrying out one "intermediate intra prediction" for each of the number data in turn; each data item is then the processing object of one "intermediate intra prediction", and the four corresponding errors are obtained for each of the number data;
"Then performing intermediate intra prediction on all remaining data in the current transform-domain sub-block of the current coding block, applying the four prediction modes in the third prediction mode group uniformly" means carrying out one "intermediate intra prediction" on all remaining data together; the remaining data as a whole are then the processing object, and the four errors corresponding to the remaining data are obtained.
33. The luminance transform-domain intra-prediction coding system for large-size blocks of claim 32, wherein:
In the intra-prediction error calculation module, each prediction mode combination comprises "the prediction modes of the number data" and "one prediction mode for the remaining data";
In "the prediction modes of the number data", each of the number data may adopt any one of the four modes in the third prediction mode group, and the modes of the individual data among the number data may be identical or different; the remaining data may likewise adopt any one of the four modes in the third prediction mode group.
34. A luminance transform-domain intra-prediction decoding system for large-size blocks, wherein the system comprises: an entropy decoding module, a reordering module, a second decoding-block transform-domain intra-prediction value acquisition module, a second decoding-block transform-domain reconstruction value acquisition module, a second decoding-block spatial-domain reconstruction value acquisition module, and a filtering module:
The entropy decoding module is configured to entropy-decode the bitstream of the current decoding block;
The reordering module is configured to reorder the entropy-decoded data of the current decoding block;
The second decoding-block transform-domain intra-prediction value acquisition module is configured to perform transform-domain intra prediction according to the intra-prediction mode of the current decoding block, using the four prediction modes in the fourth prediction mode group, to obtain the transform-domain intra-prediction value of the current decoding block;
The second decoding-block transform-domain reconstruction value acquisition module is configured to add the transform-domain intra-prediction value of the current decoding block to the reordered data of the current decoding block obtained by the reordering module, yielding the transform-domain reconstruction value of the current decoding block;
The second decoding-block spatial-domain reconstruction value acquisition module is configured to inverse-quantize the transform-domain reconstruction value of the current decoding block and then apply the k×k inverse transform, yielding the spatial-domain reconstruction value of the current decoding block;
The filtering module is configured to filter the spatial-domain reconstruction value of the current decoding block, completing the decoding of the current decoding block.
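The decoder modules of claim 34 chain together as: entropy decode → reorder → add transform-domain prediction → inverse quantize → k×k inverse transform → filter. A structural sketch of the middle of that chain; entropy decoding, reordering, real quantization, and filtering are stubbed out, and C is any assumed orthonormal transform matrix:

```python
import numpy as np

def decode_block(residual_td, prediction_td, C, qstep=1.0):
    """Transform-domain reconstruction followed by the k-by-k inverse
    transform, mirroring claim 34 (entropy decoding and reordering are
    assumed already done; `qstep` is a placeholder for inverse quantization)."""
    yt_rec = prediction_td + residual_td  # transform-domain reconstruction
    yt_deq = yt_rec * qstep               # placeholder inverse quantization
    return C.T @ yt_deq @ C               # inverse of yt = C y C^T

# Round-trip check with a random orthonormal C and zero prediction
k = 4
C, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(k, k)))
y = np.random.default_rng(3).normal(size=(k, k))
yt = C @ y @ C.T
rec = decode_block(residual_td=yt, prediction_td=np.zeros((k, k)), C=C)
assert np.allclose(rec, y)
```

Because prediction and residual are added in the transform domain before the inverse transform, the decoder performs exactly one inverse transform per sub-block.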
35. The luminance transform-domain intra-prediction decoding system for large-size blocks of claim 34, wherein the fourth prediction mode group specifically comprises the following four prediction modes:
Mode one: transform-domain left-side prediction mode.
If Di' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left(I+i2, m-k+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt_dec(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Mode two: transform-domain upper-side prediction mode.
If Bj' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_up(m-k+i2, J+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt_dec(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Mode three: transform-domain upper-left prediction mode.
If A' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_left_up(m-k+i2, m-k+j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt_dec(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Mode four: transform-domain upper-right prediction mode.
If E' has been decoded:
p_yt_dec(I+i2, J+j2) = yg_right_up(m-k+i2, j2)', 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;
otherwise: p_yt_dec(I+i2, J+j2) = yg128(i2, j2), 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k.
Here, A' denotes the sub-block at row m/k, column m/k of the upper-left prediction block of the current decoding block; B1', B2', …, Bj', …, Bm'' denote, respectively, the sub-blocks at row m/k, columns 1, 2, …, j, …, m/k of the upper-side prediction block of the current decoding block; E' denotes the sub-block at row m/k, column 1 of the upper-right prediction block of the current decoding block; D1', D2', …, Di', …, Dm'' denote, respectively, the sub-blocks at rows 1, 2, …, i, …, m/k, column m/k of the left-side prediction block of the current decoding block;
p_yt_dec(I+i2, J+j2) denotes the transform-domain predicted value of the value at row I+i2, column J+j2 of the transform-domain information block of the current decoding block; yg_left(I+i2, m-k+j2)' denotes the luminance reconstruction value at row I+i2, column m-k+j2 of the transform-domain information block of the left-side prediction block of the current decoding block (its row-i, column-m/k sub-block);
yg_up(m-k+i2, J+j2)': the luminance reconstruction value at row m-k+i2, column J+j2 of the transform-domain information block of the upper-side prediction block of the current decoding block (its row-m/k, column-j sub-block); yg_left_up(m-k+i2, m-k+j2)': the luminance reconstruction value at row m-k+i2, column m-k+j2 of the transform-domain information block of the upper-left prediction block of the current decoding block (its row-m/k, column-m/k sub-block); yg_right_up(m-k+i2, j2)': the luminance reconstruction value at row m-k+i2, column j2 of the transform-domain information block of the upper-right prediction block of the current decoding block (its row-m/k, column-1 sub-block); I = (i-1)·k, J = (j-1)·k, 1 ≤ i ≤ m/k, 1 ≤ j ≤ m/k;
$$\begin{pmatrix} 128 & 128 & \cdots & 128 \\ 128 & 128 & \cdots & 128 \\ \vdots & \vdots & \ddots & \vdots \\ 128 & 128 & \cdots & 128 \end{pmatrix}_{k \times k}$$
denotes the k×k matrix all of whose entries equal 128, called the basic spatial-domain information block;

$$\begin{pmatrix} yt128(1,1) & \cdots & yt128(1,k) \\ \vdots & \ddots & \vdots \\ yt128(k,1) & \cdots & yt128(k,k) \end{pmatrix}$$
denotes the basic transform-domain information block, defined by

$$\begin{pmatrix} yt128(1,1) & \cdots & yt128(1,k) \\ \vdots & \ddots & \vdots \\ yt128(k,1) & \cdots & yt128(k,k) \end{pmatrix} = C \begin{pmatrix} 128 & \cdots & 128 \\ \vdots & \ddots & \vdots \\ 128 & \cdots & 128 \end{pmatrix}_{k \times k} C^{T};$$

yt128(i2, j2) denotes the value at row i2, column j2 of the basic transform-domain information block, 1 ≤ i2 ≤ k, 1 ≤ j2 ≤ k;

$$\begin{pmatrix} yg128(1,1) & \cdots & yg128(1,k) \\ \vdots & \ddots & \vdots \\ yg128(k,1) & \cdots & yg128(k,k) \end{pmatrix}$$
denotes the reconstructed block of the basic transform-domain information block;
[Equation image not recoverable from the source.]
That is, the basic transform-domain information block is quantized, and the result is the reconstructed block of the basic transform-domain information block; yg128(i, j) is the reconstruction value at row i, column j of the reconstructed block of the basic transform-domain information block, obtained after quantization of yt128(i, j).
CN2013103373945A 2013-08-05 2013-08-05 Intra-frame prediction encoding and decoding method and system of brightness transformation domain of big size block Pending CN103391443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103373945A CN103391443A (en) 2013-08-05 2013-08-05 Intra-frame prediction encoding and decoding method and system of brightness transformation domain of big size block

Publications (1)

Publication Number Publication Date
CN103391443A true CN103391443A (en) 2013-11-13

Family

ID=49535587

Country Status (1)

Country Link
CN (1) CN103391443A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998020680A1 (en) * 1996-11-07 1998-05-14 Matsushita Electric Industrial Co., Ltd. Image encoder and image decoder
CN101682770A (en) * 2007-06-15 2010-03-24 高通股份有限公司 Adaptive coding of video block prediction mode
CN102984522A (en) * 2012-12-14 2013-03-20 深圳百科信息技术有限公司 Brightness transformation domain intra-frame prediction coding and decoding method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113678453A (en) * 2019-04-12 2021-11-19 北京字节跳动网络技术有限公司 Context determination for matrix-based intra prediction
CN113678453B (en) * 2019-04-12 2024-05-14 北京字节跳动网络技术有限公司 Matrix-based intra-prediction context determination

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: Units 403 and 405, Building A1, Kexing (Sinovac) Science Park, No. 15 Keyuan Road, Science and Technology Park, Nanshan District, Shenzhen, Guangdong 518057

Applicant after: Shenzhen Yunzhou Multimedia Technology Co., Ltd.

Address before: Unit B4, 9th Floor, EVOC Technology Building, No. 31 Gaoxin Middle 4th Road, Nanshan District, Shenzhen, Guangdong 518057

Applicant before: Shenzhen Yunzhou Multimedia Technology Co., Ltd.

RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20131113