Background of the Invention
Video coding and decoding technology is the key to efficient storage and transmission of multimedia data, and advanced video codecs usually exist in the form of standards. Typical video compression standards at present include the MPEG series of international standards released by the Moving Picture Experts Group (MPEG) under the International Organization for Standardization (ISO); the H.26x series of video compression standards proposed by the International Telecommunication Union (ITU); and the JVT video coding standard formulated by the Joint Video Team (JVT) established jointly by ISO and ITU. The JVT standard adopts a novel set of coding techniques, and its compression efficiency is much higher than that of any existing coding standard. The formal title of the JVT standard in ISO is Part 10 of the MPEG-4 standard, and its formal title in the ITU is the H.264 standard.
Video encoding is the process of encoding each frame of a video sequence. In the JVT video coding standard, each frame is encoded with the macroblock as the elementary unit. A frame may be coded as an intra (I) frame, a predictive (P) frame, or a bi-directionally predictive (B) frame. The characteristic of I-frame coding is that no other frame is referenced during encoding or decoding. In general, I-frame, P-frame and B-frame coding are interleaved, for example in the order IBBPBBP. However, some special applications, for example those requiring low computational complexity, low memory capacity or real-time compression, can only use I-frame coding. In addition, video coded entirely with I frames has the advantage of being easy to edit. In I-frame coding, the redundancy within a macroblock is eliminated by an orthogonal transform, such as the discrete cosine transform (DCT) or a wavelet transform. To eliminate the redundancy between macroblocks, traditional video coding algorithms usually adopt a prediction method in the coefficient domain of the orthogonal transform. However, such prediction can only be carried out on the DC component, so its efficiency is not high.
In I-frame coding, multi-directional spatial prediction is the current mainstream of research, and it has achieved good results. Intra-frame spatial prediction means that when an I frame is encoded or decoded, a prediction of the current block is first generated, according to a certain mode, from information within the frame that is also available at the decoder (such as the adjacent reconstructed blocks); the predicted block is then subtracted from the actual block to be encoded to obtain a residual, and the residual is encoded.
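The predict-subtract-encode loop just described can be sketched as follows; the 4×4 block size and the single vertical predictor are simplifying assumptions for illustration, not the full JVT mode set:

```python
import numpy as np

def vertical_predict(top_row):
    # Replicate the reconstructed row above the block downward
    # (the "vertical" direction of multi-directional prediction).
    return np.tile(top_row, (4, 1))

def intra_code_block(block, top_row):
    # The prediction is formed only from already-decoded neighbors,
    # so the decoder can rebuild the same prediction; only the
    # residual needs to be transmitted.
    pred = vertical_predict(top_row)
    residual = block - pred
    return pred, residual

block = np.arange(16).reshape(4, 4)
top = np.array([10, 11, 12, 13])
pred, res = intra_code_block(block, top)
# Decoder side: the same prediction plus the residual restores the block.
reconstructed = pred + res
```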
Multi-directional spatial prediction has been applied with good results in video coding; the JVT video coding standard adopts exactly this technique. However, existing multi-directional spatial prediction still has two main shortcomings. First, when applied to continuous I-frame coding it produces severe flicker, which degrades the visual effect. Second, multi-directional spatial prediction changes the probability distribution of the residual image in the coefficient domain, yet existing methods still adopt a fixed zigzag scanning order for the transform coefficients (referring to Fig. 4). Zigzag scanning here refers to the order in which the quantized transform coefficients of a block are coded in a video coding scheme, and this order has a very large influence on coding efficiency. Present coding systems (JPEG, MPEG, etc.) generally adopt this fixed scanning order for blocks of the same size, so coding efficiency does not reach the optimum.
However, when JVT is used for all-I-frame coding, the reconstructed image flickers during playback. Analysis and verification suggest that this is mainly caused by the variable block size used in coding and by the relative randomness of intra-frame prediction.
Variable block size means that a macroblock being encoded can be subdivided into smaller sub-blocks according to the coding mode; different partition modes yield sub-blocks of different sizes. The chief reason that variable block size causes flicker is that blocks at the same position in two consecutive frames, whose content is essentially unchanged, may be partitioned differently when encoded, so that their reconstructions differ considerably. This part of the flicker can be avoided by a suitable modification of the encoder's coding strategy, without modifying the decoder. Fig. 1 is a schematic diagram of the subdivision of a macroblock into various sub-blocks in JVT.
Common coding schemes generally have only inter-frame prediction, which is used to eliminate temporal redundancy, while spatial redundancy is eliminated by various transforms. JVT additionally introduces intra-frame prediction, which works together with transform coding to eliminate spatial redundancy and thereby greatly improves coding efficiency. Specifically there are two modes, Intra4×4 and Intra16×16 (these are two macroblock partition modes; there are 9 prediction modes under Intra4×4 and 4 under Intra16×16). Referring to Fig. 2, under the Intra4×4 mode, intra-frame prediction is carried out for each sub-block of the macroblock: the pixels of each 4×4 sub-block are predicted from 17 already-decoded pixels in the adjacent blocks.
Referring to Fig. 3, the intra-frame prediction modes are divided into 9 kinds (mode 0 to mode 8), of which mode 2 is the DC prediction of the MPEG-4 standard.
In Intra16×16, let the pixels of the block to be predicted be denoted P(x, y), where x, y = 0…15; the boundary pixels to the left are P(-1, y), y = 0…15, and the boundary pixels above are P(x, -1), x = 0…15. Four prediction modes are defined: vertical prediction, horizontal prediction, DC prediction and plane prediction. The chrominance blocks also have four prediction modes, essentially similar to those of the luminance blocks.
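A minimal sketch of three of the four Intra16×16 predictors (vertical, horizontal and DC; the plane mode is omitted for brevity), using the boundary-pixel convention P(-1, y) and P(x, -1) given above; the DC rounding offset of 16 and shift of 5 simply average the 32 boundary pixels:

```python
import numpy as np

def predict_16x16(mode, left, top):
    # left: P(-1, y), y = 0..15; top: P(x, -1), x = 0..15.
    if mode == 0:                       # vertical: copy the top row down
        return np.tile(top, (16, 1))
    if mode == 1:                       # horizontal: copy the left column right
        return np.tile(left.reshape(16, 1), (1, 16))
    if mode == 2:                       # DC: mean of the 32 boundary pixels
        dc = (left.sum() + top.sum() + 16) >> 5
        return np.full((16, 16), dc)
    raise ValueError("plane mode (3) omitted in this sketch")

left = np.full(16, 100)
top = np.full(16, 60)
dc_pred = predict_16x16(2, left, top)
```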
The relative randomness of intra-frame prediction means the following: owing to differences in the in-frame information used to generate the prediction and differences in the prediction mode, blocks at the same position in two consecutive frames whose content is essentially unchanged will in general have different predicted values. For a system without intra-frame prediction performing continuous I-frame coding, the small differences between corresponding blocks of two consecutive frames are removed by the quantization operation in the transform domain; the more similar the two corresponding macroblocks are, the greater the probability that this difference is removed, and if the two blocks are identical, their reconstructed images are also identical. For a system with intra-frame prediction, however, the reconstructed image is the sum of two parts: the predicted image and the reconstructed residual image. Because the reconstructed residual has passed through the frequency-domain quantization process, its frequency coefficients are integer multiples of the quantization step; but the predicted image has not passed through this process, so the possibility that its frequency coefficients are integer multiples of the quantization step is very small. In fact, if its frequency coefficients are divided by the quantization step, the fractional parts of the resulting values can be regarded as random numbers between 0 and 1 (when the quantization step is not very large), i.e. equal to any number between 0 and 1 (including 0) with equal probability.
Because the reconstructed image is composed of these two parts, the fractional part obtained by dividing its frequency coefficients by the quantization step can likewise be regarded as a random number between 0 and 1. For corresponding blocks of two consecutive frames whose pixel values are close, this relative randomness of the reconstructed image in the frequency domain means that, unlike in a system without intra-frame prediction, the similarity of their reconstructions is no longer closely related to their own similarity: even if the two blocks themselves are identical, the probability that their reconstructed images are identical is very small.
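The fractional-part argument above can be illustrated numerically; the uniform coefficient model and the step size below are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
qstep = 10.0

# Reconstructed residual coefficients have passed through the quantizer,
# so they are exact integer multiples of the quantization step...
residual = qstep * rng.integers(-5, 6, size=10000)
frac_residual = np.mod(residual / qstep, 1.0)

# ...but prediction-image coefficients never passed through it, so their
# value divided by the step has an essentially uniform fractional part.
pred = rng.uniform(-50.0, 50.0, size=10000)
frac_pred = np.mod(pred / qstep, 1.0)
```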
Referring to Figs. 5 and 6, the existing encoding process is as follows: under the control of the mode selection module, the reconstructed image is processed by the prediction module, which outputs a predicted image; this predicted image, together with the current image to be encoded, is processed by the residual coefficient calculation module, and the result then passes through the scanning module and entropy coding to finally output the encoded bitstream.
Referring to Fig. 7, the signal flow of the decoding part of a prior-art multi-directional spatial predictive coding system is: the bitstream passes through entropy decoding, inverse quantization and inverse transformation, is then compensated with the predicted image, and is output as the video stream.
The above encoding and decoding process cannot overcome the flicker that video coding methods based on multi-directional spatial prediction produce when applied to continuous I-frame coding; and because a fixed scanning order is adopted, the coding efficiency of such methods cannot be improved.
Summary of the invention
The main purpose of the present invention is to provide a new spatial prediction method for video coding that overcomes the flicker defect produced when video coding based on multi-directional spatial prediction is applied to continuous I-frame coding, and that improves the coding efficiency of such methods, providing for video coding techniques based on multi-directional spatial prediction both an anti-flicker continuous I-frame coding scheme and a mode-based residual coefficient scanning scheme, so that coding efficiency is guaranteed while flicker is alleviated.
Another object of the present invention is to provide a new spatial prediction method for video coding that, taking the JVT standard as an implementation case, provides the JVT standard with concrete technical means for solving the continuous I-frame coding flicker problem and improving coding efficiency.
Another object of the present invention is to provide a new spatial prediction apparatus for video coding, i.e. a concrete device for realizing the above methods.
The objects of the invention are achieved through the following technical solutions:
A new spatial prediction method for video coding: during encoding, the predicted image is also transformed to the frequency domain using the transform method adopted when processing the residual image, and is quantized with the same quantization parameter; the result is then used as the predicted image. During decoding, the predicted image is quantized in the frequency domain by the same processing method as during encoding, and is then compensated onto the decoded residual image.
The encoding process is specifically:
Step 100: according to the selected prediction mode, generate the predicted image from the decoded images of the blocks adjacent to the current block to be encoded;
Step 101: transform the predicted image to the frequency domain;
Step 102: quantize the frequency coefficients of the predicted image, where the quantization parameter is the same as the one adopted when processing the residual image, and the matrix of quantized frequency coefficients satisfies the following formula:
Z = Q(Y) = (Y × Quant(Qp) + Qconst(Qp)) >> Q_bit(Qp)
where
Z is the quantized frequency coefficient matrix,
Y is the frequency coefficient matrix,
Qp is the quantization parameter,
Quant(Qp), Qconst(Qp) and Q_bit(Qp) are the quantization functions defined by JVT;
Step 103: inverse-quantize the frequency coefficient matrix obtained in step 102 according to the following formula:
W = DQ(Z) = (Z × DQuant(Qp) + DQconst(Qp)) >> Q_per(Qp)
DQuant(Qp) × Quant(Qp) ≈ 2^(Q_per(Qp) + Q_bit(Qp))
where
W is the inverse-quantized frequency coefficient matrix,
Z is the quantized frequency coefficient matrix before inverse quantization,
Qp is the quantization parameter,
DQuant(Qp), DQconst(Qp), Q_bit(Qp) and Q_per(Qp) are the quantization functions defined by JVT;
Step 104: following the method of steps 100-103, transform the current block being encoded to the frequency domain to obtain the frequency-domain image;
Step 105: subtract the inverse-quantized frequency coefficient matrix from the frequency-domain image to directly obtain the frequency-domain residual image;
Step 106: quantize the frequency-domain residual image, using the same formula as above, to obtain the quantized frequency-domain residual coefficients;
Step 107: perform coefficient scanning on the frequency-domain residual coefficients and entropy-code them to obtain the bitstream;
Step 108: compensate the frequency coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C = C + Z
where
C is the frequency-domain residual coefficients,
Z is the frequency coefficient matrix;
Step 109: inverse-quantize the frequency-domain residual coefficients with the JVT formula;
Step 110: inverse-transform the frequency-domain residual coefficients according to the block size and mode to obtain a preliminary reconstructed image;
Step 111: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
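Steps 100 to 111 can be condensed into the following sketch; the scalar quantizer is an illustrative stand-in for the JVT Quant/DQuant functions, whose table values are not reproduced here:

```python
import numpy as np

QSTEP = 8  # illustrative scalar step standing in for Quant(Qp)/DQuant(Qp)

def quantize(y):           # steps 102 / 106: forward quantization
    return np.round(y / QSTEP).astype(int)

def dequantize(z):         # step 103: restore the pre-quantization scale
    return z * QSTEP

def encode_block(freq_cur, freq_pred):
    # Steps 102-103: quantize, then dequantize, the prediction, so that
    # encoder and decoder see bit-identical prediction coefficients.
    z = quantize(freq_pred)
    w = dequantize(z)
    # Steps 105-106: frequency-domain residual, then quantize it.
    c = quantize(freq_cur - w)
    # Steps 108-109: compensate Z onto C, then dequantize, for the
    # encoder's local reconstruction.
    recon_coeff = dequantize(c + z)
    return c, z, recon_coeff
```

Because the prediction coefficients Z are derived only from decoded neighbors, the decoder can repeat the same quantization and undo the compensation of step 108 exactly.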
Before the above encoding, processing that decides the scanning order based on the prediction mode is further included. The specific process is: for each mode, separately count the probability that the coefficient of each frequency of the residual image is non-zero, and generate a scanning order table in descending order of this probability, to replace the single zigzag scan table. Zigzag scanning here refers to the order in which the quantized transform coefficients of a block are coded in a video coding scheme; this order has a very large influence on coding efficiency. Referring to Fig. 4, present coding systems (JPEG, MPEG, etc.) generally adopt a fixed scanning order for blocks of the same size.
During encoding, the scanning order table is consulted according to the selected mode, and the residual coefficients are scanned in the order of the positions looked up.
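The mode-dependent scanning order table described above can be built as sketched below; the probability statistics are fabricated toy values for illustration:

```python
import numpy as np

def build_scan_table(nonzero_counts):
    # nonzero_counts[m][k]: how often coefficient position k (raster
    # order) was non-zero under prediction mode m, gathered offline.
    # The scan visits positions in decreasing order of that frequency,
    # replacing the single fixed zigzag table.
    return [list(np.argsort(-np.asarray(row), kind="stable"))
            for row in nonzero_counts]

# Toy statistics for two modes over a 2x2 block (positions 0..3).
counts = [[90, 40, 70, 5],    # mode 0: energy concentrates in a column
          [90, 70, 40, 5]]    # mode 1: energy concentrates in a row
scan = build_scan_table(counts)

def scan_coeffs(flat_block, mode):
    # Encode-time lookup: emit coefficients in the table's order.
    return [flat_block[pos] for pos in scan[mode]]
```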
The decoding process is:
Step 200: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 201: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block;
Step 203: transform the predicted image to the frequency domain;
Step 204: quantize the frequency coefficients of the predicted image to obtain the frequency coefficient matrix;
Step 205: compensate the frequency coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C = C + Z
where
C is the frequency-domain residual coefficients,
Z is the frequency coefficient matrix;
Step 206: inverse-quantize the frequency-domain residual coefficients;
Step 207: inverse-transform the frequency-domain residual coefficients according to the block size and mode to obtain a preliminary reconstructed image;
Step 208: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
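Decoding steps 200 to 208 mirror the encoder; a sketch with the same illustrative scalar quantizer standing in for the JVT formulas:

```python
import numpy as np

QSTEP = 8  # stand-in for the JVT quantization/inverse-quantization scale

def decode_block(c, freq_pred):
    # Step 204: quantize the prediction exactly as the encoder did.
    z = np.round(freq_pred / QSTEP).astype(int)
    # Step 205: compensate in the quantized frequency domain.
    c = c + z
    # Step 206: a single inverse quantization now covers both the
    # prediction and the residual, which is what keeps encoder and
    # decoder reconstructions bit-identical.
    return c * QSTEP
```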
The decoding process can also be:
Step 210: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 211: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block;
Step 212: transform the predicted image to the frequency domain;
Step 213: quantize the frequency coefficients of the predicted image to obtain the frequency coefficient matrix;
Step 214: inverse-quantize the frequency coefficients and the frequency-domain residual coefficients separately;
Step 215: compensate the inverse-quantized frequency coefficient matrix onto the inverse-quantized frequency-domain residual coefficients;
Step 216: inverse-transform the frequency-domain residual coefficients according to the block size and mode to obtain a preliminary reconstructed image;
Step 217: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
The decoding process can further be:
Step 220: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 221: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block;
Step 222: transform the predicted image to the frequency domain;
Step 223: quantize the frequency coefficients of the predicted image to obtain the frequency coefficient matrix;
Step 224: inverse-quantize the frequency coefficients and the frequency-domain residual coefficients separately;
Step 225: inverse-transform the frequency coefficients and the frequency-domain residual coefficients, each according to the block size and mode;
Step 226: compensate the inverse-quantized and inverse-transformed frequency coefficient matrix onto the frequency-domain residual coefficients to obtain a preliminary reconstructed image;
Step 227: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
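The three decoding variants differ only in where the prediction coefficients are merged back in. Under an idealized model in which inverse quantization and the inverse transform are exactly linear (the scalar and cumulative-sum stand-ins below), all three yield the same reconstruction; with the real rounded JVT integer formulas they agree only approximately, so encoder and decoder must agree on one variant:

```python
import numpy as np

QSTEP = 8
dq = lambda v: v * QSTEP          # scalar stand-in for inverse quantization
it = lambda v: np.cumsum(v)       # any linear map stands in for the
                                  # inverse transform

c = np.array([5, -1, 2])          # decoded frequency-domain residual
z = np.array([3, 0, -2])          # quantized prediction coefficients

v1 = it(dq(c + z))                # variant 1: compensate before dequantizing
v2 = it(dq(c) + dq(z))            # variant 2: compensate after dequantizing
v3 = it(dq(c)) + it(dq(z))        # variant 3: compensate after transforming
```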
During the above decoding, likewise, the scanning order table is consulted according to the selected mode, and the residual coefficients are scanned in the order of the positions looked up.
A new spatial prediction apparatus for video coding comprises at least an encoding module and a decoding module, wherein:
The encoding module is provided with at least a prediction module, a residual coefficient calculation module, a scanning module and an entropy coding module. The prediction module processes the input reconstructed image to obtain a predicted image; the predicted image and the current image to be encoded are processed by the residual coefficient calculation module; the resulting coefficients are then processed by the scanning module and encoded by the entropy coding module to output the bitstream.
The decoding module is provided with at least an entropy decoding module, a compensation module, an inverse quantization module, an inverse transform module and a transform-quantization module. The input bitstream successively undergoes entropy decoding, inverse quantization and inverse transformation; the transform-quantization module processes the predicted image to obtain the compensation information used in the decoding process.
The encoding is specifically:
The prediction module generates the predicted image from the decoded images of the blocks adjacent to the current block to be encoded, according to the prediction mode selected by the mode selection module;
The residual coefficient calculation module transforms the predicted image to the frequency domain and quantizes its frequency coefficients, where the quantization parameter is the same as that adopted when processing the residual image, and the quantized frequency coefficient matrix satisfies the following formula:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
where
Z is the quantized frequency coefficient matrix,
Y is the frequency coefficient matrix,
Qp is the quantization parameter,
Quant(Qp), Qconst(Qp) and Q_bit(Qp) are the quantization functions defined by JVT;
The residual coefficient calculation module inverse-quantizes the obtained frequency coefficient matrix according to the following formula:
W = DQ(Z) = (Z × DQuant(Qp) + DQconst(Qp)) >> Q_per(Qp)
DQuant(Qp) × Quant(Qp) ≈ 2^(Q_per(Qp) + Q_bit(Qp))
where
W is the inverse-quantized frequency coefficient matrix,
Z is the quantized frequency coefficient matrix before inverse quantization,
Qp is the quantization parameter,
DQuant(Qp), DQconst(Qp), Q_bit(Qp) and Q_per(Qp) are the quantization functions defined by JVT;
The residual coefficient calculation module transforms the current block being encoded to the frequency domain by the above method to obtain the frequency-domain image, subtracts the inverse-quantized frequency coefficient matrix from it to directly obtain the frequency-domain residual image, and quantizes the frequency-domain residual image to obtain the quantized frequency-domain residual coefficients;
The scanning module performs coefficient scanning on the frequency-domain residual coefficients; the entropy coding module encodes the scanned information to obtain the bitstream;
The compensation module compensates the frequency coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C = C + Z
where
C is the frequency-domain residual coefficients,
Z is the frequency coefficient matrix;
The inverse quantization module inverse-quantizes the frequency-domain residual coefficients according to the JVT formula; the inverse transform module inverse-transforms the frequency-domain residual coefficients according to the block size and mode to obtain a preliminary reconstructed image; the filtering module applies deblocking filtering to the reconstructed image to obtain the output image of the current block.
The encoding module and/or the decoding module is further provided with a mode selection module that decides the scanning order based on the prediction mode; it controls the mode selection of the prediction module and the scanning module and improves encoding and/or decoding efficiency. The scanning module separately counts, for each mode, the probability that the coefficient of each frequency of the residual image is non-zero, and generates a scanning order table in descending order of this probability, to replace the single zigzag scan table.
The decoding is specifically:
The entropy decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients; then, according to the prediction mode obtained by entropy decoding, the predicted image is generated from the decoded images of the blocks adjacent to the current decoding block;
The transform-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients to obtain the frequency coefficient matrix;
The compensation module compensates the frequency coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C = C + Z
where
C is the frequency-domain residual coefficients,
Z is the frequency coefficient matrix;
The inverse quantization module performs inverse quantization on the frequency-domain residual coefficients;
The inverse transform module performs the inverse transformation on the frequency-domain residual coefficients according to the block size and mode to obtain a preliminary reconstructed image; finally, the filtering module applies deblocking filtering to the reconstructed image to obtain the output image of the current block.
The decoding can also specifically be:
The entropy decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients; then, according to the prediction mode obtained by entropy decoding, the predicted image is generated from the decoded images of the blocks adjacent to the current decoding block;
The transform-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients to obtain the frequency coefficient matrix;
Inverse quantization modules located after the entropy decoding module and after the transform-quantization module respectively inverse-quantize the frequency coefficients and the frequency-domain residual coefficients;
The compensation module compensates the inverse-quantized frequency coefficient matrix onto the inverse-quantized frequency-domain residual coefficients;
The inverse transform module inverse-transforms the frequency-domain residual coefficients according to the block size and mode to obtain a preliminary reconstructed image; finally, deblocking filtering is applied to the reconstructed image to obtain the output image of the current block.
The decoding can further specifically be:
The entropy decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients; then, according to the prediction mode obtained by entropy decoding, the predicted image is generated from the decoded images of the blocks adjacent to the current decoding block;
The transform-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients to obtain the frequency coefficient matrix;
Inverse quantization modules located after the entropy decoding module and after the transform-quantization module respectively inverse-quantize the frequency coefficients and the frequency-domain residual coefficients;
Inverse transform modules located after the inverse quantization modules inverse-transform the frequency coefficients and the frequency-domain residual coefficients, each according to the block size and mode;
The compensation module compensates the inverse-quantized and inverse-transformed frequency coefficient matrix onto the frequency-domain residual coefficients to obtain a preliminary reconstructed image; finally, deblocking filtering is applied to the reconstructed image to obtain the output image of the current block.
When decoding, the decoding module likewise consults the scanning order table according to the selected mode and scans the residual coefficients in the order of the positions looked up.
From the analysis of the above technical solutions, the present invention has the following advantages:
1. The above new spatial prediction method for video coding overcomes the flicker defect produced when video coding based on multi-directional spatial prediction is applied to continuous I-frame coding, and improves the coding efficiency of such methods, providing for video coding techniques based on multi-directional spatial prediction an anti-flicker continuous I-frame coding scheme and a mode-based residual coefficient scanning scheme, so that coding efficiency is further guaranteed while flicker is alleviated.
2. Taking the JVT standard as an implementation case, the present invention provides the JVT standard with concrete technical means for solving the continuous I-frame coding flicker problem and for improving the coding efficiency of this standard.
3. The apparatus provided by the present invention supplies the concrete system configuration realizing the above methods, together with the hardware modules and their combination scheme for realizing this system.
Embodiment
The present invention is described in further detail below with reference to specific embodiments.
The invention provides a new spatial prediction method and apparatus for video coding, whose purpose is to effectively alleviate the flicker produced when video coding methods based on multi-directional spatial prediction perform continuous I-frame coding, and to decide the scanning order according to the prediction mode, thereby effectively improving the coding efficiency of such methods.
In one embodiment of the invention, the following steps are adopted in the JVT coding standard to realize the anti-flicker processing:
Referring to Figs. 7 and 8:
Processing at the encoding side:
1. Generate the predicted image: according to the selected prediction mode, generate the predicted image from the decoded images of the blocks adjacent to the current block to be encoded. This step is identical to the original JVT procedure;
2. Transform the predicted image to the frequency domain. The transform method is identical to the one adopted in JVT when processing the residual image. For example, for a 4×4 block, if the input is X, then the output Y is
Y = C × X × C^T
where Y is the frequency coefficients of the predicted image, X is the predicted image, and C is the JVT 4×4 forward transform matrix;
3. Quantize the frequency coefficients Y of the predicted image. The quantization parameter Qp is the same Qp adopted when processing the residual image. Let the quantized frequency coefficient matrix be Z; then the quantization formula is:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
4. Inverse-quantize Z to obtain W. The inverse quantization here differs from the inverse quantization in JVT: the scale of JVT's inverse quantization differs from that of its quantization.
In JVT, the inverse quantization formula is:
W=DQ(Z)=(Z×DQuant(Qp)+DQconst(Qp))>>Q_per(Qp)
In order to make the quantized coefficients return to their scale before quantization, the inverse quantization formula must be redesigned. The inverse quantization formula of the present invention is:
W = DQ'(Z) = (Z × DQuant'(Qp) + DQconst'(Qp)) >> Q_per'(Qp)
and the new formula must satisfy:
DQuant'(Qp) × Quant(Qp) ≈ 2^(Q_per'(Qp) + Q_bit(Qp))
5. Transform the current block I to be encoded to the frequency domain to obtain the frequency-domain image F, by the same method as above;
6. Subtract W from F to directly obtain the frequency-domain residual image S;
7. Quantize S to obtain the quantized frequency-domain residual coefficients C, with the same formula as above;
8. Perform coefficient scanning on C and entropy-code it to obtain the bitstream;
9. Compensate Z onto C, i.e. C = C + Z;
10. Inverse-quantize C with the JVT formula;
11. Inverse-transform C to obtain the preliminary reconstructed image B, using the original JVT inverse transform method according to the block size and mode;
12. Apply deblocking filtering to B to obtain the output image O of the current block.
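Encoder steps 2 to 4 can be condensed into the following sketch. The transform matrix is the well-known JVT 4×4 forward core transform; the quantizer constants, and the omission of the Qconst/DQconst rounding offsets, are illustrative simplifications rather than the actual JVT tables:

```python
import numpy as np

# JVT-style 4x4 forward core transform matrix (the scaling part of the
# transform is folded into the quantizer, so the entries are small
# integers); used here for illustration.
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_4x4(x):                 # step 2: Y = C X C^T
    return C @ x @ C.T

# Illustrative constants chosen so that QUANT * DQUANT ~= 2**(Q_BIT + Q_PER),
# the scale-restoring condition the redesigned formula must satisfy.
QUANT, Q_BIT = 13107, 15            # forward scale and shift
DQUANT, Q_PER = 10, 2               # redesigned inverse scale and shift

def quantize(y):                    # step 3 (rounding offsets omitted)
    return (y * QUANT) >> Q_BIT

def dequantize(z):                  # step 4: redesigned DQ'
    return (z * DQUANT) >> Q_PER

x = np.full((4, 4), 100)            # flat test block: only a DC coefficient
w = dequantize(quantize(forward_4x4(x)))
```

The round trip returns W to approximately the scale of Y, which is what lets step 6 subtract W from F directly in the frequency domain.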
Referring to Figs. 10 and 11:
Decoding-side processing, variant one:
1. Entropy decoding yields the prediction mode and the frequency-domain residual coefficients C.
2. Generate the predicted image: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block. This step is identical to the original JVT step.
3. Transform the predicted image into the frequency domain. The transform method is the same as the one used for residual images in JVT. This step is identical to step 2 of encoding.
4. Quantize the frequency coefficients Y of the predicted image to obtain Z. This step is identical to step 3 of encoding.
5. Compensate Z onto C, i.e. C=C+Z. Same as encoding step 9.
6. Inverse-quantize C, using the JVT formula. Same as encoding step 10.
7. Inverse-transform C to obtain the preliminary reconstructed image B, using the original JVT inverse transform method according to the block size and mode. Identical to encoding step 11.
8. Apply deblocking filtering to B to obtain the output image O of the current block. Identical to encoding step 12.
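Decoding scheme one mirrors the encoder's reconstruction path and can be sketched with the same toy model (hypothetical helper names; the butterfly stands in for the JVT transform):

```python
# Toy sketch of decoding-end scheme one on a 1-D, two-sample "block".
def transform(block):
    a, b = block
    return [a + b, a - b]

def inv_transform(coef):
    s, d = coef
    return [(s + d) // 2, (s - d) // 2]

def quantize(coef, qp=1):
    return [c >> qp for c in coef]

def dequantize(coef, qp=1):
    return [c << qp for c in coef]

def decode_block(C, pred):
    Z = quantize(transform(pred))             # steps 3-4: transform + quantize prediction
    C_comp = [c + z for c, z in zip(C, Z)]    # step 5: C = C + Z
    return inv_transform(dequantize(C_comp))  # steps 6-7; deblocking (step 8) omitted

# decode_block([0, 1], [9, 7]) -> [10, 6]
```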
Referring to Figure 12 and Figure 13, decoding-end processing schemes two and three of the present invention are essentially identical to scheme one. The difference is the position of the compensation: scheme one compensates directly in the quantized domain, scheme two compensates after the inverse quantization is finished, and scheme three compensates after the inverse transform is finished.
Schemes two and three above are similar to scheme one, differing only in the position of the compensation; depending on that position, the quantized predicted image must additionally be inverse-quantized and inverse-transformed.
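The three compensation positions can be illustrated with a toy linear model (hypothetical shift-based stages): with exactly linear stages all three agree, while in an integer implementation the results can differ slightly because of where the shifts round.

```python
# Toy sketch of the three compensation positions (hypothetical linear stages).
def dequantize(c, qp=1):
    return c << qp

def inv_transform(c):
    return c // 2          # stand-in for the linear inverse transform

Z, C = 8, 3                # quantized prediction and residual coefficient

# Scheme one: compensate directly in the quantized domain
out1 = inv_transform(dequantize(C + Z))
# Scheme two: compensate after inverse quantization (Z must be dequantized too)
out2 = inv_transform(dequantize(C) + dequantize(Z))
# Scheme three: compensate after the inverse transform (Z needs both stages)
out3 = inv_transform(dequantize(C)) + inv_transform(dequantize(Z))
# out1 == out2 == out3 == 11
```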
The method of the present invention for deciding the scanning order based on the prediction mode can effectively improve the coding efficiency of video coding methods based on multi-directional spatial prediction. In the JVT coding standard, the prediction-mode-based scan module is realized by the following steps:
In the design phase: first, for each mode, separately compute the probability that the coefficient at each frequency of the residual image is non-zero; then generate the scanning order table in order of decreasing probability, represented by a variable matrix T(m, i) (that is, under mode m, the position of the i-th scanned coefficient is T(m, i)), which replaces the single zigzag scan table Z(i).
In the encoding/decoding stage:
When scanning, according to the selected mode m, look up the scanning order table T in increasing order of i, and scan the residual coefficients in the order of the positions found.
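A minimal sketch of building and using the per-mode scan table T(m, i); the probability figures below are hypothetical, whereas real tables would come from the statistics gathered in the design phase:

```python
# Sketch: per-mode scan tables T(m, i) built from non-zero probabilities.
def build_scan_table(prob_by_mode):
    """For each mode m, order coefficient positions by P(non-zero), descending."""
    return {m: sorted(range(len(p)), key=lambda i: -p[i])
            for m, p in prob_by_mode.items()}

def scan(coeffs, table, mode):
    """Read residual coefficients in the order T(mode, i), i increasing."""
    return [coeffs[pos] for pos in table[mode]]

# Hypothetical 4-coefficient statistics for two prediction modes:
probs = {0: [0.9, 0.2, 0.7, 0.1],   # e.g. one spatial prediction direction
         1: [0.9, 0.7, 0.2, 0.1]}   # e.g. another direction
T = build_scan_table(probs)
# T[0] -> [0, 2, 1, 3]; T[1] -> [0, 1, 2, 3]
scanned = scan([5, 0, -2, 0], T, 0)   # -> [5, -2, 0, 0]
```

Ordering by decreasing non-zero probability clusters the significant coefficients at the front of the scan, which shortens the runs the entropy coder must represent.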
Referring to Fig. 9, the coding module in the device of the present invention, built on the basis of the prior art, is further provided with a new scan module between the residual-coefficient calculation module and the entropy coding module; under the control of the mode selection module, this new scan module selects a predetermined scanning order, thereby improving processing efficiency.
Referring to Figures 11, 12 and 13, the decoding modules in the device of the present invention each additionally include a transform-and-quantization module, and comprise an entropy decoding module, an inverse quantization module, an inverse transform module and a predictive compensation module. The difference is that, in the different embodiments, the predictive compensation module can be placed after the entropy decoding module, after the inverse quantization module, or after the inverse transform module. Taking Figure 11 as an example, the concrete decoding process comprises: after the bitstream is entropy-decoded, the predicted image processed by the transform-and-quantization module is combined with the entropy-decoded bitstream information in the predictive compensation module, and the result is then passed through the inverse quantization module and the inverse transform module in turn before being output. Figures 12 and 13 differ from Figure 11 in that the predictive compensation module is located after the inverse quantization module or after the inverse transform module, respectively.
Finally, it should be noted that the above embodiments are intended only to illustrate the present invention, not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments disclosed above, those of ordinary skill in the art should understand that modifications and equivalent substitutions may still be made to the present invention, and all technical solutions that do not depart from the spirit and scope of the present invention shall be encompassed within the scope of the claims of the present invention.