CN1225126C - Space predicting method and apparatus for video encoding - Google Patents


Info

Publication number
CN1225126C
CN1225126C (application CN02130833.0A)
Authority
CN
China
Prior art keywords
frequency
coefficient
module
image
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN02130833.0A
Other languages
Chinese (zh)
Other versions
CN1489391A (en)
Inventor
Gao Wen (高文)
Fan Xiaopeng (范晓鹏)
Lü Yan (吕岩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
UNITED XINYUAN DIGITAL AUDIO V
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS
Priority to CN02130833.0A
Publication of CN1489391A
Application granted
Publication of CN1225126C
Anticipated expiration
Legal status: Expired - Fee Related

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a novel spatial prediction method for video coding and an apparatus therefor. During encoding, the predicted image is transformed to the frequency domain and quantized there, and the result is then used as the predicted image; during decoding, the predicted image is likewise quantized in the frequency domain before being compensated onto the decoded residual image. The apparatus comprises at least an encoding module and a decoding module: the encoding module is provided with a prediction module, a residual-coefficient calculation module, a scanning module and an entropy-coding module; the decoding module is provided with an entropy-decoding module, a compensation module, a dequantization module, an inverse-transform module and a transform-quantization module. The present invention overcomes the defect of video coding methods based on multi-directional spatial prediction that flicker occurs when consecutive I-frames are encoded, and it raises the coding efficiency of such methods: it provides an anti-flicker coding scheme for consecutive I-frames and a mode-based scanning scheme for residual coefficients, so that coding efficiency is preserved while flicker is suppressed. The present invention also provides a concrete system architecture for realizing the method.

Description

Novel spatial prediction method for video coding and apparatus therefor
Technical field:
The present invention relates to a novel spatial prediction method for video coding and an apparatus therefor; more specifically, to a JVT-based video coding and decoding technique that compensates for the flicker produced when a video stream is encoded entirely as intra (I) frames, and that further improves coding and decoding efficiency, together with an apparatus realizing this method. It belongs to the technical field of digital video processing.
Background art:
Video coding and decoding technology is the key to efficient storage and transmission of multimedia data, and advanced video codecs usually exist in the form of standards. Typical video compression standards today include the MPEG series of international standards released by the Moving Picture Experts Group (MPEG) under the International Organization for Standardization (ISO); the H.26x series of video compression standards proposed by the International Telecommunication Union (ITU); and the JVT video coding standard formulated by the Joint Video Team (JVT) established jointly by ISO and ITU. The JVT standard adopts a novel coding technique whose compression efficiency is much higher than that of any existing coding standard. Its formal title in ISO is Part 10 of the MPEG-4 standard, and its formal title in ITU is standard H.264.
Video encoding is the process of encoding each frame of a video sequence. In the JVT video coding standard, the basic coding unit of each frame is the macroblock. When a frame is encoded, it may be coded as an intra (I) frame, a predicted (P) frame, or a bi-directionally predicted (B) frame. The characteristic of I-frame coding is that no other frame is referenced during encoding or decoding. In general, I-, P- and B-frame coding is interleaved, for example in the order IBBPBBP. However, some special applications, for example those requiring low computational complexity, low memory capacity, or real-time compression, can only use I-frame coding. In addition, video coded entirely as I-frames is easy to edit. In I-frame coding, redundancy within a macroblock is removed by an orthogonal transform, such as the discrete cosine transform (DCT) or a wavelet transform. To remove redundancy between macroblocks, traditional video coding algorithms usually predict on the coefficient domain of the orthogonal transform. However, such prediction can only be carried out on the DC component, so its efficiency is not high.
Within I-frame coding, multi-directional spatial prediction is the current mainstream of research and has achieved good results. Intra-frame spatial prediction means that, when an I-frame is encoded or decoded, a prediction of the current block is first generated according to a chosen mode from information available within the frame (such as adjacent reconstructed blocks, which the decoder can also obtain); the predicted block is then subtracted from the actual block to be encoded to obtain a residual, and the residual is encoded.
Multi-directional spatial prediction has found good application in video coding; the JVT video coding standard adopts exactly this technique. However, the existing multi-directional spatial prediction technique has two main shortcomings. First, it produces severe flicker when applied to consecutive I-frame coding, degrading the visual result. Second, multi-directional spatial prediction changes the probability distribution of the residual image on the coefficient domain, yet existing methods still use a fixed zigzag scanning order for the transform coefficients (see Fig. 4). Zigzag scanning refers to the order in which the coefficients of a transformed and quantized block are encoded in a video coding scheme; this order has a large influence on coding efficiency. Current coding systems (JPEG, MPEG, etc.) generally adopt this one fixed scanning order for blocks of the same size, so coding efficiency does not reach the optimum. JVT is an efficient video coding standard currently under formulation; it was first drafted by the ITU (International Telecommunications Union) and then adopted by ISO/IEC as Part 10 of ISO/IEC 14496 (MPEG-4).
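The fixed zigzag scan criticized above can be sketched as follows: the quantized 4×4 coefficient block is serialized along anti-diagonals, low frequencies first. The scan order below is the common 4×4 zigzag; the block contents are toy data for illustration only.

```python
# Common 4x4 zigzag order as (row, column) pairs, low frequencies first.
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0),
              (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2),
              (1, 3), (2, 3), (3, 2), (3, 3)]

def zigzag_scan(block):
    """Serialize a 4x4 coefficient block in zigzag order."""
    return [block[y][x] for (y, x) in ZIGZAG_4x4]

block = [[9, 4, 1, 0],
         [3, 2, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
coeffs = zigzag_scan(block)   # nonzero coefficients cluster at the front
```

Because typical residual energy concentrates at low frequencies, this ordering front-loads the nonzero coefficients, which suits entropy coding; the invention's point is that intra prediction shifts that energy distribution per mode, so a single fixed order is no longer optimal.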
When JVT is used for all-I-frame coding, the reconstructed image flickers during playback. Analysis and verification suggest this is mainly caused by the variable block size used in coding and by the relative randomness of intra prediction.
Variable block size means that a macroblock being encoded may be subdivided into smaller sub-blocks according to the coding mode; different partition modes yield sub-blocks of different sizes. Variable block size causes flicker chiefly because blocks at the same position in two successive frames, with essentially unchanged content, may be partitioned differently when encoded, so that their reconstructions differ considerably. This part can be avoided by suitably modifying the coding strategy of the encoder, without modifying the decoder. Fig. 1 is a schematic diagram of the subdivision of a macroblock into smaller blocks in JVT.
Ordinary coding schemes generally have only inter prediction, which removes temporal redundancy; spatial redundancy is removed by various transforms. JVT additionally introduces intra prediction, which works together with transform coding to remove spatial redundancy and thereby greatly improves coding efficiency. Two macroblock partition modes exist, Intra4×4 and Intra16×16; there are 9 prediction modes under Intra4×4 and 4 under Intra16×16. Referring to Fig. 2, under the Intra4×4 mode each sub-block of the macroblock is predicted within the frame: the pixels of each 4×4 block are predicted from 17 already-decoded pixels in the neighboring blocks.
Referring to Fig. 3, the intra prediction modes number 9 (mode 0 to mode 8), of which mode 2 is the DC prediction of the MPEG-4 standard.
In Intra16×16, denote the pixels of the block to be predicted by P(x, y), where x, y = 0…15; the already-decoded left boundary pixels by P(−1, y), y = 0…15; and the already-decoded upper boundary pixels by P(x, −1), x = 0…15. Four prediction modes are defined: vertical prediction, horizontal prediction, DC prediction and planar prediction. Chroma blocks also have four prediction modes, basically similar to those of the luma block.
The relative randomness of intra prediction means that, owing to differences in the in-frame information used to generate the prediction and differences in the prediction modes, blocks at the same position in two successive frames with essentially unchanged content generally have different predicted values. In a system without intra prediction, when consecutive I-frames are coded, the small differences between corresponding blocks of the two frames are removed by the quantization in the transform domain: the more similar the two macroblocks, the higher the probability that the difference is removed, and if the two blocks are identical, the reconstructed images are identical as well. In a system with intra prediction, however, the reconstructed image is the sum of two parts: the predicted image and the reconstructed residual image. Because of the frequency-domain quantization process, the frequency coefficients of the reconstructed residual are integer multiples of the quantization step. The predicted image undergoes no such process, so the probability that its frequency coefficients are integer multiples of the quantization step is very small; in fact, if its frequency coefficients are divided by the quantization step, the fractional parts of the resulting values can be regarded as random numbers between 0 and 1 (when the quantization step is not very large), i.e. every value between 0 and 1 (including 0) is equally probable.
Since the reconstructed image is composed of these two parts, the fractional parts of its frequency coefficients divided by the quantization step can likewise be regarded as random numbers between 0 and 1. For corresponding blocks of two frames with close pixel values, this relative randomness of the reconstructed image in the frequency domain means that, unlike in a system without intra prediction, the similarity of the reconstructed images is only loosely related to the similarity of the blocks themselves: even if the two blocks are identical, the probability that their reconstructions are identical is very small.
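The argument above can be illustrated numerically: dividing a predicted image's frequency coefficients by the quantization step leaves essentially arbitrary fractional parts, whereas the reconstructed residual's coefficients are exact multiples of the step by construction. The coefficient values below are invented for illustration.

```python
# Fractional parts of (coefficient / quantization step) for the two components
# of the reconstructed image. Residual coefficients come out of dequantization,
# so they are exact multiples of the step; predicted-image coefficients were
# never quantized, so their fractional parts are scattered over [0, 1).
qstep = 10
residual_coeffs = [30, -20, 10, 0]        # multiples of qstep by construction
predicted_coeffs = [47, -23, 16, 9]       # toy values with no such constraint

res_frac = [abs(c) / qstep % 1 for c in residual_coeffs]    # all exactly 0.0
pred_frac = [abs(c) / qstep % 1 for c in predicted_coeffs]  # scattered values
```

This is why the invention also quantizes the predicted image in the frequency domain: after that step, both components are multiples of the step, and identical blocks reconstruct identically.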
Referring to Figs. 5 and 6, the existing encoding process is: under the control of the mode selection module, the reconstructed image is processed by the prediction module, which outputs a predicted image; this predicted image and the current image to be encoded are processed by the residual-coefficient calculation module, then by the scanning module and entropy coding, and the coded bitstream is finally output.
Referring to Fig. 7, the signal flow of the decoding part of a prior-art multi-directional spatial-prediction coding system is: the bitstream passes through entropy decoding, dequantization and inverse transformation, is then compensated by the predicted image, and is output as the video stream.
The above encoding/decoding process cannot overcome the flicker produced when a video coding method based on multi-directional spatial prediction is applied to consecutive I-frame coding; and, because a fixed scanning order is adopted, it cannot improve the coding efficiency of such a method.
Summary of the invention
The main object of the present invention is to provide a novel spatial prediction method for video coding that overcomes the defect of video coding methods based on multi-directional spatial prediction of producing flicker when applied to consecutive I-frame coding, and that improves the coding efficiency of such methods, by providing an anti-flicker coding scheme for consecutive I-frames and a mode-based residual-coefficient scanning scheme, so that coding efficiency is preserved while flicker is mitigated.
Another object of the present invention is to provide a novel spatial prediction method for video coding that, taking the JVT standard as an implementation case, provides concrete technical means for solving the consecutive-I-frame flicker problem of the JVT standard and improving its coding efficiency.
A further object of the present invention is to provide a novel spatial prediction apparatus for video coding, i.e. a concrete device realizing the above method.
The objects of the invention are achieved by the following technical solutions:
A novel spatial prediction method for video coding: during encoding, the predicted image is also transformed to the frequency domain using the transform applied when processing the residual image, quantized with the same quantization parameter, and then used as the predicted image; during decoding, the predicted image is quantized in the frequency domain by the same procedure used in encoding, and is then compensated onto the decoded residual image.
The encoding process is specifically:
Step 100: according to the selected prediction mode, generate a predicted image from the decoded images of the blocks adjacent to the current block;
Step 101: transform the predicted image to the frequency domain;
Step 102: quantize the frequency coefficients of the predicted image, the quantization parameter being identical to the one adopted when processing the residual image; the quantized frequency-coefficient matrix satisfies the following formula:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
Wherein,
Z is the quantized frequency-coefficient matrix,
Y is the frequency-coefficient matrix,
Qp is the quantization parameter,
Quant(Qp), Qconst(Qp) and Q_bit(Qp) are functions defined by the JVT quantization;
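The quantization of Step 102 can be sketched as below. The real Quant, Qconst and Q_bit tables are Qp-dependent and defined by the JVT (H.264) standard; the numeric values used here are illustrative placeholders, not the standard's tables.

```python
def quantize(Y, quant, qconst, q_bit):
    """Z = (|Y|*quant + qconst) >> q_bit, applied element-wise with the sign
    restored afterwards (shifting a negative value would floor, not truncate)."""
    Z = []
    for row in Y:
        Z.append([(abs(y) * quant + qconst) >> q_bit if y >= 0
                  else -((abs(y) * quant + qconst) >> q_bit)
                  for y in row])
    return Z

Y = [[52, -3], [7, 0]]   # toy 2x2 frequency-coefficient matrix
# quant/qconst/q_bit below are placeholder values standing in for the
# Qp-indexed JVT tables.
Z = quantize(Y, quant=13107, qconst=1 << 14, q_bit=15)
```

Applying the same Qp-dependent parameters to the predicted image as to the residual is the crux of the scheme: both components then live on the same quantization grid.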
Step 103: dequantize the frequency-coefficient matrix obtained in step 102 according to the following formulas:
W=DQ(Z)=(Z×DQuant(Qp)+DQconst(Qp))>>Q_per(Qp)
Quant(Qp)×DQuant(Qp)≈2^(Q_bit(Qp)+Q_per(Qp))
Wherein,
W is the frequency-coefficient matrix after dequantization,
Z is the quantized frequency-coefficient matrix before dequantization,
Qp is the quantization parameter,
DQuant(Qp), DQconst(Qp), Q_bit(Qp) and Q_per(Qp) are functions defined by the JVT quantization;
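The dequantization of Step 103 pairs with the quantization of Step 102 via the relation Quant×DQuant ≈ 2^(Q_bit+Q_per), which makes dequantization roughly invert quantization. The parameter values below are illustrative placeholders chosen to satisfy that relation against the earlier toy values (13107 × 10 ≈ 2^(15+2)); they are not the standard's tables.

```python
def dequantize(Z, dquant, dqconst, q_per):
    """W = (Z*dquant + dqconst) >> q_per, applied element-wise."""
    return [[(z * dquant + dqconst) >> q_per for z in row] for row in Z]

Z = [[21, 3], [1, 0]]   # quantized coefficients (nonnegative toy values)
W = dequantize(Z, dquant=10, dqconst=0, q_per=2)   # placeholder parameters
```

With these placeholders a coefficient of 52 quantizes to 21 and dequantizes to 52, i.e. the round trip lands on the nearest representable multiple of the effective step.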
Step 104: transform the current block being encoded to the frequency domain by the method of steps 100-103, obtaining the frequency-domain image;
Step 105: subtract the dequantized frequency-coefficient matrix from the frequency-domain image, directly obtaining the frequency-domain residual image;
Step 106: quantize the frequency-domain residual image to obtain the quantized frequency-domain residual coefficients, with the same formula as above;
Step 107: scan the frequency-domain residual coefficients and entropy-code them to obtain the bitstream;
Step 108: compensate the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z;
Wherein,
C is the frequency-domain residual coefficients,
Z is the frequency-coefficient matrix;
Step 109: dequantize the frequency-domain residual coefficients with the JVT formula;
Step 110: inverse-transform the frequency-domain residual coefficients according to the size and mode of the block, obtaining the preliminary reconstructed image;
Step 111: apply deblocking filtering to the reconstructed image, obtaining the output image of the current block.
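Steps 100-111 can be strung together as an end-to-end sketch for one block. The transform is left as an identity placeholder and the JVT tables are replaced by toy constants with QUANT×DQUANT ≈ 2^SHIFT; all names and values here are hypothetical simplifications, not the standard's definitions, and the deblocking filter is omitted.

```python
QUANT, DQUANT, SHIFT = 13, 5, 6   # toy values: 13*5 = 65 ~ 2**6

def transform(block):             # placeholder for the JVT integer transform
    return [row[:] for row in block]

def quantize(M):
    return [[(x * QUANT) >> SHIFT for x in row] for row in M]

def dequantize(M):
    return [[x * DQUANT for x in row] for row in M]

def encode_block(current, predicted):
    """Returns (residual coefficients sent in the stream, reconstruction)."""
    Z = quantize(transform(predicted))           # Steps 101-102
    W = dequantize(Z)                            # Step 103
    freq = transform(current)                    # Step 104
    residual = [[f - w for f, w in zip(fr, wr)]  # Step 105
                for fr, wr in zip(freq, W)]
    C = quantize(residual)                       # Step 106
    recon = [[c + z for c, z in zip(cr, zr)]     # Step 108: C = C + Z
             for cr, zr in zip(C, Z)]
    return C, dequantize(recon)                  # Step 109 -> reconstruction

C, recon = encode_block([[40]], [[32]])          # 1x1 "blocks" for brevity
```

Note that the prediction is subtracted after it has itself been quantized and dequantized (Steps 102-103), so the reconstruction is built entirely from step-aligned quantities.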
Before the above encoding, a process of deciding the scanning order from the prediction mode is further included. Specifically: for each mode, count the probability that the coefficient at each frequency of the residual image is nonzero, and generate a scanning-order table in descending order of these probabilities, to replace the single zigzag scanning table. Zigzag scanning refers to the order in which the coefficients of a transformed and quantized block are coded in a video coding scheme; this order has a large influence on coding efficiency. Referring to Fig. 4, current coding systems (JPEG, MPEG, etc.) generally adopt a fixed scanning order for blocks of the same size.
During encoding, the scanning-order table is consulted according to the selected mode, and the residual coefficients are scanned in the order of the positions looked up in the table.
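Building the mode-based scanning table described above can be sketched as follows: for each prediction mode, count how often each coefficient position is nonzero over a set of residual blocks, then order the positions by descending nonzero frequency. The training blocks here are invented for illustration.

```python
from collections import defaultdict

def build_scan_table(blocks_by_mode):
    """blocks_by_mode: {mode: [4x4 residual blocks]} -> {mode: position list}."""
    tables = {}
    for mode, blocks in blocks_by_mode.items():
        counts = defaultdict(int)
        for b in blocks:
            for y in range(4):
                for x in range(4):
                    if b[y][x] != 0:
                        counts[(y, x)] += 1
        # stable sort: positions that are nonzero most often come first
        positions = [(y, x) for y in range(4) for x in range(4)]
        tables[mode] = sorted(positions, key=lambda p: -counts[p])
    return tables

def scan(block, order):
    return [block[y][x] for (y, x) in order]

training = {0: [[[5, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
                [[3, 2, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]}
tables = build_scan_table(training)
```

The per-mode table then replaces the single zigzag table at both encoder and decoder, so no table needs to be transmitted as long as both sides derive it the same way.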
The decoding process is:
Step 200: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 201: according to the prediction mode obtained by entropy decoding, generate a predicted image from the decoded images of the blocks adjacent to the current block;
Step 203: transform the predicted image to the frequency domain;
Step 204: quantize the frequency coefficients of the predicted image, obtaining the frequency-coefficient matrix;
Step 205: compensate the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z
Wherein,
C is the frequency-domain residual coefficients,
Z is the frequency-coefficient matrix;
Step 206: dequantize the frequency-domain residual coefficients;
Step 207: inverse-transform the frequency-domain residual coefficients according to the block size and mode, obtaining the preliminary reconstructed image;
Step 208: apply deblocking filtering to the reconstructed image, obtaining the output image of the current block.
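Steps 200-208 mirror the encoder sketch: the decoder re-derives the quantized predicted image, adds it to the received residual coefficients in the frequency domain, then dequantizes and inverse-transforms. The identity transforms and the constants are the same illustrative placeholders as before; entropy decoding and deblocking are omitted.

```python
QUANT, DQUANT, SHIFT = 13, 5, 6   # placeholder quantization constants

def transform(block):             # placeholder for the JVT transform
    return [row[:] for row in block]

def inverse_transform(block):     # identity placeholder as well
    return [row[:] for row in block]

def decode_block(C, predicted):
    """C: received residual coefficients; predicted: pixel-domain prediction."""
    Z = [[(x * QUANT) >> SHIFT for x in row]    # Steps 203-204
         for row in transform(predicted)]
    comp = [[c + z for c, z in zip(cr, zr)]     # Step 205: C = C + Z
            for cr, zr in zip(C, Z)]
    deq = [[x * DQUANT for x in row] for row in comp]   # Step 206
    return inverse_transform(deq)               # Step 207

out = decode_block([[2]], [[32]])
```

Fed the coefficients produced by the encoder sketch, this yields the same reconstruction the encoder computed, which is exactly the property that suppresses encoder/decoder drift.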
The decoding process may also be:
Step 210: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 211: according to the prediction mode obtained by entropy decoding, generate a predicted image from the decoded images of the blocks adjacent to the current block;
Step 212: transform the predicted image to the frequency domain;
Step 213: quantize the frequency coefficients of the predicted image, obtaining the frequency-coefficient matrix;
Step 214: dequantize the frequency-coefficient matrix and the frequency-domain residual coefficients separately;
Step 215: compensate the dequantized frequency-coefficient matrix onto the dequantized frequency-domain residual coefficients;
Step 216: inverse-transform the frequency-domain residual coefficients according to the block size and mode, obtaining the preliminary reconstructed image;
Step 217: apply deblocking filtering to the reconstructed image, obtaining the output image of the current block.
The decoding process may further be:
Step 220: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 221: according to the prediction mode obtained by entropy decoding, generate a predicted image from the decoded images of the blocks adjacent to the current block;
Step 222: transform the predicted image to the frequency domain;
Step 223: quantize the frequency coefficients of the predicted image, obtaining the frequency-coefficient matrix;
Step 224: dequantize the frequency-coefficient matrix and the frequency-domain residual coefficients separately;
Step 225: inverse-transform the frequency-coefficient matrix and the frequency-domain residual coefficients separately, according to the block size and mode;
Step 226: compensate the dequantized and inverse-transformed frequency-coefficient matrix onto the frequency-domain residual coefficients, obtaining the preliminary reconstructed image;
Step 227: apply deblocking filtering to the reconstructed image, obtaining the output image of the current block.
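This third variant dequantizes and inverse-transforms the prediction path and the residual path separately and compensates last, in the pixel domain; with a linear inverse transform this yields the same reconstruction as compensating in the frequency domain. The sketch below uses the same illustrative placeholder constants and identity transform as the earlier sketches.

```python
QUANT, DQUANT, SHIFT = 13, 5, 6   # placeholder quantization constants

def quantize(M):
    return [[(x * QUANT) >> SHIFT for x in row] for row in M]

def dequantize(M):
    return [[x * DQUANT for x in row] for row in M]

def inverse_transform(M):         # identity placeholder (linear, as required)
    return [row[:] for row in M]

def decode_block_v3(C, predicted_freq):
    """C: received residual coefficients; predicted_freq: frequency-domain
    prediction (Step 222 output)."""
    Z = quantize(predicted_freq)                  # Step 223
    pred_pix = inverse_transform(dequantize(Z))   # Steps 224-225, prediction
    res_pix = inverse_transform(dequantize(C))    # Steps 224-225, residual
    return [[p + r for p, r in zip(pr, rr)]       # Step 226: compensate last
            for pr, rr in zip(pred_pix, res_pix)]

out = decode_block_v3([[2]], [[32]])
```

On the same toy inputs this produces the same output as the first decoding variant, illustrating why the three variants are interchangeable.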
During the above decoding scanning, the scanning-order table is consulted according to the selected mode, and the residual coefficients are scanned in the order of the positions looked up in the table.
A novel spatial prediction apparatus for video coding comprises at least an encoding module and a decoding module; wherein,
The encoding module is provided with at least a prediction module, a residual-coefficient calculation module, a scanning module and an entropy-coding module. The prediction module processes the input reconstructed image to obtain a predicted image; this predicted image and the current image to be encoded are processed by the residual-coefficient calculation module; the result is then processed by the scanning module and finally encoded and output by the entropy-coding module.
The decoding module is provided with at least an entropy-decoding module, a compensation module, a dequantization module, an inverse-transform module and a transform-quantization module. The input bitstream undergoes entropy decoding, dequantization and inverse transformation in turn; the transform-quantization module processes the predicted image to obtain the compensation information used in the decoding process.
The encoding is specifically:
The prediction module generates a predicted image from the decoded images of the blocks adjacent to the current block, according to the prediction mode selected by the mode selection module.
The residual-coefficient calculation module transforms the predicted image to the frequency domain and quantizes its frequency coefficients, the quantization parameter being identical to the one adopted when processing the residual image; the quantized frequency-coefficient matrix satisfies the following formula:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
Wherein,
Z is the quantized frequency-coefficient matrix;
Y is the frequency-coefficient matrix;
Qp is the quantization parameter;
Quant(Qp), Qconst(Qp) and Q_bit(Qp) are functions defined by the JVT quantization.
The residual-coefficient calculation module dequantizes the obtained frequency-coefficient matrix according to the following formulas:
W=DQ(Z)=(Z×DQuant(Qp)+DQconst(Qp))>>Q_per(Qp)
Quant(Qp)×DQuant(Qp)≈2^(Q_bit(Qp)+Q_per(Qp))
Wherein,
W is the frequency-coefficient matrix after dequantization,
Z is the quantized frequency-coefficient matrix before dequantization,
Qp is the quantization parameter,
DQuant(Qp), DQconst(Qp), Q_bit(Qp) and Q_per(Qp) are functions defined by the JVT quantization.
The residual-coefficient calculation module transforms the current block being encoded to the frequency domain by the above method, obtaining the frequency-domain image; it further subtracts the frequency-coefficient matrix to directly obtain the frequency-domain residual image, and then quantizes the frequency-domain residual image to obtain the quantized frequency-domain residual coefficients.
The scanning module performs coefficient scanning on the frequency-domain residual coefficients; the entropy-coding module encodes the scanned information to obtain the bitstream.
The compensation module compensates the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z,
Wherein,
C is the frequency-domain residual coefficients;
Z is the frequency-coefficient matrix.
The dequantization module dequantizes the frequency-domain residual coefficients with the JVT formula; the inverse-transform module inverse-transforms them according to the size and mode of the block, obtaining the preliminary reconstructed image; the filtering module applies deblocking filtering to the reconstructed image, obtaining the output image of the current block.
The encoding module and/or the decoding module is further provided with a mode selection module that decides the scanning order from the prediction mode, for controlling the mode selection of the prediction module and the scanning module and for improving encoding and/or decoding efficiency. The scanning module counts, for each mode, the probability that the coefficient at each frequency of the residual image is nonzero, and generates a scanning-order table in descending order of these probabilities, to replace the single zigzag scanning table.
The decoding is specifically:
The entropy-decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients; then, according to the prediction mode obtained by entropy decoding, a predicted image is generated from the decoded images of the blocks adjacent to the current block.
The transform-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients, obtaining the frequency-coefficient matrix.
The compensation module compensates the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z
Wherein,
C is the frequency-domain residual coefficients;
Z is the frequency-coefficient matrix.
The dequantization module dequantizes the frequency-domain residual coefficients.
The inverse-transform module inverse-transforms the frequency-domain residual coefficients according to the size and mode of the block, obtaining the preliminary reconstructed image; finally, the filtering module applies deblocking filtering to the reconstructed image, obtaining the output image of the current block.
The decoding may also specifically be:
The entropy-decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients; then, according to the prediction mode obtained by entropy decoding, a predicted image is generated from the decoded images of the blocks adjacent to the current block.
The transform-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients, obtaining the frequency-coefficient matrix.
Dequantization modules located after the entropy-decoding module and after the transform-quantization module respectively dequantize the frequency-coefficient matrix and the frequency-domain residual coefficients.
The compensation module compensates the dequantized frequency-coefficient matrix onto the dequantized frequency-domain residual coefficients.
The inverse-transform module inverse-transforms the frequency-domain residual coefficients according to the size and mode of the block, obtaining the preliminary reconstructed image; finally, deblocking filtering is applied to the reconstructed image, obtaining the output image of the current block.
The decoding may further specifically be:
The entropy-decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients; then, according to the prediction mode obtained by entropy decoding, a predicted image is generated from the decoded images of the blocks adjacent to the current block.
The transform-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients, obtaining the frequency-coefficient matrix.
Dequantization modules located after the entropy-decoding module and after the transform-quantization module respectively dequantize the frequency-coefficient matrix and the frequency-domain residual coefficients.
Inverse-transform modules located after the dequantization modules respectively inverse-transform the frequency-coefficient matrix and the frequency-domain residual coefficients according to the block size and mode.
The compensation module compensates the dequantized and inverse-transformed frequency-coefficient matrix onto the frequency-domain residual coefficients, obtaining the preliminary reconstructed image; finally, deblocking filtering is applied to the reconstructed image, obtaining the output image of the current block.
During decoding, the decoding module likewise consults the scanning-order table according to the selected mode and scans the residual coefficients in the order of the positions looked up in the table.
From analysis of the above technical solutions, the present invention has the following advantages:
1. The above novel spatial prediction method for video coding overcomes the defect of video coding methods based on multi-directional spatial prediction of producing flicker when applied to consecutive I-frame coding, and improves the coding efficiency of such methods, providing an anti-flicker consecutive-I-frame coding scheme and a mode-based residual-coefficient scanning scheme for video coding techniques based on multi-directional spatial prediction, so that coding efficiency is further preserved while flicker is mitigated.
2. Taking the JVT standard as an implementation case, the present invention provides concrete technical means for solving the consecutive-I-frame flicker problem of the JVT standard and for improving its coding efficiency.
3. The apparatus provided by the invention gives a concrete system architecture realizing the above method, together with the hardware modules and their assembly scheme.
Description of drawings:
Fig. 1 is a schematic diagram of macroblock partitioning in JVT.
Fig. 2 is a schematic diagram of the prediction of the pixels within each 4×4 block under the Intra4×4 mode.
Fig. 3 is a schematic diagram of the intra-frame prediction mode directions.
Fig. 4 is a schematic diagram of the fixed scan order generally adopted in current coding systems.
Fig. 5 is a schematic diagram of the prior-art encoding process.
Fig. 6 is a schematic diagram of a prior-art encoding apparatus.
Fig. 7 is the signal flow graph of the decoding part of a prior-art multi-directional spatial prediction coding system.
Fig. 8 is a flowchart of the new scan mode of the present invention.
Fig. 9 is a schematic diagram of encoding in an embodiment of the present invention having the new scan mode.
Fig. 10 is a flowchart of the anti-flicker decoding end of the present invention.
Fig. 11 is a block diagram of the decoding part of one embodiment of the present invention.
Fig. 12 is a block diagram of the decoding part of another embodiment of the present invention.
Fig. 13 is a block diagram of the decoding part of a further embodiment of the present invention.
Embodiment
The present invention will be further described in detail below in conjunction with specific embodiments.
The present invention provides a new spatial prediction method for video coding and an apparatus therefor. Its purpose is to effectively alleviate the flicker artifact that video coding methods based on multi-directional spatial prediction produce when consecutive I frames are encoded, and to decide the scan order according to the prediction mode, thereby effectively improving the coding efficiency of such methods.
In one embodiment of the present invention, the following steps realize the anti-flicker processing within the JVT coding standard:
Referring to Fig. 7 and Fig. 8,
Processing at the encoding end:
1. Generate the predicted image: according to the selected prediction mode, generate the predicted image from the decoded images of the blocks adjacent to the current coding block. This step is identical to the original JVT step;
2. Transform the predicted image to the frequency domain. The transform method is the same as that used for the residual image in JVT. For example, for a 4×4 block, let the input be X; the output Y is then:
    Y = | 1  1  1  1 |   | x00 x01 x02 x03 |   | 1  2  1  1 |
        | 2  1 -1 -2 |   | x10 x11 x12 x13 |   | 1  1 -1 -2 |
        | 1 -1 -1  1 | × | x20 x21 x22 x23 | × | 1 -1 -1  2 |
        | 1 -2  2 -1 |   | x30 x31 x32 x33 |   | 1 -2  1 -1 |
Here Y is the frequency coefficients of the predicted image, and X is the predicted image.
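As a concrete illustration (not part of the patent text), the 4×4 integer transform above can be sketched as follows; the matrix entries are copied from the equation, while the function name is illustrative:

```python
import numpy as np

# 4x4 forward transform Y = C @ X @ C.T, with C taken from the equation above.
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_transform_4x4(X):
    """Transform a 4x4 spatial block X into its frequency coefficients Y."""
    return C @ X @ C.T

# A constant block yields a single DC coefficient and zero AC coefficients:
X = np.full((4, 4), 3)
Y = forward_transform_4x4(X)
assert Y[0, 0] == 48 and np.count_nonzero(Y) == 1
```

Because the transform is linear, transforming the prediction and the current block separately and subtracting in the frequency domain (as in the steps below) is equivalent to transforming their spatial difference.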
3. Quantize the frequency coefficients Y of the predicted image. The quantization parameter Qp is the same as that used for the residual image. Let Z be the quantized frequency-coefficient matrix; the quantization formula is:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
4. Inverse-quantize Z to obtain W. The inverse quantization here differs from that in JVT: the scale of the JVT inverse quantization differs from that of the quantization.
In JVT, the inverse quantization formula is:
W=DQ(Z)=(Z×DQuant(Qp)+DQconst(Qp))>>Q_per(Qp)
In order to restore the quantized coefficients to the scale they had before quantization, the inverse quantization formula must be redesigned. The inverse quantization formula of the present invention is:
W=DQ’(Z)=(Z×DQuant’(Qp)+DQconst’(Qp))>>Q_per’(Qp)
and the new formula must satisfy:
DQuant′(Qp) × Quant(Qp) ≈ 2^(Q_per′(Qp) + Q_bit(Qp))
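To make the scale-restoring constraint concrete, here is a minimal integer sketch (not from the patent; the constants Q_BIT, QUANT and Q_PER are made-up stand-ins for the JVT tables) showing that a DQuant′ chosen so that DQuant′(Qp)·Quant(Qp) ≈ 2^(Q_per′(Qp)+Q_bit(Qp)) returns a quantized coefficient to its original scale:

```python
# Assumed example constants; the real Quant/DQuant tables come from the JVT spec.
Q_BIT = 16          # forward shift for this illustration
QUANT = 13107       # forward scale, roughly 2**16 / 5

def quantize(y):
    # Z = (Y*Quant(Qp) + Qconst(Qp)) >> Q_bit(Qp), with Qconst = half step here
    return (y * QUANT + (1 << (Q_BIT - 1))) >> Q_BIT

# Redesigned inverse quantization: DQuant' * Quant ~= 2 ** (Q_per' + Q_bit)
Q_PER = 0
DQUANT = round(2 ** (Q_PER + Q_BIT) / QUANT)   # = 5 with these constants

def dequantize(z):
    return (z * DQUANT) >> Q_PER

for y in (0, 40, 400, -400):
    w = dequantize(quantize(y))
    assert abs(w - y) <= DQUANT   # reconstruction error bounded by one step
```

This is exactly why W can be subtracted directly from the transformed current block in step 6 below: W lives at the same scale as the untransformed-then-transformed coefficients.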
5. Transform the current block I to be coded to the frequency domain to obtain the frequency-domain image F, using the same method as above;
6. Subtract W from F to directly obtain the frequency-domain residual image S;
7. Quantize S to obtain the quantized frequency-domain residual coefficients C, with the same formula as above;
8. Apply coefficient scanning and entropy coding to C to obtain the bitstream;
9. Compensate C with Z, that is, C = C + Z;
10. Inverse-quantize C with the JVT formula;
11. Inverse-transform C to obtain the preliminary reconstructed image B, using the original JVT inverse transform method according to the block size and mode;
12. Apply deblocking filtering to B to obtain the output image O of the current block.
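Steps 2-9 above reduce to simple coefficient arithmetic once both the current block and the prediction live in the frequency domain. A minimal sketch with illustrative names; transform, quantize and dequantize stand in for the JVT operations described earlier:

```python
import numpy as np

def encode_block(current, predicted, transform, quantize, dequantize):
    """Frequency-domain residual coding of one block (encoder steps 2-9)."""
    Z = quantize(transform(predicted))   # steps 2-3: quantized prediction coefficients
    W = dequantize(Z)                    # step 4: prediction restored to coding scale
    F = transform(current)               # step 5: current block in the frequency domain
    S = F - W                            # step 6: frequency-domain residual image
    C = quantize(S)                      # step 7: residual coefficients to entropy-code
    C_rec = C + Z                        # step 9: compensation for the reconstruction path
    return C, C_rec

# With identity stand-ins for the transform and quantization, the reconstruction
# path recovers the current block exactly:
ident = lambda x: x
cur = np.arange(16).reshape(4, 4)
pred = np.ones((4, 4), dtype=int)
C, C_rec = encode_block(cur, pred, ident, ident, ident)
assert (C == cur - pred).all() and (C_rec == cur).all()
```

The key point of the scheme is that the decoder can rebuild exactly the same Z from its own decoded neighbours, so C = C + Z gives both sides an identical reconstruction path.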
Referring to Fig. 10 and Fig. 11,
Processing at the decoding end, variant one:
1. Entropy decoding yields the prediction mode and the frequency-domain residual coefficients C;
2. Generate the predicted image: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block. This step is identical to the original JVT step.
3. Transform the predicted image to the frequency domain, with the same transform used for the residual image in JVT. This step is identical to encoding step 2.
4. Quantize the frequency coefficients Y of the predicted image to obtain Z. Identical to encoding step 3.
5. Compensate C with Z, that is, C = C + Z. Same as encoding step 9.
6. Inverse-quantize C with the JVT formula. Same as encoding step 10.
7. Inverse-transform C to obtain the preliminary reconstructed image B, using the original JVT inverse transform method according to the block size and mode. Identical to encoding step 11.
8. Apply deblocking filtering to B to obtain the output image O of the current block. Identical to encoding step 12.
Referring to Figs. 12 and 13, decoding-end variants two and three of the present invention are essentially identical to variant one above. The difference is the position of the compensation: variant one compensates directly after quantization, variant two compensates after inverse quantization has been completed, and variant three compensates after the inverse transform.
Variants two and three are thus similar to variant one, differing only in the compensation position; depending on that position, the quantized predicted image must additionally undergo inverse quantization and inverse transformation.
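The three decoding variants differ only in where the prediction coefficients Z are added back. With linear stand-ins for inverse quantization and the inverse transform (illustrative only; the real JVT integer operations include rounding, so the variants are equivalent in structure rather than necessarily bit-exact), they can be sketched as:

```python
def decode_v1(C, Z, dequantize, inverse_transform):
    """Variant one: compensate right after entropy decoding, before inverse quantization."""
    return inverse_transform(dequantize(C + Z))

def decode_v2(C, Z, dequantize, inverse_transform):
    """Variant two: compensate after inverse quantization."""
    return inverse_transform(dequantize(C) + dequantize(Z))

def decode_v3(C, Z, dequantize, inverse_transform):
    """Variant three: compensate after the inverse transform."""
    return inverse_transform(dequantize(C)) + inverse_transform(dequantize(Z))

# With purely linear operators the three variants coincide:
dq = lambda x: 2 * x
it = lambda x: 3 * x
assert decode_v1(4, 5, dq, it) == decode_v2(4, 5, dq, it) == decode_v3(4, 5, dq, it) == 54
```

Variants two and three pay for the later compensation position by running Z through the extra inverse-quantization and inverse-transform stages, matching the apparatus descriptions of Figs. 12 and 13.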
The method of the present invention that decides the scan order from the prediction mode can effectively improve the coding efficiency of video coding methods based on multi-directional spatial prediction. In the JVT coding standard, the mode-based scan module is realized by the following steps:
In the design phase: first, for each mode, count the probability that the coefficient at each frequency of the residual image is non-zero; then generate the scan-order table in order of decreasing probability, represented by a variable matrix T(m, i) (that is, under mode m the position of the i-th scanned coefficient is T(m, i)), which replaces the single zigzag scan table Z(i).
In the encoding and decoding phase:
When scanning, look up the scan-order table T according to the selected mode m and in increasing order of i, and scan the residual coefficients in the order of the positions found there.
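The design-phase statistics and the mode-indexed lookup can be sketched as follows (illustrative Python, not part of the patent; T is represented here as a dict keyed by mode):

```python
from collections import Counter

def build_scan_table(training_blocks_by_mode):
    """Design phase: training_blocks_by_mode maps mode -> list of flattened blocks.
    For each mode, order coefficient positions by how often they are non-zero."""
    table = {}
    for mode, blocks in training_blocks_by_mode.items():
        nonzero = Counter()
        for block in blocks:
            for pos, coeff in enumerate(block):
                if coeff != 0:
                    nonzero[pos] += 1
        size = len(blocks[0])
        # positions sorted by descending non-zero frequency: this is T(m, i)
        table[mode] = sorted(range(size), key=lambda p: -nonzero[p])
    return table

def scan(block, table, mode):
    """Coding phase: read residual coefficients in the order T(m, 0), T(m, 1), ..."""
    return [block[p] for p in table[mode]]

# Toy training set for a single mode: position 0 is most often non-zero.
table = build_scan_table({0: [[1, 0, 2, 0], [3, 0, 0, 4]]})
assert table[0] == [0, 2, 3, 1]
assert scan([5, 6, 7, 8], table, 0) == [5, 7, 8, 6]
```

Ordering positions by decreasing non-zero probability pushes likely-zero coefficients to the end of the scan, which shortens the runs the entropy coder must represent, in the same way the single zigzag table does for an average mode.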
Referring to Fig. 9, the coding module of the apparatus of the present invention, built on the prior-art apparatus, additionally has a new scan module between the residual-coefficient calculation module and the entropy coding module. Under the control of the mode selection module, this scan module selects the predetermined scan order, thereby improving processing efficiency.
Referring to Figs. 11, 12 and 13, the decoding modules of the apparatus of the present invention all additionally include a transform-and-quantization module, and comprise an entropy decoding module, an inverse quantization module, an inverse transform module and a prediction compensation module. The difference is that in different embodiments the prediction compensation module may be placed after the entropy decoding module, after the inverse quantization module, or after the inverse transform module. Taking Fig. 11 as an example, the decoding process is: after the bitstream has been entropy decoded, the prediction compensation module combines the predicted image processed by the transform-and-quantization module with the entropy-decoded bitstream information; the result then passes through the inverse quantization module and the inverse transform module in turn and is output. Figs. 12 and 13 differ from Fig. 11 in that the prediction compensation is located after the inverse quantization module or after the inverse transform module, respectively.
Finally, it should be noted that the above embodiments merely illustrate the present invention without limiting it. Although the present invention has been described in detail with reference to the preferred embodiments disclosed above, those of ordinary skill in the art should understand that modifications and equivalent substitutions may still be made to the present invention, and all technical solutions that do not depart from the spirit and scope of the present invention shall be encompassed within the scope of its claims.

Claims (16)

1. A new spatial prediction method for video coding, characterized in that: when encoding the bitstream, the predicted image is also transformed to the frequency domain and quantized with the same quantization parameter, and is then used again as the predicted image; when decoding, the predicted image is frequency-domain quantized with the same processing method used in encoding, and is then compensated onto the decoded residual image.
2. The new spatial prediction method for video coding according to claim 1, characterized in that the encoding process is specifically:
Step 100: according to the selected prediction mode, generate the predicted image from the decoded images of the blocks adjacent to the current coding block;
Step 101: transform the predicted image to the frequency domain;
Step 102: quantize the frequency coefficients of the predicted image, wherein the quantization parameter is the same as that used for the residual image, and the quantized frequency-coefficient matrix satisfies the following formula:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
where
Z is the quantized frequency-coefficient matrix,
Y is the frequency-coefficient matrix,
Qp is the quantization parameter, and
Quant(Qp), Qconst(Qp) and Q_bit(Qp) are quantization functions defined by the Joint Video Team standard;
Step 103: inverse-quantize the frequency-coefficient matrix obtained in step 102 according to the following formulas:
W=DQ(Z)=(Z×DQuant(Qp)+DQconst(Qp))>>Q_per(Qp)
DQuant(Qp) × Quant(Qp) ≈ 2^(Q_per(Qp) + Q_bit(Qp))
where
W is the frequency-coefficient matrix after inverse quantization,
Z is the quantized frequency-coefficient matrix before inverse quantization,
Qp is the quantization parameter, and
DQuant(Qp), DQconst(Qp), Q_bit(Qp) and Q_per(Qp) are quantization functions defined by the Joint Video Team standard;
Step 104: by the method of steps 100-103, transform the current block to be coded to the frequency domain to obtain the frequency-domain image;
Step 105: subtract the inverse-quantized frequency-coefficient matrix from the frequency-domain image to directly obtain the frequency-domain residual image;
Step 106: quantize the frequency-domain residual image to obtain the quantized frequency-domain residual coefficients, with the same formula as above;
Step 107: apply coefficient scanning and entropy coding to the frequency-domain residual coefficients to obtain the bitstream;
Step 108: compensate the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z;
where
C is the frequency-domain residual coefficients,
Z is the frequency-coefficient matrix;
Step 109: inverse-quantize the frequency-domain residual coefficients with the Joint Video Team standard formula;
Step 110: inverse-transform the frequency-domain residual coefficients according to the block size and mode to obtain the preliminary reconstructed image;
Step 111: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
3. The new spatial prediction method for video coding according to claim 1 or 2, characterized in that: before the encoding, it further comprises deciding the scan order based on the prediction mode, the specific process being: for each mode, count the probability that the coefficient at each frequency of the residual image is non-zero, and generate the scan-order table in order of decreasing probability.
4. The new spatial prediction method for video coding according to claim 3, characterized in that: when scanning during encoding, the scan-order table is looked up according to the selected mode, and the residual coefficients are scanned in the order of the positions found there.
5. The new spatial prediction method for video coding according to claim 3, characterized in that the decoding process is:
Step 200: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 201: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block;
Step 203: transform the predicted image to the frequency domain;
Step 204: quantize the frequency coefficients of the predicted image to obtain the frequency-coefficient matrix;
Step 205: compensate the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z
where
C is the frequency-domain residual coefficients,
Z is the frequency-coefficient matrix;
Step 206: inverse-quantize the frequency-domain residual coefficients;
Step 207: inverse-transform the frequency-domain residual coefficients according to the block size and mode to obtain the preliminary reconstructed image;
Step 208: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
6. The new spatial prediction method for video coding according to claim 3, characterized in that the decoding process is:
Step 210: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 211: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block;
Step 212: transform the predicted image to the frequency domain;
Step 213: quantize the frequency coefficients of the predicted image to obtain the frequency-coefficient matrix;
Step 214: inverse-quantize the frequency coefficients and the frequency-domain residual coefficients respectively;
Step 215: compensate the inverse-quantized frequency-coefficient matrix onto the inverse-quantized frequency-domain residual coefficients;
Step 216: inverse-transform the frequency-domain residual coefficients according to the block size and mode to obtain the preliminary reconstructed image;
Step 217: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
7. The new spatial prediction method for video coding according to claim 3, characterized in that the decoding process is:
Step 220: obtain the prediction mode and the frequency-domain residual coefficients by entropy decoding;
Step 221: according to the prediction mode obtained by entropy decoding, generate the predicted image from the decoded images of the blocks adjacent to the current decoding block;
Step 222: transform the predicted image to the frequency domain;
Step 223: quantize the frequency coefficients of the predicted image to obtain the frequency-coefficient matrix;
Step 224: inverse-quantize the frequency coefficients and the frequency-domain residual coefficients respectively;
Step 225: inverse-transform the frequency coefficients and the frequency-domain residual coefficients respectively according to the block size and mode;
Step 226: compensate the inverse-quantized and inverse-transformed frequency-coefficient matrix onto the frequency-domain residual coefficients to obtain the preliminary reconstructed image;
Step 227: apply deblocking filtering to the reconstructed image to obtain the output image of the current block.
8. The new spatial prediction method for video coding according to claim 5, characterized in that: when scanning during decoding, the scan-order table is looked up according to the selected mode, and the residual coefficients are scanned in the order of the positions found there.
9. The new spatial prediction method for video coding according to claim 6, characterized in that: when scanning during decoding, the scan-order table is looked up according to the selected mode, and the residual coefficients are scanned in the order of the positions found there.
10. The new spatial prediction method for video coding according to claim 7, characterized in that: when scanning during decoding, the scan-order table is looked up according to the selected mode, and the residual coefficients are scanned in the order of the positions found there.
11. A new spatial prediction apparatus for video coding, characterized in that it comprises at least a coding module and a decoding module, wherein:
the coding module is provided with at least a prediction module, a residual-coefficient calculation module, a scan module and an entropy coding module; the prediction module processes the input reconstructed image to obtain the predicted image; after processing by the residual-coefficient calculation module, this predicted image compensates the current coding image; the compensated stream is then processed by the scan module and finally encoded and output by the entropy coding module;
the prediction module generates the predicted image from the decoded images of the blocks adjacent to the current coding block according to the prediction mode selected by the mode selection module;
the residual-coefficient calculation module transforms the predicted image to the frequency domain and quantizes its frequency coefficients, wherein the quantization parameter is the same as that used for the residual image, and the quantized frequency-coefficient matrix satisfies the following formula:
Z=Q(Y)=(Y×Quant(Qp)+Qconst(Qp))>>Q_bit(Qp)
where
Z is the quantized frequency-coefficient matrix,
Y is the frequency-coefficient matrix,
Qp is the quantization parameter, and
Quant(Qp), Qconst(Qp) and Q_bit(Qp) are quantization functions defined by the Joint Video Team standard;
the residual-coefficient calculation module inverse-quantizes the obtained frequency-coefficient matrix according to the following formulas:
W=DQ(Z)=(Z×DQuant(Qp)+DQconst(Qp))>>Q_per(Qp)
DQuant(Qp) × Quant(Qp) ≈ 2^(Q_per(Qp) + Q_bit(Qp))
where
W is the frequency-coefficient matrix after inverse quantization,
Z is the quantized frequency-coefficient matrix before inverse quantization,
Qp is the quantization parameter, and
DQuant(Qp), DQconst(Qp), Q_bit(Qp) and Q_per(Qp) are quantization functions defined by the Joint Video Team standard;
the residual-coefficient calculation module transforms the current block to be coded to the frequency domain by the above method to obtain the frequency-domain image, further subtracts the frequency-coefficient matrix to directly obtain the frequency-domain residual image, and then quantizes the frequency-domain residual image to obtain the quantized frequency-domain residual coefficients;
the scan module applies coefficient scanning to the frequency-domain residual coefficients; the entropy coding module encodes the scanned information to obtain the bitstream;
the compensation module compensates the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z,
where
C is the frequency-domain residual coefficients,
Z is the frequency-coefficient matrix;
the inverse quantization module inverse-quantizes the frequency-domain residual coefficients according to the Joint Video Team standard formula; the inverse transform module inverse-transforms the frequency-domain residual coefficients according to the block size and mode to obtain the preliminary reconstructed image; the filtering module applies deblocking filtering to the reconstructed image to obtain the output image of the current block;
the decoding module is provided with at least an entropy decoding module, a compensation module, an inverse quantization module, an inverse transform module and a transform-and-quantization module; the input bitstream undergoes entropy decoding, inverse quantization and inverse transformation in turn, while the transform-and-quantization module processes the predicted image to obtain the compensation information used in the decoding process.
12. The new spatial prediction apparatus for video coding according to claim 11, characterized in that: the coding module and/or the decoding module are further provided with a mode selection module that decides the scan order based on the prediction mode and controls the mode selection of the prediction module and the scan module, improving coding and/or decoding efficiency; the scan module counts, for each mode, the probability that the coefficient at each frequency of the residual image is non-zero, and generates the scan-order table in order of decreasing probability.
13. The new spatial prediction apparatus for video coding according to claim 12, characterized in that the decoding is specifically:
the entropy decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients, and then generates the predicted image from the decoded images of the blocks adjacent to the current decoding block according to the prediction mode obtained by entropy decoding;
the transform-and-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients to obtain the frequency-coefficient matrix;
the compensation module compensates the frequency-coefficient matrix onto the frequency-domain residual coefficients according to the following formula:
C=C+Z
where
C is the frequency-domain residual coefficients,
Z is the frequency-coefficient matrix;
the inverse quantization module inverse-quantizes the frequency-domain residual coefficients;
the inverse transform module inverse-transforms the frequency-domain residual coefficients according to the block size and mode to obtain the preliminary reconstructed image; finally, the filtering module applies deblocking filtering to the reconstructed image to obtain the output image of the current block.
14. The new spatial prediction apparatus for video coding according to claim 12, characterized in that the decoding is specifically:
the entropy decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients, and then generates the predicted image from the decoded images of the blocks adjacent to the current decoding block according to the prediction mode obtained by entropy decoding;
the transform-and-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients to obtain the frequency-coefficient matrix;
inverse quantization modules located after the entropy decoding module and after the transform-and-quantization module respectively inverse-quantize the frequency coefficients and the frequency-domain residual coefficients;
the compensation module compensates the inverse-quantized frequency-coefficient matrix onto the inverse-quantized frequency-domain residual coefficients;
the inverse transform module inverse-transforms the frequency-domain residual coefficients according to the block size and mode to obtain the preliminary reconstructed image; finally, deblocking filtering is applied to the reconstructed image to obtain the output image of the current block.
15. The new spatial prediction apparatus for video coding according to claim 12, characterized in that the decoding is specifically:
the entropy decoding module decodes the input bitstream to obtain the prediction mode and the frequency-domain residual coefficients, and then generates the predicted image from the decoded images of the blocks adjacent to the current decoding block according to the prediction mode obtained by entropy decoding;
the transform-and-quantization module transforms the predicted image to the frequency domain and quantizes its frequency coefficients to obtain the frequency-coefficient matrix;
inverse quantization modules located after the entropy decoding module and after the transform-and-quantization module respectively inverse-quantize the frequency coefficients and the frequency-domain residual coefficients;
inverse transform modules located after the inverse quantization modules respectively inverse-transform the frequency coefficients and the frequency-domain residual coefficients according to the block size and mode;
the compensation module compensates the inverse-quantized and inverse-transformed frequency-coefficient matrix onto the frequency-domain residual coefficients to obtain the preliminary reconstructed image; finally, deblocking filtering is applied to the reconstructed image to obtain the output image of the current block.
16. The new spatial prediction apparatus for video coding according to claim 13, 14 or 15, characterized in that: when decoding, the decoding module also looks up the scan-order table according to the selected mode, and scans the residual coefficients in the order of the positions found there.
CN02130833.0A 2002-10-09 2002-10-09 Space predicting method and apparatus for video encoding Expired - Fee Related CN1225126C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN02130833.0A CN1225126C (en) 2002-10-09 2002-10-09 Space predicting method and apparatus for video encoding


Publications (2)

Publication Number Publication Date
CN1489391A CN1489391A (en) 2004-04-14
CN1225126C true CN1225126C (en) 2005-10-26

Family

ID=34144647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN02130833.0A Expired - Fee Related CN1225126C (en) 2002-10-09 2002-10-09 Space predicting method and apparatus for video encoding

Country Status (1)

Country Link
CN (1) CN1225126C (en)


Also Published As

Publication number Publication date
CN1489391A (en) 2004-04-14

Similar Documents

Publication Publication Date Title
CN1225126C (en) Space predicting method and apparatus for video encoding
CN1214647C (en) Method for encoding images, and image coder
CN1242620C (en) Transcoder-based adaptive quantization for digital video recording
CN1202650C (en) Image processing method, image processing device, and data storage medium
CN1950832A (en) Bitplane coding and decoding for AC prediction status and macroblock field/frame coding type information
CN1550110A (en) Moving picture signal coding method, decoding method, coding apparatus, and decoding apparatus
CN1835595A (en) Image encoding/decoding method and apparatus therefor
CN1347620A (en) Method and architecture for converting MPEG-2 4:2:2-profile bitstreams into main-profile bitstreams
CN1535027A (en) Intra-frame prediction method for video coding
CN1910933A (en) Image information encoding device and image information encoding method
CN1240226C (en) Video transcoder with drift compensation
CN1469632A (en) Video coding/decoding method and equipment
CN1694537A (en) Adaptive de-blocking filtering apparatus and method for MPEG video decoder
CN1956546A (en) Image coding apparatus
CN1211373A (en) Digital image coding method and digital image coder, and digital image decoding method and digital image decoder, and data storage medium
CN1605213A (en) Skip macroblock coding
CN1574968A (en) Moving image decoding apparatus and moving image decoding method
CN1625265A (en) Method and apparatus for scalable video encoding and decoding
CN1652608A (en) Data processing device and method of same, and encoding device and decoding device
CN1270541C (en) Decoder and method thereof, coding device and method thereof, image processing system and method thereof
CN1288337A (en) Method and device for automatic data conversion and coding of video image data
CN1455600A (en) Interframe predicting method based on adjacent pixel prediction
CN1225919C (en) Image information encoding method and encoder, and image information decoding method decoder
CN1926880A (en) Data processor, its method and coder
CN1274235A (en) Device and method for enhanced decoding of video signals at reduced resolution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: UNITED XINYUAN DIGITAL AUDIO-VIDEO TECHNOLOGY (BE

Free format text: FORMER OWNER: INST. OF COMPUTING TECHN. ACADEMIA SINICA

Effective date: 20080328

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20080328

Address after: Beijing city Haidian District East Road No. 1 Yingchuang power building block A room 701

Patentee after: UNITED XINYUAN DIGITAL AUDIO V

Address before: Digital room (Institute of Physics), Institute of computing, Chinese Academy of Sciences, South Road, Zhongguancun, Haidian District, Beijing 6, China

Patentee before: Institute of Computing Technology, Chinese Academy of Sciences

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20051026

Termination date: 20211009

CF01 Termination of patent right due to non-payment of annual fee