CN103581690A - Video decoding method, video decoder, video encoding method and video encoder - Google Patents

Video decoding method, video decoder, video encoding method and video encoder

Info

Publication number
CN103581690A
CN103581690A
Authority
CN
China
Prior art keywords
weighted prediction
segment
decoding
current segment
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210282765.XA
Other languages
Chinese (zh)
Inventor
安基程
郭峋
黄毓文
雷少民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MediaTek Singapore Pte Ltd
Original Assignee
MediaTek Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MediaTek Singapore Pte Ltd filed Critical MediaTek Singapore Pte Ltd
Priority to CN201210282765.XA priority Critical patent/CN103581690A/en
Publication of CN103581690A publication Critical patent/CN103581690A/en
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a video encoding method, a video decoding method, a video encoder, and a video decoder with local weighted prediction. The video decoding method comprises: decoding data of a current segment to generate decoded data for the current segment comprising a residual and a weighted prediction parameter; generating a weighted prediction for the current segment based on the weighted prediction parameter; generating a predictor for the current segment by intra/inter prediction; combining the weighted prediction and the predictor to obtain a modified predictor; and reconstructing the current segment according to the modified predictor and the residual. The video decoder, the video encoder, and their associated methods can perform local weighted prediction and thereby handle local luminance changes within an image.

Description

Video decoding method, video decoder, video encoding method and video encoder
Technical field
The present invention relates to video coding and, more particularly, to a video encoding method, a video decoding method, a video decoder, and a video encoder with local weighted prediction.
Background
H.264/AVC (Advanced Video Coding) is a video compression standard comprising techniques that allow efficient coding rates and flexibility over a wide range of applications. Weighted prediction (WP) is a tool of the current H.264 standard. In the H.264 WP tool, a multiplicative weighting factor (hereinafter called a scale factor) and an additive offset are applied to the motion-compensated prediction. WP comprises two modes: implicit WP, supported by B slices, and explicit WP, supported by P, SP, and B slices. In explicit mode, a single scale factor and offset are coded in the slice header for each allowed reference picture index. In implicit mode, the scale factor and offset are not coded in the slice header but are derived from the relative picture order count (POC) distances between the current picture and its reference pictures. The original application of WP is to compensate for global luminance and chrominance differences between the current picture and its temporal reference pictures. The WP tool is especially effective for coding fading sequences.
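As a concrete illustration of the explicit WP described above, the following sketch applies a scale factor and an additive offset to motion-compensated samples. The rounding with a weight-denominator shift `log_wd` follows the general H.264 pattern, but the function and parameter names here are illustrative, not the standard's syntax elements.

```python
def weighted_prediction(samples, scale, offset, log_wd=0, bit_depth=8):
    """Sketch of explicit WP: pred' = clip(((scale * p) >> log_wd) + offset).

    `log_wd` plays the role of the weight-denominator exponent carried in
    the slice header; names are assumptions for illustration.
    """
    max_val = (1 << bit_depth) - 1
    out = []
    for p in samples:
        if log_wd > 0:
            # round-to-nearest before the right shift, then add the offset
            v = ((scale * p + (1 << (log_wd - 1))) >> log_wd) + offset
        else:
            v = scale * p + offset
        out.append(min(max(v, 0), max_val))  # clip to the sample range
    return out
```

For example, a scale of 2 with `log_wd=1` leaves the magnitude unchanged while the offset shifts every sample, which is how a fade can be compensated without changing the underlying predictor.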
Summary of the invention
Weighted prediction in the current H.264 standard is slice-based and compensates for global luminance and chrominance differences between pictures; it therefore cannot handle local luminance changes within an image. To solve this problem, the invention provides a video decoder, a video encoder, and related methods.
An embodiment of the video decoding method comprises the following steps: obtaining data for a current segment to be decoded from an input bitstream; decoding the obtained data to generate decoded data for the current segment comprising a residual and a weighted prediction parameter; generating a weighted prediction for the current segment based on the weighted prediction parameter; generating a predictor for the current segment by intra/inter prediction; combining the weighted prediction and the predictor to generate a modified predictor; and reconstructing the current segment according to the modified predictor and the residual.
In one embodiment, a video decoder is provided. The video decoder comprises: an entropy decoding unit, for obtaining data for a current segment to be decoded from an input bitstream and decoding the obtained data to generate decoded data for the current segment comprising a residual and a weighted prediction parameter; a weighted prediction determining unit, coupled to the entropy decoding unit, for generating a weighted prediction for the current segment based on the weighted prediction parameter; a motion compensation unit, for generating a predictor for the current segment by intra/inter prediction; and a first adder, coupled to the weighted prediction determining unit and the motion compensation unit, for combining the weighted prediction and the predictor to generate a modified predictor. The video decoder reconstructs the current segment according to the modified predictor and the residual.
An embodiment of a method for video encoding comprises the following steps: obtaining a current segment of a slice to be encoded; generating a predictor for the current segment by intra/inter prediction; performing weighted prediction on the predictor of the current segment to generate a modified predictor and a weighted prediction parameter; generating a residual according to the current segment and the modified predictor; and encoding the residual and inserting the weighted prediction parameter to generate a bitstream.
In another embodiment, a video encoder is provided. The video encoder comprises: an intra/inter prediction unit, for generating a predictor for a current segment by intra/inter prediction; a determining unit, coupled to the intra/inter prediction unit, for performing weighted prediction on the predictor of the current segment to generate a modified predictor and a weighted prediction parameter; a transform and quantization unit, for receiving a residual and transforming and quantizing the residual to generate quantized values, wherein the residual is generated according to the current segment and the modified predictor; and an entropy coding unit, for encoding the quantized values and inserting the weighted prediction parameter to generate a bitstream.
The video encoding/decoding methods, encoder, and decoder may take the form of program code embodied in a tangible medium. When the program code is loaded into and executed by a machine, the machine becomes an apparatus for practicing the disclosed methods.
The video decoder, the video encoder, and their related methods provided by the invention can perform local weighted prediction and thereby handle local luminance changes within an image.
Brief description of the drawings
The present invention can be more fully understood by reading the following detailed description with reference to the accompanying drawings, wherein:
Fig. 1 is a block diagram of a video encoder with local weighted prediction according to an embodiment of the invention;
Fig. 2 is a block diagram of a video decoder according to an embodiment of the invention;
Fig. 3 shows an embodiment of deriving an offset predictor;
Fig. 4 is a flowchart of an embodiment of the video decoding method of the invention;
Fig. 5 shows an embodiment of a video frame;
Fig. 6 shows an embodiment of a frame structure.
Detailed description
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
In the following description, an exemplary H.26x video sequence is used for convenience of illustration, but the invention is not limited thereto. The H.26x video sequence may comprise a plurality of pictures or groups of pictures (GOPs), which may be arranged in a specified GOP structure. Each picture is further divided into one or more slices. Each slice may be divided into a plurality of segments, where a segment may be a block of any shape with a size smaller than the slice; for example, a segment may be 128x128, 64x64, 32x16, 16x16, 8x8, or 4x8 pixels. When luminance changes between pictures are unevenly distributed within a picture, local weighted prediction allows better prediction. For convenience of illustration, the following description assumes that a slice is divided into a plurality of macroblocks (MBs) and that the weighted prediction operation is performed on an MB basis; however, the invention is not limited to the MB level, and local weighted prediction can be applied to segments smaller than the slice size.
The video encoder performs inter prediction or intra prediction on each MB of a received picture to derive a predictor for each MB. For example, when performing inter prediction, a similar MB is found in a reference picture to serve as the predictor of the current MB. A motion vector difference and a reference picture index for the current MB are encoded into the bitstream to indicate the position of the predictor in the reference picture. In other words, the reference picture index indicates which previously encoded picture is used as the reference picture, and the motion vector derived from the motion vector difference indicates the displacement between the spatial position of the current MB and the spatial position of the predictor in the reference frame. Besides being obtained directly from a previously encoded picture, the predictor may also be obtained by interpolation in the case of a sub-pixel-accurate motion vector.
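The paragraph above can be illustrated with a minimal integer-pel sketch: the motion vector displaces the predictor block within the reference picture relative to the current MB position. The function name and the picture layout (a list of sample rows) are assumptions for illustration; sub-pixel motion would additionally require interpolation, as noted above.

```python
def fetch_predictor(ref_picture, mb_x, mb_y, mv_x, mv_y, size=16):
    """Return the size x size predictor block at (mb_x + mv_x, mb_y + mv_y).

    ref_picture is a previously decoded picture given as a list of rows;
    integer-pel motion only (no interpolation) in this sketch.
    """
    x0, y0 = mb_x + mv_x, mb_y + mv_y
    return [row[x0:x0 + size] for row in ref_picture[y0:y0 + size]]
```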
WP is then applied to the predictor of the current MB by multiplying the original predictor by a scale factor, adding a prediction offset, or multiplying by a scale factor and adding a prediction offset, to generate a modified predictor, where the predictor may be obtained by inter prediction or intra prediction.
Fig. 1 is a block diagram of a video encoder 100 with local weighted prediction according to an embodiment of the invention. In this embodiment, the video encoder 100 encodes input video data MB by MB. Fig. 1 only shows local weighted prediction applied to inter prediction; however, this should not be a limitation of the invention, since local weighted prediction can also be applied to intra prediction. In Fig. 1, the modified predictor is calculated based on a prediction offset; this is only one example of weighted prediction. In other embodiments, a scale factor, or a prediction offset combined with a scale factor, may be used to calculate the modified predictor. The video encoder 100 comprises a motion compensation unit 102, a frame buffer 104, a reference motion vector buffer 108, a transform unit 110, a quantization unit 112, an entropy coding unit 114, an offset estimation unit 116, an inverse quantization unit 118, an inverse transform unit 120, and a reference offset parameter buffer 122. The reference motion vector buffer 108 stores motion vectors of previously encoded MBs as reference motion vectors for generating subsequent motion vector differences. The reference offset parameter buffer 122 stores prediction offsets of previously encoded MBs as reference offsets for determining subsequent offset differences.
An intra/inter prediction unit, for example the motion compensation unit 102, performs motion compensation with reference to a motion vector and generates the predictor MBp of the current MB from data stored in the frame buffer 104. The motion vector, and the motion vector difference derived between it and the output of the motion vector predictor 106 based on data stored in the reference motion vector buffer 108, are sent to the entropy coding unit 114 to be encoded into the bitstream. In this embodiment, a determining unit 130 coupled to the intra/inter prediction unit performs WP on the predictor of each MB by adding the prediction offset derived by the offset estimation unit 116, to generate the modified predictor MBp'. Meanwhile, a weighted prediction parameter, for example an offset difference, is calculated; the offset difference represents the difference between the prediction offset applied to the current MB and a reference offset derived by one or more offset predictors 124, and is sent to the entropy coding unit 114 to be encoded into the bitstream. A block transform performed by the transform unit 110 is applied to the residuals to reduce spatial statistical correlation. The residuals are the sample-by-sample differences between the current MB and the modified predictor. For example, if the current MB has a size of 16x16, the residual is divided into four 8x8 blocks. The encoder 100 applies a reversible frequency transform to the residual of each 8x8 block, producing a set of frequency-domain (i.e., spectral) coefficients. The discrete cosine transform (DCT) is one example of such a frequency transform. The output of the transform unit 110 is then quantized (Q) by the quantization unit 112 to obtain quantized values.
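A minimal sketch of the transform-and-quantize step described above, assuming an orthonormal floating-point DCT-II and a single uniform quantization step; the actual H.264 transform is an integer approximation with scaling folded into quantization, so this is illustrative only.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= np.sqrt(1.0 / n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

def transform_and_quantize(residual, qstep):
    """Forward 2D DCT of one residual block followed by uniform quantization."""
    d = dct_matrix(residual.shape[0])
    coeffs = d @ residual @ d.T  # separable 2D transform
    return np.rint(coeffs / qstep).astype(int)
```

On a constant 8x8 residual all the energy lands in the DC coefficient, which is exactly the spatial decorrelation the transform unit is there to provide.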
After quantization, the entropy coding unit 114 encodes the quantized values and inserts the weighted prediction parameter to generate the bitstream. For example, the entropy coding unit 114 may perform context-adaptive variable-length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), or another entropy coding method.
The video encoder 100 further performs inverse quantization by the inverse quantization unit 118 and an inverse transform by the inverse transform unit 120 to recover the residual MBr, and combines the residual MBr with the modified predictor MBp' to calculate the reconstructed MB MB'. The reconstructed MB MB' is stored in the frame buffer 104 for subsequent MBs. Note that in this embodiment the resulting bitstream comprises entropy-coded residuals, motion vector differences, and offset differences. In some other embodiments, the bitstream may comprise weighted prediction parameters which, besides an offset difference, may comprise a scale difference, a prediction offset, a scale factor, or any combination of the four.
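The encoder-side signalling of the offset difference might be sketched as follows: the encoder transmits only the difference between the offset it chose and the offset predictor formed from already-coded neighbours, mirroring what the decoder will reconstruct. The integer averaging and function names are assumptions for illustration.

```python
def encode_offset_difference(chosen_offset, o_a, o_b):
    """Return the offset difference o_d carried in the bitstream.

    o_a, o_b: prediction offsets of the previously coded left/top MBs;
    the predictor is their average, as in equation (1) below.
    """
    o_p = (o_a + o_b) // 2  # same predictor the decoder will form
    return chosen_offset - o_p
```

Because both sides form the same predictor, the decoder recovers the chosen offset exactly as `o_p + o_d`.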
In the decoding process, the decoder decodes the representative data and performs similar operations to reconstruct the MBs. The decoder decodes each segment by generating a modified predictor from a predictor with a weighted prediction for the segment, where the predictor is derived from motion compensation, and then combining the modified predictor with the recovered residual.
Fig. 2 is a block diagram of an embodiment of a video decoder 200 that decodes a bitstream with MB-level weighted prediction. In this embodiment, the weighted prediction parameter in the bitstream comprises only an offset difference; in some other embodiments, the weighted prediction parameter may comprise a scale factor, a prediction offset, a scale difference, an offset difference, or a combination of the four.
The video decoder 200 comprises an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit (for example, an inverse discrete cosine transform (IDCT) unit) 230, a motion compensation unit 240, a frame buffer 250, a motion estimation unit 260, and a weighted prediction determining unit 270. The motion estimation unit 260 further comprises a motion vector predictor 262 and a reference motion vector buffer 264. The weighted prediction determining unit 270 further comprises an offset predictor 272, a reference offset parameter buffer 274, and an adder 276. The reference motion vector buffer 264 stores motion vectors of previously decoded MBs as reference motion vectors for use in subsequent motion vector generation. The reference offset parameter buffer 274 stores prediction offsets of previously decoded MBs as reference offsets for use in determining subsequent prediction offsets.
The entropy decoding unit 210 of the video decoder 200 decodes the input bitstream to generate decoded data. For example, in this embodiment the decoded data may comprise motion vector differences, weighted prediction parameters (for example, offset differences), and quantized values representing the residual data. The quantized values are input to the inverse quantization unit 220 and the inverse transform unit 230 to recover the residual MBr, the offset difference is input to the weighted prediction determining unit 270 to generate the prediction offset, and the motion vector difference is input to the motion estimation unit 260 to generate the motion vector. The inverse quantization unit 220 performs an inverse quantization operation on the quantized values representing the residual data and outputs inverse-quantized data (for example, DCT coefficient data) to the inverse transform unit 230. The inverse transform unit 230 then performs an inverse transform (for example, an IDCT operation) to generate the residual MBr. An adder 286 adds the residual MBr of the current MB to the modified predictor MBp' of the current MB to generate the decoded MB. The decoded MB data MB' is stored in the frame buffer 250 for decoding subsequent MBs. The motion compensation unit 240 receives the motion vector and previously decoded MB data, and performs motion compensation to provide the original predictor MBp to an adder 284. The adder 284 adds the original predictor MBp and the prediction offset calculated by the weighted prediction determining unit 270 to generate the modified predictor MBp'.
The weighted prediction determining unit 270 receives the offset difference from the entropy decoding unit 210, and generates the prediction offset of the current MB according to the offset difference and the offset predictor of the current MB. The offset predictor 272 may first generate the offset predictor of the current MB with reference to the reference offset parameters stored in the reference offset parameter buffer 274. A reference offset parameter may be a prediction offset of a previously decoded MB.
The offset predictor of the current MB may be obtained from one or more previously decoded MBs (whether in the spatial domain or the temporal domain). For example, the offset predictor of the current MB may be determined from the prediction offsets of previously decoded neighboring MBs. In some embodiments, the offset predictor of the current MB is predicted based on at least one of a first weighted prediction (for example, a first prediction offset) of a first decoded MB and a second weighted prediction (for example, a second prediction offset) of a second decoded MB. In one embodiment, the first decoded MB and the second decoded MB are located in the same slice or picture as the current MB and serve as its spatial neighbors.
Referring to Fig. 3, Fig. 3 shows an embodiment of deriving an offset predictor. As shown in Fig. 3, the MB A located on the left and the MB B located above are both neighboring MBs of the current MB C. The offset predictor of the current MB C is calculated by the following formula:
o_p = (o_A + o_B) / 2        (1)
where o_p represents the offset predictor of the current MB C, o_A represents the prediction offset of MB A, and o_B represents the prediction offset of MB B. In this embodiment, the offset predictor of the current MB is set to the average of the prediction offsets of the two decoded neighboring MBs, but the invention is not limited thereto. In another embodiment, the offset predictor o_p of the current MB may be predicted based on at least a first offset of a first decoded MB, where the first decoded MB is located in a different slice or picture from the current MB. For example, the offset predictor of the current MB is predicted based on a first offset of a first decoded MB and a second offset of a second decoded MB, where the first decoded MB is a collocated MB located in a first reference picture and the second decoded MB is a collocated MB located in a second reference picture. In this case, the first decoded MB and the second decoded MB serve as temporal neighbors of the current MB.
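Equation (1) with integer arithmetic might be sketched as follows; the integer (floor) division is an assumption, since the patent does not specify the rounding of the average.

```python
def offset_predictor(o_a, o_b):
    """o_p = (o_A + o_B) / 2 for the left (A) and top (B) decoded
    neighbours; floor division assumed for integer offsets."""
    return (o_a + o_b) // 2
```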
The calculated offset predictor (o_p) of the current MB is added to the corresponding offset difference (o_d), and the prediction offset (o) of the current MB is obtained by the following formula:
o = o_p + o_d        (2)
The modified predictor MBp' used for predicting the current MB can be calculated by the following formula:
MBp' = o + MBp        (3)
where MBp represents the original predictor, obtained by interpolation in the case of a sub-pixel-accurate motion vector or taken directly from a previously decoded picture.
In some embodiments, when the weighted prediction parameter comprises only a scale factor or information related to a scale factor, the modified predictor MBp' used for predicting the current MB can be calculated by the following formula:
MBp' = S × MBp        (4)
where S represents the scale factor.
In some embodiments, when the weighted prediction parameter comprises both a prediction offset and information related to a scale factor, the modified predictor MBp' used for predicting the current MB can be calculated by the following formula:
MBp' = S × MBp + o        (5)
The modified predictor MBp' of the current MB is added to the corresponding residual MBr, and the current MB MB' can be reconstructed by the following formula:
MB' = MBp' + MBr        (6)
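For the offset-only case, equations (1), (2), (3), and (6) chain together on the decoder side as in the following sketch; the list-of-samples block representation and integer averaging are assumptions for illustration.

```python
def reconstruct_mb(mbp, mbr, o_a, o_b, o_d):
    """Return the reconstructed block MB' for offset-only weighted prediction.

    mbp      : original predictor samples (from motion compensation)
    mbr      : decoded residual samples
    o_a, o_b : prediction offsets of the left/top decoded neighbours
    o_d      : decoded offset difference for the current MB
    """
    o_p = (o_a + o_b) // 2                 # equation (1)
    o = o_p + o_d                          # equation (2)
    mbp_mod = [p + o for p in mbp]         # equation (3): MBp' = o + MBp
    return [pm + r for pm, r in zip(mbp_mod, mbr)]  # equation (6)
```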
Fig. 4 is a flowchart of an embodiment of the video decoding method of the invention. The video decoding method of the invention can be applied to the video decoder 200 shown in Fig. 2. Referring to Figs. 2 and 4, in step S410, data for a current segment to be decoded (for example, an MB in Fig. 2) is obtained from an input bitstream. Note that in this embodiment the bitstream comprises one or more frames or slices, and each frame or slice is divided into a plurality of segments. The data for a segment may comprise coded residual data and various other data (for example, motion vector differences, reference picture indices, etc.), encoded for example by CABAC in the encoder (for example, the encoder 100 of Fig. 1). In step S420, a decoding unit (for example, the entropy decoding unit 210) decodes the obtained data of the current segment to generate decoded data for the current segment comprising at least a residual and a weighted prediction parameter. In step S430, a weighted prediction (for example, a prediction offset, a scale factor, or a prediction offset and a scale factor) for the current segment is generated (for example, by the weighted prediction determining unit 270) based on the weighted prediction parameter. The weighted prediction of the current segment may be generated by combining the weighted prediction parameter with data (for example, an offset predictor) predicted from at least one piece of previously decoded data (for example, the prediction offset of a previously decoded segment). Note that the previously decoded segment may be a spatial or temporal neighbor, or a temporally collocated segment, of the current segment.
In step S440, inter prediction or intra prediction is performed by a motion compensation unit (for example, the motion compensation unit 240) or an intra prediction unit, to obtain a predictor (for example, MBp) for the current segment. In step S450, a modified predictor (for example, MBp') for the current segment is generated by combining the predictor (for example, MBp) with the weighted prediction (for example, the prediction offset). Finally, in step S460, the current segment is reconstructed based on the modified predictor (for example, MBp') and the corresponding residual (for example, MBr).
In some embodiments, a flag is inserted in the bitstream to indicate whether weighted prediction is enabled for each segment (for example, each MB). In some other embodiments, a flag is inserted in the slice header of the bitstream to indicate whether weighted prediction is used. These flags indicating the presence of weighted prediction parameters provide flexibility for adaptive application of local weighted prediction. For example, if the flag is set to "0", the video decoder is informed that weighted prediction is enabled; if the flag is set to "1", the video decoder is informed that weighted prediction is disabled. In some other embodiments, a flag is inserted in the bitstream to indicate whether the slice is encoded with slice-level weighted prediction or local weighted prediction. Another flag may be used to indicate the size of the segment used for local weighted prediction. For example, the video decoder may determine whether a flag indicating the use of local weighted prediction exists in the bitstream (for example, in a GOP header or slice header), and if it exists, obtain the weighted prediction parameters for decoding the slice. If this flag is set, the weighted prediction parameter may differ for each segment.
Fig. 5 shows an embodiment of a video frame. As shown in Fig. 5, the video frame 500 is divided into two slices S0 and S1, where each of the slices S0 and S1 may be further divided into a plurality of segments. Fig. 6 shows an embodiment of the frame structure of Fig. 5, where 610 and 620 represent the slice contents of slices S0 and S1, respectively. As shown in Fig. 6, the slice format has a header region SH and a slice data region SD containing the segment data of the slice. When the flag 630 indicates that local weighted prediction is disabled and slice-level weighted prediction is applied, the header region SH comprises one weighted prediction parameter set 612 for the whole slice 610. When the flag 630 indicates that local weighted prediction is enabled, the weighted prediction parameters (for example, 624) of each MB (for example, MB 622) in the slice 620 are found in the MB header MBH. As shown for slice 610 of Fig. 6, when the flag 630 is set to "0", the video decoder 200 obtains the weighted prediction parameters from the header SH1 and uses the obtained parameters to provide slice-level weighted prediction for each MB of the slice 610. As shown for slice 620 of Fig. 6, when the flag 630 is set to "1", the video decoder 200 obtains the weighted prediction parameters from the header MBH of each MB and uses the obtained parameters to provide MB-level weighted prediction.
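The flag-driven lookup of Fig. 6 might be sketched as follows; the dictionary field names (`local_wp_flag`, `header_wp_params`, `wp_params`) are assumptions for illustration, not bitstream syntax elements.

```python
def wp_params_for_mb(slice_data, mb_index):
    """Select the weighted prediction parameters for one MB.

    Flag 0: slice-level WP -> one parameter set from the slice header.
    Flag 1: local WP       -> per-MB parameter set from each MB header.
    """
    if slice_data["local_wp_flag"] == 0:
        return slice_data["header_wp_params"]        # one set for the slice
    return slice_data["mbs"][mb_index]["wp_params"]  # per-MB set
```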
Note that in some embodiments, the weighted prediction parameters may be quantized in the encoder with a quantization precision related to that of the residual data; for example, the larger the quantization parameter (QP) of the MB residual data, the lower the quantization precision of the weighted prediction parameters. In this case, before providing the weighted prediction parameters to the decoding process, the video decoder 200 must further de-quantize the decoded weighted prediction parameters using the appropriate quantization precision (i.e., the precision related to the quantization parameter used for inverse-quantizing the residual).
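The QP-coupled precision idea might look like the following sketch; the linear QP-to-step mapping is purely an assumption for illustration, since the patent does not define one.

```python
def dequantize_wp_param(quantized_param, qp):
    """Rescale a decoded WP parameter with a QP-dependent step.

    Larger QP -> larger step -> coarser WP-parameter precision,
    matching the precision used for the residual data.
    """
    step = 1 + qp // 6  # assumed mapping, for illustration only
    return quantized_param * step
```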
In general, the video encoder and decoder, and the video encoding and decoding methods, according to the invention provide one or more weighted prediction parameters for each segment, thereby adapting to local luminance changes between segments.
The video decoder, video encoder, and corresponding video encoding and decoding methods, or certain aspects or portions thereof, may take the form of program code (i.e., executable instructions) embodied in a tangible medium, such as floppy disks, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the methods. The methods may also be embodied in the form of program code transmitted over some transmission medium, such as electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received, loaded into, and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the disclosed methods. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to application-specific logic circuits.
While the invention has been described by way of example and in terms of preferred embodiments, it is to be understood that the invention is not limited thereto. Those skilled in the art may still make various alterations and modifications (for example, using a ring buffer) without departing from the scope and spirit of the invention. Therefore, the scope of the invention shall be defined and protected by the claims and their equivalents.

Claims (24)

1. A video decoding method, characterized in that the method comprises:
obtaining data for a current fragment to be decoded from an input bitstream;
decoding the obtained data to produce decoded data for the current fragment, the decoded data comprising a residual and a weighted prediction parameter;
generating a weighted prediction for the current fragment based on the weighted prediction parameter;
producing a predictor for the current fragment by intra/inter prediction;
combining the weighted prediction and the predictor to produce a modified predictor; and
reconstructing the current fragment according to the modified predictor and the residual.
2. The method of claim 1, characterized in that the current fragment is a block, the block being smaller in size than a slice and a picture.
3. The method of claim 1, characterized in that the weighted prediction parameter comprises a prediction offset, a scaling factor, an offset difference, a scaling difference, or a combination thereof.
4. The method of claim 1, characterized in that the weighted prediction for the current fragment is obtained by prediction from weighted predictions of previously decoded fragments.
5. The method of claim 4, characterized in that the method further comprises:
generating the weighted prediction for the current fragment by predicting from a first weighted prediction of a first decoded fragment and a second weighted prediction of a second decoded fragment, and combining with the weighted prediction parameter, wherein the first decoded fragment and the second decoded fragment are located in the same slice as the current fragment.
6. The method of claim 4, characterized in that the method further comprises:
generating the weighted prediction for the current fragment by predicting from a first weighted prediction of a first decoded fragment, and combining with the weighted prediction parameter, wherein the first decoded fragment and the current fragment are located in different slices.
7. The method of claim 1, characterized in that the weighted prediction parameter is decoded from the obtained data using context-adaptive binary arithmetic coding or context-adaptive variable-length coding.
8. The method of claim 1, characterized in that the weighted prediction parameter is de-quantized using a quantization precision related to a quantization parameter used for inverse-quantizing the residual.
9. The method of claim 1, characterized in that the method further comprises:
obtaining a flag from the input bitstream, the flag indicating whether localized weighted prediction is applied; and
obtaining, according to the flag, one or more weighted prediction parameters for a plurality of fragments of a slice.
10. A video decoder, characterized in that the video decoder comprises:
an entropy decoding unit, for obtaining data for a current fragment to be decoded from an input bitstream, and decoding the obtained data to produce decoded data for the current fragment, the decoded data comprising a residual and a weighted prediction parameter;
a weighted prediction determining unit, coupled to the entropy decoding unit, for generating a weighted prediction for the current fragment based on the weighted prediction parameter;
a motion compensation unit, for producing a predictor for the current fragment by intra/inter prediction; and
a first adder, coupled to the weighted prediction determining unit and the motion compensation unit, for combining the weighted prediction and the predictor to produce a modified predictor;
wherein the video decoder reconstructs the current fragment according to the modified predictor and the residual.
11. The video decoder of claim 10, characterized in that the weighted prediction parameter comprises a prediction offset, a scaling factor, an offset difference, a scaling difference, or a combination thereof.
12. The video decoder of claim 10, characterized in that the weighted prediction determining unit generates the weighted prediction for the current fragment by predicting from weighted predictions of previously decoded fragments.
13. The video decoder of claim 12, characterized in that the weighted prediction determining unit predicts from a first weighted prediction of a first decoded fragment and a second weighted prediction of a second decoded fragment to generate the weighted prediction for the current fragment, and combines it with the weighted prediction parameter, wherein the first decoded fragment and the second decoded fragment are located in the same slice as the current fragment.
14. The video decoder of claim 12, characterized in that the weighted prediction determining unit predicts from a first weighted prediction of a first decoded fragment to generate the weighted prediction for the current fragment, and combines it with the weighted prediction parameter, wherein the first decoded fragment and the current fragment are located in different slices.
15. The video decoder of claim 10, characterized in that the video decoder further comprises:
an inverse quantization unit, for de-quantizing the weighted prediction parameter using a quantization precision related to a quantization parameter used for inverse-quantizing the residual.
16. The video decoder of claim 10, characterized in that the entropy decoding unit obtains a flag from the input bitstream and determines whether to apply localized weighted prediction according to the flag.
17. A video encoding method, characterized in that the method comprises:
obtaining a current fragment of a slice to be encoded;
producing a predictor of the current fragment by intra/inter prediction;
performing weighted prediction on the predictor of the current fragment to produce a modified predictor and a weighted prediction parameter;
generating a residual according to the current fragment and the modified predictor; and
encoding the residual and inserting the weighted prediction parameter, to produce a bitstream.
18. The method of claim 17, characterized in that the weighted prediction parameter comprises a prediction offset, a scaling factor, an offset difference, a scaling difference, or a combination thereof.
19. The method of claim 17, characterized in that performing weighted prediction further comprises predicting a weighted prediction of the current fragment from a plurality of weighted predictions of a plurality of previously reconstructed fragments.
20. The method of claim 17, characterized in that the method further comprises:
determining whether to apply localized weighted prediction, and inserting a flag into the bitstream to indicate whether it is applied.
21. A video encoder, characterized in that the video encoder comprises:
an intra/inter prediction unit, for producing a predictor of a current fragment by intra/inter prediction;
a determining unit, coupled to the intra/inter prediction unit, for performing weighted prediction on the predictor of the current fragment to produce a modified predictor and a weighted prediction parameter;
a transform and quantization unit, for receiving a residual and performing transform and quantization on the residual to produce quantized values, wherein the residual is generated according to the current fragment and the modified predictor; and
an entropy coding unit, for encoding the quantized values and inserting the weighted prediction parameter, to produce a bitstream.
22. The video encoder of claim 21, characterized in that the weighted prediction parameter comprises a prediction offset, a scaling factor, an offset difference, a scaling difference, or a combination thereof.
23. The video encoder of claim 21, characterized in that the determining unit predicts a weighted prediction of the current fragment from a plurality of weighted predictions of a plurality of previously reconstructed fragments.
24. The video encoder of claim 21, characterized in that the entropy coding unit further inserts a flag to indicate whether localized weighted prediction is applied.
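The encoder-side flow of claims 17 and 21 (predictor, weighted prediction, modified predictor, residual) can be sketched as follows. This is a hedged illustration: using only an additive prediction offset, estimated as the mean difference between the fragment and its predictor, is an assumed choice, not the patent's prescribed method.

```python
def estimate_offset(current, predictor):
    """Pick an additive weighted prediction parameter (a prediction
    offset, cf. claim 18) as the mean difference between the current
    fragment and its predictor -- an illustrative choice only."""
    n = len(current)
    return round(sum(c - p for c, p in zip(current, predictor)) / n)

def encode_fragment(current, predictor):
    """Produce the residual and the weighted prediction parameter
    for one fragment (claim 17): apply the offset to the predictor
    to form the modified predictor, then subtract from the fragment."""
    offset = estimate_offset(current, predictor)
    modified = [min(255, max(0, p + offset)) for p in predictor]
    residual = [c - m for c, m in zip(current, modified)]
    # The residual would then be transformed/quantized and entropy
    # coded, with the offset inserted into the bitstream.
    return residual, offset
```

Because the offset absorbs the local brightness shift, the residual stays small even when the fragment is uniformly brighter than its predictor.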
CN201210282765.XA 2012-08-09 2012-08-09 Video decoding method, video decoder, video encoding method and video encoder Pending CN103581690A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210282765.XA CN103581690A (en) 2012-08-09 2012-08-09 Video decoding method, video decoder, video encoding method and video encoder

Publications (1)

Publication Number Publication Date
CN103581690A true CN103581690A (en) 2014-02-12

Family

ID=50052463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210282765.XA Pending CN103581690A (en) 2012-08-09 2012-08-09 Video decoding method, video decoder, video encoding method and video encoder

Country Status (1)

Country Link
CN (1) CN103581690A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1810041A (en) * 2003-06-25 2006-07-26 汤姆森许可贸易公司 Method and apparatus for weighted prediction estimation using a displaced frame differential
CN101023673A (en) * 2004-09-16 2007-08-22 汤姆逊许可证公司 Video codec with weighted prediction utilizing local brightness variation
CN101072355A (en) * 2006-05-12 2007-11-14 中国科学院计算技术研究所 Weighted predication motion compensating method
US20110007799A1 (en) * 2009-07-09 2011-01-13 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107534772A (en) * 2015-05-19 2018-01-02 联发科技股份有限公司 The method and device of context adaptive binary arithmetic coding based on multilist
CN107534772B (en) * 2015-05-19 2020-05-19 联发科技股份有限公司 Entropy coding and decoding method and device for image or video data
US10742984B2 (en) 2015-05-19 2020-08-11 Mediatek Inc. Method and apparatus for multi-table based context adaptive binary arithmetic coding
CN107646195A (en) * 2015-06-08 2018-01-30 Vid拓展公司 Intra block replication mode for screen content coding
CN107646195B (en) * 2015-06-08 2022-06-24 Vid拓展公司 Intra block copy mode for screen content coding
CN112655218A (en) * 2018-09-21 2021-04-13 华为技术有限公司 Inter-frame prediction method and device
CN112655218B (en) * 2018-09-21 2022-04-29 华为技术有限公司 Inter-frame prediction method and device
US11647207B2 (en) 2018-09-21 2023-05-09 Huawei Technologies Co., Ltd. Inter prediction method and apparatus

Similar Documents

Publication Publication Date Title
US11889098B2 (en) Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US20200280743A1 (en) Method of Coding and Decoding Images, Coding and Decoding Device and Computer Programs Corresponding Thereto
CN108848387B (en) Method for deriving reference prediction mode values
JP6545770B2 (en) Inter prediction method, decoding apparatus and video decoding method
EP3518544B1 (en) Video decoding with improved motion vector diversity
US20120230405A1 (en) Video coding methods and video encoders and decoders with localized weighted prediction
US20150016526A1 (en) Image encoding/decoding method and device
EP2699001B1 (en) A method and a system for video signal encoding and decoding with motion estimation
US9807425B2 (en) Method and apparatus for encoding/decoding images considering low frequency components
CA2608279A1 (en) System and method for scalable encoding and decoding of multimedia data using multiple layers
KR20110071231A (en) Encoding method, decoding method and apparatus thereof
KR20090095012A (en) Method and apparatus for encoding and decoding image using consecutive motion estimation
GB2492778A (en) Motion compensated image coding by combining motion information predictors
US20170041606A1 (en) Video encoding device and video encoding method
CN1938728A (en) Method and apparatus for encoding a picture sequence using predicted and non-predicted pictures which each include multiple macroblocks
US20100158111A1 (en) Method for insertion of data, method for reading of inserted data
KR20100102386A (en) Method and apparatus for encoding/decoding image based on residual value static adaptive code table selection
CN103581690A (en) Video decoding method, video decoder, video encoding method and video encoder
KR20090103675A (en) Method for coding/decoding a intra prediction mode of video and apparatus for the same
RU2782400C2 (en) Method of encoding and decoding images, device for encoding and decoding and corresponding software
KR20040095399A (en) Weighting factor determining method and apparatus in explicit weighted prediction
KR20100138735A (en) Video encoding and decoding apparatus and method using context information-based adaptive post filter

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140212