Embodiment
An image sequence is a series of several pictures. Each picture comprises pixels or picture points, with each of which at least one item of image data is associated. An item of image data is, for example, an item of luminance data or an item of chrominance data.
Term " exercise data " should be understood from the most wide in range meaning.It comprises motion vector and possibly comprise the reference picture index that reference picture can be identified from image sequence.It can also comprise the item of information that indication is used for the interior slotting type of definite predict blocks.In fact, do not have under the situation of rounded coordinate at the motion vector MVc that is associated with piece Bc, must be in reference picture Iref the interpolated image data, to confirm predict blocks Bp.The exercise data that is associated with a piece is generally through method for estimating, for example, matches through piece and to calculate.But the present invention will never be made motion vector limit with the method that a piece is associated.
The term "residual data" signifies data obtained after extraction of other data. The extraction is generally a subtraction, pixel by pixel, of prediction data from source data. However, the extraction is more general and notably comprises weighted subtraction. The term "residual data" is synonymous with the term "residue". A residual block is a block of pixels with which residual data are associated.
The term "transformed residual data" signifies residual data to which a transform has been applied. A DCT (Discrete Cosine Transform), such as that described in chapter 3.4.2.2 of the book by I. E. Richardson ("H.264 and MPEG-4 Video Compression", J. Wiley & Sons, September 2003), is an example of such a transform. The wavelet transform described in chapter 3.4.2.3 of the book by I. E. Richardson and the Hadamard transform are other examples. Such transforms "transform" a block of image data, for example a block of residual luminance and/or chrominance data, into a "block of transformed data", also called a "block of frequency data" or a "block of coefficients".
Term " prediction data " expression is used to predict the data of other data.Predict blocks is the block of pixels that is associated with prediction data.Predict blocks be from the piece of prediction under piece or several of the identical image of image (prediction in spatial prediction or the image) or from the piece of prediction under one (single directional prediction) or several (bidirectional measurement) piece of image pictures different (time prediction or inter picture prediction) obtain.
Term " reconstruct data " expression merges the data of gained afterwards with residual error data and prediction data.This merging generally is the individual element addition of prediction data and residual error data.But this merging is more general, and especially comprises weighting summation.Reconstructed blocks is the block of pixels that is associated with reconstruct data.
With reference to Figure 3, the invention relates to a method for coding a current block of a sequence of images.
During step 20, a candidate motion vector MVct of coordinates (vx, vy) is determined from among the motion vectors associated with blocks spatially neighbouring the current block Bc. The block Bc belongs to a picture Ic. For example, as shown in Figure 1, the candidate motion vector MVct is determined as one of the motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vector MVct is determined as the motion vector of a block spatially neighbouring the block Bc but not necessarily adjacent to it. The block neighbouring the current block Bc with which the determined candidate motion vector MVct is associated is denoted Bv. The motion vector retained by default is, for example, the motion vector associated with the adjacent block situated to the left of the current block Bc. According to a variant shown in Figure 4, the motion vector retained by default is, for example, the motion vector associated with the adjacent block situated above the current block Bc.
During step 22, a corrective motion vector ΔMV of coordinates (dx, dy) is determined. More precisely, the corrective motion vector ΔMV is determined in such a way as to minimize a distortion calculated between the previously coded and reconstructed neighbouring block Bv and the prediction block motion-compensated by the candidate motion vector MVct modified by the corrective motion vector ΔMV. The prediction block belongs to a reference picture Iref. For example, the following function is used:
E(dx, dy) = Σ(x,y)∈Bv | Ĩc(x, y) − Ĩref(x + vx + dx, y + vy + dy) |
where:
Ĩref(x + vx + dx, y + vy + dy) is, for the pixel (x, y), the value of the prediction block motion-compensated by MVct + ΔMV;
Ĩc(x, y) is the value of the coded and reconstructed image data item of the pixel (x, y) in the picture Ic; and
Ĩref(x, y) is the value of the coded and reconstructed image data item of the pixel (x, y) in the reference picture.
According to variants, other distortion measures, for example a sum of squared differences between the same data items, can be used in place of E(dx, dy).
During step 22, the motion vector ΔMV that minimizes E(., .) is therefore sought. For example, the value of E(dx, dy) is calculated for each possible value of (dx, dy), and the value of (dx, dy) for which E(dx, dy) is smallest is retained.
According to a variant of step 22, the corrective motion vector ΔMV is the motion vector that minimizes E(., .) under the additional constraint that the amplitude of each of its coordinates dx and dy is less than a first threshold aenh, where aenh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with a precision of 1/4, then aenh = 1/8. This variant limits the computational complexity of determining the corrective motion vector ΔMV. In fact, according to this variant, ΔMV is sought only in a limited interval around the candidate motion vector MVct, this interval being defined, for each of the horizontal and vertical components, as follows: [−acod + aenh, acod − aenh], where acod is the maximum coding precision allowed for the motion vectors. According to a more refined but computationally more expensive version, the search can be carried out in a larger interval defined as follows: [−R, R], where R > acod − aenh represents the search range. The value R = 2.25 can, for example, be used. In this latter case, the coordinates of the corrective motion vector ΔMV are sought with precision aenh in the interval [−R, R] around the candidate motion vector MVct.
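The full search of step 22 and its interval-limited variant can be sketched as follows (a non-authoritative sketch under stated assumptions: SAD distortion, bilinear interpolation for fractional positions, illustrative picture content; `interp` and `search_correction` are hypothetical helper names):

```python
import random

def interp(img, x, y):
    """Bilinear interpolation of img at the fractional position (x, y)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    return ((1 - fx) * (1 - fy) * img[y0][x0] + fx * (1 - fy) * img[y0][x1]
            + (1 - fx) * fy * img[y1][x0] + fx * fy * img[y1][x1])

def search_correction(cur, ref, block, size, mv_ct, a_enh=0.125, R=2.25):
    """Step 22 sketch: return the (dx, dy) minimizing the SAD between the
    reconstructed neighbouring block Bv (top-left corner `block`, side
    `size`) and the prediction motion-compensated by MVct + (dx, dy),
    searched on an a_enh grid inside [-R, R]."""
    bx, by = block
    vx, vy = mv_ct
    steps = int(R / a_enh)
    best = None
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            dx, dy = i * a_enh, j * a_enh
            sad = sum(abs(cur[by + y][bx + x]
                          - interp(ref, bx + x + vx + dx, by + y + vy + dy))
                      for y in range(size) for x in range(size))
            if best is None or sad < best[0]:
                best = (sad, dx, dy)
    return best[1], best[2]

random.seed(7)
ref = [[random.randint(0, 255) for _ in range(10)] for _ in range(10)]
# Neighbouring area of the current picture: the reference content displaced
# by a "true" motion of (1.25, 0.5) pel.
cur = [[interp(ref, x + 1.25, y + 0.5) for x in range(8)] for y in range(8)]
# With the candidate MVct = (1, 0), the search finds a grid offset
# reaching zero distortion on the a_enh grid.
dx, dy = search_correction(cur, ref, block=(2, 2), size=2, mv_ct=(1, 0))
```

The same routine covers the constrained variant by taking R equal to acod − aenh instead of a larger search range.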
During step 24, a prediction motion vector MVp is determined from the candidate motion vector MVct modified by the corrective motion vector ΔMV: MVp = MVct + ΔMV.
During step 26, the current block Bc is coded taking into account the prediction motion vector MVp. As is well known, residual data obtained by subtracting the prediction block Bp from the current block Bc and a motion vector difference MVdiff are coded for the block Bc. The motion vector difference MVdiff, calculated from MVc and MVp, is coded in the stream F. MVdiff has coordinates (MVx − MVpx, MVy − MVpy), where (MVx, MVy) are the coordinates of MVc and (MVpx, MVpy) are the coordinates of MVp. The residual data are generally transformed and then quantized. The transformed and quantized residual data and the motion vector difference MVdiff are then coded into coded data by entropy coding of VLC (Variable Length Coding) type or by coding of CABAC (Context-Adaptive Binary Arithmetic Coding) type. The maximum coding precision allowed for MVdiff is acod. Examples of entropy coding methods are described in chapter 6.5.4 of the book by I. E. Richardson or in section 9.3 of the document ISO/IEC 14496-10 "Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding". According to another variant, a CAVLC (Context-based Adaptive Variable Length Coding) type method can be used, such as that described in section 9.2 of the document ISO/IEC 14496-10 "Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding" and in chapter 6.4.13.2 of the book by I. E. Richardson.
According to a variant, the current block Bc is coded according to the SKIP coding mode. In this case, neither residual data nor motion data are coded in the stream F for the current block Bc. In fact, the "skip" coding mode is retained to code a current block when the residual block, obtained by extracting from the current block the prediction block determined from the prediction motion vector MVp determined in step 24, has all its coefficients equal to zero.
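The "skip" condition above can be sketched as follows (a simplified sketch: the transform and quantization of the residual are omitted here, so the all-zero test is applied directly to the residual samples; the function name is illustrative):

```python
def skip_mode_applicable(cur_block, pred_block):
    """Return True when the residual obtained by extracting the prediction
    block (determined from MVp) from the current block is entirely zero,
    i.e. when the block can be coded in "skip" mode. Transform and
    quantization are omitted in this sketch (an assumption: with them,
    the test would bear on the transformed and quantized coefficients)."""
    return all(c - p == 0 for rc, rp in zip(cur_block, pred_block)
               for c, p in zip(rc, rp))
```
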
According to a first embodiment shown in Figure 5, the step 20 of determining the candidate motion vector MVct comprises a step 200 of determining at least two candidate motion vectors, a step 202 of merging the candidate motion vectors determined in step 200 into a merged motion vector, and a step 204 of selecting the candidate motion vector MVct from among the motion vectors determined in step 200 according to the merged motion vector. The steps of Figure 5 identical to those of Figure 3 are identified with the same references and are not described further.
During step 200, at least two candidate motion vectors MVct1 and MVct2 are determined from among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of blocks spatially neighbouring the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc with which the candidate motion vector MVct1 is associated is denoted Bv1, and the neighbouring block with which the candidate motion vector MVct2 is associated is denoted Bv2. The candidate motion vector MVct1 has coordinates (vx1, vy1) and the candidate motion vector MVct2 has coordinates (vx2, vy2).
During step 202, the candidate motion vectors determined in step 200 are merged into a single motion vector MVfus of coordinates (MVfus(x), MVfus(y)). For example, MVfus(x) = median(vx1, vx2, 0) and MVfus(y) = median(vy1, vy2, 0). According to a variant, MVfus(x) = 0.5*(vx1 + vx2) and MVfus(y) = 0.5*(vy1 + vy2).
During step 204, the candidate motion vector determined in step 200 that is closest to MVfus according to a certain criterion is selected as the candidate motion vector MVct. For example, if ||MVct2 − MVfus|| < ||MVct1 − MVfus||, then MVct = MVct2; otherwise, MVct = MVct1. The criterion is, for example, the L2 norm. The criterion can also be the absolute value.
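Steps 202 and 204 can be sketched as follows (a sketch using the median merge and the L2 selection criterion given above; the example vector values are illustrative):

```python
# Sketch of steps 200-204: merge two candidate vectors with a
# component-wise median (with 0 as third value), then select the
# candidate closest to the merged vector under the L2 norm.

def median3(a, b, c):
    """Median of three values."""
    return sorted((a, b, c))[1]

def merge_candidates(mv1, mv2):
    """Step 202: MVfus(x) = median(vx1, vx2, 0), likewise for y."""
    return (median3(mv1[0], mv2[0], 0), median3(mv1[1], mv2[1], 0))

def select_candidate(mv1, mv2):
    """Step 204: retain the candidate closest to MVfus (L2 norm)."""
    fx, fy = merge_candidates(mv1, mv2)
    d1 = (mv1[0] - fx) ** 2 + (mv1[1] - fy) ** 2
    d2 = (mv2[0] - fx) ** 2 + (mv2[1] - fy) ** 2
    return mv2 if d2 < d1 else mv1

mv_ct = select_candidate((4, -2), (1, 1))  # illustrative candidates
```

In this example the merged vector is (1, 0), so the second candidate, which lies closer to it, is retained as MVct.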
This variant, described for two candidate motion vectors MVct1 and MVct2, can be applied in the same way to any number of candidate motion vectors.
A second embodiment is shown in Figure 6.
During step 30, at least two candidate motion vectors MVct1 and MVct2 are determined from among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of blocks spatially neighbouring the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc with which the candidate motion vector MVct1 is associated is denoted Bv1, and the neighbouring block with which the candidate motion vector MVct2 is associated is denoted Bv2.
During step 32, a corrective motion vector ΔMV1 of coordinates (dx1, dy1) is determined for the candidate motion vector MVct1, and a corrective motion vector ΔMV2 of coordinates (dx2, dy2) is determined for the candidate motion vector MVct2. The motion vector ΔMV1 is determined in such a way as to minimize a distortion calculated between the previously coded and reconstructed neighbouring block associated with the candidate motion vector MVct1 and the prediction block motion-compensated by the candidate motion vector MVct1 modified by the corrective motion vector ΔMV1. Likewise, the motion vector ΔMV2 is determined in such a way as to minimize a distortion calculated between the previously coded and reconstructed neighbouring block associated with the candidate motion vector MVct2 and the prediction block motion-compensated by the candidate motion vector MVct2 modified by the corrective motion vector ΔMV2. For example, the following functions are used:
E1(dx1, dy1) = Σ(x,y)∈Bv1 | Ĩc(x, y) − Ĩref(x + vx1 + dx1, y + vy1 + dy1) |
E2(dx2, dy2) = Σ(x,y)∈Bv2 | Ĩc(x, y) − Ĩref(x + vx2 + dx2, y + vy2 + dy2) |
According to variants, other distortion measures, for example sums of squared differences, can be used for E1 and E2.
During step 32, the corrective motion vector ΔMV1 that minimizes E1(., .) and the corrective motion vector ΔMV2 that minimizes E2(., .) are therefore sought. For example, the value of E1(dx1, dy1) is calculated for each possible value of (dx1, dy1), and the value of (dx1, dy1) for which E1(dx1, dy1) is smallest is retained. Likewise, the value of E2(dx2, dy2) is calculated for each possible value of (dx2, dy2), and the value of (dx2, dy2) for which E2(dx2, dy2) is smallest is retained.
According to a variant of step 32, the corrective motion vectors ΔMV1 and ΔMV2 are those motion vectors that minimize E1(., .) and E2(., .) respectively under the additional constraint that the amplitudes of each of their coordinates dx1, dx2, dy1 and dy2 are all less than aenh, where aenh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with a precision of 1/4, then aenh = 1/8. This variant limits the computational complexity of determining the corrective motion vectors ΔMV1 and ΔMV2. In fact, according to this variant, ΔMV1 and ΔMV2 are sought only in limited intervals around the candidate motion vectors MVct1 and MVct2 respectively, this interval being defined, for each of the horizontal and vertical components, as follows: [−acod + aenh, acod − aenh]. According to a more refined but computationally more expensive form, the search can be carried out in a larger interval defined as follows: [−R, R], where R > acod − aenh represents the search range. The value R = 2 can, for example, be used. In this latter case, the coordinates of the corrective motion vectors are sought with precision aenh in the interval [−R, R] around the candidate motion vectors.
During step 34, a prediction motion vector MVp is determined by merging the candidate motion vectors MVct1 and MVct2 modified by the corrective motion vectors ΔMV1 and ΔMV2 respectively. For example, MVpx = median(vx1 + dx1, vx2 + dx2, 0) and MVpy = median(vy1 + dy1, vy2 + dy2, 0). According to a variant, MVpx = min(vx1 + dx1, vx2 + dx2, 0) and MVpy = min(vy1 + dy1, vy2 + dy2, 0). According to another variant, MVpx = 0.5*(vx1 + dx1 + vx2 + dx2) and MVpy = 0.5*(vy1 + dy1 + vy2 + dy2).
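The median-based merge of step 34 can be sketched as follows (the candidate vectors and correction vectors are illustrative values, the corrections being assumed already determined in step 32):

```python
def merge_corrected(mv1, d1, mv2, d2):
    """Step 34 sketch: MVp by component-wise median of the two corrected
    candidates and 0: MVpx = median(vx1+dx1, vx2+dx2, 0), likewise for y."""
    def med(a, b, c):
        return sorted((a, b, c))[1]
    return (med(mv1[0] + d1[0], mv2[0] + d2[0], 0),
            med(mv1[1] + d1[1], mv2[1] + d2[1], 0))

# Candidates (1, 0) and (2, -1) with corrections (0.25, 0.5) and (-0.25, 0.25):
mvp = merge_corrected((1, 0), (0.25, 0.5), (2, -1), (-0.25, 0.25))
```

Here the corrected candidates are (1.25, 0.5) and (1.75, −0.75), so the merged prediction vector is (1.25, 0).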
During step 36, the current block Bc is coded taking into account the prediction motion vector MVp. As is well known, residual data obtained by subtracting the prediction block Bp from the current block Bc and a motion vector difference MVdiff are coded for the block Bc. The motion vector difference MVdiff, calculated from MVc and MVp, is coded in the stream F. MVdiff has coordinates (MVx − MVpx, MVy − MVpy), where (MVx, MVy) are the coordinates of MVc and (MVpx, MVpy) are the coordinates of MVp. The residual data are generally transformed and then quantized. The transformed and quantized residual data and the motion vector difference MVdiff are then coded into coded data by entropy coding of VLC (Variable Length Coding) type or by coding of CABAC (Context-Adaptive Binary Arithmetic Coding) type. The maximum coding precision allowed for MVdiff is acod. Examples of entropy coding methods are described in chapter 6.5.4 of the book by I. E. Richardson or in section 9.3 of the document ISO/IEC 14496-10 "Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding". According to another variant, a CAVLC (Context-based Adaptive Variable Length Coding) type method can be used, such as that described in section 9.2 of the document ISO/IEC 14496-10 "Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding" and in chapter 6.4.13.2 of the book by I. E. Richardson.
According to a variant, the current block Bc is coded according to the SKIP coding mode. In this case, neither residual data nor motion data are coded in the stream F for the current block Bc. In fact, the "skip" coding mode is retained to code a current block when the residual block, obtained by extracting from the current block the prediction block determined from the prediction motion vector MVp determined in step 34, has all its coefficients equal to zero.
The embodiment described above with reference to Figure 6 for two candidate motion vectors MVct1 and MVct2 can also be applied to any number of candidate motion vectors. In this case, during step 32, a corrective motion vector is determined for each candidate motion vector. During step 34, the prediction motion vector MVp is determined from the candidate motion vectors modified by their respective corrective motion vectors.
With reference to Figure 7, the invention also relates to a method for reconstructing a current block of a sequence of images.
During step 52, a candidate motion vector MVct of coordinates (vx, vy) is determined from among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vector MVct is determined as one of the motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vector MVct is determined as the motion vector of a block spatially neighbouring the block Bc but not necessarily adjacent to it. The block neighbouring the current block Bc with which the determined candidate motion vector MVct is associated is denoted Bv. The motion vector retained by default is, for example, the motion vector associated with the adjacent block situated to the left of the current block Bc. According to a variant shown in Figure 4, the motion vector retained by default is, for example, the motion vector associated with the adjacent block situated above the current block Bc. Step 52 is identical to step 20 of the coding method.
During step 54, a corrective motion vector ΔMV of coordinates (dx, dy) is determined. More precisely, the corrective motion vector ΔMV is determined in such a way as to minimize a distortion calculated between the previously coded and reconstructed neighbouring block Bv and the prediction block motion-compensated by the candidate motion vector MVct modified by the corrective motion vector ΔMV. The prediction block belongs to a reference picture Iref. For example, the following function is used:
E(dx, dy) = Σ(x,y)∈Bv | Ĩc(x, y) − Ĩref(x + vx + dx, y + vy + dy) |
where:
Ĩref(x + vx + dx, y + vy + dy) is, for the pixel (x, y), the value of the prediction block motion-compensated by MVct + ΔMV;
Ĩc(x, y) is the value of the coded and reconstructed image data item of the pixel (x, y) in the picture Ic; and
Ĩref(x, y) is the value of the coded and reconstructed image data item of the pixel (x, y) in the reference picture.
According to variants, other distortion measures, for example a sum of squared differences between the same data items, can be used in place of E(dx, dy).
During step 54, the motion vector ΔMV that minimizes E(., .) is therefore sought. For example, the value of E(dx, dy) is calculated for each possible value of (dx, dy), and the value of (dx, dy) for which E(dx, dy) is smallest is retained. Step 54 is identical to step 22 of the coding method.
According to a variant of step 54, the corrective motion vector ΔMV is the motion vector that minimizes E(., .) under the additional constraint that the amplitude of each of its coordinates dx and dy is less than a first threshold aenh, where aenh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with a precision of 1/4, then aenh = 1/8. This variant limits the computational complexity of determining the corrective motion vector ΔMV. In fact, according to this variant, ΔMV is sought only in a limited interval around the candidate motion vector MVct, this interval being defined, for each of the horizontal and vertical components, as follows: [−acod + aenh, acod − aenh]. According to a more refined but computationally more expensive version, the search can be carried out in a larger interval defined as follows: [−R, R], where R > acod − aenh represents the search range. The value R = 2 can, for example, be used. In this latter case, the coordinates of the corrective motion vector ΔMV are sought with precision aenh in the interval [−R, R] around the candidate motion vector MVct.
During step 56, a prediction motion vector MVp is determined from the candidate motion vector MVct modified by the corrective motion vector ΔMV: MVp = MVct + ΔMV. Step 56 is identical to step 24 of the coding method.
During step 58, the current block Bc is reconstructed taking into account the prediction motion vector MVp. More precisely, the transformed and quantized residual data and the motion vector difference MVdiff are decoded from the stream F by entropy decoding of VLC (Variable Length Coding) type or by decoding of CABAC (Context-Adaptive Binary Arithmetic Coding) type. The transformed and quantized residual data are then dequantized and then transformed by the inverse of the transform used in step 26 of the coding method. The motion vector MVc is reconstructed for the current block Bc from the motion vector difference MVdiff and the prediction motion vector MVp determined in step 56. MVc has coordinates (MVdiffx + MVpx, MVdiffy + MVpy), where (MVdiffx, MVdiffy) are the coordinates of MVdiff and (MVpx, MVpy) are the coordinates of MVp. A prediction block is determined from the motion vector MVc in the previously reconstructed reference picture; this is the prediction block motion-compensated by the motion vector MVc. The prediction block is then merged with the block of residual data reconstructed for the current block from the stream F, for example by pixel-by-pixel addition.
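The reconstruction of step 58 can be sketched as follows (a sketch: `pred_of` stands in for the motion compensation in the reconstructed reference picture and is an assumption of this sketch, as are the numeric values):

```python
def reconstruct_block(mvdiff, mvp, pred_of, residual):
    """Step 58 sketch: reconstruct MVc = MVdiff + MVp, fetch the prediction
    block motion-compensated by MVc, and add the decoded residual data
    pixel by pixel."""
    mvc = (mvdiff[0] + mvp[0], mvdiff[1] + mvp[1])
    pred = pred_of(mvc)
    block = [[p + r for p, r in zip(rp, rr)]
             for rp, rr in zip(pred, residual)]
    return mvc, block

mvc, bc = reconstruct_block(
    mvdiff=(0.25, -0.25), mvp=(1.0, 0.5),
    pred_of=lambda mv: [[100, 101], [102, 103]],  # stub prediction block
    residual=[[2, -1], [0, 3]])                   # decoded residual block
```
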
According to a variant, the current block Bc is reconstructed according to the SKIP coding mode. In this case, neither residual data nor motion data are coded in the stream F for the current block Bc. The reconstructed block Bc is then the prediction block motion-compensated by the prediction motion vector MVp determined in step 56.
According to a variant shown in Figure 8, the step 52 of determining the candidate motion vector MVct comprises a step 520 of determining at least two candidate motion vectors, a step 522 of merging the candidate motion vectors determined in step 520 into a merged motion vector, and a step 524 of selecting the candidate motion vector MVct from among the motion vectors determined in step 520 according to the merged motion vector. Step 520 is identical to step 200 of Figure 5, step 522 is identical to step 202 of Figure 5, and step 524 is identical to step 204 of Figure 5.
Another embodiment is shown in Figure 9.
During step 62, at least two candidate motion vectors MVct1 and MVct2 are determined from among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of blocks spatially neighbouring the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc with which the candidate motion vector MVct1 is associated is denoted Bv1, and the neighbouring block with which the candidate motion vector MVct2 is associated is denoted Bv2. This step 62 is identical to step 30 of Figure 6.
During step 64, a corrective motion vector ΔMV1 of coordinates (dx1, dy1) is determined for the candidate motion vector MVct1, and a corrective motion vector ΔMV2 of coordinates (dx2, dy2) is determined for the candidate motion vector MVct2. The motion vector ΔMV1 is determined in such a way as to minimize a distortion calculated between the previously coded and reconstructed neighbouring block associated with the candidate motion vector MVct1 and the prediction block motion-compensated by the candidate motion vector MVct1 modified by the corrective motion vector ΔMV1. Likewise, the motion vector ΔMV2 is determined in such a way as to minimize a distortion calculated between the previously coded and reconstructed neighbouring block associated with the candidate motion vector MVct2 and the prediction block motion-compensated by the candidate motion vector MVct2 modified by the corrective motion vector ΔMV2. For example, the following functions are used:
E1(dx1, dy1) = Σ(x,y)∈Bv1 | Ĩc(x, y) − Ĩref(x + vx1 + dx1, y + vy1 + dy1) |
E2(dx2, dy2) = Σ(x,y)∈Bv2 | Ĩc(x, y) − Ĩref(x + vx2 + dx2, y + vy2 + dy2) |
According to variants, other distortion measures, for example sums of squared differences, can be used for E1 and E2.
During step 64, the corrective motion vector ΔMV1 that minimizes E1(., .) and the corrective motion vector ΔMV2 that minimizes E2(., .) are therefore sought. For example, the value of E1(dx1, dy1) is calculated for each possible value of (dx1, dy1), and the value of (dx1, dy1) for which E1(dx1, dy1) is smallest is retained. Likewise, the value of E2(dx2, dy2) is calculated for each possible value of (dx2, dy2), and the value of (dx2, dy2) for which E2(dx2, dy2) is smallest is retained. This step 64 is identical to step 32 of Figure 6.
According to a variant of step 64, the corrective motion vectors ΔMV1 and ΔMV2 are those motion vectors that minimize E1(., .) and E2(., .) respectively under the additional constraint that the amplitudes of each of their coordinates dx1, dx2, dy1 and dy2 are all less than aenh, where aenh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with a precision of 1/4, then aenh = 1/8. This variant limits the computational complexity of determining the corrective motion vectors ΔMV1 and ΔMV2. In fact, according to this variant, ΔMV1 and ΔMV2 are sought only in limited intervals around the candidate motion vectors MVct1 and MVct2 respectively, this interval being defined, for each of the horizontal and vertical components, as follows: [−acod + aenh, acod − aenh]. According to a more refined but computationally more expensive form, the search can be carried out in a larger interval defined as follows: [−R, R], where R > acod − aenh represents the search range. The value R = 2 can, for example, be used. In this latter case, the coordinates of the corrective motion vectors are sought with precision aenh in the intervals [−R, R] around the candidate motion vectors MVct1 and MVct2.
During step 66, a prediction motion vector MVp is determined by merging the candidate motion vectors MVct1 and MVct2 modified by the corrective motion vectors ΔMV1 and ΔMV2 respectively. For example, MVpx = median(vx1 + dx1, vx2 + dx2, 0) and MVpy = median(vy1 + dy1, vy2 + dy2, 0). According to a variant, MVpx = min(vx1 + dx1, vx2 + dx2, 0) and MVpy = min(vy1 + dy1, vy2 + dy2, 0). According to another variant, MVpx = 0.5*(vx1 + dx1 + vx2 + dx2) and MVpy = 0.5*(vy1 + dy1 + vy2 + dy2). This step 66 is identical to step 34 of Figure 6.
During step 68, the current block Bc is reconstructed taking into account the prediction motion vector MVp. More precisely, the transformed and quantized residual data and the motion vector difference MVdiff are decoded from the stream F by entropy decoding of VLC (Variable Length Coding) type or by decoding of CABAC (Context-Adaptive Binary Arithmetic Coding) type. The transformed and quantized residual data are then dequantized and then transformed by the inverse of the transform used in step 26 of the coding method. The motion vector MVc is reconstructed for the current block Bc from the motion vector difference MVdiff and the prediction motion vector MVp determined in step 66. MVc has coordinates (MVdiffx + MVpx, MVdiffy + MVpy), where (MVdiffx, MVdiffy) are the coordinates of MVdiff and (MVpx, MVpy) are the coordinates of MVp. A prediction block is determined from the motion vector MVc in the previously reconstructed reference picture; this is the prediction block motion-compensated by the motion vector MVc. The prediction block is then merged with the block of residual data reconstructed for the current block from the stream F, for example by pixel-by-pixel addition.
According to a variant, the current block Bc is reconstructed according to the SKIP coding mode. In this case, neither residual data nor motion data are coded in the stream F for the current block Bc. The reconstructed block Bc is then the prediction block motion-compensated by the prediction motion vector MVp determined in step 66.
The invention, described in the case where a single motion vector is associated with a (mono-predicted) block, can be directly generalized to the case where two or more motion vectors are associated with a block (for example, bi-prediction). In this case, each motion vector is associated with a list of reference pictures. For example, in H.264, bi-directional type blocks use two lists L0 and L1, with one motion vector defined per list. In the bi-directional case, steps 20 to 24 or 30 to 34 are applied independently for each list. During each step 20 (respectively 30), one or several candidate motion vectors are determined from among the motion vectors that belong to the same list as the current list and that are associated with blocks spatially neighbouring the current block Bc. In step 26 (respectively 36), the current block Bc is coded according to the motion vectors from step 24 (respectively 34) of each list.
The methods for coding and reconstructing motion vectors according to the invention have the advantage of improving the prediction of the motion vectors by using corrective motion vectors. They therefore have the advantage of improving coding efficiency, in terms of quality and/or coding cost. Notably, the methods according to the invention can favour the selection of the "skip" coding mode. Moreover, in the case of temporal prediction, they can also reduce the coding cost of the motion data or, at constant cost, improve the precision of the motion vectors and hence the quality of the motion compensation. In fact, the prediction motion vector MVp has precision aenh and the coded vector difference MVdiff has precision acod; therefore, the reconstructed vector MVc, obtained by adding the prediction motion vector and the coded vector difference, has precision aenh. The coding and reconstruction methods according to the invention thus make it possible to code the motion vectors with a certain precision acod while carrying out the motion compensation with a finer precision equal to aenh.
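The precision argument above can be checked on a small numeric example (the component values are illustrative):

```python
# Numeric illustration of the precision argument: MVp carries precision
# a_enh = 1/8 while MVdiff is coded with precision a_cod = 1/4; their sum
# MVc therefore lies on the finer 1/8 grid.
a_cod, a_enh = 0.25, 0.125
mvp = 13 * a_enh      # 1.625: an eighth-pel prediction vector component
mvdiff = 3 * a_cod    # 0.75: a quarter-pel coded difference component
mvc = mvp + mvdiff    # reconstructed component, on the eighth-pel grid
```
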
The invention also relates to a coding device 12 described with reference to Figure 10 and to a decoding device 13 described with reference to Figure 11. In Figures 10 and 11, the modules shown are functional units that may or may not correspond to physically distinguishable units. For example, some of these modules, or some of them grouped together, may be brought together in a single component or may constitute functions of the same software. Conversely, some modules may be composed of separate physical entities.
With reference to Figure 10, the encoding device 12 receives on an input images belonging to an image sequence. Each image is divided into blocks of pixels, each of which is associated with at least one image data item. The encoding device 12 notably implements coding with temporal prediction. Only the modules of the encoding device 12 relating to coding by temporal prediction, or INTER coding, are shown in Figure 10. Other modules, not shown and known to those skilled in the art of video encoders, implement INTRA coding with or without spatial prediction. The encoding device 12 notably comprises a calculation module 1200 able to extract, for example by pixel-by-pixel subtraction, a prediction block Bp from a current block Bc to generate a residual image data block, or residual block, Bres. It further comprises a module 1202 able to transform the residual block Bres and then quantize it into quantized data. The transform T is, for example, a discrete cosine transform (or DCT). The encoding device 12 further comprises an entropy coding module 1204 able to encode the quantized data into a coded data stream F. It further comprises a module 1206 performing the inverse operations of the module 1202. The module 1206 carries out an inverse quantization Q⁻¹ followed by an inverse transform T⁻¹. The module 1206 is connected to a calculation module 1208 able to merge, for example by pixel-by-pixel addition, the data block from the module 1206 and the prediction block Bp to generate a reconstructed image data block that is stored in a memory 1210. The encoding device 12 further comprises a motion estimation module 1212 able to estimate at least one motion vector MVc between the block Bc and a block of a reference image Iref stored in the memory 1210, this image having previously been coded and then reconstructed. According to a variant, the motion estimation can be carried out between the current block Bc and the original reference image Ic, in which case the memory 1210 is not connected to the motion estimation module 1212. According to a method well known to those skilled in the art, the motion estimation module searches the reference image Iref for a motion data item, notably a motion vector, so as to minimize an error calculated between the current block Bc and a block in the reference image Iref identified by means of that motion data item.
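The residual path described above (subtraction in module 1200, quantization in module 1202, inverse quantization in module 1206, addition in module 1208) can be sketched minimally as follows. This is a simplified illustration, not the embodiment itself: the transform T is omitted and a plain scalar quantizer with an illustrative step is assumed.

```python
# Minimal sketch of the encoder's residual loop (module numbers refer to
# Figure 10): the prediction Bp is subtracted from the current block Bc
# (1200), the residual is quantized (1202; transform omitted for brevity),
# de-quantized (1206), and added back to Bp (1208) to give the reconstructed
# block stored in memory (1210).

QP = 4  # illustrative quantization step

def encode_block(bc, bp):
    """Return the quantized residual between current block bc and prediction bp."""
    return [round((c - p) / QP) for c, p in zip(bc, bp)]

def reconstruct_block(bp, qres):
    """Inverse quantization followed by pixel-by-pixel addition."""
    return [p + q * QP for p, q in zip(bp, qres)]

bc = [126, 121, 135, 125]      # current block Bc (one row, illustrative)
bp = [118, 125, 127, 133]      # prediction block Bp
qres = encode_block(bc, bp)    # quantized residual sent to entropy coding
rec = reconstruct_block(bp, qres)  # block stored in the memory
```

The reconstruction is exact here only because the residuals happen to be multiples of QP; in general the quantization of module 1202 is lossy, which is precisely why the encoder reconstructs and stores the same blocks the decoder will see.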
The motion data thus determined are transmitted by the motion estimation module 1212 to a decision module 1214 able to select a coding mode for the block Bc from a predefined set of coding modes. The coding mode retained is, for example, the one that minimizes a bitrate-distortion type criterion. However, the invention is not restricted to this selection method, and the mode retained can be selected according to another criterion, for example an a priori type criterion. The coding mode selected by the decision module 1214, as well as the motion data, for example the motion data item or items in the case of the temporal prediction mode or INTER mode, are transmitted to a prediction module 1216. The prediction module 1216 is able to implement steps 20 to 24 or 30 to 34 of the coding method. Steps 26 and 36 are implemented by the set of modules of the encoding device 12. Moreover, the coding mode selected and, where applicable, the motion data item or items are transmitted to the entropy coding module 1204 to be coded in the stream F. The prediction module 1216 determines the prediction block Bp from the coding mode determined by the decision module 1214 and, possibly, from the motion data determined by the motion estimation module 1212 (inter-image prediction).
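The bitrate-distortion criterion mentioned above is commonly expressed as a Lagrangian cost J = D + λ·R minimized over the candidate modes. As a hedged sketch of such a decision, with a purely illustrative candidate list, multiplier, and cost values:

```python
# Sketch of a bitrate-distortion mode decision such as the one made by the
# decision module 1214: among the candidate coding modes, keep the one
# minimizing J = D + lambda * R. All numbers below are illustrative.

LAMBDA = 0.8  # Lagrange multiplier trading rate against distortion

def select_mode(candidates):
    """candidates: list of (mode_name, distortion_D, rate_R_in_bits)."""
    return min(candidates, key=lambda m: m[1] + LAMBDA * m[2])[0]

candidates = [
    ("INTRA",     900.0, 120),   # high distortion, high rate
    ("INTER_1MV", 400.0, 150),   # temporal prediction with one vector
    ("SKIP",      650.0,  10),   # almost free, but higher distortion
]
best = select_mode(candidates)   # -> "INTER_1MV" with these values
```

An a priori criterion, by contrast, would pick a mode from the candidate properties alone, without evaluating the actual coding cost of each one.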
With reference to Figure 11, the decoding device 13 receives on an input a coded data stream F representative of an image sequence. The stream F is, for example, transmitted by an encoding device 12 via a channel. The decoding device 13 comprises an entropy decoding module 1300 able to generate decoded data, for example coding modes and decoded data relating to the content of the images.
The decoding device 13 also comprises a motion data reconstruction module. According to a first embodiment, the motion data reconstruction module is the entropy decoding module 1300, which decodes the part of the stream F representative of said motion data. According to a variant, not shown in Figure 11, the motion data reconstruction module is a motion estimation module. This solution for reconstructing the motion data via the decoding device 13 is known as "template matching".
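The "template matching" variant mentioned above lets the decoder re-derive motion data itself, by matching already-reconstructed pixels neighbouring the current block against the reference image, so that no motion vector needs to be transmitted. A minimal sketch, using 1-D signals and the SAD criterion purely for illustration:

```python
# Sketch of template matching at the decoder: slide a template of
# already-reconstructed neighbouring pixels over the reference image and
# keep the offset with the smallest SAD (sum of absolute differences).

def sad(a, b):
    """Sum of absolute differences between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def template_match(reference, template, search_range):
    """Return the offset in `reference` minimizing the SAD with `template`."""
    best_off, best_cost = 0, float("inf")
    for off in range(search_range):
        cost = sad(reference[off:off + len(template)], template)
        if cost < best_cost:
            best_off, best_cost = off, cost
    return best_off

reference = [10, 10, 50, 60, 70, 10, 10, 10]   # reference image row
template  = [50, 60, 70]   # reconstructed pixels next to the current block
offset = template_match(reference, template, 5)   # -> 2
```

Since the encoder performs the same search on the same reconstructed data, both sides obtain the same motion data without it being coded in the stream F.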
The decoded data relating to the content of the images are then transmitted to a module 1302 able to carry out an inverse quantization followed by an inverse transform. The module 1302 is identical to the module 1206 of the encoding device 12 that generated the coded stream F. The module 1302 is connected to a calculation module 1304 able to merge, for example by pixel-by-pixel addition, the block from the module 1302 and a prediction block Bp to generate a reconstructed current block Bc that is stored in a memory 1306. The decoding device 13 also comprises a prediction module 1308. The prediction module 1308 determines the prediction block Bp from the coding mode decoded for the current block by the entropy decoding module 1300 and, possibly, from the motion data determined by the motion data reconstruction module. The prediction module 1308 is able to implement steps 52 to 56 or 62 to 66 of the decoding method according to the invention. Steps 58 and 68 are implemented by the set of modules of the decoding device 13.
Obviously, the invention is not limited to the embodiments described above.
In particular, those skilled in the art can apply any variant to the embodiments described and combine them so as to benefit from their various advantages. Notably, the invention is in no way limited to candidate motion vectors that are adjacent to the current block. Moreover, the invention, described in the case of a single motion vector associated with a block (unidirectional prediction), can be generalized directly to the case where two or more motion vectors are associated with a block (for example, bidirectional prediction).