CN102668562A - Motion vector prediction and refinement - Google Patents

Motion vector prediction and refinement

Info

Publication number
CN102668562A
Authority
CN
China
Prior art keywords
motion vector
current block
candidate
correct
candidate motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010800580628A
Other languages
Chinese (zh)
Other versions
CN102668562B (en)
Inventor
E.弗朗索瓦
D.索罗
J.维隆
A.马丁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
International Digital Madison Patent Holding SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS
Publication of CN102668562A
Application granted
Publication of CN102668562B
Legal status: Active
Anticipated expiration

Classifications

    • H04N Pictorial communication, e.g. television (section H Electricity; class H04 Electric communication technique)
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/52 Processing of motion vectors by predictive encoding
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N19/513 Processing of motion vectors
    • H04N19/523 Motion estimation or motion compensation with sub-pixel accuracy
    • H04N19/567 Motion estimation based on rate distortion criteria
    • H04N19/593 Predictive coding involving spatial prediction techniques
    • H04N19/61 Transform coding in combination with predictive coding

Abstract

The invention relates to a method for coding a current block of a sequence of images comprising the following steps: - determining (20, 30) at least one candidate motion vector associated with a neighbouring block of said current block, - determining (24, 34) a prediction motion vector from the candidate motion vector, and - coding (26, 36) the current block from the prediction motion vector. Advantageously, the prediction motion vector is determined according to the following steps: - determining (22, 32), for the candidate motion vector, a corrective motion vector so as to minimise a distortion calculated between the neighbouring block successively coded and reconstructed and a prediction block motion compensated by the at least one candidate motion vector modified by the corrective motion vector, and - determining (24, 34) the prediction motion vector from the candidate motion vector modified by the corrective motion vector.

Description

Motion-vector prediction and refinement
Technical field
The present invention relates to the general field of image coding. More particularly, it relates to a method for coding a block of pixels of a sequence of images and to a method for reconstructing such a block.
Background art
Methods for efficiently coding a sequence of images using temporal prediction, or inter-picture coding (INTER mode), as shown in Figure 1, are known in the prior art. A current block Bc of an image Ic is coded by temporal prediction from a block Bp, possibly interpolated, of a reference image Iref previously coded and reconstructed, with which it is particularly well correlated. The block Bp of the reference image is called the prediction block. It is identified by means of a motion vector MVc associated with the current block Bc. The motion vector MVc is determined, for example, by a motion estimation method such as block matching. According to the known method, the motion vector MVc and residual data are coded in a stream F for the current block Bc coded by temporal prediction. The residual data are obtained by subtracting the prediction block Bp from the current block Bc. The current block Bc can be reconstructed using these residual data, the reconstructed reference image Iref and the motion vector MVc associated with the current block Bc.
It is also known in the prior art to code the motion vector MVc associated with the current block Bc by prediction from the motion vectors associated with the blocks A, B and C adjacent to the current block Bc. With reference to Figure 2, during a step 10, one or more candidate motion vectors MVct are determined from the motion vectors associated with the adjacent blocks A, B and C. Generally, three candidate motion vectors are used. Exceptions exist when one of the blocks A, B and/or C is located outside the image or is not coded by temporal prediction (for example, coded in INTRA mode); in the latter case, no motion vector is associated with the block concerned. During a step 12, a prediction motion vector MVp is determined from the candidate motion vectors MVct. It is known for the prediction motion vector MVp to have as coordinates the median value of the abscissae and the median value of the ordinates of the candidate motion vectors. During a step 14, the current block Bc is coded taking account of the prediction motion vector MVp. As is well known, residual data obtained by subtracting the prediction block Bp from the current block Bc, and a motion vector difference MVdiff, are coded for the block Bc. The motion vector difference MVdiff, calculated from MVc and MVp, is coded in the stream F. MVdiff has coordinates (MVx - MVpx, MVy - MVpy), where (MVx, MVy) are the coordinates of MVc and (MVpx, MVpy) are the coordinates of MVp. The residual data are generally transformed and then quantized. The transformed and quantized residual data and the motion vector difference MVdiff are then coded into coded data by entropy coding of the VLC (Variable Length Coding) type.
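For illustration only, a minimal sketch of this prior-art median predictor and of the coded difference is given below; the helper names and the example vector values are assumptions of this sketch and are not taken from the patent.

```python
def median_predictor(candidates):
    """Component-wise median of the candidate motion vectors of blocks A, B, C (step 12)."""
    xs = sorted(v[0] for v in candidates)
    ys = sorted(v[1] for v in candidates)
    mid = len(candidates) // 2
    return (xs[mid], ys[mid])

def motion_vector_difference(mv_c, mv_p):
    """MVdiff = MVc - MVp, the quantity entropy coded in the stream F (step 14)."""
    return (mv_c[0] - mv_p[0], mv_c[1] - mv_p[1])

# Arbitrary example: candidates taken from neighbouring blocks A, B and C
mv_a, mv_b, mv_c_nb = (1.0, 0.25), (0.75, 0.5), (1.25, 0.25)
mv_p = median_predictor([mv_a, mv_b, mv_c_nb])         # (1.0, 0.25)
mv_diff = motion_vector_difference((1.25, 0.5), mv_p)  # (0.25, 0.25)
```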
The method of coding the motion vector MVc described with reference to Figures 1 and 2 is not always efficient in terms of coding cost.
Summary of the invention
The purpose of the invention is to overcome at least one of the disadvantages of the prior art. To this end, the invention relates to a method for coding a current block of a sequence of images, comprising the following steps:
- determining at least one candidate motion vector associated with a neighbouring block of the current block;
- determining a prediction motion vector from the at least one candidate motion vector; and
- coding the current block from the prediction motion vector.
Advantageously, the prediction motion vector is determined according to the following steps:
- determining, for the at least one candidate motion vector, a corrective motion vector so as to minimise a distortion calculated between the neighbouring block, successively coded and reconstructed, and a prediction block motion compensated by the at least one candidate motion vector modified by the corrective motion vector; and
- determining the prediction motion vector from the at least one candidate motion vector modified by the corrective motion vector.
According to a particular characteristic of the invention, the coding method comprises determining a motion vector difference calculated from a current motion vector associated with the current block Bc and the prediction motion vector, and the step of coding the current block comprises coding the motion vector difference.
According to a particular aspect of the invention, the amplitude of each coordinate of the corrective motion vector is limited by a first threshold.
According to another aspect of the invention, this first threshold is smaller than a second threshold representing the coding precision of the motion vectors.
According to a particular aspect of the invention, the first threshold equals 1/8 and the second threshold equals 1/4.
According to another particular aspect of the invention, the first threshold equals 1/4 and the second threshold equals 1/2.
According to another particular aspect of the invention, each coordinate of the corrective motion vector is determined, within a defined interval around the candidate motion vector, with a precision greater than the coding precision of the motion vectors.
According to a first embodiment of the invention, the step of determining at least one candidate motion vector comprises the following steps:
- determining at least two candidate motion vectors;
- merging the at least two candidate motion vectors into a merged motion vector; and
- selecting, among the at least two candidate motion vectors, the motion vector closest to the merged motion vector.
According to a second embodiment of the invention, at least two candidate motion vectors are determined, the step of determining a corrective motion vector comprises determining a corrective motion vector for each of the at least two candidate motion vectors, and the step of determining the prediction motion vector comprises merging the candidate motion vectors, each modified by its respective corrective motion vector, into a single prediction motion vector.
The invention also relates to a method for reconstructing a current block, comprising the following steps:
- determining at least one candidate motion vector associated with a neighbouring block of the current block;
- determining a prediction motion vector from the at least one candidate motion vector; and
- reconstructing the current block from the prediction motion vector.
The prediction motion vector is determined according to the following steps:
- determining a corrective motion vector so as to minimise a distortion calculated between the reconstructed neighbouring block and a prediction block motion compensated by the at least one candidate motion vector modified by the corrective motion vector; and
- determining the prediction motion vector from the candidate motion vector modified by the corrective motion vector.
According to a particular aspect of the invention, the reconstruction method also comprises the following steps:
- decoding a motion vector difference for the current block;
- determining, for the current block, at least one current motion vector from the motion vector difference and the prediction motion vector; and
- reconstructing the current block from the current motion vector.
The coding and reconstruction methods according to the invention improve coding efficiency, i.e. they reduce the coding cost of the image sequence at constant quality, or improve the quality of the reconstructed image sequence for a given coding cost.
Description of drawings
The invention will be better understood and illustrated by means of embodiments and advantageous implementations, which are in no way limiting, with reference to the appended figures, in which:
Figure 1 shows a coding method using temporal prediction according to the prior art;
Figure 2 shows a flowchart of a method for coding a current block according to the prior art;
Figure 3 shows a flowchart of a method for coding a current block according to a first embodiment of the invention;
Figure 4 illustrates a step of the method for coding a current block according to the first embodiment of the invention;
Figure 5 shows a flowchart of a method for coding a current block according to a variant of the first embodiment of the invention;
Figure 6 shows a flowchart of a method for coding a current block according to a second embodiment of the invention;
Figure 7 shows a flowchart of a method for reconstructing a current block according to the first embodiment of the invention;
Figure 8 shows a flowchart of a method for reconstructing a current block according to a variant of the first embodiment of the invention;
Figure 9 shows a flowchart of a method for reconstructing a current block according to the second embodiment of the invention;
Figure 10 shows a coding device according to the invention; and
Figure 11 shows a decoding device according to the invention.
Embodiment
An image sequence is a series of several images. Each image comprises pixels, or picture points, with each of which at least one item of image data is associated. An item of image data is, for example, an item of luminance data or an item of chrominance data.
The term "motion data" is to be understood in the broadest sense. It comprises the motion vectors and possibly the reference image indices enabling a reference image to be identified in the image sequence. It can also comprise an item of information indicating the interpolation type used to determine the prediction block. Indeed, if the motion vector MVc associated with a block Bc does not have integer coordinates, the image data must be interpolated in the reference image Iref to determine the prediction block Bp. The motion data associated with a block are generally calculated by a motion estimation method, for example by block matching. However, the invention is in no way limited by the method by which a motion vector is associated with a block.
The term "residual data" designates data obtained after subtraction of other data. This subtraction is generally a pixel-by-pixel subtraction of prediction data from source data. However, the subtraction is more general and notably comprises a weighted subtraction. The term "residual data" is synonymous with the term "residual". A residual block is a block of pixels with which residual data are associated.
The term "transformed residual data" designates residual data to which a transform has been applied. A DCT (Discrete Cosine Transform), as described in chapter 3.4.2.2 of the book "H.264 and MPEG-4 Video Compression" by I. E. Richardson (J. Wiley & Sons, September 2003), is an example of such a transform. The wavelet transform and the Hadamard transform, described in chapter 3.4.2.3 of the same book, are other examples. Such transforms "transform" a block of image data, for example a block of residual luminance and/or chrominance data, into a "block of transformed data", also called a "block of frequency data" or a "block of coefficients".
The term "prediction data" designates data used to predict other data. A prediction block is a block of pixels with which prediction data are associated. A prediction block is obtained from one or more blocks of the same image as the predicted block (spatial prediction or intra-image prediction), or from one block (mono-directional prediction) or several blocks (bi-directional prediction) of an image different from that of the predicted block (temporal prediction or inter-image prediction).
The term "reconstructed data" designates data obtained after merging residual data with prediction data. The merging is generally a pixel-by-pixel addition of prediction data and residual data. However, the merging is more general and notably comprises a weighted addition. A reconstructed block is a block of pixels with which reconstructed data are associated.
With reference to Figure 3, the invention relates to a method for coding a current block of a sequence of images.
During a step 20, a candidate motion vector MVct with coordinates (vx, vy) is determined among the motion vectors associated with blocks spatially neighbouring the current block Bc. The block Bc belongs to an image Ic. For example, as shown in Figure 1, the candidate motion vector MVct is determined as one of the motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vector MVct is determined as the motion vector of a block spatially close to the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc with which the retained candidate motion vector MVct is associated is noted Bv. The motion vector retained by default is, for example, the one associated with the adjacent block located to the left of the current block Bc. According to a variant shown in Figure 4, the motion vector retained by default is, for example, the one associated with the adjacent block located above the current block Bc.
During a step 22, a corrective motion vector ΔMV with coordinates (dx, dy) is determined. More precisely, the corrective motion vector ΔMV is determined in such a way as to minimise the distortion calculated between the neighbouring block, successively coded and reconstructed and noted $B_v^{rec}$, and the prediction block motion compensated by the candidate motion vector MVct modified by the corrective motion vector ΔMV. The prediction block belongs to a reference image Iref. For example, the following function is used:

$E(dx,dy) = \sum_{(x,y) \in B_v^{rec}} \left( I_c^{rec}(x,y) - MC_{MV_{ct}+\Delta MV}(x,y) \right)^2 = \sum_{(x,y) \in B_v^{rec}} \left( I_c^{rec}(x,y) - I_{ref}^{rec}(x - vx - dx,\ y - vy - dy) \right)^2$

where:
$MC_{MV_{ct}+\Delta MV}$ is the prediction block motion compensated by MVct + ΔMV;
$I_c^{rec}(x,y)$ is the value of the coded and reconstructed image data item of the pixel (x, y) in the image Ic; and
$I_{ref}^{rec}(x,y)$ is the value of the coded and reconstructed image data item of the pixel (x, y) in the reference image.
According to a variant, $E(dx,dy) = \sum_{(x,y) \in B_v^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct}+\Delta MV}(x,y) \right|$. According to another variant, $E(dx,dy) = \max_{(x,y) \in B_v^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct}+\Delta MV}(x,y) \right|$.
During step 22, the motion vector ΔMV that minimises E(·,·) is therefore sought. For example, for each possible value of (dx, dy), the value of E(dx, dy) is calculated, and the value of (dx, dy) that gives the smallest value of E(dx, dy) is retained.
According to a variant of step 22, the corrective motion vector ΔMV is the motion vector that minimises E(·,·) under the additional constraint that the amplitude of each of its coordinates dx and dy is smaller than a first threshold a_enh, where a_enh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with 1/4-pixel precision, then a_enh = 1/8. This variant limits the computational complexity of determining the corrective motion vector ΔMV. Indeed, according to this variant, ΔMV is only sought in a limited interval around the candidate motion vector MVct, this interval being defined, for each of the horizontal and vertical components, as [-a_cod + a_enh, a_cod - a_enh]. In a more exhaustive, and therefore computationally more expensive, version, the search can be carried out in a larger interval [-R, R], where R > a_enh represents the search range. For example, the value R = 2.25 can be used. In this latter case, the coordinates of the corrective motion vector ΔMV are sought with precision a_enh in the interval [-R, R] around the candidate motion vector MVct.
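A minimal sketch of this constrained refinement search is given below, assuming the SSD distortion of the formula above, a plain grid search with step a_enh, and integer rounding in place of real sub-pel interpolation; the array and function names are assumptions of this illustration, not part of the patent.

```python
import numpy as np

def refine_candidate(rec_cur, rec_ref, bv_rect, mv_ct, a_enh, search_range):
    """Grid search for the corrective vector dMV = (dx, dy) minimising E(dx, dy)
    between the reconstructed neighbouring block Bv^rec in rec_cur and the block of
    the reconstructed reference image rec_ref displaced by MVct + dMV.
    Sketch only: integer rounding stands in for sub-pel motion compensation, and the
    displaced block is assumed to stay inside rec_ref."""
    x0, y0, w, h = bv_rect                         # top-left corner and size of Bv
    block = rec_cur[y0:y0 + h, x0:x0 + w].astype(np.int64)
    vx, vy = mv_ct
    best, best_cost = (0.0, 0.0), None
    steps = np.arange(-search_range, search_range + 1e-9, a_enh)
    for dx in steps:
        for dy in steps:
            # same displacement convention as the formula: I_ref(x - vx - dx, y - vy - dy)
            rx = int(round(x0 - vx - dx))
            ry = int(round(y0 - vy - dy))
            pred = rec_ref[ry:ry + h, rx:rx + w].astype(np.int64)
            cost = int(np.sum((block - pred) ** 2))   # E(dx, dy)
            if best_cost is None or cost < best_cost:
                best, best_cost = (float(dx), float(dy)), cost
    return best                                     # corrective motion vector dMV
```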
During a step 24, the prediction motion vector MVp is determined from the candidate motion vector MVct modified by the corrective motion vector ΔMV: MVp = MVct + ΔMV.
During a step 26, the current block Bc is coded taking account of the prediction motion vector MVp. As is well known, residual data obtained by subtracting the prediction block Bp from the current block Bc, and a motion vector difference MVdiff, are coded for the block Bc. The motion vector difference MVdiff, calculated from MVc and MVp, is coded in the stream F. MVdiff has coordinates (MVx - MVpx, MVy - MVpy), where (MVx, MVy) are the coordinates of MVc and (MVpx, MVpy) are the coordinates of MVp. The residual data are generally transformed and then quantized. The transformed and quantized residual data and the motion vector difference MVdiff are coded into coded data by entropy coding of the VLC (Variable Length Coding) type or by coding of the CABAC (Context-Adaptive Binary Arithmetic Coding) type. The maximum coding precision allowed for MVdiff is a_cod. Examples of entropy coding methods are described in chapter 6.5.4 of the book by I. E. Richardson or in section 9.3 of the ISO/IEC 14496-10 document "Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding". According to another variant, a CAVLC (Context-based Adaptive Variable Length Coding) type method can be used, as described in section 9.2 of the same ISO/IEC 14496-10 document and in chapter 6.4.13.2 of the book by I. E. Richardson.
According to a variant, the current block Bc is coded according to the SKIP coding mode. In this case, no residual data and no motion data are coded in the stream F for the current block Bc. Indeed, the current block is coded in "skip" mode when the residual block, obtained by extracting from the current block the prediction block determined from the prediction motion vector MVp determined in step 24, has all its coefficients equal to zero.
According to a variant of the first embodiment shown in Figure 5, the step 20 of determining the candidate motion vector MVct comprises a step 200 of determining at least two candidate motion vectors, a step 202 of merging the candidate motion vectors determined in step 200 into a merged motion vector, and a step 204 of selecting the candidate motion vector MVct, among the motion vectors determined in step 200, according to the merged motion vector. The steps of Figure 5 that are identical to those of Figure 3 are identified by the same references and are not described further.
During step 200, at least two candidate motion vectors MVct1 and MVct2 are determined among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, they are determined as motion vectors of blocks spatially close to the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc associated with the candidate motion vector MVct1 is noted Bv1, and the neighbouring block associated with the candidate motion vector MVct2 is noted Bv2. The candidate motion vector MVct1 has coordinates (vx1, vy1) and the candidate motion vector MVct2 has coordinates (vx2, vy2).
During step 202, the candidate motion vectors determined in step 200 are merged into a single motion vector MVfus with coordinates (MVfus(x), MVfus(y)). For example, MVfus(x) = median(vx1, vx2, 0) and MVfus(y) = median(vy1, vy2, 0). According to a variant, MVfus(x) = 0.5*(vx1 + vx2) and MVfus(y) = 0.5*(vy1 + vy2).
During step 204, the candidate motion vector determined in step 200 that is closest to MVfus according to a given criterion is selected as the candidate motion vector MVct. For example, if ‖MVct2 - MVfus‖ < ‖MVct1 - MVfus‖, then MVct = MVct2, otherwise MVct = MVct1. This criterion is, for example, the L2 norm. The criterion can also be an absolute value.
This variant, described for two candidate motion vectors MVct1 and MVct2, can be applied in the same way to any number of candidate motion vectors.
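A short sketch of steps 200 to 204 of this variant follows; it also works for more than two candidates, as just noted. The helper names, the inclusion of 0 in the median for the general case, and the example values are assumptions of this illustration.

```python
def merge_candidates(candidates):
    """Step 202 (sketch): component-wise median of the candidates, including 0 as in
    the two-candidate example MVfus(x) = median(vx1, vx2, 0)."""
    xs = sorted([v[0] for v in candidates] + [0.0])
    ys = sorted([v[1] for v in candidates] + [0.0])
    return (xs[len(xs) // 2], ys[len(ys) // 2])

def select_candidate(candidates, mv_fus):
    """Step 204 (sketch): keep the candidate closest to the merged vector (L2 criterion)."""
    return min(candidates,
               key=lambda v: (v[0] - mv_fus[0]) ** 2 + (v[1] - mv_fus[1]) ** 2)

candidates = [(1.0, 0.25), (0.5, 0.5)]
mv_fus = merge_candidates(candidates)         # (0.5, 0.25)
mv_ct = select_candidate(candidates, mv_fus)  # (0.5, 0.5) is retained as MVct
```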
A second embodiment is shown in Figure 6.
During a step 30, at least two candidate motion vectors MVct1 and MVct2 are determined among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, they are determined as motion vectors of blocks spatially close to the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc associated with the candidate motion vector MVct1 is noted Bv1, and the neighbouring block associated with the candidate motion vector MVct2 is noted Bv2.
During a step 32, a corrective motion vector ΔMV1 with coordinates (dx1, dy1) is determined for the candidate motion vector MVct1, and a corrective motion vector ΔMV2 with coordinates (dx2, dy2) is determined for the candidate motion vector MVct2. The motion vector ΔMV1 is determined in such a way as to minimise the distortion calculated between the successively coded and reconstructed neighbouring block associated with the candidate motion vector MVct1 and the prediction block motion compensated by the candidate motion vector MVct1 modified by the corrective motion vector ΔMV1. Likewise, the motion vector ΔMV2 is determined in such a way as to minimise the distortion calculated between the successively coded and reconstructed neighbouring block associated with the candidate motion vector MVct2 and the prediction block motion compensated by the candidate motion vector MVct2 modified by the corrective motion vector ΔMV2. For example, the following functions are used:

$E_1(dx1,dy1) = \sum_{(x,y) \in B_{v1}^{rec}} \left( I_c^{rec}(x,y) - MC_{MV_{ct1}+\Delta MV1}(x,y) \right)^2 = \sum_{(x,y) \in B_{v1}^{rec}} \left( I_c^{rec}(x,y) - I_{ref}^{rec}(x - vx1 - dx1,\ y - vy1 - dy1) \right)^2$

$E_2(dx2,dy2) = \sum_{(x,y) \in B_{v2}^{rec}} \left( I_c^{rec}(x,y) - MC_{MV_{ct2}+\Delta MV2}(x,y) \right)^2 = \sum_{(x,y) \in B_{v2}^{rec}} \left( I_c^{rec}(x,y) - I_{ref}^{rec}(x - vx2 - dx2,\ y - vy2 - dy2) \right)^2$

According to a variant, $E_1(dx,dy) = \sum_{(x,y) \in B_{v1}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct1}+\Delta MV1}(x,y) \right|$ and $E_2(dx,dy) = \sum_{(x,y) \in B_{v2}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct2}+\Delta MV2}(x,y) \right|$. According to another variant, $E_1(dx,dy) = \max_{(x,y) \in B_{v1}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct1}+\Delta MV1}(x,y) \right|$ and $E_2(dx,dy) = \max_{(x,y) \in B_{v2}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct2}+\Delta MV2}(x,y) \right|$.

During step 32, the corrective motion vector ΔMV1 that minimises E1(·,·) and the corrective motion vector ΔMV2 that minimises E2(·,·) are therefore sought. For example, for each possible value of (dx1, dy1), the value of E1(dx1, dy1) is calculated, and the value of (dx1, dy1) that gives the smallest value of E1(dx1, dy1) is retained. Likewise, for each possible value of (dx2, dy2), the value of E2(dx2, dy2) is calculated, and the value of (dx2, dy2) that gives the smallest value of E2(dx2, dy2) is retained.
According to a variant of step 32, the corrective motion vectors ΔMV1 and ΔMV2 are those motion vectors that minimise E1(·,·) and E2(·,·) respectively, under the additional constraint that the amplitude of each of their coordinates dx1, dx2, dy1 and dy2 is smaller than a_enh, where a_enh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with 1/4-pixel precision, then a_enh = 1/8. This variant limits the computational complexity of determining the corrective motion vectors ΔMV1 and ΔMV2. Indeed, according to this variant, ΔMV1 and ΔMV2 are only sought in limited intervals around the candidate motion vectors MVct1 and MVct2 respectively, this interval being defined, for each of the horizontal and vertical components, as [-a_cod + a_enh, a_cod - a_enh]. In a more exhaustive, and therefore computationally more expensive, version, the search can be carried out in a larger interval [-R, R], where R > a_enh represents the search range. For example, the value R = 2 can be used. In this latter case, the coordinates of the corrective motion vectors are sought with precision a_enh in the interval [-R, R] around the candidate motion vectors.
During a step 34, the prediction motion vector MVp is determined by merging the candidate motion vectors MVct1 and MVct2 modified by the corrective motion vectors ΔMV1 and ΔMV2 respectively. For example, MVpx = median(vx1+dx1, vx2+dx2, 0) and MVpy = median(vy1+dy1, vy2+dy2, 0). According to a variant, MVpx = min(vx1+dx1, vx2+dx2, 0) and MVpy = min(vy1+dy1, vy2+dy2, 0). According to another variant, MVpx = 0.5*(vx1+dx1+vx2+dx2) and MVpy = 0.5*(vy1+dy1+vy2+dy2).
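The merge of step 34 can be sketched as follows; the corrective vectors ΔMV1 and ΔMV2 are assumed to have been found beforehand by a per-candidate search like the one sketched after step 22, and the names and example values are assumptions of this illustration.

```python
def median3(a, b, c):
    """Median of three values, as used in MVpx = median(vx1+dx1, vx2+dx2, 0)."""
    return sorted([a, b, c])[1]

def merge_corrected_candidates(mv_ct1, d_mv1, mv_ct2, d_mv2):
    """Step 34 (sketch): apply each corrective vector to its own candidate, then take
    the component-wise median with 0 to obtain the prediction motion vector MVp."""
    x1, y1 = mv_ct1[0] + d_mv1[0], mv_ct1[1] + d_mv1[1]
    x2, y2 = mv_ct2[0] + d_mv2[0], mv_ct2[1] + d_mv2[1]
    return (median3(x1, x2, 0.0), median3(y1, y2, 0.0))

mv_p = merge_corrected_candidates((1.0, 0.25), (0.125, 0.0),
                                  (0.5, 0.5), (-0.125, 0.125))
# corrected candidates: (1.125, 0.25) and (0.375, 0.625); MVp = (0.375, 0.25)
```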
During a step 36, the current block Bc is coded taking account of the prediction motion vector MVp. As is well known, residual data obtained by subtracting the prediction block Bp from the current block Bc, and a motion vector difference MVdiff, are coded for the block Bc. The motion vector difference MVdiff, calculated from MVc and MVp, is coded in the stream F. MVdiff has coordinates (MVx - MVpx, MVy - MVpy), where (MVx, MVy) are the coordinates of MVc and (MVpx, MVpy) are the coordinates of MVp. The residual data are generally transformed and then quantized. The transformed and quantized residual data and the motion vector difference MVdiff are coded into coded data by entropy coding of the VLC type or by coding of the CABAC type. The maximum coding precision allowed for MVdiff is a_cod. Examples of entropy coding methods are described in chapter 6.5.4 of the book by I. E. Richardson or in section 9.3 of the ISO/IEC 14496-10 document "Information technology - Coding of audio-visual objects - Part 10: Advanced Video Coding". According to another variant, a CAVLC type method can be used, as described in section 9.2 of the same ISO/IEC 14496-10 document and in chapter 6.4.13.2 of the book by I. E. Richardson.
According to a variant, the current block Bc is coded according to the SKIP coding mode. In this case, no residual data and no motion data are coded in the stream F for the current block Bc. Indeed, the current block is coded in "skip" mode when the residual block, obtained by extracting from the current block the prediction block determined from the prediction motion vector MVp determined in step 34, has all its coefficients equal to zero.
The embodiment described above with reference to Figure 6 for two candidate motion vectors MVct1 and MVct2 can also be applied to any number of candidate motion vectors. In this case, during step 32, a corrective motion vector is determined for each candidate motion vector, and during step 34, the prediction motion vector MVp is determined from the candidate motion vectors modified by their respective corrective motion vectors.
With reference to Figure 7, the invention also relates to a method for reconstructing a current block of a sequence of images.
During a step 52, a candidate motion vector MVct with coordinates (vx, vy) is determined among the motion vectors associated with the spatially neighbouring blocks of the current block Bc. For example, as shown in Figure 1, the candidate motion vector MVct is determined as one of the motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, the candidate motion vector MVct is determined as the motion vector of a block spatially close to the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc with which the retained candidate motion vector MVct is associated is noted Bv. The motion vector retained by default is, for example, the one associated with the adjacent block located to the left of the current block Bc. According to a variant shown in Figure 4, the motion vector retained by default is, for example, the one associated with the adjacent block located above the current block Bc. Step 52 is identical to step 20 of the coding method.
During a step 54, a corrective motion vector ΔMV with coordinates (dx, dy) is determined. More precisely, the corrective motion vector ΔMV is determined in such a way as to minimise the distortion calculated between the neighbouring block, successively coded and reconstructed and noted $B_v^{rec}$, and the prediction block motion compensated by the candidate motion vector MVct modified by the corrective motion vector ΔMV. The prediction block belongs to a reference image Iref. For example, the following function is used:

$E(dx,dy) = \sum_{(x,y) \in B_v^{rec}} \left( I_c^{rec}(x,y) - MC_{MV_{ct}+\Delta MV}(x,y) \right)^2 = \sum_{(x,y) \in B_v^{rec}} \left( I_c^{rec}(x,y) - I_{ref}^{rec}(x - vx - dx,\ y - vy - dy) \right)^2$

where:
$MC_{MV_{ct}+\Delta MV}$ is the prediction block motion compensated by MVct + ΔMV;
$I_c^{rec}(x,y)$ is the value of the coded and reconstructed image data item of the pixel (x, y) in the image Ic; and
$I_{ref}^{rec}(x,y)$ is the value of the coded and reconstructed image data item of the pixel (x, y) in the reference image.
According to a variant, $E(dx,dy) = \sum_{(x,y) \in B_v^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct}+\Delta MV}(x,y) \right|$. According to another variant, $E(dx,dy) = \max_{(x,y) \in B_v^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct}+\Delta MV}(x,y) \right|$.
During step 54, the motion vector ΔMV that minimises E(·,·) is therefore sought. For example, for each possible value of (dx, dy), the value of E(dx, dy) is calculated, and the value of (dx, dy) that gives the smallest value of E(dx, dy) is retained. Step 54 is identical to step 22 of the coding method.
According to a variant of step 54, the corrective motion vector ΔMV is the motion vector that minimises E(·,·) under the additional constraint that the amplitude of each of its coordinates dx and dy is smaller than a first threshold a_enh, where a_enh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with 1/4-pixel precision, then a_enh = 1/8. This variant limits the computational complexity of determining the corrective motion vector ΔMV. Indeed, according to this variant, ΔMV is only sought in a limited interval around the candidate motion vector MVct, this interval being defined, for each of the horizontal and vertical components, as [-a_cod + a_enh, a_cod - a_enh]. In a more exhaustive, and therefore computationally more expensive, version, the search can be carried out in a larger interval [-R, R], where R > a_enh represents the search range. For example, the value R = 2 can be used. In this latter case, the coordinates of the corrective motion vector ΔMV are sought with precision a_enh in the interval [-R, R] around the candidate motion vector MVct.
During a step 56, the prediction motion vector MVp is determined from the candidate motion vector MVct modified by the corrective motion vector ΔMV: MVp = MVct + ΔMV. Step 56 is identical to step 24 of the coding method.
During a step 58, the current block Bc is reconstructed taking account of the prediction motion vector MVp. More precisely, the transformed and quantized residual data and the motion vector difference MVdiff are decoded from the stream F by entropy decoding of the VLC (Variable Length Coding) type or by decoding of the CABAC (Context-Adaptive Binary Arithmetic Coding) type. The transformed and quantized residual data are then dequantized and transformed by the inverse of the transform used in step 26 of the coding method. A motion vector MVc is reconstructed for the current block Bc from the motion vector difference MVdiff and the prediction motion vector MVp determined in step 56. MVc has coordinates (MVdiffx + MVpx, MVdiffy + MVpy), where (MVdiffx, MVdiffy) are the coordinates of MVdiff and (MVpx, MVpy) are the coordinates of MVp. A prediction block is determined in the reconstructed reference image from the motion vector MVc; it is the prediction block motion compensated by the motion vector MVc. The prediction block is then merged with the block of residual data reconstructed for the current block from the stream F, for example by pixel-by-pixel addition.
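As a rough sketch of step 58, assuming the entropy decoding, dequantization and inverse transform have already produced the residual block, and with the same integer-rounding simplification as the earlier sketches (names are illustrative, not from the patent):

```python
import numpy as np

def reconstruct_block(residual, rec_ref, block_pos, mv_diff, mv_p):
    """Sketch of step 58: rebuild MVc = MVdiff + MVp, take the prediction block at the
    displaced position in the reconstructed reference image (same displacement
    convention as the distortion formula, integer rounding instead of sub-pel
    interpolation), and merge it with the decoded residual pixel by pixel."""
    x0, y0 = block_pos
    h, w = residual.shape
    mv_c = (mv_diff[0] + mv_p[0], mv_diff[1] + mv_p[1])
    rx = int(round(x0 - mv_c[0]))
    ry = int(round(y0 - mv_c[1]))
    pred = rec_ref[ry:ry + h, rx:rx + w]
    return pred + residual
```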
According to a variant, the current block Bc is reconstructed according to the SKIP coding mode. In this case, no residual data and no motion data are coded in the stream F for the current block Bc. The reconstructed block Bc is then the prediction block motion compensated by the prediction motion vector MVp determined in step 56.
According to a variant shown in Figure 8, the step 52 of determining the candidate motion vector MVct comprises a step 520 of determining at least two candidate motion vectors, a step 522 of merging the candidate motion vectors determined in step 520 into a merged motion vector, and a step 524 of selecting the candidate motion vector MVct, among the motion vectors determined in step 520, according to the merged motion vector. Step 520 is identical to step 200 of Figure 5, step 522 is identical to step 202 of Figure 5, and step 524 is identical to step 204 of Figure 5.
Another embodiment is shown in Figure 9.
During a step 62, at least two candidate motion vectors MVct1 and MVct2 are determined among the motion vectors associated with blocks spatially neighbouring the current block Bc. For example, as shown in Figure 1, the candidate motion vectors MVct1 and MVct2 are determined as motion vectors of the blocks A, B and/or C adjacent to the current block Bc. According to a variant, they are determined as motion vectors of blocks spatially close to the block Bc but not necessarily adjacent to it. The neighbouring block of the current block Bc associated with the candidate motion vector MVct1 is noted Bv1, and the neighbouring block associated with the candidate motion vector MVct2 is noted Bv2. This step 62 is identical to step 30 of Figure 6.
During a step 64, a corrective motion vector ΔMV1 with coordinates (dx1, dy1) is determined for the candidate motion vector MVct1, and a corrective motion vector ΔMV2 with coordinates (dx2, dy2) is determined for the candidate motion vector MVct2. The motion vector ΔMV1 is determined in such a way as to minimise the distortion calculated between the successively coded and reconstructed neighbouring block associated with the candidate motion vector MVct1 and the prediction block motion compensated by the candidate motion vector MVct1 modified by the corrective motion vector ΔMV1. Likewise, the motion vector ΔMV2 is determined in such a way as to minimise the distortion calculated between the successively coded and reconstructed neighbouring block associated with the candidate motion vector MVct2 and the prediction block motion compensated by the candidate motion vector MVct2 modified by the corrective motion vector ΔMV2. For example, the following functions are used:

$E_1(dx1,dy1) = \sum_{(x,y) \in B_{v1}^{rec}} \left( I_c^{rec}(x,y) - MC_{MV_{ct1}+\Delta MV1}(x,y) \right)^2 = \sum_{(x,y) \in B_{v1}^{rec}} \left( I_c^{rec}(x,y) - I_{ref}^{rec}(x - vx1 - dx1,\ y - vy1 - dy1) \right)^2$

$E_2(dx2,dy2) = \sum_{(x,y) \in B_{v2}^{rec}} \left( I_c^{rec}(x,y) - MC_{MV_{ct2}+\Delta MV2}(x,y) \right)^2 = \sum_{(x,y) \in B_{v2}^{rec}} \left( I_c^{rec}(x,y) - I_{ref}^{rec}(x - vx2 - dx2,\ y - vy2 - dy2) \right)^2$

According to a variant, $E_1(dx,dy) = \sum_{(x,y) \in B_{v1}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct1}+\Delta MV1}(x,y) \right|$ and $E_2(dx,dy) = \sum_{(x,y) \in B_{v2}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct2}+\Delta MV2}(x,y) \right|$. According to another variant, $E_1(dx,dy) = \max_{(x,y) \in B_{v1}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct1}+\Delta MV1}(x,y) \right|$ and $E_2(dx,dy) = \max_{(x,y) \in B_{v2}^{rec}} \left| I_c^{rec}(x,y) - MC_{MV_{ct2}+\Delta MV2}(x,y) \right|$.
During step 64, the corrective motion vector ΔMV1 that minimises E1(·,·) and the corrective motion vector ΔMV2 that minimises E2(·,·) are therefore sought. For example, for each possible value of (dx1, dy1), the value of E1(dx1, dy1) is calculated, and the value of (dx1, dy1) that gives the smallest value of E1(dx1, dy1) is retained. Likewise, for each possible value of (dx2, dy2), the value of E2(dx2, dy2) is calculated, and the value of (dx2, dy2) that gives the smallest value of E2(dx2, dy2) is retained. This step 64 is identical to step 32 of Figure 6.
According to a variant of step 64, the corrective motion vectors ΔMV1 and ΔMV2 are those motion vectors that minimise E1(·,·) and E2(·,·) respectively, under the additional constraint that the amplitude of each of their coordinates dx1, dx2, dy1 and dy2 is smaller than a_enh, where a_enh is the precision allowed for the motion compensation. For example, if the motion vectors are coded and decoded with 1/4-pixel precision, then a_enh = 1/8. This variant limits the computational complexity of determining the corrective motion vectors ΔMV1 and ΔMV2. Indeed, according to this variant, ΔMV1 and ΔMV2 are only sought in limited intervals around the candidate motion vectors MVct1 and MVct2 respectively, this interval being defined, for each of the horizontal and vertical components, as [-a_cod + a_enh, a_cod - a_enh]. In a more exhaustive, and therefore computationally more expensive, version, the search can be carried out in a larger interval [-R, R], where R > a_enh represents the search range. For example, the value R = 2 can be used. In this latter case, the coordinates of the corrective motion vectors are sought with precision a_enh in the interval [-R, R] around the candidate motion vectors.
During a step 66, the prediction motion vector MVp is determined by merging the candidate motion vectors MVct1 and MVct2 modified by the corrective motion vectors ΔMV1 and ΔMV2 respectively. For example, MVpx = median(vx1+dx1, vx2+dx2, 0) and MVpy = median(vy1+dy1, vy2+dy2, 0). According to a variant, MVpx = min(vx1+dx1, vx2+dx2, 0) and MVpy = min(vy1+dy1, vy2+dy2, 0). According to another variant, MVpx = 0.5*(vx1+dx1+vx2+dx2) and MVpy = 0.5*(vy1+dy1+vy2+dy2). This step 66 is identical to step 34 of Figure 6.
During a step 68, the current block Bc is reconstructed taking account of the prediction motion vector MVp. More precisely, the transformed and quantized residual data and the motion vector difference MVdiff are decoded from the stream F by entropy decoding of the VLC type or by decoding of the CABAC type. The transformed and quantized residual data are then dequantized and transformed by the inverse of the transform used in step 26 of the coding method. A motion vector MVc is reconstructed for the current block Bc from the motion vector difference MVdiff and the prediction motion vector MVp determined in step 66. MVc has coordinates (MVdiffx + MVpx, MVdiffy + MVpy), where (MVdiffx, MVdiffy) are the coordinates of MVdiff and (MVpx, MVpy) are the coordinates of MVp. A prediction block is determined in the reconstructed reference image from the motion vector MVc; it is the prediction block motion compensated by the motion vector MVc. The prediction block is then merged with the block of residual data reconstructed for the current block from the stream F, for example by pixel-by-pixel addition.
According to a variant, the current block Bc is reconstructed according to the SKIP coding mode. In this case, no residual data and no motion data are coded in the stream F for the current block Bc. The reconstructed block Bc is then the prediction block motion compensated by the prediction motion vector MVp determined in step 66.
The invention, described in the case in which a single motion vector is associated with a (mono-directionally predicted) block, can be generalised directly to the case in which two or more motion vectors are associated with a block (for example, bi-directional prediction). In this case, each motion vector is associated with a list of reference images. For example, in H.264, bi-directional blocks use two lists L0 and L1, and one motion vector is defined for each list. In the bi-directional case, steps 20 to 24 or 30 to 34 are applied independently for each list. During each step 20 (respectively 30), one or more candidate motion vectors are determined from the motion vectors of the same list as the current list that are associated with blocks spatially neighbouring the current block Bc. In step 26 (respectively 36), the current block Bc is coded from the motion vectors of each list resulting from step 24 (respectively 34).
The methods for coding and reconstructing a motion vector according to the invention have the advantage of improving the motion vector prediction by means of a corrective motion vector. They therefore have the advantage of improving coding efficiency in terms of quality and/or coding cost. In particular, the method according to the invention can favour the selection of the "skip" coding mode. Moreover, in the case of temporal prediction, it can also reduce the coding cost of the motion data or, at constant cost, improve the precision of the motion vectors and therefore the quality of the motion compensation. Indeed, the prediction motion vector MVp has precision a_enh and the coded vector difference MVdiff has precision a_cod; consequently, the reconstructed vector MVc, obtained by adding the prediction motion vector and the coded vector difference, has precision a_enh. The coding and reconstruction methods according to the invention therefore make it possible to code the motion vectors with a certain precision a_cod, and then to carry out the motion compensation with a greater precision equal to a_enh.
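A small numeric illustration of this precision argument, with a_cod = 1/4 and a_enh = 1/8 as in the variants above; the vector values are arbitrary and chosen only for this example.

```python
# MVp carries the finer 1/8-pel precision obtained by the refinement, MVdiff is coded
# with 1/4-pel precision, and their sum MVc keeps the 1/8-pel precision.
a_cod, a_enh = 0.25, 0.125
mv_p = (1.125, -0.375)       # refined predictor, multiple of a_enh
mv_diff = (0.25, -0.5)       # coded difference, multiple of a_cod
mv_c = (mv_p[0] + mv_diff[0], mv_p[1] + mv_diff[1])   # (1.375, -0.875)
```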
The invention also relates to a coding device 12 described with reference to Figure 10 and to a decoding device 13 described with reference to Figure 11. In Figures 10 and 11, the modules shown are functional units that may or may not correspond to physically distinguishable units. For example, these modules, or some of them, can be grouped together in a single component or constitute functions of the same software. Conversely, some modules may be composed of separate physical entities.
With reference to Figure 10, the coding device 12 receives on its input images belonging to a sequence of images. Each image is divided into blocks of pixels, with each of which at least one item of image data is associated. The coding device 12 notably implements coding with temporal prediction. Only the modules of the coding device 12 relating to coding by temporal prediction, or INTER coding, are shown in Figure 10. Other modules, not shown and known to those skilled in the art of video coders, implement INTRA coding with or without spatial prediction. The coding device 12 notably comprises a calculation module 1200 able to extract, for example by pixel-by-pixel subtraction, a prediction block Bp from a current block Bc to generate a block of residual image data, or residual block, Bres. It further comprises a module 1202 able to transform the residual block Bres and then quantize it into quantized data. The transform T is, for example, a discrete cosine transform (DCT). The coding device 12 further comprises an entropy coding module 1204 able to code the quantized data into a coded data stream F. It further comprises a module 1206 performing the inverse operation of the module 1202. The module 1206 carries out an inverse quantization Q⁻¹ followed by an inverse transform T⁻¹. The module 1206 is connected to a calculation module 1208 able to merge, for example by pixel-by-pixel addition, the block of data from the module 1206 and the prediction block Bp to generate a block of reconstructed image data that is stored in a memory 1210. The coding device 12 further comprises a motion estimation module 1212 able to estimate at least one motion vector MVc between the block Bc and a block of a reference image Iref stored in the memory 1210, this image having previously been coded and then reconstructed. According to a variant, the motion estimation can be carried out between the current block Bc and the original reference image Ic, in which case the memory 1210 is not connected to the motion estimation module 1212. According to a method well known to those skilled in the art, the motion estimation module searches the reference image Iref for an item of motion data, notably a motion vector, so as to minimise an error calculated between the current block Bc and a block in the reference image Iref identified by means of that item of motion data.
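Purely as an illustration of how these modules chain together in the INTER coding loop (the module numbering is kept from the description; the function, its parameters and the use of array-like pixel blocks are assumptions of this sketch, not the patented device):

```python
def encode_inter_block(block_cur, pred_block, transform, quantize,
                       dequantize, inverse_transform, entropy_code):
    """Skeleton of the INTER path of coding device 12: subtraction (module 1200),
    transform and quantization (1202), entropy coding (1204), inverse operations
    (1206) and reconstruction (1208); the reconstructed block would be stored in
    memory 1210 for later motion estimation (1212)."""
    residual = block_cur - pred_block                      # module 1200
    coeffs = quantize(transform(residual))                 # module 1202
    bits = entropy_code(coeffs)                            # module 1204
    rec_residual = inverse_transform(dequantize(coeffs))   # module 1206
    rec_block = pred_block + rec_residual                  # module 1208
    return bits, rec_block
```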
The motion data determined are transmitted by the motion estimation module 1212 to a decision module 1214 able to select a coding mode for the current block Bc from a predefined set of coding modes. The coding mode retained is, for example, the one that minimises a bitrate-distortion type criterion. However, the invention is not restricted to this selection method, and the mode retained can be selected according to another criterion, for example an a priori type criterion. The coding mode selected by the decision module 1214, as well as the motion data, for example the item or items of motion data in the case of the temporal prediction mode or INTER mode, are transmitted to a prediction module 1216. The prediction module 1216 is able to implement steps 20 to 24 or 30 to 34 of the coding method. Steps 26 and 36 are implemented by the set of modules of the coding device 12. Moreover, the selected coding mode and, where applicable, the item or items of motion data are transmitted to the entropy coding module 1204 to be coded in the stream F. The prediction module 1216 determines the prediction block Bp from the coding mode determined by the decision module 1214 and, where applicable, from the motion data determined by the motion estimation module 1212 (inter-image prediction).
With reference to Figure 11, the decoding device 13 receives on its input a coded data stream F representative of a sequence of images. The stream F is, for example, transmitted by a coding device 12 via a channel. The decoding device 13 comprises an entropy decoding module 1300 able to generate decoded data, for example coding modes and decoded data relating to the content of the images.
The decoding device 13 also comprises a motion data reconstruction module. According to a first embodiment, the motion data reconstruction module is the entropy decoding module 1300, which decodes the part of the stream F representative of the motion data. According to a variant, not shown, the motion data reconstruction module is a motion estimation module. This solution for reconstructing the motion data via the decoding device 13 is known as "template matching".
The decoded data relating to the content of the images are then transmitted to a module 1302 able to carry out an inverse quantization followed by an inverse transform. The module 1302 is identical to the module 1206 of the coding device 12 that generated the coded stream F. The module 1302 is connected to a calculation module 1304 able to merge, for example by pixel-by-pixel addition, the block from the module 1302 and a prediction block Bp to generate a reconstructed current block Bc that is stored in a memory 1306. The decoding device 13 also comprises a prediction module 1308. The prediction module 1308 determines the prediction block Bp from the coding mode decoded for the current block by the entropy decoding module 1300 and, where applicable, from the motion data determined by the motion data reconstruction module. The prediction module 1308 is able to implement steps 52 to 56 or 62 to 66 of the reconstruction method according to the invention. Steps 58 and 68 are implemented by the set of modules of the decoding device 13.
Obviously, the invention is not limited to the embodiments described above.
In particular, those skilled in the art may apply any variant to the embodiments described and combine them to benefit from their various advantages. Notably, the invention is in no way limited by the type of candidate motion vector, which need not be associated with a block adjacent to the current block. Moreover, the invention, described in the case where a single motion vector is associated with a block (mono-directional prediction), can be directly generalized to the case where two or more motion vectors are associated with a block (for example, bidirectional prediction).

Claims (11)

1. the method for the current block of a coded video sequences, it comprises following steps:
-confirm at least one candidate motion vector that (20,30) are associated with the adjacent blocks of said current block;
-definite (24,34) motion vectors from said at least one candidate motion vector; And
-according to said motion vectors coding (26,36) said current block,
Said method is characterised in that said motion vectors confirms according to following steps:
-be that said at least one candidate motion vector confirms (22; 32) correct motion vector (Δ MV), so as to make coding in succession and the adjacent blocks of reconstruct with move through the predict blocks of said at least one candidate motion vector compensation of revising via said correct motion vector (Δ MV) between the distortion minimization of calculating; And
-from a said candidate motion vector, confirm (24,34) said motion vectors at least via said correct motion vector modification.
2. The coding method according to claim 1, comprising the determination (26, 36) of a motion vector differential calculated from a current motion vector associated with said current block Bc and said prediction motion vector, and wherein the step of coding said current block comprises coding said motion vector differential.
3. The coding method according to claim 1 or 2, wherein, during the step of determining said correction motion vector, the amplitude of each coordinate (dx, dy) of said correction motion vector is limited by a first threshold.
4. The coding method according to claim 3, wherein said first threshold is less than a second threshold representative of the coding precision of the motion vectors.
5. The coding method according to claim 4, wherein said first threshold is equal to 1/8 and said second threshold is equal to 1/4.
6. The coding method according to claim 4, wherein said first threshold is equal to 1/4 and said second threshold is equal to 1/2.
7. The coding method according to claim 1 or 2, wherein each coordinate (dx, dy) of said correction motion vector is determined, in a defined interval around said candidate motion vector, with a precision greater than the coding precision of the motion vectors.
8. The coding method according to one of claims 1 to 7, wherein the step of determining (20) at least one candidate motion vector comprises the following steps:
- determining (200) at least two candidate motion vectors;
- merging (202) said at least two candidate motion vectors into a merged motion vector; and
- selecting (204), from among said at least two candidate motion vectors, the motion vector closest to said merged motion vector.
9. The coding method according to one of claims 1 to 7, wherein at least two candidate motion vectors are determined (30), the step of determining (32) a correction motion vector comprises determining a correction motion vector for each of said at least two candidate motion vectors, and the step of determining (34) said prediction motion vector comprises merging, into a single prediction motion vector, said candidate motion vectors modified by their respective correction motion vectors.
10. the method for a reconstruct current block, it comprises following steps:
-confirm at least one candidate motion vector that (52) are associated with the adjacent blocks of said current block;
-definite (56) motion vectors from said at least one candidate motion vector;
-according to the said current block of said motion vectors reconstruct (58),
Said method is characterised in that said motion vectors confirms according to following steps:
-confirm (54) correct motion vector (dx, dy) so that make the reconstruct adjacent blocks with through via said correct motion vector (dx, the distortion minimization that calculates between the predict blocks motion of said at least one candidate motion vector compensation of dy) revising; And
-definite (56) said motion vectors from the said candidate motion vector of revising via said correct motion vector.
11. The reconstruction method according to claim 10, further comprising the following steps:
- decoding a motion vector differential for said current block;
- determining at least one current motion vector for said current block from said motion vector differential and said prediction motion vector; and
- reconstructing (58) said current block according to said current motion vector.
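As a concrete illustration of claims 1, 10 and 11, the sketch below searches, for one candidate motion vector taken from an already reconstructed neighbouring block, the correction (dx, dy) that minimizes the SAD between that neighbouring block and its motion-compensated prediction in the reference image, then forms the prediction motion vector and, as in claim 11, adds the decoded motion vector differential to obtain the current motion vector. For simplicity the search is integer-pel over a small window; the sub-pel precision and the amplitude limits of claims 3 to 7 are not modelled, and all names are illustrative assumptions.

```python
import numpy as np

def refine_candidate(neigh_rec, ref, nx, ny, mv_cand, size=8, radius=1):
    """neigh_rec: reconstructed neighbouring block (size x size) located at (nx, ny);
    mv_cand: candidate motion vector taken from that neighbour. The caller must
    keep all displaced positions inside the reference image ref."""
    best, best_corr = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            mx, my = nx + mv_cand[0] + dx, ny + mv_cand[1] + dy
            pred = ref[my:my + size, mx:mx + size].astype(np.int32)   # motion-compensated prediction
            sad = np.abs(neigh_rec.astype(np.int32) - pred).sum()     # distortion on reconstructed data
            if sad < best:
                best, best_corr = sad, (dx, dy)
    return best_corr

def predict_and_reconstruct_mv(mv_cand, corr, mv_diff):
    """Prediction motion vector = candidate modified by the correction;
    current motion vector = prediction motion vector + decoded differential (claim 11)."""
    mvp = (mv_cand[0] + corr[0], mv_cand[1] + corr[1])
    return (mvp[0] + mv_diff[0], mvp[1] + mv_diff[1])
```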
CN201080058062.8A 2009-10-20 2010-10-13 Motion-vector prediction and refinement Active CN102668562B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0957342 2009-10-20
FR0957342 2009-10-20
PCT/EP2010/065316 WO2011047994A1 (en) 2009-10-20 2010-10-13 Motion vector prediction and refinement

Publications (2)

Publication Number Publication Date
CN102668562A true CN102668562A (en) 2012-09-12
CN102668562B CN102668562B (en) 2015-09-30

Family

ID=42235402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201080058062.8A Active CN102668562B (en) 2009-10-20 2010-10-13 Motion-vector prediction and refinement

Country Status (8)

Country Link
US (1) US10142650B2 (en)
EP (1) EP2491717A1 (en)
JP (1) JP5669278B2 (en)
KR (1) KR101711688B1 (en)
CN (1) CN102668562B (en)
BR (1) BR112012009042A2 (en)
TW (1) TWI566586B (en)
WO (1) WO2011047994A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883161A (en) * 2012-09-19 2013-01-16 华为技术有限公司 Video encoding and decoding processing method and device
WO2014106435A1 (en) * 2013-01-07 2014-07-10 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
WO2015010319A1 (en) * 2013-07-26 2015-01-29 北京大学深圳研究生院 P frame-based multi-hypothesis motion compensation encoding method
WO2015010317A1 (en) * 2013-07-26 2015-01-29 北京大学深圳研究生院 P frame-based multi-hypothesis motion compensation method
CN108141605A (en) * 2015-10-02 2018-06-08 高通股份有限公司 Intra block replicates merging patterns and unavailable intra block replicates the filling of reference zone
WO2019010634A1 (en) * 2017-07-11 2019-01-17 华为技术有限公司 Decoding method and apparatus based on template matching
CN109417631A (en) * 2016-06-30 2019-03-01 交互数字Vc控股公司 Utilize the Video coding of adaptive motion information refinement
CN110100440A (en) * 2016-12-22 2019-08-06 株式会社Kt Video signal processing method and device
WO2019154424A1 (en) * 2018-02-12 2019-08-15 华为技术有限公司 Video decoding method, video decoder, and electronic device
CN110313180A (en) * 2017-01-03 2019-10-08 交互数字Vc控股公司 Method and apparatus for coding and decoding motion information
CN110719489A (en) * 2019-09-18 2020-01-21 浙江大华技术股份有限公司 Motion vector correction method, motion vector prediction method, motion vector encoding device, and storage device
CN111971966A (en) * 2018-03-30 2020-11-20 韩国电子通信研究院 Image encoding/decoding method and apparatus, and recording medium storing bit stream
TWI727338B (en) * 2018-06-07 2021-05-11 大陸商北京字節跳動網絡技術有限公司 Signaled mv precision

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8526499B2 (en) * 2007-06-15 2013-09-03 Sungkyunkwan University Foundation For Corporate Collaboration Bi-prediction coding method and apparatus, bi-prediction decoding method and apparatus, and recording medium
WO2008153262A1 (en) 2007-06-15 2008-12-18 Sungkyunkwan University Foundation For Corporate Collaboration Bi-prediction coding method and apparatus, bi-prediction decoding method and apparatus, and recording midium
FR2955730A1 (en) * 2010-01-25 2011-07-29 Thomson Licensing CODING AND DECODING METHODS
KR102005088B1 (en) * 2011-03-21 2019-10-01 엘지전자 주식회사 Method for selecting motion vector predictor and device using same
CA2830036C (en) 2011-04-12 2019-03-05 Panasonic Corporation Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
US9485518B2 (en) 2011-05-27 2016-11-01 Sun Patent Trust Decoding method and apparatus with candidate motion vectors
TR201819396T4 (en) 2011-05-27 2019-01-21 Sun Patent Trust Image Decoding Method And Image Decoding Device
CN107257483B (en) 2011-05-31 2020-01-21 太阳专利托管公司 Moving image encoding method and moving image encoding device
SG194746A1 (en) 2011-05-31 2013-12-30 Kaba Gmbh Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device
US9313494B2 (en) 2011-06-20 2016-04-12 Qualcomm Incorporated Parallelization friendly merge candidates for video coding
SI3481066T1 (en) 2011-06-28 2021-10-29 Lg Electronics Inc Method for deriving a motion vector predictor
MX2013013029A (en) 2011-06-30 2013-12-02 Panasonic Corp Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device.
US10536701B2 (en) 2011-07-01 2020-01-14 Qualcomm Incorporated Video coding using adaptive motion vector resolution
CN103718558B (en) 2011-08-03 2017-04-19 太阳专利托管公司 Video encoding method and apparatus, video decoding method and apparatus, and video encoding/decoding apparatus
GB2493755B (en) * 2011-08-17 2016-10-19 Canon Kk Method and device for encoding a sequence of images and method and device for decoding a sequence of images
US9736489B2 (en) 2011-09-17 2017-08-15 Qualcomm Incorporated Motion vector determination for video coding
US9083983B2 (en) * 2011-10-04 2015-07-14 Qualcomm Incorporated Motion vector predictor candidate clipping removal for video coding
EP3923572A1 (en) 2011-10-19 2021-12-15 Sun Patent Trust Image encoding method, image encoding device, image decoding method, and picture decoding device
EP2779647B1 (en) 2011-11-08 2018-03-07 Kabushiki Kaisha Toshiba Image encoding method and image encoding device
PL400344A1 (en) * 2012-08-13 2014-02-17 Politechnika Poznanska Method for determining the the motion vector predictor
CA2924763A1 (en) 2013-10-14 2015-04-23 Microsoft Corporation Features of intra block copy prediction mode for video and image coding and decoding
US11109036B2 (en) 2013-10-14 2021-08-31 Microsoft Technology Licensing, Llc Encoder-side options for intra block copy prediction mode for video and image coding
MX360926B (en) * 2014-01-03 2018-11-22 Microsoft Technology Licensing Llc Block vector prediction in video and image coding/decoding.
US10390034B2 (en) 2014-01-03 2019-08-20 Microsoft Technology Licensing, Llc Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
US11284103B2 (en) 2014-01-17 2022-03-22 Microsoft Technology Licensing, Llc Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning
BR112016017201B1 (en) * 2014-01-29 2023-09-26 Hfi Innovation Inc ENCODING AND DECODING METHOD FOR ADAPTIVE MOTION VECTOR ACCURACY OF A BLOCK OF VIDEO DATA
AU2014385769B2 (en) 2014-03-04 2018-12-06 Microsoft Technology Licensing, Llc Block flipping and skip mode in intra block copy prediction
KR20230130178A (en) 2014-06-19 2023-09-11 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Unified intra block copy and inter prediction modes
EP3202150B1 (en) 2014-09-30 2021-07-21 Microsoft Technology Licensing, LLC Rules for intra-picture prediction modes when wavefront parallel processing is enabled
EP3264769A1 (en) * 2016-06-30 2018-01-03 Thomson Licensing Method and apparatus for video coding with automatic motion information refinement
US11343530B2 (en) * 2016-11-28 2022-05-24 Electronics And Telecommunications Research Institute Image encoding/decoding method and device, and recording medium having bitstream stored thereon
CN116866585A (en) 2017-05-17 2023-10-10 株式会社Kt Method for decoding and encoding image and apparatus for storing compressed video data
JP6344508B2 (en) * 2017-06-12 2018-06-20 株式会社Jvcケンウッド Image decoding apparatus, image decoding method, and image decoding program
JP6344509B2 (en) * 2017-06-12 2018-06-20 株式会社Jvcケンウッド Image coding apparatus, image coding method, and image coding program
WO2019009567A1 (en) * 2017-07-03 2019-01-10 엘지전자 주식회사 Method and apparatus for image decoding according to inter prediction in image coding system
US10986349B2 (en) 2017-12-29 2021-04-20 Microsoft Technology Licensing, Llc Constraints on locations of reference blocks for intra block copy prediction
WO2019194497A1 (en) * 2018-04-01 2019-10-10 엘지전자 주식회사 Inter-prediction mode-based image processing method and apparatus therefor
WO2019211514A1 (en) * 2018-05-02 2019-11-07 Nokia Technologies Oy Video encoding and decoding

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9519923D0 (en) 1995-09-29 1995-11-29 Philips Electronics Nv Motion estimation for predictive image coding
KR100197368B1 (en) * 1995-12-23 1999-06-15 전주범 Apparatus for restoring error of image data
EP1152621A1 (en) 2000-05-05 2001-11-07 STMicroelectronics S.r.l. Motion estimation process and system.
JP4401783B2 (en) 2002-01-24 2010-01-20 株式会社日立製作所 Video encoding method
KR100492127B1 (en) * 2002-02-23 2005-06-01 삼성전자주식회사 Apparatus and method of adaptive motion estimation
JP4724351B2 (en) 2002-07-15 2011-07-13 三菱電機株式会社 Image encoding apparatus, image encoding method, image decoding apparatus, image decoding method, and communication apparatus
KR100774296B1 (en) * 2002-07-16 2007-11-08 삼성전자주식회사 Method and apparatus for encoding and decoding motion vectors
US20050013498A1 (en) * 2003-07-18 2005-01-20 Microsoft Corporation Coding of motion vector information
KR100579542B1 (en) * 2003-07-29 2006-05-15 삼성전자주식회사 Motion estimation apparatus considering correlation between blocks, and method of the same
US7880769B2 (en) * 2004-02-13 2011-02-01 Qualcomm Incorporated Adaptive image stabilization
US7720148B2 (en) * 2004-03-26 2010-05-18 The Hong Kong University Of Science And Technology Efficient multi-frame motion estimation for video compression
FR2872989A1 (en) 2004-07-06 2006-01-13 Thomson Licensing Sa METHOD AND DEVICE FOR CHOOSING A MOTION VECTOR FOR ENCODING A BLOCK ASSEMBLY
WO2006012383A2 (en) * 2004-07-20 2006-02-02 Qualcomm Incorporated Method and apparatus for motion vector prediction in temporal video compression
US20060153300A1 (en) * 2005-01-12 2006-07-13 Nokia Corporation Method and system for motion vector prediction in scalable video coding
US9172973B2 (en) 2005-04-01 2015-10-27 Broadcom Corporation Method and system for motion estimation in a video encoder
EP1727371A1 (en) 2005-05-27 2006-11-29 Thomson Licensing Method for controlling the encoder output bit rate in a block-based video encoder, and corresponding video encoder apparatus
JP5025645B2 (en) 2005-06-23 2012-09-12 トライデント マイクロシステムズ インコーポレイテッド Motion estimation method
US7616821B2 (en) 2005-07-19 2009-11-10 International Business Machines Corporation Methods for transitioning compression levels in a streaming image system
US20070076796A1 (en) * 2005-09-27 2007-04-05 Fang Shi Frame interpolation using more accurate motion information
US8121194B2 (en) 2005-12-16 2012-02-21 Texas Instruments Incorporated Fast macroblock encoding with the early qualification of skip prediction mode using its temporal coherence
FR2907301A1 (en) * 2006-10-12 2008-04-18 Thomson Licensing Sas METHOD OF INTERPOLATING A COMPENSATED IMAGE IN MOTION AND DEVICE FOR CARRYING OUT SAID METHOD
US8218636B2 (en) 2006-11-21 2012-07-10 Vixs Systems, Inc. Motion refinement engine with a plurality of cost calculation methods for use in video encoding and methods for use therewith
KR101365574B1 (en) 2007-01-29 2014-02-20 삼성전자주식회사 Method and apparatus for video encoding, and Method and apparatus for video decoding
US8265136B2 (en) * 2007-02-20 2012-09-11 Vixs Systems, Inc. Motion refinement engine for use in video encoding in accordance with a plurality of sub-pixel resolutions and methods for use therewith
US8275041B2 (en) * 2007-04-09 2012-09-25 Nokia Corporation High accuracy motion vectors for video coding with low encoder and decoder complexity
US20080285651A1 (en) * 2007-05-17 2008-11-20 The Hong Kong University Of Science And Technology Spatio-temporal boundary matching algorithm for temporal error concealment
US8351509B1 (en) * 2007-12-21 2013-01-08 Dimension, Inc. Video codec systems and methods for optimal motion vector with variable size blocks
US8275033B2 (en) * 2008-01-15 2012-09-25 Sony Corporation Picture mode selection for video transcoding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1885948A (en) * 2005-06-21 2006-12-27 中国科学院计算技术研究所 Motion vector space prediction method for video coding
CN101090491A (en) * 2006-06-16 2007-12-19 香港科技大学 Enhanced block-based motion estimation algorithms for video compression
US20080253457A1 (en) * 2007-04-10 2008-10-16 Moore Darnell J Method and system for rate distortion optimization
EP2101504A2 (en) * 2008-03-09 2009-09-16 LG Electronics Inc. Video coding using template matching

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883161B (en) * 2012-09-19 2015-09-30 华为技术有限公司 The processing method of Video coding and decoding and device
CN102883161A (en) * 2012-09-19 2013-01-16 华为技术有限公司 Video encoding and decoding processing method and device
WO2014106435A1 (en) * 2013-01-07 2014-07-10 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
US9967586B2 (en) 2013-01-07 2018-05-08 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
CN104769947B (en) * 2013-07-26 2019-02-26 北京大学深圳研究生院 A kind of more hypothesis motion compensation encoding methods based on P frame
US10298950B2 (en) 2013-07-26 2019-05-21 Peking University Shenzhen Graduate School P frame-based multi-hypothesis motion compensation method
CN104488271A (en) * 2013-07-26 2015-04-01 北京大学深圳研究生院 P frame-based multi-hypothesis motion compensation method
WO2015010317A1 (en) * 2013-07-26 2015-01-29 北京大学深圳研究生院 P frame-based multi-hypothesis motion compensation method
CN104769947A (en) * 2013-07-26 2015-07-08 北京大学深圳研究生院 P frame-based multi-hypothesis motion compensation encoding method
WO2015010319A1 (en) * 2013-07-26 2015-01-29 北京大学深圳研究生院 P frame-based multi-hypothesis motion compensation encoding method
CN104488271B (en) * 2013-07-26 2019-05-07 北京大学深圳研究生院 A kind of more hypothesis motion compensation process based on P frame
CN108141605A (en) * 2015-10-02 2018-06-08 高通股份有限公司 Intra block replicates merging patterns and unavailable intra block replicates the filling of reference zone
CN108141605B (en) * 2015-10-02 2022-01-04 高通股份有限公司 Intra block copy merge mode and filling of unavailable intra block copy reference areas
CN109417631A (en) * 2016-06-30 2019-03-01 交互数字Vc控股公司 Utilize the Video coding of adaptive motion information refinement
CN109417631B (en) * 2016-06-30 2023-06-20 交互数字Vc控股公司 Video coding with adaptive motion information refinement
CN110100440A (en) * 2016-12-22 2019-08-06 株式会社Kt Video signal processing method and device
CN110313180A (en) * 2017-01-03 2019-10-08 交互数字Vc控股公司 Method and apparatus for coding and decoding motion information
CN110313180B (en) * 2017-01-03 2023-05-23 交互数字麦迪逊专利控股公司 Method and apparatus for encoding and decoding motion information
WO2019010634A1 (en) * 2017-07-11 2019-01-17 华为技术有限公司 Decoding method and apparatus based on template matching
WO2019154424A1 (en) * 2018-02-12 2019-08-15 华为技术有限公司 Video decoding method, video decoder, and electronic device
CN111971966A (en) * 2018-03-30 2020-11-20 韩国电子通信研究院 Image encoding/decoding method and apparatus, and recording medium storing bit stream
TWI727338B (en) * 2018-06-07 2021-05-11 大陸商北京字節跳動網絡技術有限公司 Signaled mv precision
CN110719489A (en) * 2019-09-18 2020-01-21 浙江大华技术股份有限公司 Motion vector correction method, motion vector prediction method, motion vector encoding device, and storage device

Also Published As

Publication number Publication date
TW201116070A (en) 2011-05-01
CN102668562B (en) 2015-09-30
KR20120096471A (en) 2012-08-30
TWI566586B (en) 2017-01-11
EP2491717A1 (en) 2012-08-29
US10142650B2 (en) 2018-11-27
US20130101040A1 (en) 2013-04-25
KR101711688B1 (en) 2017-03-02
JP5669278B2 (en) 2015-02-12
BR112012009042A2 (en) 2016-04-19
WO2011047994A1 (en) 2011-04-28
JP2013509063A (en) 2013-03-07

Similar Documents

Publication Publication Date Title
CN102668562B (en) Motion-vector prediction and refinement
CN103329528B (en) The Video coding of Fault recovery improvement and decoding
CN101494782B (en) Video encoding method and apparatus, and video decoding method and apparatus
TWI621351B (en) Image prediction decoding device, image prediction decoding method and image prediction decoding program
KR20130115185A (en) Method of estimating motion vector using multiple motion vector predictors, apparatus, encoder, decoder and decoding method
CN102714721A (en) Method for coding and method for reconstruction of a block of an image
CN103748880A (en) Method and device for encoding a sequence of images and method and device for decoding a sequence of images
JP4993676B2 (en) Image coding apparatus and image coding method
CN102474619A (en) Motion vector prediction method, and apparatus and method for encoding and decoding image using the same
CN1604653B (en) Differential video coding method
CN103327319A (en) Method and device to identify motion vector candidates using a scaled motion search
CN102726045B (en) The method of Code And Decode image block
KR101364532B1 (en) Method of estimating motion vector considering the size of neighboring partition, apparatus, encoder, decoder and decoding method
CN103139563B (en) The method and relevant device of coding and reconstructed pixel block
CN102342104B (en) Method for predicting block of image data, decoding and coding devices implementing said method
JP5184447B2 (en) Video encoding apparatus and decoding apparatus
CN103430543A (en) Method for reconstructing and coding image block
Veena et al. A Machine Learning Framework for Inter-frame Prediction for Effective Motion Estimation
CN102763414A (en) Method for coding and for reconstruction of a block of an image sequence
JP2765528B2 (en) Half-pixel accuracy motion vector search device
KR101786921B1 (en) Apparatus and Method for fast motion estimation
KR101786957B1 (en) Apparatus and Method for fast motion estimation
JP2020025308A (en) Image encoding method and image decoding method
Dufaux et al. Combined spline-and block-based motion estimation for video coding
KR20120008271A (en) Methods and apparatus for the predicted motion vector selection using matching with neighboring pixels

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190201

Address after: Paris France

Patentee after: International Digital Madison Patent Holding Co.

Address before: Issy-les-Moulineaux, France

Patentee before: THOMSON LICENSING

Effective date of registration: 20190201

Address after: Issy-les-Moulineaux, France

Patentee after: THOMSON LICENSING

Address before: Issy-les-Moulineaux, France

Patentee before: THOMSON LICENSING

TR01 Transfer of patent right