CN102131094A - Motion prediction method - Google Patents

Motion prediction method

Info

Publication number
CN102131094A
CN102131094A · Application CN201110020283A
Authority
CN
China
Prior art keywords
motion
unit
candidate
current unit
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201110020283
Other languages
Chinese (zh)
Inventor
蔡玉宝
傅智铭
林建良
黄毓文
雷少民
Current Assignee
MediaTek Inc
Original Assignee
MediaTek Inc
Priority date
Filing date
Publication date
Priority claimed from US12/957,644 external-priority patent/US9036692B2/en
Application filed by MediaTek Inc filed Critical MediaTek Inc
Priority to CN201210272628.8A priority Critical patent/CN102833540B/en
Publication of CN102131094A publication Critical patent/CN102131094A/en


Abstract

The invention provides a motion prediction method. First, a plurality of candidate units corresponding to a current unit of a current frame are determined, and a plurality of motion vectors of the candidate units are obtained. A plurality of temporal scaling factors of the candidate units are then calculated according to a plurality of temporal distances between the current frame and a plurality of reference frames of the motion vectors. The motion vectors of the candidate units are scaled according to the temporal scaling factors to obtain a plurality of scaled motion vectors. Finally, a motion vector predictor for motion prediction of the current unit is selected from the candidate units according to the scaled motion vectors.

Description

Motion prediction method
Technical field
The present invention relates to video processing, and more particularly to motion prediction for video data.
Background technology
The H.264 compression standard can provide outstanding video quality at a much lower bit rate than previous standards by adopting features such as sub-pixel accuracy and multiple referencing. A video compression process can usually be divided into five parts: inter-prediction/intra-prediction, transform/inverse transform, quantization/inverse quantization, loop filtering, and entropy encoding. H.264 is used in a variety of applications, such as Blu-ray Disc, DVB broadcast services, direct-broadcast satellite television services, cable television services, and real-time video conferencing.
A video data stream comprises a series of frames. Each frame is divided into a plurality of coding units (for example, macroblocks or extended macroblocks) for video processing. Each coding unit can be split into quad-tree partitions, and a leaf coding unit is called a prediction unit. A prediction unit can be further split into quad-tree partitions, and each partition is assigned motion parameters. To reduce the cost of transmitting a large number of motion parameters, a motion vector predictor (hereinafter abbreviated as MVP) is calculated for each partition by referencing adjacent coded blocks; because the motion of adjacent blocks tends to have high spatial correlation, coding efficiency can thereby be improved.
Please refer to Fig. 1, which is a schematic diagram of a current (coding) unit 100 and a plurality of neighboring (coding) units A, B, C, and D. In this example, the current unit 100 and the neighboring units A, B, C, and D are of the same size; however, these units need not be the same size. The MVP of the current unit 100 is predicted according to neighboring units A, B, and C, or according to A, B, and D if C is unavailable. When the current unit 100 is a 16×16 block and the motion vector of neighboring unit C exists, the median of the motion vectors of neighboring units A, B, and C is determined to be the MVP of the current unit 100. When the current unit 100 is a 16×16 block and the motion vector of neighboring unit C does not exist, the median of the motion vectors of neighboring units A, B, and D is determined to be the MVP of the current unit 100. When the current unit 100 is the left 8×16 partition of a 16×16 block, the motion vector of neighboring unit A is determined to be the MVP of the current unit 100. When the current unit 100 is the right 8×16 partition of a 16×16 block, the motion vector of neighboring unit C is determined to be the MVP of the current unit 100. When the current unit 100 is the upper 16×8 partition of a 16×16 block, the motion vector of neighboring unit B is determined to be the MVP of the current unit 100. When the current unit 100 is the lower 16×8 partition of a 16×16 block, the motion vector of neighboring unit A is determined to be the MVP of the current unit 100.
When the MVP of the current unit is predicted according to the motion vectors of neighboring units A, B, C, and D, those motion vectors are not suitably scaled in the temporal direction. For instance, the reference frames of neighboring units A, B, and C differ, and the motion vectors of A, B, and C correspond respectively to those reference frames. The temporal distance between each reference frame and the current frame is different. Therefore, before the MVP of the current unit 100 is predicted from the motion vectors of neighboring units A, B, and C, those motion vectors should be temporally scaled according to the temporal distances.
The MVP of the current unit 100 is predicted only according to the motion vectors (hereinafter abbreviated as MVs) of neighboring units A, B, C, and D. If more candidate MVPs are considered and the best one is selected from the candidate MVPs through rate-distortion optimization, the prediction accuracy of the MVP can be further improved. For instance, motion vector competition (MVC) has been proposed to select the best MVP from a predetermined candidate set specified at the sequence level. The predetermined candidate set comprises the H.264 standard predictor (for example, the median MV of the neighboring units), the MV of the collocated unit, and the MVs of the neighboring units, where the position of the collocated unit in the reference frame is the same as the position of the current unit in the current frame. The recommended number of MVPs in the predetermined candidate set is two. According to the motion vector competition method, the predetermined candidate set is fixed at the video sequence level.
Summary of the invention
To solve the above technical problems, the following technical solutions are provided:
An embodiment of the present invention provides a motion prediction method, comprising: determining a plurality of candidate units corresponding to a current unit of a current frame; obtaining the motion vectors of the candidate units; calculating temporal scaling factors of the candidate units according to the temporal distances between the reference frames of the candidate units and the current frame; scaling the motion vectors of the candidate units according to the temporal scaling factors to obtain scaled motion vectors; and selecting, from the candidate units, a motion vector predictor for motion prediction of the current unit according to the scaled motion vectors.
Another embodiment of the present invention provides a motion prediction method, comprising: determining candidate units for motion prediction of a current unit; determining coding units corresponding to the current unit; calculating motion differences between the motion vectors of the candidate units corresponding to each coding unit and the motion vector of that coding unit; summing the motion differences corresponding to each candidate unit according to a series of weights to obtain a weighted sum for each of the candidate units; and selecting, from the candidate units, at least one selected candidate unit for motion prediction of the current unit according to the weighted sums.
In the motion prediction methods described above, the candidate set is adaptively determined according to the characteristics of the current unit, which can improve the performance of motion prediction.
Description of drawings
Fig. 1 is a schematic diagram of a current coding unit and a plurality of neighboring coding units.
Fig. 2 is a block diagram of a video encoder according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the scaling of the motion vectors of two candidate units.
Fig. 4 is a flowchart of a motion prediction method with temporal difference adjustment.
Fig. 5 is a schematic diagram of a plurality of candidate units for motion prediction of a current unit according to an embodiment of the present invention.
Fig. 6A and Fig. 6B are flowcharts of a motion prediction method with adaptively selected candidate units according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a table recording motion differences corresponding to different coding units and candidate units according to an embodiment of the present invention.
Embodiments
Throughout the specification and claims, certain terms are used to refer to particular elements. Those of ordinary skill in the art will appreciate that hardware manufacturers may refer to the same element by different names. This specification and the claims do not distinguish elements by differences in name, but by differences in function. The term "comprising" used in the specification and claims is an open term and should therefore be construed as "including but not limited to." Moreover, the term "coupled" herein encompasses any direct or indirect means of electrical connection. Thus, if a first device is described as coupled to a second device, the first device may be directly electrically connected to the second device, or indirectly electrically connected to the second device through another device or connection means.
Please refer to Fig. 2, which is a block diagram of a video encoder 200 according to an embodiment of the present invention. In one embodiment, the video encoder 200 comprises a motion prediction module 202, a subtraction module 204, a transform module 206, a quantization module 208, and an entropy coding module 210. The video encoder 200 receives a video input and generates a bitstream as output. The motion prediction module 202 performs motion prediction on the video input to generate prediction samples and prediction information. The subtraction module 204 then subtracts the prediction samples from the video input to obtain residues, thereby reducing the amount of video data from that of the video input to that of the residues. The residues are then sequentially sent to the transform module 206 and the quantization module 208. The transform module 206 performs a discrete cosine transform (DCT) on the residues to obtain transformed residues. The quantization module 208 then quantizes the transformed residues to obtain quantized residues. The entropy coding module 210 then performs entropy coding on the quantized residues and the prediction information to obtain the bitstream output.
The motion prediction module 202 predicts the MVP of a current unit of the current frame according to the motion vectors of a plurality of candidate units. In one embodiment, the candidate units are neighboring units adjacent to the current unit. Before the motion prediction module 202 predicts the MVP of the current unit, the temporal distances between the reference frames of the candidate units and the current frame are calculated, and the motion vectors of the candidate units are scaled according to these temporal distances. Please refer to Fig. 3, which is a schematic diagram of the scaling of the motion vectors of two candidate units 310 and 320. The current frame k comprises two candidate units for motion prediction of the current unit 300: a first candidate unit 310 and a second candidate unit 320. The first candidate unit 310 has a motion vector MV1 corresponding to reference frame i, and a first temporal difference Dik between reference frame i and current frame k is calculated. The second candidate unit 320 has a motion vector MV2 corresponding to reference frame l, and a second temporal difference Dlk between reference frame l and current frame k is calculated.
Object time distance D between target search frame j and the present frame k then JkCalculated.The reference frame of target search frame j for selecting.Then by very first time distance D IkDivide the object time distance D Jk, very first time zoom factor is calculated, and the motion vector MV of first candidate unit 310 1Be multiplied by very first time zoom factor (D Jk/ D Ik) to obtain motion vector MV corresponding to the convergent-divergent of first candidate unit 310 1'.Then by the second time gap D LkDivide the object time distance D Jk, the second time-scaling factor is calculated, and the motion vector MV of second candidate unit 320 2Be multiplied by the second time-scaling factor (D Jk/ D Lk) to obtain motion vector MV corresponding to the convergent-divergent of second candidate unit 320 2'.Like this, the motion vector MV of convergent-divergent 1' and MV 2' all measured corresponding to target search frame j, so the time interval deviation factor is from the motion vector MV of convergent-divergent 1' and MV 2' remove.Motion prediction module 202 can be according to the motion vector MV of the convergent-divergent of candidate unit 310 and 320 then 1' and MV 2' prediction present frame 300 MVP.
Please refer to Fig. 4, which is a flowchart of a motion prediction method 400 with temporal difference adjustment. First, a plurality of candidate units of a current unit of a current frame are determined (step 402). The candidate units and the current unit are blocks of the same size or of different sizes, and each of these units may be a coding unit, a prediction unit, or a prediction unit partition. In one embodiment, the candidate units comprise a left unit A on the left side of the current unit, an upper unit B above the current unit, an upper-right unit C on the upper-right of the current unit, and an upper-left unit D on the upper-left of the current unit. A plurality of motion vectors of the candidate units are then obtained (step 404). A plurality of temporal scaling factors of the candidate units are then calculated according to the temporal distances between the reference frames of the candidate units and the current frame (step 406). In one embodiment, the temporal distances between the reference frames of the candidate units and the current frame are first calculated, the target temporal distance between a target searching frame and the current frame is also calculated, and the target temporal distance is then divided by the respective temporal distances corresponding to the candidate units to obtain the temporal scaling factors corresponding to the candidate units, as shown in Fig. 3.
Then, the motion vectors of the candidate units are scaled according to the temporal scaling factors to obtain a plurality of scaled motion vectors (step 408). In one embodiment, the motion vectors of the candidate units are respectively multiplied by the temporal scaling factors of the candidate units to obtain the scaled motion vectors, as shown in Fig. 3. A motion vector predictor of the current unit is then selected from the candidate units according to the scaled motion vectors (step 410). In one embodiment, a median scaled motion vector is calculated from the scaled motion vectors (for example, after sorting the scaled motion vectors), and the median scaled motion vector is then selected from the scaled motion vectors as the MVP of the current unit.
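Step 410 can be sketched as a component-wise median over the scaled vectors, in the spirit of the H.264 median predictor. The patent text does not pin down the exact median definition, so the component-wise interpretation and the names below are assumptions.

```python
import statistics

def median_scaled_mvp(scaled_mvs):
    # Component-wise median over the scaled motion vectors; with three
    # candidates this matches the familiar H.264-style median predictor.
    xs = [mv[0] for mv in scaled_mvs]
    ys = [mv[1] for mv in scaled_mvs]
    return (statistics.median(xs), statistics.median(ys))

scaled = [(8.0, -4.0), (2.0, 1.0), (4.0, 0.0)]  # scaled MVs of units A, B, C
print(median_scaled_mvp(scaled))                # (4.0, 0.0)
```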
When the motion prediction module 202 determines the MVP of the current unit according to the motion vector competition method, typically only the motion vectors of two candidate units determined at the sequence level are included in the candidate set used to determine the MVP of the current unit. Moreover, the candidate set is not adaptively determined according to the characteristics of the current unit. If the candidate set were adaptively determined according to the characteristics of the current unit, the performance of motion prediction could be improved.
Please refer to Fig. 5, which is a schematic diagram of a plurality of candidate units for motion prediction of a current unit 512 according to an embodiment of the present invention. In this embodiment, the current unit 512 and the candidate units are blocks of different sizes; for instance, the current unit 512 is a 16×16 block and the candidate units are 4×4 blocks. In another embodiment, the sizes of the current unit and the candidate units may be the same or different, and may be 4×4, 8×8, 8×16, 16×8, 16×16, 32×32, or 64×64. In this embodiment, the motion vectors of four candidate units A, B, C, and D in the current frame 502 can be used as candidates for determining the MVP of the current unit 512. In addition, the position of a collocated unit 514 in the reference frame 504 is the same as the position of the current unit 512 in the current frame 502, and the motion vectors of a plurality of candidate units a–j adjacent to or located within the collocated unit 514 can also be used as candidates for determining the MVP of the current unit 512.
The candidate unit A in the current frame 502 is a partition on the left side of the current unit 512, the candidate unit B in the current frame 502 is a partition above the current unit 512, the candidate unit C in the current frame 502 is a partition on the upper-right of the current unit 512, and the candidate unit D in the current frame 502 is a partition on the upper-left of the current unit 512. The candidate unit a in the reference frame 504 is a partition on the left side of the collocated unit 514, the candidate unit b in the reference frame 504 is a partition above the collocated unit 514, the candidate unit c in the reference frame 504 is a partition on the upper-right of the collocated unit 514, and the candidate unit d in the reference frame 504 is a partition on the upper-left of the collocated unit 514. In addition, the candidate unit e in the reference frame 504 is a partition inside the collocated unit 514, the candidate units f and g in the reference frame 504 are partitions on the right side of the collocated unit 514, the candidate unit h in the reference frame 504 is a partition on the lower-left of the collocated unit 514, the candidate unit i in the reference frame 504 is a partition below the collocated unit 514, and the candidate unit j in the reference frame 504 is a partition on the lower-right of the collocated unit 514. In one embodiment, the candidate set for determining the MVP of the current unit 512 further comprises calculated motion vectors, for instance, a motion vector equal to the median of the motion vectors of candidate units A, B, and C, a motion vector equal to the median of the motion vectors of candidate units A, B, and D, and a scaled MVP obtained by a method similar to that shown in Fig. 4.
After the plurality of motion vectors corresponding to the current unit 512 have been determined to be included in the candidate set, at least one motion vector is adaptively selected from the candidate set for motion prediction of the current unit 512. Please refer to Fig. 6A and Fig. 6B, which are flowcharts of a motion prediction method 600 with adaptively selected candidate units according to an embodiment of the present invention. A plurality of coding units corresponding to the current unit 512 are determined (step 602). A plurality of candidate units corresponding to the current unit 512 are determined (step 603). The candidate set for the current unit 512 is selected from the plurality of motion vectors corresponding to the current unit 512. The motion vectors may comprise motion vectors of coded partitions/blocks in the same frame or combinations thereof, calculated motion vectors, and motion vectors in the reference frame. In one embodiment, the candidate set corresponding to the current unit 512 shown in Fig. 5 comprises the motion vectors of units A, B, C, and D in the current frame 502 and the motion vector of unit e in the reference frame 504. The candidate set may be determined according to one or more previous statistics, neighboring information, the shape of the current unit, and the position of the current unit. For instance, the plurality of motion vectors corresponding to the current unit 512 are ranked according to neighboring information, and the first three motion vectors are selected to be included in the candidate set. The final MVP can then be selected from the candidate set by the motion vector competition method or another selection method. In some embodiments, the plurality of motion vectors are ranked according to a selection order, and the selection order is determined by weighted sums of motion differences. A motion difference is the difference between a motion vector predictor and the corresponding decoded motion vector (i.e., the actual motion vector) of a candidate unit. The weights may be determined by the shape and position of the current unit, or by the shape and position of the neighboring blocks.
Please refer to Fig. 7, which is a schematic diagram of a table recording motion differences corresponding to different coding units and candidate units according to an embodiment of the present invention. For instance, assume that unit A is selected as the target coding unit. A motion difference D_{A,A} between the motion vector of unit A and the motion vector of a candidate unit A_A located on the left side of unit A is calculated. A motion difference D_{B,A} between the motion vector of unit A and the motion vector of a candidate unit B_A located above unit A is also calculated. A motion difference D_{C,A} between the motion vector of unit A and the motion vector of a candidate unit C_A located on the upper-right of unit A is also calculated. A motion difference D_{D,A} between the motion vector of unit A and the motion vector of a candidate unit D_A located on the upper-left of unit A is also calculated. A motion difference D_{a,A} between the motion vector of unit A and the motion vector of a candidate unit a_A located on the left side of the collocated unit corresponding to unit A is also calculated. Similarly, the motion differences D_{b,A}, ..., D_{j,A} corresponding to coding unit A are also calculated. The calculated motion differences D_{A,A}, D_{B,A}, D_{C,A}, D_{D,A}, D_{a,A}, D_{b,A}, ..., D_{j,A} corresponding to coding unit A are then recorded in the table shown in Fig. 7. A target coding unit B is then selected from the coding units (step 604), the motion differences D_{A,B}, D_{B,B}, D_{C,B}, D_{D,B}, D_{a,B}, D_{b,B}, ..., D_{j,B} between the motion vector of the target coding unit B and the motion vectors of the candidate units corresponding to the target coding unit B are calculated (step 606) and recorded in the table shown in Fig. 7. Steps 604 and 606 are repeated until all of the coding units A, B, C, D, and e have been selected as the target coding unit and all of the motion differences corresponding to the coding units A, B, C, D, and e have been calculated (step 608).
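The Fig. 7 table can be built with a nested loop over the coding units and the candidate positions around each of them. The patent does not specify the motion-difference metric, so the L1 distance below, as well as all vectors and dictionary names, are illustrative assumptions (only coding units A and B and three candidate positions are shown for brevity).

```python
def motion_difference(mv_a, mv_b):
    # Motion difference between two motion vectors; the L1 (sum of
    # absolute differences) distance is assumed here.
    return abs(mv_a[0] - mv_b[0]) + abs(mv_a[1] - mv_b[1])

# Hypothetical decoded motion vectors of coding units A and B, and of the
# candidate units at each relative position around them.
coding_mvs = {"A": (4, 2), "B": (0, 1)}
candidate_mvs = {
    "A": {"A": (4, 2), "B": (2, 2), "C": (5, 0)},
    "B": {"A": (1, 1), "B": (0, 0), "C": (2, 3)},
}

# Build the Fig. 7-style table: table[candidate][coding_unit] = D_{candidate,coding}.
table = {}
for cu, cu_mv in coding_mvs.items():
    for cand, cand_mv in candidate_mvs[cu].items():
        table.setdefault(cand, {})[cu] = motion_difference(cu_mv, cand_mv)

print(table["A"]["A"])  # D_{A,A}
print(table["C"]["B"])  # D_{C,B}
```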
After the motion differences corresponding to coding units A, B, C, D, and e have all been calculated, the selection order of the plurality of motion vectors is determined by weighted sums of the motion differences, and a target candidate unit is selected from the candidate units (step 610). For instance, if candidate unit A is selected as the target candidate unit, the motion differences D_{A,A}, D_{A,B}, D_{A,C}, D_{A,D}, and D_{A,e} corresponding to the target candidate unit A are summed according to a series of weights W_A, W_B, W_C, W_D, and W_e to obtain a weighted sum S_A = [(D_{A,A} × W_A) + (D_{A,B} × W_B) + (D_{A,C} × W_C) + (D_{A,D} × W_D) + (D_{A,e} × W_e)] corresponding to the target candidate unit A (step 612), where the weights W_A, W_B, W_C, W_D, and W_e respectively correspond to the coding units A, B, C, D, and e. The other candidate units B, C, D, e, ..., i, and j are then sequentially selected as the target candidate unit, and the weighted sums S_B, S_C, S_D, S_e, ..., S_i, and S_j corresponding to candidate units B, C, D, e, ..., i, and j are calculated in order.
When all of the candidate units have been selected as the target candidate unit and the weighted sums S_A, S_B, S_C, S_D, S_e, ..., S_i, and S_j corresponding to all of the candidate units A, B, C, D, e, ..., i, and j have been calculated (step 614), at least one selected candidate unit for motion prediction of the current unit is selected from the candidate units A, B, C, D, e, ..., i, and j according to the weighted sums S_A, S_B, S_C, S_D, S_e, ..., S_i, and S_j (step 616). In one embodiment, the weighted sums S_A, S_B, S_C, S_D, S_e, ..., S_i, and S_j are sorted by size, and the candidate unit corresponding to the best weighted sum (which, depending on the weighting method, may be the smallest or the largest weighted sum) is determined to be the selected candidate unit. Finally, the motion vector of the current unit 512 is predicted according to the motion vector of the selected candidate unit.
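Steps 610–616 reduce each candidate's column of motion differences to a weighted sum and pick the best one. A minimal sketch follows, assuming the minimum-sum convention and hypothetical weight values; the table layout matches the Fig. 7-style mapping of candidate → per-coding-unit differences.

```python
def select_candidate(table, weights):
    # Weighted sum S_x = sum over coding units u of D_{x,u} * W_u; the
    # candidate with the smallest weighted sum is selected here
    # (the minimum-weighted-sum variant of step 616).
    sums = {
        cand: sum(diffs[u] * weights[u] for u in diffs)
        for cand, diffs in table.items()
    }
    best = min(sums, key=sums.get)
    return best, sums

# Hypothetical motion-difference table and weights for coding units A and B.
table = {
    "A": {"A": 0, "B": 2},   # D_{A,A}, D_{A,B}
    "B": {"A": 2, "B": 1},
    "C": {"A": 3, "B": 4},
}
weights = {"A": 0.5, "B": 0.5}  # W_A, W_B
best, sums = select_candidate(table, weights)
print(best, sums[best])  # candidate "A" with weighted sum 1.0
```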
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the present invention. Any person skilled in the art may make changes without departing from the scope of the present invention; therefore, the protection scope of the present invention shall be defined by the claims.

Claims (18)

1. A motion prediction method, comprising:
determining a plurality of candidate units corresponding to a current unit of a current frame;
obtaining a plurality of motion vectors of the candidate units;
calculating a plurality of temporal scaling factors of the candidate units according to a plurality of temporal distances between a plurality of reference frames of the candidate units and the current frame;
scaling the motion vectors of the candidate units according to the temporal scaling factors to obtain a plurality of scaled motion vectors; and
selecting, from the candidate units, a motion vector predictor for motion prediction of the current unit according to the scaled motion vectors.
2. The motion prediction method as claimed in claim 1, further comprising:
predicting the motion vector of the current unit according to the motion vector of the motion vector predictor.
3. The motion prediction method as claimed in claim 1, wherein the calculating of the temporal scaling factors of the candidate units further comprises:
calculating the temporal distances between the reference frames of the motion vectors of the candidate units and the current frame;
calculating a target temporal distance between a target searching frame and the current frame; and
dividing the target temporal distance by the respective temporal distances to obtain the temporal scaling factors.
4. The motion prediction method as claimed in claim 3, wherein the scaling of the motion vectors of the candidate units further comprises:
multiplying the motion vectors of the candidate units respectively by the temporal scaling factors of the candidate units to obtain the scaled motion vectors of the candidate units.
5. The motion prediction method as claimed in claim 1, wherein the selecting of the motion vector predictor further comprises:
calculating a median scaled motion vector from the scaled motion vectors; and
determining the motion vector of the candidate unit corresponding to the median scaled motion vector to be the motion vector predictor.
6. The motion prediction method as claimed in claim 1, wherein the candidate units comprise a left unit on the left side of the current unit, an upper unit above the current unit, an upper-right unit on the upper-right of the current unit, and an upper-left unit on the upper-left of the current unit.
7. The motion prediction method as claimed in claim 1, wherein the current unit and the candidate units are blocks or macroblocks.
8. motion forecast method comprises:
Decision is used for a plurality of candidate units of the motion prediction of active cell;
Decision is corresponding to a plurality of coding units of this active cell;
Calculating corresponding in these a plurality of coding units each a plurality of motion vectors of these a plurality of candidate units and a plurality of motion differences between each the motion vector in this a plurality of coding units;
According to a series of a plurality of weights, will be corresponding to each this a plurality of motion difference addition in these a plurality of candidate units, to obtain a plurality of each weighted sums that correspond respectively in these a plurality of candidate units; And
According to these a plurality of weighted sums, select at least one selected candidate unit that is used for the motion prediction of this active cell from these a plurality of candidate units.
9. motion forecast method as claimed in claim 8 is characterized in that, this motion forecast method more comprises:
According to this motion vector of this selected candidate unit, predict the motion vector of this active cell.
10. motion forecast method as claimed in claim 8 is characterized in that, the calculation procedure of these a plurality of motion differences more comprises:
From these a plurality of select targets of coding unit coding unit;
Calculating is corresponding to these a plurality of motion differences between this motion vector of these a plurality of motion vectors of these a plurality of candidate units of this target coding unit and this target coding unit; And
Repetition is corresponding to the selection of this target of this target coding unit coding unit and the calculation procedure of these a plurality of motion differences, up to whole this target coding units that have been selected as of these a plurality of coding units.
11. The motion prediction method as claimed in claim 8, wherein summing the motion differences further comprises:
selecting a target candidate unit from the candidate units;
summing the motion differences corresponding to the target candidate unit according to the series of weights, to obtain the weighted sum corresponding to the target candidate unit; and
repeating the selecting of the target candidate unit and the summing of the motion differences until all of the candidate units have been selected as the target candidate unit.
12. The motion prediction method as claimed in claim 11, wherein the weights respectively correspond to the coding units.
13. The motion prediction method as claimed in claim 8, wherein selecting the selected candidate unit further comprises:
sorting the weighted sums; and
selecting the candidate unit corresponding to the best weighted sum as the selected candidate unit.
14. The motion prediction method as claimed in claim 8, wherein the coding units comprise a left unit on the left side of the current unit, an upper unit above the current unit, an upper-right unit at the upper right of the current unit, an upper-left unit at the upper left of the current unit, and a collocated unit, wherein the location of the collocated unit in a reference frame is identical to the location of the current unit in a current frame.
15. The motion prediction method as claimed in claim 8, wherein the candidate units comprise a left unit on the left side of the current unit, an upper unit above the current unit, an upper-right unit at the upper right of the current unit, and an upper-left unit at the upper left of the current unit.
16. The motion prediction method as claimed in claim 15, wherein the candidate units further comprise a first median unit and a second median unit, wherein the motion vector of the first median unit equals the median of the motion vectors of the left unit, the upper unit, and the upper-right unit, and the motion vector of the second median unit equals the median of the motion vectors of the left unit, the upper unit, and the upper-left unit.
17. The motion prediction method as claimed in claim 15, wherein the location of a collocated unit in a reference frame is identical to the location of the current unit in a current frame, and the candidate units further comprise a left collocated unit on the left side of the collocated unit, an upper collocated unit above the collocated unit, an upper-right collocated unit at the upper right of the collocated unit, an upper-left collocated unit at the upper left of the collocated unit, the collocated unit, a right collocated unit on the right of the collocated unit, a lower-left collocated unit at the lower left of the collocated unit, a lower collocated unit below the collocated unit, and a lower-right collocated unit at the lower right of the collocated unit.
18. The motion prediction method as claimed in claim 8, wherein the current unit and the candidate units are blocks or macroblocks.
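The candidate-selection scheme of claims 8 through 13 can be sketched in a few lines of Python. This is an illustrative reading only: the unit names, the choice of sum-of-absolute-differences as the motion-difference metric, the weight values, and the interpretation of the "best" weighted sum as the smallest one are assumptions for illustration, not fixed by the claims.

```python
# Hypothetical sketch of claims 8-13: rank candidate units by how well
# each candidate's motion vectors agree with the motion vectors of the
# already-coded neighboring units, using a per-coding-unit weight.

def motion_difference(mv_a, mv_b):
    """Assumed metric: sum of absolute differences of (x, y) components."""
    return abs(mv_a[0] - mv_b[0]) + abs(mv_a[1] - mv_b[1])

def select_candidate(candidate_mvs, coding_unit_mvs, weights):
    """candidate_mvs:   {candidate: {coding_unit: mv}} - the candidate's
                        motion vector corresponding to each coding unit.
       coding_unit_mvs: {coding_unit: mv} - the motion vector of each
                        coding unit (claim 8).
       weights:         {coding_unit: w} - one weight per coding unit
                        (claim 12).
       Returns (best candidate, all weighted sums)."""
    weighted_sums = {}
    for cand, per_cu_mvs in candidate_mvs.items():
        # Weighted sum of motion differences for this candidate (claim 11).
        weighted_sums[cand] = sum(
            weights[cu] * motion_difference(per_cu_mvs[cu], cu_mv)
            for cu, cu_mv in coding_unit_mvs.items()
        )
    # Sort the weighted sums and pick the best one (claim 13); "best" is
    # assumed here to mean smallest, i.e. the most consistent candidate.
    return min(weighted_sums, key=weighted_sums.get), weighted_sums
```

A candidate whose motion vectors closely track those of the left, upper, upper-right, and upper-left coding units yields a small weighted sum and is therefore chosen as the motion vector predictor for the current unit.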
CN 201110020283 2010-01-18 2011-01-18 Motion prediction method Pending CN102131094A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210272628.8A CN102833540B (en) 2010-01-18 2011-01-18 Motion prediction method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US29581010P 2010-01-18 2010-01-18
US61/295,810 2010-01-18
US32673110P 2010-04-22 2010-04-22
US61/326,731 2010-04-22
US12/957,644 2010-12-01
US12/957,644 US9036692B2 (en) 2010-01-18 2010-12-01 Motion prediction method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201210272628.8A Division CN102833540B (en) 2010-01-18 2011-01-18 Motion prediction method

Publications (1)

Publication Number Publication Date
CN102131094A true CN102131094A (en) 2011-07-20

Family

ID=44268965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110020283 Pending CN102131094A (en) 2010-01-18 2011-01-18 Motion prediction method

Country Status (1)

Country Link
CN (1) CN102131094A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014106435A1 (en) * 2013-01-07 2014-07-10 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
CN104838656A (en) * 2012-07-09 2015-08-12 高通股份有限公司 Temporal motion vector prediction in video coding extensions
CN106851273A (en) * 2017-02-10 2017-06-13 北京奇艺世纪科技有限公司 Motion vector encoding method and device
CN107483956A (en) * 2011-11-07 2017-12-15 英孚布瑞智有限私人贸易公司 Method of decoding video data
CN107483928A (en) * 2011-09-09 2017-12-15 株式会社Kt Method for decoding video signal
CN107566835A (en) * 2011-12-23 2018-01-09 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
CN107948656A (en) * 2011-10-28 2018-04-20 太阳专利托管公司 Image decoding method and image decoding apparatus
CN108184125A (en) * 2011-11-10 2018-06-19 索尼公司 Image processing equipment and method
CN108696754A (en) * 2017-04-06 2018-10-23 联发科技股份有限公司 The method and apparatus of motion vector prediction
CN113678455A (en) * 2019-03-12 2021-11-19 Lg电子株式会社 Video or image coding for deriving weight index information for bi-prediction
US11356696B2 (en) 2011-10-28 2022-06-07 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, and image decoding apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1523896A (en) * 2003-09-12 2004-08-25 浙江大学 Prediction method and apparatus for motion vector in video encoding/decoding
US20040223548A1 (en) * 2003-05-07 2004-11-11 Ntt Docomo, Inc. Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
US20080285653A1 (en) * 2007-05-14 2008-11-20 Himax Technologies Limited Motion estimation method
CN101605256A (en) * 2008-06-12 2009-12-16 华为技术有限公司 Video encoding and decoding method and device

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20040223548A1 (en) * 2003-05-07 2004-11-11 Ntt Docomo, Inc. Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
CN1592421A (en) * 2003-05-07 2005-03-09 株式会社Ntt都科摩 Moving image encoder, moving image decoder, moving image encoding method, moving image decoding method
CN1523896A (en) * 2003-09-12 2004-08-25 浙江大学 Prediction method and apparatus for motion vector in video encoding/decoding
US20080285653A1 (en) * 2007-05-14 2008-11-20 Himax Technologies Limited Motion estimation method
CN101605256A (en) * 2008-06-12 2009-12-16 华为技术有限公司 Video encoding and decoding method and device

Cited By (52)

Publication number Priority date Publication date Assignee Title
US10523967B2 (en) 2011-09-09 2019-12-31 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method
US11089333B2 (en) 2011-09-09 2021-08-10 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method
CN107580221B (en) * 2011-09-09 2020-12-08 株式会社Kt Method for decoding video signal
CN107635140B (en) * 2011-09-09 2020-12-08 株式会社Kt Method for decoding video signal
CN107483928A (en) * 2011-09-09 2017-12-15 株式会社Kt Method for decoding video signal
CN107580219B (en) * 2011-09-09 2020-12-08 株式会社Kt Method for decoding video signal
CN107580220A (en) * 2011-09-09 2018-01-12 株式会社Kt Method for decoding video signal
CN107580219A (en) * 2011-09-09 2018-01-12 株式会社Kt Method for decoding video signal
CN107580221A (en) * 2011-09-09 2018-01-12 株式会社Kt Method for decoding video signal
CN107580218A (en) * 2011-09-09 2018-01-12 株式会社Kt Method for decoding video signal
CN107592527A (en) * 2011-09-09 2018-01-16 株式会社Kt Method for decoding video signal
CN107592528A (en) * 2011-09-09 2018-01-16 株式会社Kt Method for decoding video signal
CN107592529A (en) * 2011-09-09 2018-01-16 株式会社Kt Method for decoding video signal
CN107635140A (en) * 2011-09-09 2018-01-26 株式会社Kt Method for decoding video signal
US10805639B2 (en) 2011-09-09 2020-10-13 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method
CN107580220B (en) * 2011-09-09 2020-06-19 株式会社Kt Method for decoding video signal
CN107483928B (en) * 2011-09-09 2020-05-12 株式会社Kt Method for decoding video signal
CN107592527B (en) * 2011-09-09 2020-05-12 株式会社Kt Method for decoding video signal
CN107592528B (en) * 2011-09-09 2020-05-12 株式会社Kt Method for decoding video signal
CN107592529B (en) * 2011-09-09 2020-05-12 株式会社Kt Method for decoding video signal
CN107580218B (en) * 2011-09-09 2020-05-12 株式会社Kt Method for decoding video signal
CN107948656B (en) * 2011-10-28 2021-06-01 太阳专利托管公司 Image decoding method and image decoding device
US11902568B2 (en) 2011-10-28 2024-02-13 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, and image decoding apparatus
US11831907B2 (en) 2011-10-28 2023-11-28 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, and image decoding apparatus
US11622128B2 (en) 2011-10-28 2023-04-04 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, and image decoding apparatus
US11356696B2 (en) 2011-10-28 2022-06-07 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, and image decoding apparatus
CN107948656A (en) * 2011-10-28 2018-04-20 太阳专利托管公司 Image decoding method and image decoding apparatus
US11115677B2 (en) 2011-10-28 2021-09-07 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, and image decoding apparatus
CN107483956A (en) * 2011-11-07 2017-12-15 英孚布瑞智有限私人贸易公司 Method of decoding video data
CN107483956B (en) * 2011-11-07 2020-04-21 英孚布瑞智有限私人贸易公司 Method for decoding video data
CN108184125A (en) * 2011-11-10 2018-06-19 索尼公司 Image processing equipment and method
CN108184125B (en) * 2011-11-10 2022-03-08 索尼公司 Image processing apparatus and method
US11284067B2 (en) 2011-12-23 2022-03-22 Electronics And Telecommunications Research Institute Method and apparatus for setting reference picture index of temporal merging candidate
US11843769B2 (en) 2011-12-23 2023-12-12 Electronics And Telecommunications Research Institute Method and apparatus for setting reference picture index of temporal merging candidate
US10848757B2 (en) 2011-12-23 2020-11-24 Electronics And Telecommunications Research Institute Method and apparatus for setting reference picture index of temporal merging candidate
CN107566835A (en) * 2011-12-23 2018-01-09 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
CN107682704A (en) * 2011-12-23 2018-02-09 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
CN107659813A (en) * 2011-12-23 2018-02-02 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
CN107682704B (en) * 2011-12-23 2020-04-17 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
CN107659813B (en) * 2011-12-23 2020-04-17 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
US11843768B2 (en) 2011-12-23 2023-12-12 Electronics And Telecommunications Research Institute Method and apparatus for setting reference picture index of temporal merging candidate
CN107566835B (en) * 2011-12-23 2020-02-28 韩国电子通信研究院 Image decoding method, image encoding method, and recording medium
CN104838656A (en) * 2012-07-09 2015-08-12 高通股份有限公司 Temporal motion vector prediction in video coding extensions
CN104838656B (en) * 2012-07-09 2018-04-17 高通股份有限公司 Method of decoding encoded video data, video decoder, and computer-readable storage medium
US9967586B2 (en) 2013-01-07 2018-05-08 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
WO2014106435A1 (en) * 2013-01-07 2014-07-10 Mediatek Inc. Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding
CN106851273A (en) * 2017-02-10 2017-06-13 北京奇艺世纪科技有限公司 Motion vector encoding method and device
CN106851273B (en) * 2017-02-10 2019-08-06 北京奇艺世纪科技有限公司 Motion vector encoding method and device
CN108696754A (en) * 2017-04-06 2018-10-23 联发科技股份有限公司 Method and apparatus of motion vector prediction
CN113678455B (en) * 2019-03-12 2024-01-16 Lg电子株式会社 Video or image coding for deriving weight index information for bi-prediction
US11876960B2 (en) 2019-03-12 2024-01-16 Lg Electronics Inc. Video or image coding for inducing weight index information for bi-prediction
CN113678455A (en) * 2019-03-12 2021-11-19 Lg电子株式会社 Video or image coding for deriving weight index information for bi-prediction

Similar Documents

Publication Publication Date Title
CN102833540B (en) Motion prediction method
CN102131094A (en) Motion prediction method
EP2534841B1 (en) Motion vector prediction method
RU2699404C1 (en) Motion vector predictive encoding method, predictive encoding device and program, and motion vector predictive decoding method, predictive decoding device and program
CN102362498B (en) Apparatus and method for selectively encoding/decoding syntax elements, and apparatus and method for encoding/decoding images using the same
CN102792688B (en) Data compression for video
CN100566426C (en) Method and apparatus for video encoding and decoding
CN100551073C (en) Decoding method and device, pixel interpolation processing method and device
CN102223542A (en) Method for performing localized multi-hypothesis prediction during video coding of a coding unit, and associated apparatus
US20180115787A1 (en) Method for encoding and decoding video signal, and apparatus therefor
CN102396227B (en) Image processing device and method
CN102726043A (en) Hybrid video coding
CN102223532B (en) Method for performing hybrid multi-hypothesis prediction during video coding of a coding unit, and associated apparatus
WO2008082158A1 (en) Method and apparatus for estimating motion vector using plurality of motion vector predictors, encoder, decoder, and decoding method
CN103430545A (en) Content adaptive motion compensation filtering for high efficiency video coding
CN102037732A (en) Single pass adaptive interpolation filter
WO2013148002A2 (en) Context based video encoding and decoding
KR20110081304A (en) Image prediction encoding device, image prediction decoding device, image prediction encoding method, image prediction decoding method, image prediction encoding program, and image prediction decoding program
Afonso et al. Low cost and high throughput FME interpolation for the HEVC emerging video coding standard
CN111314698A (en) Image coding processing method and device
US20110090963A1 (en) Method and apparatus for zoom motion estimation
Chiang et al. A multi-pass coding mode search framework for AV1 encoder optimization
US10051268B2 (en) Method for encoding, decoding video signal and device therefor
Hsu et al. Selective Block Size Decision Algorithm for Intra Prediction in Video Coding and Learning Website
Zhang et al. Comprehensive scheme for subpixel variable block-size motion estimation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110720