CN101257625B - Method for indexing position in video decoder and video decoder - Google Patents


Info

Publication number
CN101257625B
CN101257625B (application CN200810015490A)
Authority
CN
China
Prior art keywords
module
BlockDistance
block
decoding
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200810015490
Other languages
Chinese (zh)
Other versions
CN101257625A (en)
Inventor
刘韶
刘微
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Co Ltd filed Critical Hisense Group Co Ltd
Priority to CN200810015490A
Publication of CN101257625A
Application granted
Publication of CN101257625B

Abstract

The present invention discloses a position-indexing method for video coding/decoding, comprising the following steps: A. computing and storing all possible results of at least some of the repeated computations; B. creating an address controller for locating and reading the results stored in step A; C. during video coding/decoding, when one of those repeated computations is reached, invoking the address controller to read the corresponding stored result. Operations that formerly consumed considerable computation are computed in advance and stored as a look-up table, so that the actual computation — in particular the actual motion-vector prediction process — only needs to look up the previously stored results to obtain the corresponding values, which saves time and improves the real-time performance of the system. Correspondingly, the present invention also provides a video decoder.

Description

Method for position indexing in video coding/decoding, and video decoder
Technical field
The present invention relates to the field of video coding and decoding, and in particular to a position-indexing method for video coding/decoding and to a video decoder.
Background technology
In the video coding/decoding field, large numbers of repeated mathematical computations are routinely performed. These computations are tedious and repetitive, the computational load is heavy, and they occupy considerable time, so the real-time performance of the system suffers. The AVS video coding standard is taken as an example below.
The coding/decoding algorithms of the AVS standard use three frame types: I, P and B frames. P and B frames exist to improve the compression ratio; both employ motion prediction, which in turn includes the prediction of motion vectors: the motion vector of the current block is predicted from the motion vectors of the surrounding blocks. As shown in Figure 1, current block E has a left block A, an upper-left block D, an upper block B and an upper-right block C, and the motion vector of E is predicted from the original motion vectors of blocks A, B, C and D.
Motion-vector prediction operates block by block, according to the macroblock type. In AVS, the most important and most complicated case is the following: when at least two of blocks A, B and C are available and current block E is not in 16×8 or 8×16 mode, predicting the motion vector of E requires the following scaling computations:
MVA_x = Sign(mvA_x) × ((Abs(mvA_x) × BlockDistanceE × (512/BlockDistanceA) + 256) >> 9)   (1a)
MVA_y = Sign(mvA_y) × ((Abs(mvA_y) × BlockDistanceE × (512/BlockDistanceA) + 256) >> 9)   (1b)
MVB_x = Sign(mvB_x) × ((Abs(mvB_x) × BlockDistanceE × (512/BlockDistanceB) + 256) >> 9)   (1c)
MVB_y = Sign(mvB_y) × ((Abs(mvB_y) × BlockDistanceE × (512/BlockDistanceB) + 256) >> 9)   (1d)
MVC_x = Sign(mvC_x) × ((Abs(mvC_x) × BlockDistanceE × (512/BlockDistanceC) + 256) >> 9)   (1e)
MVC_y = Sign(mvC_y) × ((Abs(mvC_y) × BlockDistanceE × (512/BlockDistanceC) + 256) >> 9)   (1f)
Here MVΦ_Ω denotes the motion vector predictor of current block E derived from block Φ in direction Ω, where Φ takes the values A, B or C and identifies the neighbouring block, and Ω takes the values x or y, denoting the horizontal or vertical coordinate. Sign() returns the sign of the bracketed expression and Abs() returns its absolute value; BlockDistanceΦ denotes the distance from block Φ to its reference block; ">>" denotes a right shift by the number of bits that follows; mvΦ_Ω denotes the original motion vector of block Φ in direction Ω; "/" denotes integer (truncating) division.
BlockDistanceΦ is computed as follows:
If the reference block of block Φ precedes block Φ in display order, BlockDistance equals the DistanceIndex of block Φ minus the DistanceIndex of the reference block, plus 512, modulo 512;
If the reference block of block Φ follows block Φ in display order, BlockDistance equals the DistanceIndex of the reference block minus the DistanceIndex of block Φ, plus 512, modulo 512.
Here DistanceIndex denotes the block distance index, defined as follows:
If all pixels of the block belong to the second field (in display order) of an interlaced picture, or all belong to the bottom field of a progressive picture, DistanceIndex equals picture_distance × 2 + 1; otherwise DistanceIndex equals picture_distance × 2, where picture_distance denotes the picture distance.
The computation of BlockDistanceΦ described above can be summarized as the following expression:
BlockDistance = (|DistanceIndex_Cur − DistanceIndex_Ref| + 512) % 512   (1g)
where DistanceIndex_Cur denotes the block distance index of the current block Φ, DistanceIndex_Ref denotes the block distance index of the reference block of block Φ, and "%" denotes the modulo operation.
As the above description shows, motion-vector prediction involves complex multiplication, division and modulo operations in (1a)–(1g), and this computation is very time-consuming; obtaining the distance indices also takes a certain amount of time. Moreover, the blocks that require this computation usually occupy a large fraction of a frame. The problems of tedious repeated computation, heavy computational load, long occupation time and poor real-time performance are therefore particularly pronounced.
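As an illustration only (the function names `block_distance` and `scale_mv` are ours, not from the AVS specification), formulas (1a)–(1g) above can be sketched in Python as follows:

```python
def block_distance(distance_index_cur, distance_index_ref):
    """Formula (1g): temporal distance between a block and its reference block."""
    return (abs(distance_index_cur - distance_index_ref) + 512) % 512

def scale_mv(mv, block_distance_e, block_distance_phi):
    """Formulas (1a)-(1f): scale one motion-vector component of neighbour
    block Phi to the temporal distance of current block E."""
    sign = 1 if mv >= 0 else -1
    # 512 // block_distance_phi is the integer division written "/" above;
    # ">> 9" divides by 512, with rounding supplied by the +256 offset.
    return sign * ((abs(mv) * block_distance_e
                    * (512 // block_distance_phi) + 256) >> 9)
```

Note that when the two block distances are equal the scaling leaves the motion vector unchanged, e.g. `scale_mv(4, 2, 2)` returns 4; the heavy operations are the multiplication, the integer division by BlockDistanceΦ and the modulo in (1g), which the invention proposes to precompute.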
Summary of the invention
The technical problem to be solved by the present invention is to provide a fast motion-vector prediction method.
To solve the above technical problem, the present invention proposes a position-indexing method for video coding/decoding, comprising the following steps:
A. computing and storing all possible results of at least some of the repeated computations;
B. creating an address controller for locating and reading the results stored in step A;
C. during video coding/decoding, when one of those repeated computations is reached, invoking the address controller to read the corresponding result.
The position-indexing method is applied to the motion-vector prediction process within video coding/decoding: step A computes and stores all possible results of at least some of the repeated computations in motion-vector prediction.
In addition, the possible results of step A are stored in the form of a look-up table, and the address controller of step B is a location indexer that uses an index field to locate the stored result that needs to be read.
Optionally, the method is applied to the AVS, MPEG-2 or H.264 video coding standard.
The decoding process of these three standards comprises an entropy-decoding step that entropy-decodes the video bitstream, and a prediction-reconstruction step that predicts and reconstructs the video signal and outputs the reconstructed signal. Steps A and B are performed before the entropy-decoding step, and step C takes place within the prediction-reconstruction step.
In addition, between the entropy-decoding step and the prediction-reconstruction step there are:
a dequantization step, which scales the quantized coefficients of the received video bitstream to obtain transform coefficients;
an inverse-transform step, which converts the transform coefficients into spatial-domain values.
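The precompute-then-look-up idea of steps A–C can be sketched as follows; this is a minimal illustration under our own assumptions (the table is keyed on the pair of block distances, `max_distance` is an arbitrary illustrative bound, and the "address controller" reduces to key construction):

```python
# Step A: compute and store every possible value of the repeated computation
# BlockDistanceE * (512 / BlockDistancePhi) for a bounded range of distances.
def build_table(max_distance=8):
    table = {}
    for d_e in range(1, max_distance + 1):
        for d_phi in range(1, max_distance + 1):
            table[(d_e, d_phi)] = d_e * (512 // d_phi)
    return table

# Steps B/C: the address controller locates a stored result by its index
# fields instead of recomputing the multiplication and division.
def lookup(table, d_e, d_phi):
    return table[(d_e, d_phi)]
```

At decode time, each occurrence of the costly expression is replaced by a single table access.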
In another aspect, the present invention also proposes a video decoder, comprising:
a computation module, for computing all possible results of at least some of the repeated computations in motion-vector prediction;
a storage module, for storing all the possible results obtained by the computation module;
an address-control module, for locating and reading the results stored in the storage module;
a motion-vector prediction module, for performing motion-vector prediction: when one of the repeated computations is reached, it invokes the address-control module to read the corresponding result and completes the motion-vector prediction process.
The decoder further comprises an entropy-decoding module, a dequantization/inverse-transform module, a motion-compensation module and a reconstruction module. The video bitstream passes in turn through the entropy-decoding module, the dequantization/inverse-transform module, the motion-vector prediction module, the motion-compensation module and the reconstruction module, and the reconstructed video image is finally output. The computation module and the storage module operate before the entropy decoding, while the address-control module operates during the motion-vector prediction process.
Preferably, all the possible results are stored in the storage module in the form of a look-up table, and the address-control module is a location-index module that uses an index field to locate the stored result that needs to be read.
The decoder is applicable to the AVS, MPEG-2 or H.264 video coding standard.
Because the operations that formerly required heavy computation are computed in advance and stored so as to form a look-up table, the actual computation — in particular the actual motion-vector prediction process — only needs to look up the previously stored results (the look-up table) to obtain the corresponding values. This saves substantial time and improves the real-time performance of the system. The present invention is described in detail below with reference to the accompanying drawings; these and other objects, features, aspects and advantages of the invention will become more apparent.
Description of drawings
Fig. 1 is a schematic diagram of an embodiment of the positional relationship between a current block and its surrounding blocks;
Fig. 2 is a flow block diagram of an embodiment of a video coding/decoding method;
Fig. 3 is a flowchart of an embodiment of a motion-vector prediction method;
Fig. 4 is a schematic diagram of a first embodiment of the reference relationship between the current picture and its reference pictures;
Fig. 5 is a structural diagram of an embodiment of the look-up table based on the embodiment of Fig. 4;
Fig. 6 is a schematic diagram of a second embodiment of the reference relationship between the current picture and its reference pictures;
Fig. 7 is a structural diagram of an embodiment of the look-up table based on the embodiment of Fig. 6;
Fig. 8 is a schematic diagram of a third embodiment of the reference relationship between the current picture and its reference pictures;
Fig. 9 is a structural diagram of an embodiment of the look-up table based on the embodiment of Fig. 8;
Figure 10 is a schematic diagram of a fourth embodiment of the reference relationship between the current picture and its reference pictures;
Figure 11 is a schematic diagram of a fifth embodiment of the reference relationship between the current picture and its reference pictures;
Figure 12 is a structural diagram of an embodiment of the look-up table based on the embodiment of Figure 11;
Figure 13 is a schematic diagram of a sixth embodiment of the reference relationship between the current picture and its reference pictures;
Figure 14 is a structural diagram of an embodiment of the look-up table based on the embodiment of Figure 13;
Figure 15 is a structural diagram of an embodiment of a video decoder according to the present invention.
Embodiment
So that those skilled in the art may better understand the present invention, and so that the above objects, features and advantages may become more apparent, the invention is explained in further detail below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 2, a flow diagram of an embodiment of a video decoding method is shown; the position-indexing method of the present invention is also clearly reflected in this embodiment. As shown in the figure, the bitstream coming from the encoder side is first entropy-decoded; the decoded data then pass through dequantization, inverse transform, prediction reconstruction and loop filtering, and are finally output from the processing chip. The computation of the at-least-partially-repeated results and the generation of the look-up table of the present invention are completed before the entropy decoding, i.e. before decoding starts. The look-up table is used in the motion-vector prediction process within the prediction-reconstruction stage of the figure. For the computation of all possible results in the look-up table and the structure of the table, refer to the embodiments described later. Moreover, the address controller is created at the same time as the look-up table — preferably as a location index — and is used during motion-vector prediction to locate and read the stored computation results.
Taking the AVS standard as an example, the position-indexing method for video coding/decoding of the present invention is further explained. Referring to Fig. 3, a flowchart of an embodiment of the motion-vector prediction process in the AVS standard is shown. As illustrated, and with reference also to Fig. 1, the process comprises the following steps:
S31: determine whether only one of the left, upper and upper-right blocks of the current block is available; if so, go to step S34, otherwise go to step S32. That is, for Fig. 1, determine whether only one of blocks A, B and C around current block E is available. Availability can be determined from the reference index of the corresponding block: a reference index of −1 means the block is unavailable, otherwise it is available. For example, for block B, check whether its reference index is −1; if so, block B is unavailable, otherwise it is available. The definition and computation of the reference index follow the relevant definitions and computations of the AVS standard and are not elaborated here;
S32: determine whether the macroblock containing the current block is in 8×16 or 16×8 coding mode; if so, go to step S35, otherwise go to step S33. That is, for Fig. 1, determine whether the macroblock containing current block E is in 8×16 or 16×8 coding mode, since the coding mode of E may be 16×16, 16×8, 8×16 or 8×8;
S33: by searching the look-up table and computing, scale the original motion vectors of the left, upper and upper-right blocks to the current block, obtain the scaled values, and finally compute the motion vector predictor of the current block from these values. That is, for Fig. 1, scale the original motion vectors of A, B and C by searching the look-up table and computing. The concrete scaling method follows the Background section, i.e. formulas (1a)–(1f) are evaluated to obtain the scaled values MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y, after which the following computation is performed:
Define the distance Dist(MV1, MV2) = Abs(x1 − x2) + Abs(y1 − y2), where MV1 = [x1, y1] and MV2 = [x2, y2]. Define VAB = Dist(MVA, MVB), VBC = Dist(MVB, MVC) and VCA = Dist(MVC, MVA). The motion vector predictor of current block E is computed as follows:
Step 1: determine the median of VAB, VBC and VCA;
Step 2: if the median equals VAB, the motion vector predictor of current block E equals MVC; otherwise, if the median equals VBC, the predictor equals MVA; otherwise the predictor equals MVB.
Here MVA = [xA, yA] = [MVA_x, MVA_y]; MVB = [xB, yB] = [MVB_x, MVB_y]; MVC = [xC, yC] = [MVC_x, MVC_y].
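The distance-based selection just described can be sketched as follows (an illustrative rendering of the rule above, not decoder source code; `predict_mv` is our own name):

```python
def dist(mv1, mv2):
    """Dist(MV1, MV2) = |x1 - x2| + |y1 - y2|."""
    return abs(mv1[0] - mv2[0]) + abs(mv1[1] - mv2[1])

def predict_mv(mva, mvb, mvc):
    """Select the predictor of current block E from the scaled neighbour MVs."""
    vab, vbc, vca = dist(mva, mvb), dist(mvb, mvc), dist(mvc, mva)
    med = sorted((vab, vbc, vca))[1]  # median of the three pair distances
    if med == vab:
        return mvc
    if med == vbc:
        return mva
    return mvb
```

For example, if MVA and MVB are identical and MVC is far from both, the median distance is VBC (or VCA), so the outlier MVC is excluded and the predictor is MVA.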
This step finally yields the motion vector predictor of current block E when E is not in 8×16 or 16×8 coding mode. In this embodiment, when computing MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y, the following values are obtained by searching the look-up table:
BlockDistanceE × (512/BlockDistanceA);
BlockDistanceE × (512/BlockDistanceB);
BlockDistanceE × (512/BlockDistanceC);
after which the remaining computation of MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y is carried out.
In another embodiment of the present invention, when computing MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y, the values BlockDistanceE, BlockDistanceA, BlockDistanceB and BlockDistanceC are obtained by searching the look-up table, after which the remaining computation of MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y is carried out.
In yet another embodiment of the present invention, when computing MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y, this step obtains from the look-up table, per block, the distance indices of each block and of its corresponding reference block, i.e. DistanceIndex_Cur and DistanceIndex_Ref of the Background section; after obtaining these values, the remaining computation is carried out to yield MVA_x, MVA_y, MVB_x, MVB_y, MVC_x and MVC_y.
All the remaining computations above may follow the rules described in the Background section, or any other suitable rules.
The structure of the look-up table is described in the subsequent embodiments and is not repeated in this step;
After this step, go to step S36;
S34: the motion vector predictor of the current block is the original motion vector of the available block. That is, when the condition of step S31 holds, only one block is available, and the original motion vector of that available block serves as the motion vector predictor of the current block;
After this step, go to step S36;
S35: perform the corresponding motion-vector prediction to obtain the motion vector predictor of the current block. That is, when step S32 determines that the macroblock containing the current block is in 8×16 or 16×8 coding mode, the motion-vector prediction method for that case is used to obtain the motion vector predictor of the current block;
In one embodiment of the present invention, this step obtains the motion vector predictor of the current block as follows:
If the macroblock containing the current block is in 8×16 coding mode:
1) when E is the left block: if A and E have the same reference index, the motion vector predictor of current block E equals the original motion vector of block A; otherwise go to step S33;
2) when E is the right block: if C and E have the same reference index, the motion vector predictor of current block E equals the original motion vector of block C; otherwise go to step S33.
If the macroblock containing the current block is in 16×8 coding mode:
1) when E is the upper block: if B and E have the same reference index, the motion vector predictor of current block E equals the original motion vector of block B; otherwise go to step S33;
2) when E is the lower block: if A and E have the same reference index, the motion vector predictor of current block E equals the original motion vector of block A; otherwise go to step S33.
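The rules of step S35 can be sketched as a small dispatch function; this is purely illustrative (the function name, the string encodings of mode and position, and the `'fallback'` marker standing in for step S33 are all our assumptions):

```python
def predict_split_mode(mode, position, ref_idx, neighbors):
    """mode: '8x16' or '16x8'; position: 'left'/'right' or 'top'/'bottom';
    neighbors: dict mapping 'A'/'B'/'C' to (reference index, original MV)."""
    if mode == '8x16':
        key = 'A' if position == 'left' else 'C'
    else:  # '16x8'
        key = 'B' if position == 'top' else 'A'
    n_ref, n_mv = neighbors[key]
    if n_ref == ref_idx:
        return n_mv           # same reference index: reuse neighbour's MV
    return 'fallback'         # otherwise proceed with step S33
```

The left half of an 8×16 macroblock thus consults block A, the right half block C, and the upper and lower halves of a 16×8 macroblock consult blocks B and A respectively.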
S36: end. That is, the motion-vector prediction for current block E is finished.
The above sets forth the motion-vector prediction process for a single block; for other blocks, the process is simply repeated. It should be noted that, since motion-vector prediction exists in the AVS, H.264 and MPEG-4 standards, and the motion-vector prediction of each standard involves a large number of repeated operations, the method of the invention can be applied in each case: the required computations are performed in advance and stored for later look-up. In addition, the overall flow of the embodiment of Fig. 3 is the standard AVS flow, but AVS motion-vector prediction methods improved on this basis can equally use the method of the invention, which likewise falls within the scope of the invention.
Referring to Fig. 4, a schematic diagram of a first embodiment of the reference relationship between the current picture and its reference pictures is shown. For a P frame to be decoded, the reference picture can only be one of the two frames (I frame or P frame) preceding it, as shown in Fig. 4. The MbRefIndex (block reference index) of any macroblock in the current P frame therefore has two possible values, and the corresponding BlockDistance likewise has only two possible values, so the intermediate variable tmp_MV_predict (which stores the computed value of BlockDistanceE × (512/BlockDistanceΦ)) has 2 × 2 = 4 possible values. The four possibilities are (referring also to Fig. 1):
current block E and A (or B or C) both reference frame 0; the MbRefIndex of E and of A (or B or C) is 0, corresponding to tmp_MV_predict00 in the table of Fig. 5;
current block E and A (or B or C) both reference frame 1; the MbRefIndex of E and of A (or B or C) is 1, corresponding to tmp_MV_predict11 in the table of Fig. 5;
the reference frame of E is 0 and that of A (or B or C) is 1; their MbRefIndex values are 0 and 1 respectively, corresponding to tmp_MV_predict01 in the table of Fig. 5;
the reference frame of E is 1 and that of A (or B or C) is 0; their MbRefIndex values are 1 and 0 respectively, corresponding to tmp_MV_predict10 in the table of Fig. 5.
Based on the description of the embodiment of Fig. 4, Fig. 5 shows the look-up table used in the motion prediction of current block E; the table stores the computed values of BlockDistanceE × (512/BlockDistanceΦ), denoted tmp_MV_predict, where Φ is A, B or C.
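The 2 × 2 table of Fig. 5 can be sketched as follows; the mapping `block_dist` from reference index to BlockDistance and its sample values are our own assumptions for illustration:

```python
def build_p_frame_table(block_dist):
    """Build the Fig. 5 table for a P frame with two reference frames.
    block_dist[i] is BlockDistance when a block references frame i."""
    table = {}
    for ref_e in (0, 1):        # MbRefIndex of current block E
        for ref_phi in (0, 1):  # MbRefIndex of neighbour A/B/C
            # tmp_MV_predict = BlockDistanceE * (512 / BlockDistancePhi)
            table[(ref_e, ref_phi)] = (block_dist[ref_e]
                                       * (512 // block_dist[ref_phi]))
    return table
```

With, say, distances 2 and 4 for frames 0 and 1, the four entries correspond to tmp_MV_predict00, tmp_MV_predict01, tmp_MV_predict10 and tmp_MV_predict11, and prediction at decode time becomes a single table access keyed by the pair of MbRefIndex values.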
Referring to Fig. 6, a schematic diagram of a second embodiment of the reference relationship between the current picture and its reference pictures is shown. For a B frame to be decoded, the reference picture can only be one frame (I frame or P frame) before it or one frame (I frame or P frame) after it, as shown in Fig. 6. Refer also to Fig. 1.
When predicting the forward motion vector of a macroblock in the frame, its MbRefIndex has only one possible value, so BlockDistance has only one possible value and the intermediate variable tmp_MV_predict has only one possible value: current block E and A (or B or C) both reference forward reference frame 0, the MbRefIndex of both being 0, corresponding to tmp_MV_predict00 in the table of Fig. 7.
When predicting the backward motion vector of a macroblock in the frame, its MbRefIndex likewise has only one possible value, so BlockDistance and tmp_MV_predict each have only one possible value: current block E and A (or B or C) both reference backward reference frame 0, the MbRefIndex of both being 0; however, the table look-up uses MbRefIndex + 2 = 2 as the index (in backward prediction, 2 is always added to the original MbRefIndex when looking up the table), corresponding to tmp_MV_predict22 in the table of Fig. 7.
Based on the description of the embodiment of Fig. 6, Fig. 7 shows the look-up table used in the motion prediction of current block E.
Referring to Fig. 8, a schematic diagram of a third embodiment of the reference relationship between the current picture and its reference pictures is shown. For the first field of a P frame to be decoded, the reference pictures can only be the first and second fields of the two preceding frames (I frame or P frame), as shown in Fig. 8. The MbRefIndex of any macroblock in the field therefore has four possible values, the corresponding BlockDistance has four possible values, and the intermediate variable tmp_MV_predict has 4 × 4 = 16 possible values, as shown in the table of Fig. 9; they are not enumerated one by one here.
In addition, referring to Figure 10, a schematic diagram of a fourth embodiment of the reference relationship between the current picture and its reference pictures is shown. For the second field of a P frame to be decoded, the reference pictures can only be as shown in Figure 10. The MbRefIndex of any macroblock in the field likewise has four possible values, so BlockDistance has four possible values and the intermediate variable tmp_MV_predict has 4 × 4 = 16 possible values; the table structure is the same as that of Fig. 9, although the concrete values differ.
Based on the description of the embodiment of Fig. 8, Fig. 9 shows the look-up table used in the motion prediction of current block E.
Referring to Figure 11, a schematic diagram of a fifth embodiment of the reference relationship between the current picture and its reference pictures is shown. For the first field of a B frame to be decoded, the reference pictures can only be as shown in Figure 11, namely:
When predicting the forward motion vector of a macroblock in the field, its MbRefIndex has two possible values, so BlockDistance has two possible values and the intermediate variable tmp_MV_predict has 2 × 2 = 4 possible values, as follows:
current block E and A (or B or C) both reference forward reference field 0; the MbRefIndex of E and of A (or B or C) is 0, corresponding to tmp_MV_predict00 in the table of Figure 12;
current block E and A (or B or C) both reference forward reference field 1; the MbRefIndex of both is 1, corresponding to tmp_MV_predict11 in the table of Figure 12;
the forward reference field of E is 0 and that of A (or B or C) is 1; their MbRefIndex values are 0 and 1 respectively, corresponding to tmp_MV_predict01 in the table of Figure 12;
the forward reference field of E is 1 and that of A (or B or C) is 0; their MbRefIndex values are 1 and 0 respectively, corresponding to tmp_MV_predict10 in the table of Figure 12.
When predicting the backward motion vector of a macroblock in the field, its MbRefIndex has two possible values, so the corresponding BlockDistance also has two possible values and the intermediate variable tmp_MV_predict has 2 × 2 = 4 possible values, as follows:
current block E and A (or B or C) both reference backward reference field 0; the MbRefIndex of E and of A (or B or C) is 0, but the table look-up uses MbRefIndex + 2 = 2 as the index, corresponding to tmp_MV_predict22 in the table of Figure 12;
current block E and A (or B or C) both reference backward reference field 1; the MbRefIndex of both is 1, but the table look-up uses MbRefIndex + 2 = 3 as the index, corresponding to tmp_MV_predict33 in the table of Figure 12;
the backward reference field of E is 0 and that of A (or B or C) is 1; their MbRefIndex values are 0 and 1 respectively, but the table look-up uses MbRefIndex + 2 = 2 and MbRefIndex + 2 = 3 as the indices, corresponding to tmp_MV_predict23 in the table of Figure 12;
the backward reference field of E is 1 and that of A (or B or C) is 0; their MbRefIndex values are 1 and 0 respectively, but the table look-up uses MbRefIndex + 2 = 3 and MbRefIndex + 2 = 2 as the indices, corresponding to tmp_MV_predict32 in the table of Figure 12.
Based on the description of the embodiment of Figure 11, Figure 12 shows the look-up table used in the motion prediction of current block E.
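The +2 offset for backward references described above can be sketched as a small index mapping into a 4 × 4 table such as that of Figure 12 (the function names and the dict representation of the table are our own illustrative choices):

```python
def table_index(mb_ref_index, backward):
    """Forward references index the table by MbRefIndex directly;
    backward references add 2 to the original MbRefIndex."""
    return mb_ref_index + 2 if backward else mb_ref_index

def lookup_tmp_mv_predict(table, ref_e, ref_phi, backward):
    """Read tmp_MV_predict for current block E and neighbour Phi."""
    i = table_index(ref_e, backward)
    j = table_index(ref_phi, backward)
    return table[(i, j)]
```

Rows/columns 0–1 thus hold the forward-reference entries (tmp_MV_predict00 … tmp_MV_predict11) and rows/columns 2–3 the backward-reference entries (tmp_MV_predict22 … tmp_MV_predict33).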
Referring to Figure 13, a schematic diagram of a sixth embodiment of the reference relationship between the current picture and its reference pictures is shown. For the second field of a B frame to be decoded, the reference pictures can only be as shown in Figure 13. That is: when predicting the forward motion vector of a macroblock in the field, its MbRefIndex has two possible values, so BlockDistance also has two possible values and the intermediate variable tmp_MV_predict has 2 × 2 = 4 possible values, as follows:
When current block E and block A (or B or C) both refer to forward reference field 0, the MbRefIndex of block E and of block A (or B or C) is 0, i.e. the case of tmp_MV_predict00 in the table of Figure 14;
When current block E and block A (or B or C) both refer to forward reference field 1, the MbRefIndex of block E and of block A (or B or C) is 1, i.e. the case of tmp_MV_predict11 in the table of Figure 14;
When the forward reference field of current block E is 0 and that of block A (or B or C) is 1, their MbRefIndex values are 0 and 1 respectively, i.e. the case of tmp_MV_predict01 in the table of Figure 14;
When the forward reference field of current block E is 1 and that of block A (or B or C) is 0, their MbRefIndex values are 1 and 0 respectively, i.e. the case of tmp_MV_predict10 in the table of Figure 14.
Similarly, when predicting the backward motion vector of a macroblock within the frame, the MbRefIndex of the macroblock has two possible values; correspondingly, BlockDistance also has two possible values, and the intermediate variable tmp_MV_predict has 2×2=4 possible values:
When current block E and block A (or B or C) both refer to backward reference field 0, the MbRefIndex of block E and of block A (or B or C) is 0, but the table lookup must use MbRefIndex+2=2 as the index, i.e. the case of tmp_MV_predict22 in the table of Figure 14;
When current block E and block A (or B or C) both refer to backward reference field 1, the MbRefIndex of block E and of block A (or B or C) is 1, but the table lookup must use MbRefIndex+2=3 as the index, i.e. the case of tmp_MV_predict33 in the table of Figure 14;
When the backward reference field of current block E is 0 and that of block A (or B or C) is 1, their MbRefIndex values are 0 and 1 respectively, but the table lookup must use MbRefIndex+2=2 and MbRefIndex+2=3 respectively as the indices, i.e. the case of tmp_MV_predict23 in the table of Figure 14;
When the backward reference field of current block E is 1 and that of block A (or B or C) is 0, their MbRefIndex values are 1 and 0 respectively, but the table lookup must use MbRefIndex+2=3 and MbRefIndex+2=2 respectively as the indices, i.e. the case of tmp_MV_predict32 in the table of Figure 14.
Based on the description of the embodiment illustrated in Figure 13 above, Figure 14 shows the lookup table used in the motion prediction of current block E.
The lookup tables of Figure 12 and Figure 14 are identical in structure, but the specific values they store differ. In addition, MbRefIndex_E0, MbRefIndex_E1, MbRefIndex_E2 and MbRefIndex_E3 denote the four possible cases in which the MbRefIndex of current block E takes the value 0, 1, 2 or 3 in each lookup table of this embodiment; MbRefIndex_Φ0, MbRefIndex_Φ1, MbRefIndex_Φ2 and MbRefIndex_Φ3 denote the four possible cases in which the MbRefIndex of block Φ takes the value 0, 1, 2 or 3, where Φ is A, B or C. In each table, the value to be looked up is the tmp_MV_predict entry at the intersection of the MbRefIndex case of current block E and the MbRefIndex case of block Φ; the tmp_MV_predict entry corresponding to MbRefIndex_E m and MbRefIndex_Φ n is written tmp_MV_predictmn, where m and n both take values in the range [0,3]. This is, of course, merely an intuitive way of representation and should not limit the present invention. In addition, a "0" in any lookup table indicates that the corresponding case cannot occur.
It should be noted that the above lookup tables are expressed in two-dimensional form; they could equally well be arranged in one-dimensional form covering all possible cases, which likewise falls within the protection scope of the present invention.
In addition, the possible cases of BlockDistanceE (in one-to-one correspondence with MbRefIndex_E) and of BlockDistanceΦ (in one-to-one correspondence with MbRefIndex_Φ) in this embodiment have all been enumerated for up to four cases; situations with more than four cases can be obtained entirely by analogy and are not elaborated here.
In addition, the number in each box in Fig. 4, Fig. 6, Fig. 8, Fig. 10, Figure 11 and Figure 13 is the number of that frame when used as a reference frame. In forward motion vector prediction this number coincides with MbRefIndex, and the table lookup is indexed by MbRefIndex; in backward motion vector prediction this number also coincides with MbRefIndex, but the table lookup is indexed by MbRefIndex+2, in which case MbRefIndex+2 may be regarded as a new MbRefIndex.
It should be noted that the core of the present invention is to compute, in advance, content that would otherwise be computed repeatedly, and to store it for lookup; it is not limited to the structures or storage formats shown, which are merely specific embodiments and should not limit the present invention. In a preferred embodiment of the invention, the lookup tables may be concretely stored as arrays. Moreover, the content stored in the lookup tables of the present invention is not limited to the tmp_MV_predict of the embodiments; it may also be the BlockDistance of each block, the block distance index data, 512/BlockDistanceΦ, and so on. The present invention is likewise not limited by the content stored in the tables.
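As a purely illustrative sketch of the array-based storage just mentioned, the following builds such a table once and reads it back with the MbRefIndex indexing described above; the per-index BlockDistance values are hypothetical placeholders, not values prescribed by the AVS standard:

```python
# Illustrative sketch only: a lookup table tmp_MV_predict[m][n] =
# BlockDistanceE * (512 // BlockDistancePhi), stored as a 2-D array and
# indexed by the reference indices of current block E and of a
# neighbouring block Phi (A, B or C). The distances below are
# hypothetical example values, not values from the AVS standard.
BLOCK_DISTANCE = {0: 2, 1: 4, 2: 6, 3: 8}  # reference index -> BlockDistance

def build_table(distances):
    """Done once, before entropy decoding: precompute the product for
    every pair of reference indices."""
    n = len(distances)
    return [[distances[m] * (512 // distances[p]) for p in range(n)]
            for m in range(n)]

def lookup(table, ref_e, ref_phi, backward=False):
    """Read a precomputed product during motion vector prediction;
    backward prediction is indexed with MbRefIndex + 2, as in the text."""
    off = 2 if backward else 0
    return table[ref_e + off][ref_phi + off]

table = build_table(BLOCK_DISTANCE)
print(lookup(table, 0, 0))                 # forward: 2 * (512 // 2) = 512
print(lookup(table, 0, 1, backward=True))  # rows/cols 2 and 3: 6 * (512 // 8) = 384
```

A "0" entry marking an impossible index combination, as used in the tables of Figure 12 and Figure 14, would simply be stored at the corresponding array position.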
In another embodiment of the motion vector prediction method in video coding/decoding according to the present invention, the same method can equally be adopted in the H.264 standard. In the derivation of the luma motion vectors and reference indices in temporal direct mode in the H.264 standard, the motion vectors mvL0 and mvL1 are scaled values of the motion vector mvCol of the co-located sub-macroblock partition, calculated as follows (see the official H.264 text for details):
tx = (16384 + Abs(td/2)) / td    (15-1)
DistScaleFactor = Clip3(-1024, 1023, (tb * tx + 32) >> 6)    (15-2)
mvL0 = (DistScaleFactor * mvCol + 128) >> 8    (15-3)
mvL1 = mvL0 - mvCol    (15-4)
where tb and td are derived as follows:
tb = Clip3(-128, 127, DiffPicOrderCnt(currPicOrField, pic0))    (15-5)
td = Clip3(-128, 127, DiffPicOrderCnt(pic1, pic0))    (15-6)
In the above, DistScaleFactor denotes the distance scale factor, Clip3 is a function defined in the H.264 standard, mvL0 and mvL1 denote motion vectors, DiffPicOrderCnt denotes the picture order count difference function, currPicOrField denotes the current frame or field, pic0 and pic1 are frame/field variables, and tb, tx and td are intermediate variables.
For formulas 15-5 and 15-6, since the number of possible values of pic1 and pic0 is limited (as disclosed in the H.264 standard document), for a given current frame or field (currPicOrField) the number of possible values of tb and td is also limited; since tb and td take limited values, by formula 15-1 the value of tx is limited as well; hence, by formula 15-2, the value of the distance scale factor (DistScaleFactor) is also limited. Therefore, the limited set of distance scale factor values can likewise be computed and stored in advance, before entropy decoding, and when the motion vector prediction process reaches the calculation of formula 15-3, the precomputed results can simply be read out directly. This, too, greatly accelerates the motion vector prediction process and improves efficiency. Likewise, the precomputed data can be stored in the form of a lookup table, and the corresponding value retrieved via the relevant index field during use. Since this process is similar to that of the AVS standard, those of ordinary skill in the art can apply it to the H.264 standard on the basis of the above description without creative effort; to avoid repetition, it is not expanded upon here.
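A minimal sketch of this precomputation, under the assumption that the division in formula 15-1 truncates toward zero as in C; enumerating every (tb, td) pair allowed by the clipping in formulas 15-5 and 15-6 is one possible organization, chosen here only for brevity:

```python
def clip3(lo, hi, x):
    """Clip3 as defined in the H.264 standard."""
    return max(lo, min(hi, x))

def dist_scale_factor(tb, td):
    """Formulas 15-1 and 15-2; '/' in 15-1 is truncating integer
    division, per the H.264 text."""
    q = 16384 + abs(td) // 2  # equals 16384 + Abs(td/2) for truncating td/2
    tx = q // td if td > 0 else -(q // -td)  # truncate toward zero (q >= 0)
    return clip3(-1024, 1023, (tb * tx + 32) >> 6)

# Precomputed once, before entropy decoding: tb and td are clipped to
# [-128, 127], so the set of possible factors is small and finite.
table = {(tb, td): dist_scale_factor(tb, td)
         for tb in range(-128, 128)
         for td in range(-128, 128) if td != 0}

# During prediction (formulas 15-3 and 15-4) the factor is only read back:
def scale_mv(mv_col, tb, td):
    dsf = table[(tb, td)]
    mv_l0 = (dsf * mv_col + 128) >> 8
    return mv_l0, mv_l0 - mv_col

print(scale_mv(40, 2, 4))  # DistScaleFactor = 128 -> (20, -20)
```

A real decoder would precompute only the (tb, td) pairs that can actually occur for the current picture, rather than the full clipped range.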
With reference to Figure 15, a structural schematic diagram of an embodiment of a video decoder according to the present invention is illustrated. As shown in the figure, it comprises a processor 151, an entropy decoding module 152, an inverse transform and inverse quantization module 153, an intra prediction module 154, a motion compensation module 155, a reconstruction module 156, a loop filtering module 157, a computing module 158, a video output module 159, a storage module 160 and an address control module 161.
The function of each module is as follows:
The entropy decoding module 152 is used to search for start codes, remove stuffing bytes, and decode fixed-length codes, variable-length codes, intra prediction modes, motion vectors and other information;
The inverse transform and inverse quantization module 153 is used to perform inverse zigzag scanning, calculate the coefficients of the residual matrix after the inverse cosine transform, and perform inverse quantization and inverse transform to output the residual image matrix;
The intra prediction module 154 is used to determine the storage positions of the reference sample values and reference prediction modes in memory, calculate the intra prediction mode of the current block, and calculate the intra prediction block of the current block;
The motion compensation module 155 is used to calculate motion vectors, read reference pixels, control the execution of motion compensation, and perform processing such as sub-pixel interpolation before outputting the motion-compensated prediction block; in this process, the results of at least part of the repeated operations in motion vector prediction are obtained by looking up results stored in advance;
The reconstruction module 156 is used to add the prediction block and the residual matrix and output the reconstructed block;
The loop filtering module 157 is used to determine filtering parameters according to the current block position and macroblock coding information and perform filtering to obtain the decoded video data;
The computing module 158 is used to compute and store, before entropy decoding, the results of at least part of the repeated calculations in the motion vector prediction process of the motion compensation module 155, to be read and used during that motion vector prediction process;
The processor 151 is used for data processing and flow control of the decoding process;
The video output module 159 is used to output the decoded video data;
The storage module 160 is used to store the results computed by the computing module 158;
The address control module 161 is used to control the reading of the results in said storage module 160 during the motion vector prediction process of the motion compensation module 155.
Since the processor 151, entropy decoding module 152, inverse transform and inverse quantization module 153, intra prediction module 154, reconstruction module 156, loop filtering module 157 and video output module 159 are prior art, they are not described in detail here; the part of the motion compensation module 155 other than motion vector prediction is also prior art and is likewise not detailed. In addition, the storage module 160 may be implemented with FLASH memory, which is not further elaborated. The address control module 161 uses field-indexed lookup to read the results stored in the storage module 160 for use in motion vector prediction.
The present invention is elaborated in detail below through the description of an AVS decoder and an H.264 decoder.
First, in the AVS standard, the repeated calculations in the motion vector prediction process include at least one of the following three: the calculation of the distance index data of a block; the calculation of the distance between a block and its reference block; and the calculation of the product of the distance between the current block and its reference block and the rounded value of 512 divided by the distance between the left, upper or upper-right block of the current block and its reference block. These three kinds of calculation are performed repeatedly in the motion vector prediction process of the same frame/field, so they are computed and stored in advance; this function is performed by the computing module 158 and the storage module 160. For details, reference may be made to the description of the embodiments of Fig. 4 to Figure 14, which is not repeated here; likewise, the specific motion vector prediction process may refer to the embodiment illustrated in Figure 3 and is not repeated here.
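The third kind of calculation above — the product of the current block's distance and the rounded value of 512 divided by a neighbour's block distance — can be hoisted out of the per-block decoding loop, as sketched below; the distances and the simplified (mv × product) >> 9 scaling are illustrative assumptions, not the exact rounding rules of the AVS standard:

```python
# Illustrative sketch only: the division 512 // BlockDistance and its
# product with BlockDistanceE are computed once per frame/field, so the
# per-block prediction loop is reduced to a table read and a multiply.
# Distances are hypothetical example values, and the scaling below is a
# simplified (mv * product) >> 9 without the AVS standard's rounding terms.
DIST = {0: 2, 1: 4}  # reference index -> BlockDistance (hypothetical)

# Done once, before entropy decoding (computing module 158 writes the
# results into storage module 160):
PRODUCT = {(m, n): DIST[m] * (512 // DIST[n]) for m in DIST for n in DIST}

def predict_mv(mv_neigh, ref_e, ref_neigh):
    """Scale a neighbour's motion vector using the precomputed product
    (the address control module 161 would locate this table entry)."""
    return (mv_neigh * PRODUCT[(ref_e, ref_neigh)]) >> 9

print(predict_mv(16, ref_e=0, ref_neigh=1))  # 16 * 2 * 128 >> 9 = 8
```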
Second, in the derivation of the luma motion vectors and reference indices in temporal direct mode in the H.264 standard, the motion vectors mvL0 and mvL1 are scaled values of the motion vector mvCol of the co-located sub-macroblock partition, calculated as follows (see the official H.264 text for details):
tx = (16384 + Abs(td/2)) / td    (15-1)
DistScaleFactor = Clip3(-1024, 1023, (tb * tx + 32) >> 6)    (15-2)
mvL0 = (DistScaleFactor * mvCol + 128) >> 8    (15-3)
mvL1 = mvL0 - mvCol    (15-4)
where tb and td are derived as follows:
tb = Clip3(-128, 127, DiffPicOrderCnt(currPicOrField, pic0))    (15-5)
td = Clip3(-128, 127, DiffPicOrderCnt(pic1, pic0))    (15-6)
For formulas 15-5 and 15-6, since the number of possible values of pic1 and pic0 is limited (as disclosed in the H.264 standard document), for a given current frame or field (currPicOrField) the number of possible values of tb and td is also limited; since tb and td take limited values, by formula 15-1 the value of tx is limited as well; hence, by formula 15-2, the value of the distance scale factor (DistScaleFactor) is also limited. Therefore, the limited set of distance scale factor values can likewise be computed and stored in advance, before entropy decoding, by the computing module 158 and the storage module 160, and when the motion vector prediction process reaches the calculation of formula 15-3, the precomputed results can simply be read out directly via the address control module 161. This, too, greatly accelerates the motion vector prediction process and improves efficiency.
Likewise, the precomputed data can be stored in the form of a lookup table, and the corresponding value retrieved via the relevant index field during use. Since this process is similar to that of the AVS standard, those of ordinary skill in the art can apply it to the H.264 standard on the basis of the above description without creative effort; to avoid repetition, it is not further expanded upon here.
From the entirety of the above elaboration, those of ordinary skill in the art can apply the present invention directly in the AVS and H.264 standards, and, without creative effort, apply it to video coding standards such as H.263 and MPEG-4; whichever video coding standard it is applied to, it falls within the protection scope of the present invention.
What is disclosed above is merely a preferred embodiment of the present invention, which certainly cannot be used to limit the scope of the rights of the present invention; equivalent variations made according to the claims of the present invention therefore still belong to the scope covered by the present invention.

Claims (7)

1. A position indexing method in video coding/decoding, comprising the following steps:
A. calculating and storing all possible results of BlockDistanceE × (512/BlockDistanceA), BlockDistanceE × (512/BlockDistanceB) and BlockDistanceE × (512/BlockDistanceC) in the motion vector prediction process;
B. creating an address control device for locating and reading the results stored in step A;
C. during video coding/decoding, when the calculations of said BlockDistanceE × (512/BlockDistanceA), BlockDistanceE × (512/BlockDistanceB) and BlockDistanceE × (512/BlockDistanceC) are to be performed, calling said address control device to read the corresponding results;
wherein all the possible results in step A are stored in the form of a lookup table; and the address control device in step B is a location index device, which locates the required one of said all possible results by means of an index field;
said BlockDistanceA denotes the distance between block A and its reference block;
said BlockDistanceB denotes the distance between block B and its reference block;
said BlockDistanceC denotes the distance between block C and its reference block;
said BlockDistanceE denotes the distance between block E and its reference block.
2. The position indexing method according to claim 1, characterized in that it is applied to the AVS video coding standard, the MPEG2 video coding standard or the H.264 video coding standard.
3. The position indexing method according to claim 2, characterized in that the decoding process of said three video coding standards comprises an entropy decoding step of entropy-decoding the video bitstream and a prediction reconstruction step of performing prediction reconstruction on the video signal and outputting the prediction-reconstructed video signal; and said step A and step B are arranged before the entropy decoding step, while said step C takes place within the prediction reconstruction step.
4. The position indexing method according to claim 3, characterized in that, between said entropy decoding step and the prediction reconstruction step, there are further comprised:
an inverse quantization step for scaling the quantization coefficients of the received video bitstream to obtain transform coefficients;
an inverse transform step for converting said transform coefficients into spatial-domain values.
5. A video decoder, comprising:
a computing module for calculating all possible results of BlockDistanceE × (512/BlockDistanceA), BlockDistanceE × (512/BlockDistanceB) and BlockDistanceE × (512/BlockDistanceC) in motion vector prediction;
a storage module for storing all the possible results obtained by said computing module;
an address control module for locating and reading the results stored in said storage module;
a motion vector prediction module for performing motion vector prediction, which, when the calculations of said BlockDistanceE × (512/BlockDistanceA), BlockDistanceE × (512/BlockDistanceB) and BlockDistanceE × (512/BlockDistanceC) are to be performed, calls said address control module to read the corresponding results and completes the motion vector prediction process;
wherein said all possible results are stored in said storage module in the form of a lookup table; said address control module is a location index module, which locates the required one of said all possible results by means of an index field;
wherein said BlockDistanceA denotes the distance between block A and its reference block;
said BlockDistanceB denotes the distance between block B and its reference block;
said BlockDistanceC denotes the distance between block C and its reference block;
said BlockDistanceE denotes the distance between block E and its reference block.
6. The video decoder according to claim 5, characterized in that it further comprises an entropy decoding module, an inverse quantization and inverse transform module, a motion compensation module and a reconstruction module; the video bitstream passes successively through said entropy decoding module, inverse quantization and inverse transform module, motion vector prediction module, motion compensation module and reconstruction module, and the reconstructed video image is finally output; and said computing module and storage module operate before said entropy decoding, while said address control module operates during the motion vector prediction process.
7. The video decoder according to claim 5 or 6, characterized in that it is applied to the AVS video coding standard, the MPEG2 video coding standard or the H.264 video coding standard.
CN 200810015490 2008-04-01 2008-04-01 Method for indexing position in video decoder and video decoder Expired - Fee Related CN101257625B (en)


Publications (2)

Publication Number Publication Date
CN101257625A CN101257625A (en) 2008-09-03
CN101257625B true CN101257625B (en) 2011-04-20

Family

ID=39892041





Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
C14 / GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee
Granted publication date: 2011-04-20
Termination date: 2019-04-01