CN101815218A - Method for coding quick movement estimation video based on macro block characteristics - Google Patents


Info

Publication number
CN101815218A
CN101815218A (application CN201010140709A; granted publication CN101815218B)
Authority
CN
China
Prior art keywords
search
motion
point
pred
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010140709
Other languages
Chinese (zh)
Other versions
CN101815218B (en)
Inventor
刘鹏宇 (Liu Pengyu)
贾克斌 (Jia Kebin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN 201010140709 priority Critical patent/CN101815218B/en
Publication of CN101815218A publication Critical patent/CN101815218A/en
Application granted granted Critical
Publication of CN101815218B publication Critical patent/CN101815218B/en
Legal status: Expired - Fee Related



Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fast motion-estimation video coding method based on macroblock characteristics, in the field of video compression coding. The method comprises the following steps: first, the luma information of the current coding macroblock is extracted from the original video data; second, a compact motion-estimation search template is designed for the integer-pixel motion-vector search, with the search points allocated sensibly across its layers. The motion intensity of the current predicted macroblock is then classified according to a rate-distortion criterion, and an early-termination strategy matched to the macroblock's motion characteristics is applied: the number of search layers and search points is adjusted dynamically, and the corresponding motion-estimation search rule is selected adaptively. Compared with the motion-estimation search method adopted in the H.264 video coding standard, the method effectively accelerates the motion-estimation search, reduces motion-estimation time, strictly controls the bit-rate increase, maintains good reconstructed image quality, and thereby realizes fast motion-estimation coding.

Description

Method for coding quick movement estimation video based on macro block characteristics
Technical field
The present invention relates to the field of video compression coding, and designs and implements a fast motion-estimation video coding method based on macroblock characteristics.
Background technology
H.264/AVC, the latest-generation video coding standard jointly formulated by the ITU and ISO, offers excellent compression performance and error resilience and adapts to different network environments, so it has important research and application value. However, the improved coding efficiency of H.264 comes at the cost of high computational complexity at the encoder, which makes real-time application difficult.
Research and analysis show that, unlike earlier coding standards, the vast majority of the computational complexity at the H.264 encoder comes from inter-frame predictive coding: seven variable-size inter coding modes of 16 x 16, 16 x 8, 8 x 16, 8 x 8, 8 x 4, 4 x 8 and 4 x 4 blocks (see Fig. 1), 1/4-pixel motion vectors, multiple reference frames, and the use of Lagrangian rate-distortion optimization RDO (Rate Distortion Optimization) to select the best motion estimate and inter coding mode. Experimental analysis shows that the motion-estimation module using multiple reference frames and all inter coding modes accounts for more than 85% of the total encoder computation; even with a single reference frame it accounts for at least 60%. The huge coding-time overhead therefore urgently demands a reduction in the computational complexity of the motion-estimation module.
Motion estimation (Motion Estimation, ME) is a core technique universally adopted in inter-frame video coding and the fundamental and most important means of eliminating temporal redundancy in video data. The quality of the motion estimate directly affects coding efficiency and reconstructed image quality, and research on motion-estimation algorithms has always attracted attention from both industry and academia.
Various motion-estimation and motion-compensation methods exist; the two most popular families are pixel-recursive algorithms (PRA) and block-matching algorithms (BMA). Compared with PRA, BMA greatly simplifies the problem: although its precision is lower, it tracks displacement robustly and is easy to implement, so it has found wide application. The popular video coding standards (H.261/263/264, MPEG-1/2/4, etc.) generally adopt block-matching motion estimation.
The operating principle of block-matching motion estimation is shown in Fig. 2. The video frame is divided into equal-sized M x N blocks, all pixels within a block are assumed to undergo the same translational motion, and subscript t denotes time. The displacement of a block over the interval from t - delta t to t is called the motion vector MV (Motion Vector). Motion estimation is the process of searching, within a given window Omega of the reference frame F_{t - delta t}, for the block that best matches the current coding block, thereby finding the optimal motion vector. In block matching, a correlation function R(x, y) expresses the correlation between the current macroblock and a reference macroblock: the larger R(x, y), the stronger the correlation and the better the match. The sum of absolute differences SAD (Sum of Absolute Differences) is usually used to measure this correlation:
SAD(x, y) = M·N x MAD(x, y) = Σ_{m=1..M} Σ_{n=1..N} | f_k(m, n) − f_{k−1}(m + x, n + y) |
where f_k(m, n) is the luma value of the current-frame macroblock, f_{k−1}(m + x, n + y) is the luma value of the reference-frame macroblock, and M, N give the size of the current macroblock. Under this criterion the correlation function between the current macroblock and the predicted macroblock is R(x, y) = 1/SAD(x, y). The choice of matching criterion has little influence on the precision of the motion estimate, so SAD is preferred to further reduce computational complexity.
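As an illustrative sketch (not part of the patent; the nested-list frame layout and the helper name `sad` are assumptions), the SAD criterion above could be computed as:

```python
# Hedged sketch of the SAD matching criterion: an M x N current block is
# compared against the reference frame displaced by (x, y), per
# SAD(x, y) = sum over (m, n) of |f_k(m, n) - f_{k-1}(m + x, n + y)|.
def sad(cur_block, ref_frame, x, y):
    total = 0
    for m in range(len(cur_block)):
        for n in range(len(cur_block[0])):
            total += abs(cur_block[m][n] - ref_frame[m + x][n + y])
    return total
```

The correlation R(x, y) = 1/SAD(x, y) is then maximal exactly where SAD is minimal, so a block-matching search simply minimizes `sad` over the search window.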
Among existing block-matching motion-estimation algorithms, the full-search algorithm (FS) has the highest search precision, but its computational complexity is too high for real-time application. Researchers have therefore proposed many fast block-matching algorithms, such as the three-step search, new three-step search, four-step search, diamond search, MVFAST and PMVFAST. All of these improved algorithms, however, lose some precision relative to FS, and their search speed is still hard-pressed to meet practical requirements. It is therefore necessary to find a more efficient block-matching motion-estimation algorithm.
Motion estimation generally comprises two parts: integer-pixel motion estimation, and fractional-pixel estimation around the optimum found by the integer-pixel search. Fast integer-pixel motion estimation in particular has received much attention, because traditional fractional-pixel estimation (e.g. 1/2-pixel) accounts for only a small fraction of the total computation. A leading recent result on integer-pixel motion-estimation optimization in H.264/AVC compression is UMHexagonS (Unsymmetrical-Cross Multi-Hexagon Search), proposed by J. M. Zhang et al. Compared with full search it reduces motion-estimation complexity by about 90% at a comparable bit rate, with an average peak signal-to-noise ratio (PSNR) drop of less than 0.05 dB, while retaining good rate-distortion performance; it is one of the most effective fast search algorithms to date. The H.264/AVC reference software has adopted the UMHexagonS algorithm since version JM7.6. The algorithm combines several search strategies, greatly improving the validity and robustness of prediction. It comprises four steps, each using a different search template (Fig. 3), as follows:
Step 1: prediction of the initial search point.
Multiple prediction modes are used — median prediction, upper-layer prediction, motion-vector prediction from the corresponding block of the previous frame, and motion-vector prediction from a neighbouring reference frame — to obtain the predicted motion vector MV_pred, so that the initial search point starts close to the optimal prediction point.
Step 2: asymmetric cross search.
Based on the general observation that, in most natural image sequences, horizontal motion is much stronger than vertical motion, an asymmetric cross search is adopted to optimize the algorithm. As shown in step 2 of Fig. 3, the horizontal scan range of the asymmetric cross is twice the vertical range, with a spacing of two pixel units between adjacent search points. The best match point serves as the initial search centre for the next step.
Step 3: non-uniform multi-level hexagonal grid search.
This step is completed in two stages.
Stage 1 (Step 3-1): centred on the optimal match point determined in step 2, a full 5 x 5 integer-pixel search is carried out over the surrounding region, as shown in step 3-1 of Fig. 3.
Stage 2 (Step 3-2): centred on the optimal match point from Step 3-1, a non-uniform multi-level hexagonal grid search is carried out, as shown in step 3-2 of Fig. 3. This search handles large or irregular motion; the algorithm always performs 4 levels of non-uniform hexagonal grid search. Because horizontal motion amplitude in natural image sequences is generally larger than vertical, the algorithm uses a 16-point non-uniform hexagon as the basic search template, as shown in Fig. 4.
Step 4: extended hexagon search.
Step 4-1: first, a symmetric six-point hexagon search with a step size of 2 pixels is performed: the rate-distortion cost of the centre point is compared with those of the 6 vertices, and the point with the minimum cost becomes the centre of the next search, until the centre point itself has the minimum rate-distortion cost, as shown in Step 4-1 of Fig. 3.
Step 4-2: the step size is then reduced to 1 pixel and a diamond search is performed, repeating the procedure of Step 4-1 until the centre point has the minimum rate-distortion cost. That point is the optimal integer-pixel match point, as shown in Step 4-2 of Fig. 3.
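The two refinement stages above can be sketched as a generic move-to-minimum loop; this is an illustrative reading of the procedure, with hypothetical names (`refine`, `step4`, `cost`) and an assumed hexagon vertex layout, not the reference implementation:

```python
def refine(center, cost, offsets):
    """Repeatedly evaluate `cost` at center + each offset; move to the
    cheapest neighbour until the centre itself is cheapest."""
    cx, cy = center
    while True:
        best, best_cost = (cx, cy), cost(cx, cy)
        for dx, dy in offsets:
            c = cost(cx + dx, cy + dy)
            if c < best_cost:
                best, best_cost = (cx + dx, cy + dy), c
        if best == (cx, cy):
            return best
        cx, cy = best

HEX6 = [(-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]  # step-2 hexagon
DIAMOND4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]                 # step-1 diamond

def step4(start, cost):
    p = refine(start, cost, HEX6)     # Step 4-1: symmetric hexagon, step 2
    return refine(p, cost, DIAMOND4)  # Step 4-2: small diamond, step 1
```

With a convex cost surface the loop converges to the minimum; in the encoder the cost callback would be the block-matching cost.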
As noted above, although UMHexagonS markedly improves on the speed of full search, it applies the same exhaustive search strategy to all seven inter coding modes without considering the motion characteristics of the current predicted macroblock. Its search-point count is therefore still high, its search strategy contains redundancy, and there is considerable room to improve search speed. Analysis of integer-pixel motion-estimation behaviour in video coding, together with extensive experimental statistics, confirms that redundant search points remain and that the search rules and templates can be further optimized. The shortcomings of the original H.264/AVC integer-pixel fast motion-estimation algorithm UMHexagonS are mainly the following:
(1) A single fixed motion-estimation search method is applied to every predicted macroblock, with no analysis of macroblock characteristics.
(2) The search radius is fixed; for regions of uniform, small-amplitude motion, an oversized search radius contributes little to motion-estimation performance yet consumes extra computation time.
(3) The search points are redundant: all search points are traversed without regard to the macroblock's own characteristics, increasing coding time.
Summary of the invention
To effectively reduce the number of search points in motion estimation and accelerate multi-reference-frame motion estimation while maintaining or even improving search accuracy, the present invention starts from the motion characteristics of the predicted macroblock. According to the properties of motion estimation in H.264/AVC inter coding, it designs an efficient and accurate matching template for the integer-pixel motion-vector search and formulates a corresponding early-termination strategy for the motion search, yielding a fast motion-estimation method for H.264/AVC based on motion-vector feature classification. The aim is a large reduction in computational complexity at only a small sacrifice in coding efficiency. Experimental results show that the method effectively accelerates the motion-estimation search and reduces computation while keeping the bit rate and reconstructed picture quality close to those of the original H.264 standard algorithm, achieving a win on both computational complexity and motion-estimation coding efficiency.
The concrete technical scheme of the present invention is a fast motion-estimation video coding method based on macroblock characteristics: a motion-estimation search template and a dynamic fast search method are designed according to the features of the current predicted macroblock and the inter prediction coding mode; according to the motion intensity of the current predicted macroblock, combined with the selected inter prediction mode, the search range is adjusted dynamically and the corresponding motion-estimation search rule is selected adaptively, thereby realizing fast motion-estimation predictive coding for inter macroblocks. The method specifically comprises the following steps:
Step 1: extract the luma component of the current predicted macroblock: taking the luma information of the current predicted macroblock as the coding object, extract its luma component from the current video frame;
Step 2: determine the motion-estimation search template and allocate the search points: the template comprises four search layers; as the search radius increases and the search range expands outward, the number of search points per layer increases progressively, being 8, 8, 12 and 16 from the innermost layer to the outermost;
Step 3: determine a high-accuracy initial search point: the Lagrangian rate-distortion optimization (RDO, Rate Distortion Optimization) function is adopted as the motion-estimation decision criterion, selecting the best matching prediction block and optimal motion vector in the rate-distortion sense so as to minimize the bits allocated to the motion vector and residual coding. Selecting the optimal motion vector with the Lagrangian rate-distortion criterion can be described as:
J_motion(mv, ref | λ_motion) = SAD[s, r(ref, mv)] + λ_motion·[R(mv − pred) + R(ref)]    (1)
In formula (1), J_motion is the rate-distortion cost (RDcost_motion) of the motion vector of the current prediction; s is the current macroblock pixel value; mv is the current motion vector and pred its prediction; ref is the selected reference frame; r(ref, mv) is the pixel value of the reference macroblock; R is the number of bits consumed by differential coding of the motion vector, comprising the bits for the difference between the motion vector and its prediction and the bits for the reference frame; λ_motion is the Lagrange multiplier; SAD (Sum of Absolute Differences) is the sum of absolute pixel errors between the current block and the reference block:
SAD[s, r(ref, mv)] = Σ_{x=1..B1} Σ_{y=1..B2} | s(x, y) − r(x − m_x, y − m_y) |    (2)
In formula (2), B1 and B2 are the horizontal and vertical pixel counts of the block; depending on the inter prediction mode their values may be 16, 8 or 4; s(x, y) is the current macroblock pixel value; r(x, y) is the reference macroblock pixel value; and m_x and m_y are the horizontal and vertical displacements;
Selecting the best mode with the Lagrangian rate-distortion criterion can be expressed as:
J_mode(s, c, MODE | λ_mode) = SSD(s, c, MODE | QP) + λ_mode x R(s, c, MODE | QP)    (3)
In formula (3), MODE is an inter coding mode of the current macroblock; J_mode(s, c, MODE | λ_mode) is the rate-distortion cost under that mode (RDcost_mode); s is the original video signal; c is the reconstructed video signal after coding with MODE; λ_mode is the Lagrange multiplier; R(s, c, MODE | QP) is the total number of bits associated with the mode and quantization parameter, including the macroblock header, motion vectors and all DCT block information; it is obtained by actually coding the block, so its computation cost is considerable; QP is the quantization step; SSD(s, c, MODE) (Sum of Squared Differences) is the sum of squared differences between the original and reconstructed signals, that is:
SSD(s, c, MODE | QP) = Σ_{x=1..B1, y=1..B2} (S_Y[x, y] − C_Y[x, y, MODE | QP])²
                     + Σ_{x=1..B1, y=1..B2} (S_U[x, y] − C_U[x, y, MODE | QP])²
                     + Σ_{x=1..B1, y=1..B2} (S_V[x, y] − C_V[x, y, MODE | QP])²    (4)
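A minimal sketch of the accumulation in equation (4) over the Y, U and V planes (the nested-list plane layout and the name `ssd` are assumptions for illustration):

```python
# Hedged sketch: SSD accumulated over the Y, U and V planes of the source
# and reconstructed macroblocks, as in equation (4).
def ssd(planes_src, planes_rec):
    total = 0
    for s_plane, c_plane in zip(planes_src, planes_rec):
        for s_row, c_row in zip(s_plane, c_plane):
            for s, c in zip(s_row, c_row):
                total += (s - c) ** 2
    return total
```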
In formula (4), B1 and B2 are the horizontal and vertical pixel counts of the block, with possible values 16, 8 and 4; S_Y[x, y] is the luma value of the source macroblock and C_Y[x, y, MODE | QP] the luma value of the reconstructed macroblock; S_U, S_V and C_U, C_V are the corresponding chroma values. Step 3 specifically comprises the following sub-steps:
Step 3.1: choose the best motion vector (Motion Vector, MV) from three predictive coding modes: the spatial median prediction MV_pred_space, the upper-layer prediction MV_pred_uplayer based on the multi-size macroblock partitioning of the H.264 standard, and the temporal reference-frame prediction MV_pred_ref;
(a) MV_pred_space: median prediction exploits spatial correlation, taking the median of the motion vectors of the already-coded left, upper and upper-right neighbouring blocks in the current frame;
(b) MV_pred_uplayer: upper-layer prediction exploits the multi-size macroblock partitioning of H.264 motion estimation, with the hierarchical search order mode 1 (16 x 16), mode 2 (16 x 8), mode 3 (8 x 16), mode 4 (8 x 8), mode 5 (8 x 4), mode 6 (4 x 8), mode 7 (4 x 4); it takes the already-obtained motion vector of the co-located block one partition level up, i.e. of twice the size;
(c) MV_pred_ref: temporal prediction exploits temporal correlation, scaling the already-obtained motion vector MV_ref of the co-located block in the previous reference frame in proportion to the frame times:
MV_pred_ref = MV_ref x t / t′
where t is the time of the frame containing the current macroblock and t′ the time of the reference frame containing the predicted macroblock;
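Two of the three predictors can be sketched directly from the definitions above; the function names and the (x, y)-tuple MV representation are illustrative, and the upper-layer predictor is omitted because it simply reuses the stored MV of the twice-size partition:

```python
# Hedged sketch of predictor (a): component-wise median of the left,
# upper and upper-right neighbour motion vectors.
def mv_pred_space(mv_left, mv_up, mv_upright):
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(mv_left[0], mv_up[0], mv_upright[0]),
            med(mv_left[1], mv_up[1], mv_upright[1]))

# Hedged sketch of predictor (c): scale the co-located reference-frame MV
# in proportion to the frame-time ratio t / t'.
def mv_pred_ref(mv_ref, t, t_prime):
    s = t / t_prime
    return (mv_ref[0] * s, mv_ref[1] * s)
```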
Step 3.2: the corresponding SAD value and the predicted motion vector MV are strongly correlated. Denote the rate-distortion costs of the points indicated by MV_pred_space, MV_pred_uplayer and MV_pred_ref as pred_space_mincost, pred_uplayer_mincost and pred_ref_mincost respectively. The point indicated by the motion vector with the minimum rate-distortion cost is taken as the initial search point, and serves as the initial search centre of the asymmetric cross search in the next step;
Step 4: asymmetric cross search: let the horizontal scan range of the asymmetric cross search be W; the vertical range is half of it, W/2. With a spacing of two pixel units between adjacent search points, the horizontal direction contributes W/2 search points and the vertical direction W/4, for a total of W/2 + W/4 = 3W/4 candidate points in the asymmetric cross search. The optimal match point found among these 3W/4 candidates becomes the initial search centre of the non-uniform multi-level hexagonal grid search in the next step;
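The candidate-point arithmetic of this step can be checked with a small sketch (hypothetical helper name; W is assumed divisible by 8 so the counts come out exact):

```python
# Hedged sketch: asymmetric cross candidates around `center`.
# Horizontal range W, vertical range W/2, 2-pixel spacing, centre excluded
# -> W/2 horizontal + W/4 vertical = 3W/4 candidates.
def cross_candidates(center, w):
    cx, cy = center
    pts = []
    for dx in range(-w // 2, w // 2 + 1, 2):  # horizontal arm
        if dx:
            pts.append((cx + dx, cy))
    for dy in range(-w // 4, w // 4 + 1, 2):  # vertical arm, half range
        if dy:
            pts.append((cx, cy + dy))
    return pts
```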
Step 5: judge the motion intensity of the current macroblock: the current predicted macroblock is placed in one of three classes — low motion intensity, medium motion intensity, or high motion intensity. The motion intensity of the current predicted macroblock is judged as follows:
Step 5.1: compute the rate-distortion cost of the motion vector of the current predicted macroblock according to formula (1), denoted RDcost_motion, and the rate-distortion cost of the current predicted macroblock under each mode according to formula (3), denoted RDcost_mode; from the predictions choose the minimum rate-distortion cost, denoted RD_mincost, that is: RD_mincost = min{pred_space_mincost, pred_uplayer_mincost, pred_ref_mincost};
Step 5.2: determine the value of pred_mincost according to which motion-vector prediction mode supplied the initial search point:
If the initial search point determined in step 3 adopts the temporally predicted motion vector, then pred_mincost = pred_ref_mincost;
If the initial search point determined in step 3 does not adopt the temporally predicted motion vector, two further cases apply:
if the inter coding mode selected for the currently estimated predicted macroblock is the large 16 x 16 mode, then pred_mincost = pred_space_mincost;
if the inter coding mode selected is not the 16 x 16 mode, then pred_mincost = pred_uplayer_mincost;
Step 5.3: compute the motion-intensity adjustment factors γ and δ:
γ = Bsize[blocktype] / pred_mincost² − α_Radii1[blocktype]
δ = Bsize[blocktype] / pred_mincost² − α_Radii2[blocktype]    (5)
In formula (5), Bsize[blocktype] is the size of the current predicted macroblock, with possible values 16, 8 and 4;
The array α_Radii1[blocktype] is defined over the 7 inter macroblock partition modes as:
α_Radii1[1] = −0.23; α_Radii1[2] = −0.23; α_Radii1[3] = −0.23;
α_Radii1[4] = −0.25; α_Radii1[5] = −0.27; α_Radii1[6] = −0.27; α_Radii1[7] = −0.28;
The array α_Radii2[blocktype] is defined over the 7 inter macroblock partition modes as:
α_Radii2[1] = −2.39; α_Radii2[2] = −2.40; α_Radii2[3] = −2.40;
α_Radii2[4] = −2.41; α_Radii2[5] = −2.45; α_Radii2[6] = −2.45; α_Radii2[7] = −2.48;
Step 5.4: judgement of the motion intensity of the current predicted macroblock:
when RD_mincost < (1 + γ) x pred_mincost, the motion intensity of the current predicted macroblock is judged low;
when (1 + γ) x pred_mincost < RD_mincost < (1 + δ) x pred_mincost, it is judged medium;
when RD_mincost > (1 + δ) x pred_mincost, it is judged high;
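Steps 5.3 and 5.4 can be sketched together; the α tables are copied from above, while the function name and dictionary layout are illustrative assumptions:

```python
# alpha tables from the text, indexed by blocktype 1..7.
ALPHA1 = {1: -0.23, 2: -0.23, 3: -0.23, 4: -0.25, 5: -0.27, 6: -0.27, 7: -0.28}
ALPHA2 = {1: -2.39, 2: -2.40, 3: -2.40, 4: -2.41, 5: -2.45, 6: -2.45, 7: -2.48}

def severity(rd_mincost, pred_mincost, bsize, blocktype):
    """Classify macroblock motion as 'low' / 'medium' / 'high' using the
    gamma / delta adjustment factors of equation (5)."""
    gamma = bsize / pred_mincost ** 2 - ALPHA1[blocktype]
    delta = bsize / pred_mincost ** 2 - ALPHA2[blocktype]
    if rd_mincost < (1 + gamma) * pred_mincost:
        return "low"
    if rd_mincost < (1 + delta) * pred_mincost:
        return "medium"
    return "high"
```

Because the α values are negative, γ and δ act as positive margins on pred_mincost, with δ the looser of the two thresholds.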
Step 6: perform the 5 x 5 full pixel search: after the motion intensity has been judged in step 5, only when it is low is a full search of 5 x 5 search points carried out in the region around the current prediction point, centred on that point; when the motion intensity is medium or high, the 5 x 5 full search is abandoned and the method proceeds directly to step 7;
Step 7: non-uniform multi-level hexagonal grid search: the number of motion-estimation search layers is selected dynamically according to the motion intensity of the current macroblock, using the non-uniform hexagon as the basic search template with at most 4 layers. When the motion intensity is low, only 2 layers of non-uniform hexagonal grid search are performed, with 8 search points per layer; when it is medium, 3 layers are searched, with 8, 8 and 12 points from inside to outside; when it is high, all 4 layers are searched, with 8, 8, 12 and 16 points from inside to outside. The optimal match point found in the non-uniform hexagonal grid search becomes the initial search centre of the extended hexagon search of step 8;
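The layer selection of step 7 reduces to a small lookup; a sketch with illustrative names:

```python
LAYER_POINTS = [8, 8, 12, 16]  # points on hexagon layers 1..4, inner to outer

def search_plan(severity_level):
    """Number of non-uniform hexagon layers, and total points, searched
    for a given motion-intensity class ('low' / 'medium' / 'high')."""
    layers = {"low": 2, "medium": 3, "high": 4}[severity_level]
    return layers, sum(LAYER_POINTS[:layers])
```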
Step 8: extended hexagon search: the optimal match point determined in step 7 is the initial search centre of this step, which proceeds in two stages:
Stage 1: a symmetric six-point hexagon search with a step size of 2 pixels; the rate-distortion costs of the centre point and the 6 vertices are compared, and by the rate-distortion criterion the point of minimum cost becomes the centre of the next search, until the centre point itself has the minimum rate-distortion cost;
Stage 2: the step size is reduced to 1 pixel and a four-point diamond search is carried out, repeating the procedure of stage 1; the search stops when the centre point has the minimum rate-distortion cost by the rate-distortion criterion, and that point is the final optimal integer-pixel match point;
Step 9: output the motion-estimation coding information to the Out.264 file, and record the motion-estimation time (Total ME time for sequence), bit rate (Bit rate) and peak signal-to-noise ratio (PSNR) to evaluate the quality of the coding algorithm.
The fast motion-estimation search core algorithm of the inventive method is called NUMHexagonS (New-UMHexagonS), to distinguish it from and compare it with the UMHexagonS algorithm. NUMHexagonS mainly comprises the following techniques:
1. Design of a more compact search template
1.1 Design principle one
In step 3-2 the UMHexagonS algorithm, centred on the current prediction start point, always uses a non-uniform hexagon search template with 16 search points on every layer of the multi-level grid search, as shown in step 3-2 of Fig. 3. A template that assigns a fixed number of points to every layer easily distributes search points unreasonably. Consider: if the 16 points on the innermost, smallest-radius first layer are not wasted, then as the search radius grows, keeping only 16 points on the progressively expanding second, third and fourth layers must leave those layers under-sampled; conversely, if 16 points suffice on the outermost, largest-radius fourth layer, then placing 16 points on the innermost first layer must waste search points. A better template should let the number of points per layer grow as the search radius — that is, the search range — expands, rather than fixing the point count per layer.
1.2 Application of the technique
The fixed 16-point non-uniform hexagon search on every layer is replaced with a non-uniform hexagon search whose point count adjusts automatically with the radius, as shown in Fig. 5. Because the original UMHexagonS algorithm already predicts the start point accurately, the current prediction point is likely to lie near the optimal match point; to preserve that prediction precision, 8 search points are assigned to each of the inner first and second layers, and the count then grows with the search radius: 12 points on the third layer, up to 16 on the fourth. Experiments show that this template maintains search accuracy while avoiding wasted search points and saving motion-estimation time.
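One way to realize a per-layer point budget of 8/8/12/16 is to space the points evenly on each ring; the radii and the rounding scheme below are assumptions for illustration, not the patent's exact template geometry:

```python
import math

def hexagon_layer(radius, n_points):
    """Place n_points roughly evenly on a ring of the given radius,
    rounded to integer pixel offsets, so outer layers carry more points."""
    pts = []
    for k in range(n_points):
        a = 2 * math.pi * k / n_points
        pts.append((round(radius * math.cos(a)), round(radius * math.sin(a))))
    # drop any duplicates created by rounding, keeping order
    return list(dict.fromkeys(pts))

# Example layer budgets matching the text: 8, 8, 12, 16 from inside out.
LAYERS = [hexagon_layer(r, n) for r, n in zip((2, 4, 6, 8), (8, 8, 12, 16))]
```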
2. Adaptive selection of the number of search layers according to macroblock characteristics
2.1 Design principle two
Once the UMHexagonS algorithm enters step 3-2, regardless of how intense the motion of the current predicted macroblock is, it must complete all 4 layers of the non-uniform hexagon search before moving to the next step, as shown in Fig. 3. Those 4 layers cost a considerable 4 layers x 16 points/layer = 64 search points. Given the high accuracy of UMHexagonS's start-point prediction, the optimum is often found early, so spending points on the outer hexagonal grid layers is unnecessary for regions of low, uniform motion intensity. If the number of layers searched could be decided each time from the motion intensity of the search content, a great many search points would be saved, reducing the computational complexity and the time of motion estimation.
2.2 technical application
With the number of plies is that 4 non-homogeneous hexagonal mesh search changes the non-homogeneous hexagonal mesh search of the search number of plies with the adaptive change of search content severe degree into, the dynamic search number of plies of selecting.At first, judge the severe degree of current predicted macroblock, the motion severe degree of current macro is divided into three class, that is: severe degree is lower, severe degree is medium, severe degree is higher.According to the search number of plies of the definite non-homogeneous hexagonal mesh search that need carry out of motion severe degree, as shown in Figure 5.
● When the motion intensity of the macroblock is low, only the first and second layers of the non-uniform hexagonal grid search are performed, with 8 + 8 = 16 search points;
● When the motion intensity of the macroblock is medium, layers one to three of the non-uniform hexagonal grid search are performed, with 8 + 8 + 12 = 28 search points;
● When the motion intensity of the macroblock is high, all four layers of the non-uniform hexagonal grid search are performed, with 8 + 8 + 12 + 16 = 44 search points.
III. Optional full search of 5 × 5 pixels
3.1 Design principle III
In step Step3-1 the UMHexagonS algorithm takes the current prediction starting point as the center and performs a full search of 5 × 5 whole-pixel positions in the 4 × 4 region around it, as in Step3-1 of Figure 3. Step3-1 has 25 search points, a clearly considerable amount of computation. The original UMHexagonS algorithm performs the 5 × 5 full search because, after the highly accurate starting-point prediction, the current search point is probably already close to the optimal prediction point, and the 5 × 5 full search is then very likely to find the optimal matching point within its range. But when the motion of the search content is intense, the probability that the current prediction point lies near the optimal matching point is low; performing the 5 × 5 full search in that case is no longer appropriate for macroblocks with intense motion and instead causes a significant waste of search points. If the 5 × 5 full search could be performed or skipped adaptively according to the motion intensity of the current search content, a large number of search points would likewise be saved.
3.2 Technical application
After the motion intensity of the current search content has been judged, the 5 × 5 full search is performed only when the motion intensity is low, followed by the multi-layer non-uniform hexagonal grid search; when the motion intensity is medium or high, the 5 × 5 full search is abandoned and the multi-layer non-uniform hexagonal grid search is entered directly.
The technical ideas of the present invention are characterized as follows:
1. The present invention follows the rate-distortion criterion of the H.264/AVC standard algorithm, selecting the motion vector with the optimal rate-distortion cost as the optimal motion estimation prediction vector. Using the three optimization techniques described above, the motion estimation coding speed is greatly increased while keeping the bit-rate increase small, the motion estimation accuracy and video quality essentially unchanged, and the original bitstream structure intact.
2. The motion estimation search template of the proposed method is designed by determining the search points within each layer of the template, ensuring a reasonable distribution of search points.
3. The motion estimation search scheme is divided into six steps: Step 1: highly accurate initial search point prediction; Step 2: asymmetric cross search; Step 3: computation of the motion intensity of the current predicted macroblock; Step 4: selective 5 × 5 full pixel search according to the motion intensity of the current predicted macroblock; Step 5: non-uniform multi-layer hexagonal grid search according to the motion intensity of the current predicted macroblock; Step 6: extended symmetric hexagon search.
In summary, through the above three techniques the proposed method simplifies the search template (as shown in Figure 6), designs a reasonable distribution of search points, and reduces the number of motion estimation search points. It retains the accurate starting-point prediction of the original standard method while fully considering the characteristics of the current predicted macroblock: macroblocks with low motion intensity use only 2 layers of non-uniform hexagonal grid search, realizing fast prediction by early termination, while macroblocks with high motion intensity give up the 5 × 5 full pixel search and keep the original standard's 4-layer non-uniform hexagonal grid search. The motion characteristics of all macroblocks are thus fully taken into account, and the search strategy is selected dynamically according to their motion intensity, which guarantees motion estimation accuracy and coded video quality, strictly controls the bit-rate growth, and at the same time significantly increases motion estimation coding speed and saves motion estimation coding time.
The present invention has the following beneficial effects:
A large amount of experimental statistics shows that, compared with the UMHexagonS method currently adopted in H.264, the proposed method strictly controls the bit-rate increase while guaranteeing motion estimation accuracy, keeps the reconstructed video picture quality essentially unchanged, preserves the original bitstream structure, effectively reduces the number of search points, markedly reduces the motion estimation time, and increases inter prediction coding speed, thus achieving fast motion estimation coding of inter macroblocks.
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Description of drawings
Fig. 1 Inter macroblock partition modes in the standard inter-frame prediction method;
Fig. 2 Schematic diagram of block-matching motion estimation;
Fig. 3 Asymmetric hexagonal search template with 16 search points;
Fig. 4 Schematic diagram of the UMHexagonS asymmetric cross multi-layer hexagonal grid search algorithm;
Fig. 5 Layered non-uniform hexagonal grid search template of the present invention;
Fig. 6 Schematic diagram of the motion estimation search method of the proposed method;
Fig. 7 Flow chart of the technical scheme of the proposed method;
Fig. 8 Schematic diagram of motion vector median prediction;
Fig. 9 Schematic diagram of motion vector upper-layer prediction;
Fig. 10 Schematic diagram of motion vector temporal prediction;
Embodiment
To verify the effectiveness of the proposed method, test sequences with different characteristics were selected: sequences with relatively intense motion, Coastguard and Foreman; sequences with relatively gentle motion, Akiyo, Miss America, and Mother and Daughter; and a sequence with rich detail and smooth motion, Mobile. Three measures of video coding performance, namely motion estimation coding time, compressed bit rate, and peak signal-to-noise ratio, are used to compare the proposed method with the UMHexagonS motion estimation coding method adopted in H.264/AVC. In the performance tests, the latest JVT reference model JM12.2 was used, with the following experimental configuration: host with a Pentium 4 2.8 GHz CPU and 512 MB of memory; 100 frames coded; frame rate 30 f/s; bitstream structure IPPP, i.e., the first frame is coded as an I frame and the remaining frames as P frames; quantization parameter QP set to 28; entropy coding CAVLC; 5 reference frames.
Since the method operates on the luminance component of the video sequence, in actual use a YUV-format video sequence is first read and its luminance component values are extracted; the encoder then calls the inter macroblock fast motion estimation coding module designed by the present invention to complete the video compression coding.
In a concrete implementation, the following steps are completed on a computer:
Step 1: Read in the YUV-format video sequence according to the encoder configuration file encoder.cfg and configure the encoder according to the parameters in the configuration file, such as the number of frames to encode, FramesToBeEncoded; the frame rate, FrameRate; the width and height of the original video file, SourceWidth and SourceHeight; the output file name, OutputFile; the quantization step QP values, QPISlice and QPPSlice; the motion estimation search range, SearchRange; the motion estimation search mode type, SearchMode; the number of allowed reference frames, NumberReferenceFrames; the rate-distortion cost function switch, RDOptimization; and the entropy coding type, SymbolMode;
Step 2: Read the luminance component values of the video sequence from the original YUV-format video file and take out, in order, the luminance component values of the macroblocks to be coded;
Step 3: Determine the initial search point: perform median prediction MV_pred_space, upper-layer prediction MV_pred_uplayer, and temporal prediction MV_pred_ref for the current predicted macroblock.
Median prediction MV_pred_space: using the median predictor, take the median of the already-obtained motion vectors (MVs) of the blocks to the left of, above, and above-right of the current predicted macroblock as the median prediction vector MV_pred_space.
Median prediction is a commonly used technique. Its theoretical basis is that there is strong spatial correlation between neighboring blocks, so the optimal motion vector of the current predicted macroblock can be derived from the motion vectors of the known surrounding blocks. As shown in Figure 8, E is the current predicted macroblock, A is its left neighboring macroblock, B the neighboring macroblock above, and C the neighboring macroblock above-right. The median prediction formula is:
MV_pred_space = median(MV_A, MV_B, MV_C)    (9)
When block A is outside the picture or outside the GOB (slice group) boundary, MV_pred_space is substituted with (0, 0); when block C is outside the picture or outside the GOB (slice group) boundary, MV_pred_space is substituted with the motion vector of block D, i.e., MV_pred_space = MV_D.
Upper-layer prediction MV_pred_uplayer: exploiting the multi-size macroblock partitioning of inter motion estimation in the H.264/AVC standard, which performs a hierarchical search in the order mode 1 (16 × 16), mode 2 (16 × 8), mode 3 (8 × 16), mode 4 (8 × 8), mode 5 (8 × 4), mode 6 (4 × 8), down to mode 7 (4 × 4), take the already-obtained motion vector of the co-located block one level up (the upper layer), i.e., the block of twice the size, as shown in Figure 9. (For example, mode 5 (8 × 4) or mode 6 (4 × 8) is the upper layer of mode 7 (4 × 4), and mode 4 (8 × 8) is the upper layer of mode 5 (8 × 4) or mode 6 (4 × 8).)
Temporal prediction MV_pred_ref: exploiting temporal correlation, take the already-obtained motion vector of the current block in a previous reference frame and scale it proportionally. Because of the continuity of the video sequence, the motion vectors of the current predicted macroblock in different reference frames are also correlated. As shown in Figure 10, suppose the frame containing the current predicted macroblock is at time t and a preceding reference frame is at time t′. When searching for the best matching block of the current block in an earlier reference frame, the already-obtained motion vector of the current predicted macroblock in the reference frame at time t′ + 1 can be used to estimate its motion vector in the reference frame at time t′:
MV_pred_ref = MV_ref × (t − t′) / (t − t′ − 1)    (10)
Like the motion vectors MV, the corresponding SAD values are also strongly correlated. The rate-distortion costs of the points indicated by these three motion vector prediction modes are denoted Pred_space_mincost, Pred_uplayer_mincost, and Pred_ref_mincost respectively; the point indicated by the motion vector with the minimum rate-distortion cost is taken as the initial search point, determining the position of the search origin. The proposed method thus retains the highly accurate starting-point prediction of the UMHexagonS motion estimation algorithm adopted by the original H.264 standard, bringing the start closer to the position of the best motion estimation matching point. The best matching point of this step serves as the initial search center of the asymmetric cross search in the next step;
Step 4: Perform the asymmetric cross search, based on the observation that in most natural image sequences the motion intensity in the horizontal direction is generally much larger than in the vertical direction. In this implementation W = 32, i.e., a cross-shaped search window of size 32 × 16 is opened, with a spacing of two pixel units between adjacent search points; the horizontal search range is 32 with 16 search points, and the vertical search range is 16 with 8 search points, giving 16 + 8 = 24 candidate search points in the asymmetric cross search. According to the rate-distortion criterion, formula (3) is used to compute the rate-distortion cost of the motion vector at each of the 24 search points; the point with the minimum rate-distortion cost is chosen as the best matching point of this step and serves as the initial search center of the non-uniform multi-layer hexagonal grid search in the next step;
Step 5: Judge the motion intensity of the current macroblock: according to the rate-distortion cost criterion, classify the current predicted macroblock into three classes by motion intensity: low, medium, and high. The motion intensity of the current predicted macroblock is judged as follows:
Compute the rate-distortion cost RDcost_motion of the motion vector of the current predicted macroblock according to formula (3), and the rate-distortion cost RDcost_mode of the current predicted macroblock under each mode pattern according to formula (5); choose the minimum prediction rate-distortion cost among them, denoted RD_mincost, and use formula (7) to judge the motion intensity of the current predicted macroblock:
Define the motion intensity adjustment factors γ and δ; the matrix variable α_Radii1 involved in formula (7) takes the following values:
AlphaRadii1[1]=-0.23f;
AlphaRadii1[2]=-0.23f;
AlphaRadii1[3]=-0.23f;
AlphaRadii1[4]=-0.25f;
AlphaRadii1[5]=-0.27f;
AlphaRadii1[6]=-0.27f;
AlphaRadii1[7]=-0.28f;
The matrix variable α_Radii2 involved takes the following values:
AlphaRadii2[1]=-2.39f;
AlphaRadii2[2]=-2.40f;
AlphaRadii2[3]=-2.40f;
AlphaRadii2[4]=-2.41f;
AlphaRadii2[5]=-2.45f;
AlphaRadii2[6]=-2.45f;
AlphaRadii2[7]=-2.48f;
Step 6: Perform the 5 × 5 full pixel search: after the motion intensity of the current search content has been judged in step 5, only when the motion intensity is low does the method take the current prediction point as center and carry out a full search of 5 × 5 = 25 search points in the 4 × 4 region around it; when the motion intensity is medium or high, the 5 × 5 full search is abandoned and the method proceeds directly to step 7;
Step 7: Perform the non-uniform multi-layer hexagonal grid search, dynamically selecting the number of motion estimation search layers according to the motion intensity of the current macroblock. The present invention adopts the non-uniform hexagonal search as the basic search template, with at most 4 layers; the number of layers is determined from the motion intensity of the current predicted macroblock judged in step 5:
Taking the innermost layer of the asymmetric hexagonal grid search of the original algorithm, i.e., the first-layer search point coordinates, as an example, the horizontal and vertical coordinates of its 16 search points are set to:
static const int Big_Hexagon_x[16]={0,-2,-4,-4,-4,-4,-4,-2,0,2,4,4,4,4,4,2};
static const int Big_Hexagon_y[16]={4,3,2,1,0,-1,-2,-3,-4,-3,-2,-1,0,1,2,3};
If the motion intensity of the current predicted macroblock is low, only the inner 2 layers of the non-uniform hexagonal grid search are performed, with 8 search points per layer, 16 in total, saving 48 search points compared with the original algorithm;
In the new multi-layer asymmetric hexagonal grid search template, the horizontal and vertical coordinates of the 8 search points on the first layer are set to:
static int Big_Hexagon_x1[8]={0,-4,-4,-4,0,4,4,4};
static int Big_Hexagon_y1[8]={4,2,0,-2,-4,-2,0,2};
The horizontal and vertical coordinates of the 8 search points on the second layer are set to:
static int Big_Hexagon_x2[8]={0,-4,-4,-4,0,4,4,4};
static int Big_Hexagon_y2[8]={4,2,0,-2,-4,-2,0,2};
If the motion intensity of the current predicted macroblock is medium, 3 layers of non-uniform hexagonal grid search are performed from the inside out, with 8, 8, and 12 search points per layer, 28 in total, saving 36 search points compared with the original algorithm;
The horizontal and vertical coordinates of the 12 search points on the third layer are set to:
static int Big_Hexagon_x3[12]={0,-4,-4,-4,-4,-4,0,4,4,4,4,4};
static int Big_Hexagon_y3[12]={4,2,1,0,-1,-2,-4,-2,-1,0,1,2};
If the motion intensity of the current predicted macroblock is high, 4 layers of non-uniform hexagonal grid search are performed, with 8, 8, 12, and 16 search points from the inside out, 44 in total, saving 20 search points compared with the original algorithm;
The horizontal and vertical coordinates of the 16 search points on the fourth layer are set to:
static int Big_Hexagon_x4[16]={0,-2,-4,-4,-4,-4,-4,-2,0,2,4,4,4,4,4,2};
static int Big_Hexagon_y4[16]={4,3,2,1,0,-1,-2,-3,-4,-3,-2,-1,0,1,2,3};
The best matching point determined by the asymmetric hexagonal grid search in this step serves as the initial search center of the extended symmetric hexagon search in the next step;
Step 8: Perform the extended symmetric hexagon search, using the best matching point determined in the previous step as the initial search center of this step. It is divided into two search stages:
Stage 1: First perform a symmetric six-point hexagon search with a step size of 2 pixels, comparing the rate-distortion costs of the center point and the 6 vertex points; according to the rate-distortion criterion, the point with the minimum rate-distortion cost becomes the center of the next search, until the center point itself has the minimum rate-distortion cost;
Stage 2: Then reduce the step size and perform a diamond-pattern search with a step size of 1 pixel, repeating the search process of stage 1 until, according to the rate-distortion criterion, the center point has the minimum rate-distortion cost and the search stops; this point is the whole-pixel best matching point determined by the proposed method;
The horizontal and vertical coordinates of the search points of the diamond pattern are set to:
static int Diamond_x[4]={-1,0,1,0};
static int Diamond_y[4]={0,1,0,-1};
Step 9: Finish the motion estimation search, save the relevant motion estimation coding information in the output file out.264, and output the coded bitstream.
The experimental results are shown in Table 1. As can be seen from Table 1, compared with the UMHexagonS motion estimation coding method adopted by the H.264/AVC standard, the method of the present invention slightly increases the peak signal-to-noise ratio, by 0.007 dB on average, with no loss of video quality; strictly controls the bit rate, which increases by only 0.29% on average, keeping the high compression ratio of the original standard algorithm; and greatly shortens the motion estimation coding time, saving 18.46% on average. The experimental statistics effectively verify the validity of the present invention. The proposed method is highly portable and can be combined with other fast video coding methods (such as inter mode selection methods) to jointly improve video coding speed.
Table 1. Comparison of motion estimation coding performance between the proposed New-UMHexagonS algorithm and the UMHexagonS algorithm

Claims (1)

1. A fast motion estimation video coding method based on macroblock characteristics, which designs the motion estimation search template and a dynamic fast search method according to the characteristics of the current predicted macroblock and the inter prediction coding mode, and which, according to the motion intensity of the current predicted macroblock and in combination with the result of inter prediction coding mode selection, dynamically adjusts the search range and adaptively selects the corresponding motion estimation search criterion, thereby realizing fast motion estimation predictive coding of inter macroblocks, characterized by specifically comprising the following steps:
Step 1: Extract the luminance component values of the current predicted macroblock: with the luminance information of the current predicted macroblock as the coding object, extract the luminance component information of the predicted macroblock from the current video frame;
Step 2: Determine the motion estimation search template and allocate the search points: the template comprises four layers of search; as the search radius increases and the search range expands, the number of search points per layer increases progressively from the inside out, with 8, 8, 12, and 16 points on the successive layers;
Step 3: Determine the highly accurate initial search point: adopt the Lagrangian rate-distortion optimization function as the motion estimation decision criterion, selecting the best matching prediction block and the optimal motion vector in the rate-distortion sense so as to minimize the bit allocation for the motion vector and residual coding; the problem of selecting the optimal motion vector with the Lagrangian rate-distortion criterion can be described as:
J_motion(mv, ref | λ_motion) = SAD[s, r(ref, mv)] + λ_motion[R(mv − pred) + R(ref)]    (1)
In formula (1), J_motion is the rate-distortion cost (RDcost_motion) of the currently predicted motion vector; s is the current macroblock pixel value; mv is the current vector and pred is the prediction vector; ref is the selected reference frame; r(ref, mv) is the pixel value of the reference macroblock; R is the number of bits consumed by differential coding of the motion vector, including the coded bits of the difference between the motion vector and its prediction and the coded bits of the reference frame; λ_motion is the Lagrange multiplier; SAD is the sum of absolute differences between the pixels of the current block and the reference block;
SAD[s, r(ref, mv)] = Σ_{x=1}^{B1} Σ_{y=1}^{B2} |s(x, y) − r(x − m_x, y − m_y)|    (2)
In formula (2), B1 and B2 are the horizontal and vertical pixel dimensions of the block; depending on the inter prediction mode, their values can be 16, 8, or 4; s(x, y) is the current macroblock pixel value; r(x, y) is the reference macroblock pixel value; m_x and m_y are the horizontal and vertical displacements, respectively;
The problem of selecting the optimal mode with the Lagrangian rate-distortion criterion can be expressed as:
J_mode(s, c, MODE | λ_mode) = SSD(s, c, MODE | QP) + λ_mode × R(s, c, MODE | QP)    (3)
In formula (3), MODE is one of the inter coding modes of the current macroblock; J_mode(s, c, MODE | λ_mode) is the rate-distortion cost (RDcost_mode) under mode MODE; s is the original video signal; c is the reconstructed video signal after coding with mode MODE; λ_mode is the Lagrange multiplier; R(s, c, MODE | QP) is the total number of bits of the macroblock header, the motion vectors, and all DCT block information associated with the mode and quantization parameter, obtained by actually coding the block, so its computation cost is large; QP is the coding quantization step; SSD(s, c, MODE) is the sum of squared differences between the original and reconstructed signals, that is:
SSD(s, c, MODE | QP) = Σ_{x=1,y=1}^{B1,B2} (S_Y[x, y] − C_Y[x, y, MODE | QP])² + Σ_{x=1,y=1}^{B1,B2} (S_U[x, y] − C_U[x, y, MODE | QP])² + Σ_{x=1,y=1}^{B1,B2} (S_V[x, y] − C_V[x, y, MODE | QP])²    (4)
In formula (4), B1 and B2 are the horizontal and vertical pixel dimensions of the block, with possible values 16, 8, and 4; S_Y[x, y] is the luminance value of the source macroblock and C_Y[x, y, MODE | QP] the luminance value of the reconstructed macroblock; S_U, S_V and C_U, C_V are the corresponding chrominance values. The method specifically comprises the following steps:
Step 3.1: Choose the best motion vector (MV) using three predictive coding modes: median prediction MV_pred_space based on the spatial domain; upper-layer prediction MV_pred_uplayer based on the multi-size macroblock partitioning of motion estimation in the H.264 standard; and reference-frame motion vector prediction MV_pred_ref based on the temporal domain;
(a) MV_pred_space: median prediction; exploiting spatial correlation, take the median of the motion vectors of the already-obtained left, above, and above-right neighboring blocks in the current frame;
(b) MV_pred_uplayer: upper-layer prediction; exploiting the multi-size macroblock partitioning of H.264 motion estimation, with the hierarchical search order mode 1 (16 × 16), mode 2 (16 × 8), mode 3 (8 × 16), mode 4 (8 × 8), mode 5 (8 × 4), mode 6 (4 × 8), mode 7 (4 × 4), take the already-obtained motion vector of the co-located upper-layer block, i.e., the block of twice the size;
(c) MV_pred_ref: temporal prediction; exploiting temporal correlation, scale proportionally the already-obtained motion vector MV_ref of the current block in a previous reference frame, where the frame containing the current macroblock is at time t and the reference frame containing the predicted macroblock is at time t′;
Step 3.2: The corresponding SAD values and the predicted motion vectors MV are strongly correlated. Denote the rate-distortion costs of the points indicated by MV_pred_space, MV_pred_uplayer, and MV_pred_ref as pred_space_mincost, pred_uplayer_mincost, and pred_ref_mincost respectively; take the point indicated by the motion vector with the minimum rate-distortion cost as the initial search point, which serves as the initial search center of the asymmetric cross search in the next step;
Step 4: Asymmetric cross search: the horizontal search range of the asymmetric cross search is set to W and the vertical search range to half of it, W/2; with a spacing of two pixel units between adjacent search points, the horizontal direction has W/2 search points and the vertical direction W/4, giving W/2 + W/4 = 3W/4 candidate search points in the asymmetric cross search in total; the best matching point determined among these 3W/4 candidates serves as the initial search center of the non-uniform multi-layer hexagonal grid search in the next step;
Step 5: Judge the motion intensity of the current macroblock: classify the current predicted macroblock by motion intensity into three classes: low, medium, and high. The motion intensity of the current predicted macroblock is judged as follows:
Step 5.1: Compute the rate-distortion cost of the motion vector of the current predicted macroblock according to formula (1), denoted RDcost_motion, and the rate-distortion cost of the current predicted macroblock under each mode according to formula (3), denoted RDcost_mode; choose the minimum prediction rate-distortion cost among them, denoted RD_mincost, that is: RD_mincost = min{Pred_space_mincost, Pred_uplayer_mincost, Pred_ref_mincost};
Step 5.2: Determine the value of pred_mincost according to the motion vector prediction mode selected for the initial search point:
If the initial search point determined in step 3 adopts the motion vector of the temporal prediction mode, then pred_mincost = pred_ref_mincost;
If the initial search point determined in step 3 does not adopt the motion vector of the temporal prediction mode, two further cases are distinguished:
If the inter coding mode selected for the current motion estimation predicted macroblock is the 16 × 16 large-size mode, then pred_mincost = pred_space_mincost;
If the inter coding mode selected for the current motion estimation predicted macroblock is not the 16 × 16 large-size mode, then pred_mincost = pred_uplayer_mincost;
Step 5.3: Compute the motion intensity adjustment factors γ and δ:
γ = Bsize[blocktype] / pred_mincost² − α_Radii1[blocktype]    (5)
δ = Bsize[blocktype] / pred_mincost² − α_Radii2[blocktype]
In formula (5), Bsize[blocktype] is the size of the current predicted macroblock, with possible values 16, 8, and 4;
The matrix α_Radii1[blocktype] is defined according to the 7 inter macroblock partition modes as:
α_Radii1[1]=-0.23; α_Radii1[2]=-0.23; α_Radii1[3]=-0.23;
α_Radii1[4]=-0.25; α_Radii1[5]=-0.27; α_Radii1[6]=-0.27; α_Radii1[7]=-0.28;
The matrix α_Radii2[blocktype] is defined according to the 7 inter macroblock partition modes as:
α_Radii2[1]=-2.39; α_Radii2[2]=-2.40; α_Radii2[3]=-2.40;
α_Radii2[4]=-2.41; α_Radii2[5]=-2.45; α_Radii2[6]=-2.45; α_Radii2[7]=-2.48;
Step 5.4: Judge the motion intensity of the current predicted macroblock:
When RD_mincost < (1 + γ) × pred_mincost, the motion intensity of the current predicted macroblock is judged to be low;
When (1 + γ) × pred_mincost < RD_mincost < (1 + δ) × pred_mincost, the motion intensity of the current predicted macroblock is judged to be medium;
When RD_mincost > (1 + δ) × pred_mincost, the motion intensity of the current predicted macroblock is judged to be high;
Step 6: Perform the 5 × 5 full pixel search: after step 5 has judged the motion intensity of the current search content, only when the motion intensity is low does the method take the current prediction point as center and carry out a full search of 5 × 5 = 25 search points in the 4 × 4 region around it; when the motion intensity is medium or high, the 5 × 5 full search is abandoned and the method proceeds directly to step 7;
Step 7: Non-uniform multi-layer hexagonal grid search: dynamically select the number of motion estimation search layers according to the motion intensity of the current macroblock, adopting the non-uniform hexagonal search as the basic search template with at most 4 layers. When the motion intensity is low, only 2 layers of non-uniform hexagonal grid search are performed, with 8 search points per layer; when the motion intensity is medium, 3 layers are performed, with 8, 8, and 12 search points from the inside out; when the motion intensity is high, 4 layers are performed, with 8, 8, 12, and 16 search points from the inside out. The best matching point determined by the non-uniform hexagonal grid search in this step serves as the initial search center of the extended symmetric hexagon search in step 8;
Step 8: extended symmetric hexagon search: the best match point determined in Step 7 is taken as the initial search center of this step. A two-stage search is used:
Stage 1: with a step size of 2 pixels, perform a six-point symmetric hexagon search. Compare the rate-distortion cost of the center point with those of the 6 hexagon vertices and, following the rate-distortion criterion, take the point with the smallest cost as the center of the next search, repeating until the center point itself has the smallest rate-distortion cost.
Stage 2: reduce the step size to 1 pixel and perform a four-point diamond search, repeating the procedure of Stage 1 until, by the rate-distortion criterion, the center point has the smallest rate-distortion cost; this point is the final integer-pixel best match point.
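Both stages of Step 8 are instances of one generic pattern search. The sketch below assumes an arbitrary rate-distortion cost function `cost`; the exact hexagon vertex offsets are an assumption, since the text specifies only a six-point symmetric hexagon with a 2-pixel step:

```python
def pattern_search(cost, center, offsets):
    """Move the center to the lowest-cost candidate among itself and its
    pattern neighbours until the center is the minimum (RD criterion)."""
    while True:
        candidates = [center] + [(center[0] + dx, center[1] + dy)
                                 for dx, dy in offsets]
        best = min(candidates, key=cost)
        if best == center:
            return center
        center = best

def extended_hexagon_refine(cost, start):
    # Stage 1: six-point symmetric hexagon, step size 2 pixels
    hexagon = [(-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]
    center = pattern_search(cost, start, hexagon)
    # Stage 2: four-point diamond, step size 1 pixel
    diamond = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return pattern_search(cost, center, diamond)
```

The coarse hexagon stage converges quickly toward the cost minimum, and the 1-pixel diamond stage then locates the final integer-pixel best match point.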
Step 9: output the motion estimation coding information to the Out.264 file, and record the motion estimation time, bit rate, and peak signal-to-noise ratio (PSNR) in order to evaluate the quality of the coding algorithm.
CN 201010140709 2010-04-02 2010-04-02 Method for coding quick movement estimation video based on macro block characteristics Expired - Fee Related CN101815218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010140709 CN101815218B (en) 2010-04-02 2010-04-02 Method for coding quick movement estimation video based on macro block characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010140709 CN101815218B (en) 2010-04-02 2010-04-02 Method for coding quick movement estimation video based on macro block characteristics

Publications (2)

Publication Number Publication Date
CN101815218A true CN101815218A (en) 2010-08-25
CN101815218B CN101815218B (en) 2012-02-08

Family

ID=42622318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010140709 Expired - Fee Related CN101815218B (en) 2010-04-02 2010-04-02 Method for coding quick movement estimation video based on macro block characteristics

Country Status (1)

Country Link
CN (1) CN101815218B (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102118617A (en) * 2011-03-22 2011-07-06 成都市华为赛门铁克科技有限公司 Method and device for searching motion
CN103024390A (en) * 2012-12-21 2013-04-03 天津大学 Self-adapting searching method for motion estimation in video coding
CN103034455A (en) * 2012-12-13 2013-04-10 东南大学 Method and system for managing data information buffer based on pre-decoding and analyzing
CN103067711A (en) * 2012-12-31 2013-04-24 北京联微泰芯集成电路软件开发服务有限责任公司 Integer pixel motion estimation method based on H264 protocol
WO2013071669A1 (en) * 2011-11-18 2013-05-23 杭州海康威视数字技术股份有限公司 Motion analysis method based on video compression code stream, code stream conversion method and apparatus thereof
CN103167288A (en) * 2013-02-28 2013-06-19 深圳市云宙多媒体技术有限公司 Method and device of P frame interframe prediction block partition
CN103188496A (en) * 2013-03-26 2013-07-03 北京工业大学 Fast motion estimation video encoding method based on motion vector distribution forecast
CN103596003A (en) * 2013-11-11 2014-02-19 中国科学技术大学 Interframe predication quick mode selecting method for high-performance video coding
CN104581180A (en) * 2014-12-31 2015-04-29 乐视网信息技术(北京)股份有限公司 Video coding method and device
CN104601993A (en) * 2014-12-31 2015-05-06 乐视网信息技术(北京)股份有限公司 Video coding method and device
CN105847833A (en) * 2010-12-15 2016-08-10 Sk电信有限公司 Video decoding device
CN106331703A (en) * 2015-07-03 2017-01-11 华为技术有限公司 Video coding and decoding method, and video coding and decoding device
US9710935B2 (en) 2012-01-19 2017-07-18 Siemens Aktiengesellschaft Pixel-prediction for compression of visual data
CN106998437A (en) * 2017-03-31 2017-08-01 武汉斗鱼网络科技有限公司 A kind of method and device for rebuilding video image
CN107071414A (en) * 2011-03-09 2017-08-18 佳能株式会社 Video coding and decoding
CN107087195A (en) * 2010-12-13 2017-08-22 韩国电子通信研究院 The method decoded based on inter prediction to vision signal
CN107181959A (en) * 2011-03-17 2017-09-19 寰发股份有限公司 The method and apparatus of derive motion vector prediction
CN107483954A (en) * 2017-08-11 2017-12-15 电子科技大学 Video coding inter-frame prediction method based on multiple linear regression
CN107645663A (en) * 2016-07-20 2018-01-30 阿里巴巴集团控股有限公司 The determination method and device of a kind of motion estimation search range
CN107707913A (en) * 2017-09-29 2018-02-16 福州大学 Error propagation method in anti-frame in Fast video coding
CN107872674A (en) * 2017-11-23 2018-04-03 上海交通大学 A kind of layering motion estimation method and device for ultra high-definition Video Applications
CN107948647A (en) * 2017-11-23 2018-04-20 上海交通大学 A kind of hierarchical motion estimation circuit for ultra high-definition Video Applications
CN107948658A (en) * 2011-03-21 2018-04-20 Lg 电子株式会社 Select the method for motion vector predictor and use its equipment
CN110740322A (en) * 2019-10-23 2020-01-31 李思恒 Video encoding method and device, storage medium and video encoding equipment
CN111327895A (en) * 2018-12-17 2020-06-23 深圳市中兴微电子技术有限公司 Data processing method and device
CN111327898A (en) * 2018-12-14 2020-06-23 中国移动通信集团广西有限公司 Video coding method and device, electronic equipment and storage medium
CN112738529A (en) * 2020-12-23 2021-04-30 北京百度网讯科技有限公司 Inter-frame prediction method, device, equipment, storage medium and program product
CN112868233A (en) * 2019-02-28 2021-05-28 华为技术有限公司 Encoder, decoder and corresponding inter-frame prediction method
CN112866699A (en) * 2019-03-11 2021-05-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113115038A (en) * 2021-04-16 2021-07-13 维沃移动通信有限公司 Motion estimation method and device, electronic equipment and readable storage medium
CN113365081A (en) * 2021-05-27 2021-09-07 深圳市杰理微电子科技有限公司 Method and device for optimizing motion estimation in video coding
CN115529459A (en) * 2022-10-10 2022-12-27 格兰菲智能科技有限公司 Central point searching method and device, computer equipment and storage medium
CN117412065A (en) * 2023-12-15 2024-01-16 福州时芯科技有限公司 Optimization scheme of spiral search algorithm
CN117440168A (en) * 2023-12-19 2024-01-23 福州时芯科技有限公司 Hardware architecture for realizing parallel spiral search algorithm
CN117640939A (en) * 2024-01-25 2024-03-01 宁波康达凯能医疗科技有限公司 Method for discriminating motion estimation search mode for inter-frame image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070297512A1 (en) * 2006-06-21 2007-12-27 Samsung Electronics Co., Ltd. Motion estimation method, medium, and system with fast motion estimation
CN101237580A (en) * 2008-02-29 2008-08-06 西北工业大学 Integer pixel quick mixing search method based on center prediction
CN101494757A (en) * 2009-01-23 2009-07-29 上海广电(集团)有限公司中央研究院 Motion estimating method based on time-space domain mixing information
CN101621694A (en) * 2009-07-29 2010-01-06 深圳市九洲电器有限公司 Motion estimation method, motion estimation system and display terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070297512A1 (en) * 2006-06-21 2007-12-27 Samsung Electronics Co., Ltd. Motion estimation method, medium, and system with fast motion estimation
CN101237580A (en) * 2008-02-29 2008-08-06 西北工业大学 Integer pixel quick mixing search method based on center prediction
CN101494757A (en) * 2009-01-23 2009-07-29 上海广电(集团)有限公司中央研究院 Motion estimating method based on time-space domain mixing information
CN101621694A (en) * 2009-07-29 2010-01-06 深圳市九洲电器有限公司 Motion estimation method, motion estimation system and display terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen Zhibo, Zhou Peng, He Yun, "Fast motion estimation for JVT", JVT-G016, 2003-12-31. *
Ding Yan, Song Xuehua, Yan Shu, Peng Chen, "Improvement of the UMHexagonS fast motion estimation algorithm", Vol. 24, No. 5, pp. 660-663, 2009-09-30. *

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107087195B (en) * 2010-12-13 2020-03-13 韩国电子通信研究院 Method for decoding video signal based on interframe prediction
US10425653B2 (en) 2010-12-13 2019-09-24 Electronics And Telecommunications Research Institute Method and device for determining reference unit
US11252424B2 (en) 2010-12-13 2022-02-15 Electronics And Telecommunications Research Institute Method and device for determining reference unit
US11843795B2 (en) 2010-12-13 2023-12-12 Electronics And Telecommunications Research Institute Method and device for determining reference unit
CN107087195A (en) * 2010-12-13 2017-08-22 韩国电子通信研究院 The method decoded based on inter prediction to vision signal
CN105847833A (en) * 2010-12-15 2016-08-10 Sk电信有限公司 Video decoding device
CN105847833B (en) * 2010-12-15 2018-12-04 Sk电信有限公司 Video decoder
CN107071414A (en) * 2011-03-09 2017-08-18 佳能株式会社 Video coding and decoding
US10764597B2 (en) 2011-03-09 2020-09-01 Canon Kabushiki Kaisha Video encoding and decoding
US10812821B2 (en) 2011-03-09 2020-10-20 Canon Kabushiki Kaisha Video encoding and decoding
CN107071414B (en) * 2011-03-09 2020-12-22 佳能株式会社 Method and apparatus for video encoding and decoding
CN107181959A (en) * 2011-03-17 2017-09-19 寰发股份有限公司 The method and apparatus of derive motion vector prediction
CN107181959B (en) * 2011-03-17 2020-02-18 寰发股份有限公司 Method and apparatus for deriving motion vector predictor
CN107948658A (en) * 2011-03-21 2018-04-20 Lg 电子株式会社 Select the method for motion vector predictor and use its equipment
CN107948658B (en) * 2011-03-21 2021-04-30 Lg 电子株式会社 Method of selecting motion vector predictor and apparatus using the same
US10999598B2 (en) 2011-03-21 2021-05-04 Lg Electronics Inc. Method for selecting motion vector predictor and device using same
CN102118617A (en) * 2011-03-22 2011-07-06 成都市华为赛门铁克科技有限公司 Method and device for searching motion
US9380309B2 (en) 2011-11-18 2016-06-28 Hangzhou Hikvision Digital Technology Co. Ltd Motion analysis method and code stream conversion method based on video compression code stream and apparatus thereof
WO2013071669A1 (en) * 2011-11-18 2013-05-23 杭州海康威视数字技术股份有限公司 Motion analysis method based on video compression code stream, code stream conversion method and apparatus thereof
US9710935B2 (en) 2012-01-19 2017-07-18 Siemens Aktiengesellschaft Pixel-prediction for compression of visual data
CN104081436B (en) * 2012-01-19 2017-10-03 西门子公司 For the method and apparatus of the pixel prediction compressed for vision data
CN103034455A (en) * 2012-12-13 2013-04-10 东南大学 Method and system for managing data information buffer based on pre-decoding and analyzing
CN103034455B (en) * 2012-12-13 2015-09-16 东南大学 Based on data message buffer memory management method and the system of Decoding Analysis in advance
CN103024390A (en) * 2012-12-21 2013-04-03 天津大学 Self-adapting searching method for motion estimation in video coding
CN103024390B (en) * 2012-12-21 2015-09-09 天津大学 For the self-adapted search method of the estimation in Video coding
CN103067711A (en) * 2012-12-31 2013-04-24 北京联微泰芯集成电路软件开发服务有限责任公司 Integer pixel motion estimation method based on H264 protocol
CN103167288A (en) * 2013-02-28 2013-06-19 深圳市云宙多媒体技术有限公司 Method and device of P frame interframe prediction block partition
CN103167288B (en) * 2013-02-28 2016-08-10 深圳市云宙多媒体技术有限公司 The method and device that a kind of P frame interframe prediction block divides
CN103188496B (en) * 2013-03-26 2016-03-09 北京工业大学 Based on the method for coding quick movement estimation video of motion vector distribution prediction
CN103188496A (en) * 2013-03-26 2013-07-03 北京工业大学 Fast motion estimation video encoding method based on motion vector distribution forecast
CN103596003B (en) * 2013-11-11 2015-05-06 中国科学技术大学 Interframe predication quick mode selecting method for high-performance video coding
CN103596003A (en) * 2013-11-11 2014-02-19 中国科学技术大学 Interframe predication quick mode selecting method for high-performance video coding
CN104581180A (en) * 2014-12-31 2015-04-29 乐视网信息技术(北京)股份有限公司 Video coding method and device
CN104601993A (en) * 2014-12-31 2015-05-06 乐视网信息技术(北京)股份有限公司 Video coding method and device
CN106331703A (en) * 2015-07-03 2017-01-11 华为技术有限公司 Video coding and decoding method, and video coding and decoding device
US10523965B2 (en) 2015-07-03 2019-12-31 Huawei Technologies Co., Ltd. Video coding method, video decoding method, video coding apparatus, and video decoding apparatus
CN107645663A (en) * 2016-07-20 2018-01-30 阿里巴巴集团控股有限公司 The determination method and device of a kind of motion estimation search range
CN107645663B (en) * 2016-07-20 2021-01-08 阿里巴巴集团控股有限公司 Method and device for determining motion estimation search range
CN106998437A (en) * 2017-03-31 2017-08-01 武汉斗鱼网络科技有限公司 A kind of method and device for rebuilding video image
CN107483954A (en) * 2017-08-11 2017-12-15 电子科技大学 Video coding inter-frame prediction method based on multiple linear regression
CN107483954B (en) * 2017-08-11 2019-12-03 电子科技大学 Video coding inter-frame prediction method based on multiple linear regression
CN107707913B (en) * 2017-09-29 2019-12-17 福州大学 Method for preventing intra-frame error transmission in fast video coding
CN107707913A (en) * 2017-09-29 2018-02-16 福州大学 Error propagation method in anti-frame in Fast video coding
CN107872674A (en) * 2017-11-23 2018-04-03 上海交通大学 A kind of layering motion estimation method and device for ultra high-definition Video Applications
CN107948647A (en) * 2017-11-23 2018-04-20 上海交通大学 A kind of hierarchical motion estimation circuit for ultra high-definition Video Applications
CN111327898A (en) * 2018-12-14 2020-06-23 中国移动通信集团广西有限公司 Video coding method and device, electronic equipment and storage medium
CN111327898B (en) * 2018-12-14 2022-05-13 中国移动通信集团广西有限公司 Video coding method and device, electronic equipment and storage medium
CN111327895A (en) * 2018-12-17 2020-06-23 深圳市中兴微电子技术有限公司 Data processing method and device
WO2020125649A1 (en) * 2018-12-17 2020-06-25 深圳市中兴微电子技术有限公司 Data processing method and apparatus, and storage medium
CN111327895B (en) * 2018-12-17 2022-05-24 深圳市中兴微电子技术有限公司 Data processing method and device
US11736719B2 (en) 2019-02-28 2023-08-22 Huawei Technologies Co., Ltd. Encoder, a decoder and corresponding methods for inter-prediction
CN112868233A (en) * 2019-02-28 2021-05-28 华为技术有限公司 Encoder, decoder and corresponding inter-frame prediction method
US11336916B2 (en) 2019-02-28 2022-05-17 Huawei Technologies Co., Ltd. Encoder, a decoder and corresponding methods for inter-prediction
CN112866699A (en) * 2019-03-11 2021-05-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113709468A (en) * 2019-03-11 2021-11-26 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
US11902563B2 (en) 2019-03-11 2024-02-13 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method and device, encoder side apparatus and decoder side apparatus
CN113709470A (en) * 2019-03-11 2021-11-26 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN110740322A (en) * 2019-10-23 2020-01-31 李思恒 Video encoding method and device, storage medium and video encoding equipment
CN112738529A (en) * 2020-12-23 2021-04-30 北京百度网讯科技有限公司 Inter-frame prediction method, device, equipment, storage medium and program product
CN113115038A (en) * 2021-04-16 2021-07-13 维沃移动通信有限公司 Motion estimation method and device, electronic equipment and readable storage medium
WO2022218299A1 (en) * 2021-04-16 2022-10-20 维沃移动通信有限公司 Motion estimation method and apparatus, electronic device, and readable storage medium
CN113115038B (en) * 2021-04-16 2022-03-29 维沃移动通信有限公司 Motion estimation method and device, electronic equipment and readable storage medium
CN113365081A (en) * 2021-05-27 2021-09-07 深圳市杰理微电子科技有限公司 Method and device for optimizing motion estimation in video coding
CN115529459A (en) * 2022-10-10 2022-12-27 格兰菲智能科技有限公司 Central point searching method and device, computer equipment and storage medium
CN115529459B (en) * 2022-10-10 2024-02-02 格兰菲智能科技有限公司 Center point searching method, center point searching device, computer equipment and storage medium
CN117412065A (en) * 2023-12-15 2024-01-16 福州时芯科技有限公司 Optimization scheme of spiral search algorithm
CN117412065B (en) * 2023-12-15 2024-03-08 福州时芯科技有限公司 Optimization scheme of spiral search algorithm
CN117440168A (en) * 2023-12-19 2024-01-23 福州时芯科技有限公司 Hardware architecture for realizing parallel spiral search algorithm
CN117440168B (en) * 2023-12-19 2024-03-08 福州时芯科技有限公司 Hardware architecture for realizing parallel spiral search algorithm
CN117640939A (en) * 2024-01-25 2024-03-01 宁波康达凯能医疗科技有限公司 Method for discriminating motion estimation search mode for inter-frame image

Also Published As

Publication number Publication date
CN101815218B (en) 2012-02-08

Similar Documents

Publication Publication Date Title
CN101815218B (en) Method for coding quick movement estimation video based on macro block characteristics
CN102186070B (en) Method for realizing rapid video coding by adopting hierarchical structure anticipation
CN103188496B (en) Based on the method for coding quick movement estimation video of motion vector distribution prediction
CN102282852B (en) Image encoder and decoder using unidirectional prediction
CN104539962B (en) It is a kind of merge visually-perceptible feature can scalable video coding method
CN101461248B (en) Method and apparatus for adaptively determining a bit budget for encoding video pictures
CN104980736A (en) Method and apparatus for encoding video, and method and apparatus for decoding video
CN100586184C (en) Infra-frame prediction method
CN103248895B (en) A kind of quick mode method of estimation for HEVC intraframe coding
CN103634606B (en) Video encoding method and apparatus
CN103618900B (en) Video area-of-interest exacting method based on coding information
CN103873861A (en) Coding mode selection method for HEVC (high efficiency video coding)
CN101404766B (en) Multi-view point video signal encoding method
CN104333756B (en) HEVC predictive mode fast selecting methods based on relativity of time domain
CN100574447C (en) Fast intraframe predicting mode selecting method based on the AVS video coding
CN102065298A (en) High-performance macroblock coding implementation method
CN101621694A (en) Motion estimation method, motion estimation system and display terminal
CN101304529A (en) Method and device for selecting macro block pattern
CN101883275B (en) Video coding method
CN100409690C (en) Video interframe compression method based on space-time correlation
CN104702959B (en) A kind of intra-frame prediction method and system of Video coding
CN102946533B (en) Video coding
CN1194544C (en) Video encoding method based on prediction time and space domain conerent movement vectors
CN101867818B (en) Selection method and device of macroblock mode
CN105282557A (en) H264 rapid movement estimation method for prediction movement vector

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120208

Termination date: 20150402

EXPY Termination of patent right or utility model