CN103561263B - Motion prediction compensation method based on motion vector constraint and weighted motion vectors - Google Patents
Abstract
A motion prediction compensation method based on motion vector constraint and weighted motion vectors, comprising the following steps: performing motion vector constraint analysis on the motion vectors of the coded blocks around the current coding block; directly coding in merge mode the coding blocks of uniform regions whose motion vectors satisfy the constraint; and applying weighted advanced motion vector prediction to perform inter motion-compensated predictive coding on the coding blocks in complex regions whose motion vectors do not satisfy the constraint. Through motion vector constraint analysis, the present invention refines the inter motion-compensated prediction process and applies weighted AMVP prediction to complex scenes, improving the accuracy of inter prediction without introducing extra coding information, thereby improving coding performance and reducing the bit rate after video compression.
Description
Technical field
The invention belongs to the field of video coding and decoding technology, and in particular relates to a motion prediction compensation method based on motion vector constraint and weighted motion vectors.
Background art
Digital video technology is ever more widely applied in communication and broadcasting. In particular, since the 1990s, with the rapid development of the Internet and mobile communication technology, the processing and transmission of video and multimedia information over the Internet and mobile networks have become a focus of information technology research.
Video dominates human information sources in the information age, but uncompressed raw video data is often enormous, making storage and transmission very difficult. At the same time, this huge amount of video data contains a great deal of redundant information, which makes compression possible. Video compression not only pursues a high compression ratio (a lower bit rate) but also aims to preserve the quality of the reconstructed video image as far as possible. These demands conflict, and an outstanding coding algorithm must seek the optimal balance between them. Shannon published the complete rate-distortion theory in 1948 and 1959, mathematically analyzing how to balance distortion and bit rate: the upper bound on achievable distortion at a given bit rate, and the lower bound on achievable bit rate at a given distortion. Rate-distortion theory is one of the cornerstones of video coding.
Video coding exploits the spatial and temporal correlation in video data to eliminate temporal redundancy, spatial redundancy, statistical redundancy and visual redundancy, thereby achieving compression.
Differential Pulse Code Modulation (DPCM), invented by Cutler in 1952, was the earliest research on image coding; intra-frame DPCM exploits the spatial correlation within an image for compression. Video signals also exhibit correlation in the time domain (temporal redundancy), and in 1969 F.W. Mounts and F. Rocca proposed DPCM in the temporal domain. In the 1970s, as the understanding of temporal correlation deepened, motion estimation and compensation techniques were proposed, and to this day motion compensation remains one of the cores of video compression. Motion compensation partitions a video image into blocks, matches them between temporally adjacent images, and then encodes the residual after matching, which effectively removes the temporal redundancy between video frames and achieves compression.
Meanwhile, transform techniques gained attention in the field of image and video coding. Ahmed et al. proposed the famous block-based Discrete Cosine Transform (DCT), which is still used in video compression today. The core idea of transform coding is to partition video data into blocks and use an orthogonal transform to concentrate the energy of the data into a few transform coefficients. The continuous development of predictive coding and transform coding promoted the birth of the hybrid coding framework. J.R. Jain and A.K. Jain proposed the hybrid coding framework based on block motion compensation and transform coding (MC/DCT Hybrid Coding) at the 1979 Picture Coding Symposium, and it has become the core of almost all modern video coding standards. Besides predictive coding and transform coding, the core modules of the hybrid coding framework also include quantization and entropy coding. The loss of information and the gain in compression ratio in video coding largely derive from the quantization module. Quantization basically uses scalar quantization: each single sample of the source signal is mapped to a fixed level in a predefined codebook, forming a many-to-few mapping that achieves compression while inevitably introducing loss. The quantized signal then undergoes lossless entropy coding, which eliminates the statistical redundancy in the signal. Research on entropy coding can be traced back to the 1950s, with the invention of Huffman coding, run-length coding and arithmetic coding. After decades of development, the application of entropy coding in video coding has become mature and refined, making full use of contextual information in video data to estimate probability models more accurately and thereby improve entropy coding efficiency.
As the most mature coding standard of its time, the H.264 video coding standard was released in March 2003. After its release, it received wide attention from academia and industry at home and abroad, achieving good results in coding efficiency, picture quality, network adaptability, error resilience and many other aspects, but its coding algorithm is highly complex. With the rapid development of terminals and network technology, the requirements on video coding keep rising, and H.264 needs continuous improvement to meet new demands. In January 2010, the Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG) jointly established the Joint Collaborative Team on Video Coding (JCT-VC) to formulate a new video coding standard: the High Efficiency Video Coding standard (HEVC). HEVC was proposed as the next-generation video coding standard, and in January 2013 ISO/IEC and ITU-T issued the final draft international standard of HEVC. HEVC has become a research hotspot in international video coding standardization for many research institutes and major companies. The target of HEVC is to save 50% of the bit rate compared with the H.264 standard while keeping the same video quality. However, the latest official verification model of HEVC saves only 22% to 37% of the bit rate compared with H.264, which does not yet reach the 50% target. Therefore, HEVC still has considerable room, and need, for improvement in compression performance.
Similar to previous standards, HEVC still uses the hybrid coding framework, but makes careful improvements and optimizations in data structures, intra prediction, inter prediction, transform, loop filtering and other stages. The key technical features of HEVC include the following aspects:
(1) Replacing the 16x16 fixed-size macroblock partitioning of previous standards, HEVC uses a flexible quadtree data structure, introduces the concepts of Coding Unit (CU), Prediction Unit (PU) and Transform Unit (TU), and uses quadtrees to express coded data blocks and transform blocks.
As video resolution increases, large flat regions appear in images with higher probability, and motion compensation based on larger block sizes (beyond 16x16) can improve coding efficiency more effectively. HEVC adopts a flexible quadtree partitioning mode: similar to the macroblock concept, the image is divided into Coding Tree Units (CTU), and each CTU can be split into four iteratively, down to the Smallest Coding Unit (SCU).
(2) Image partitioning modes supporting parallel encoding, including Tiles and Wave-front Parallel Processing (WPP). Tiles divide an image evenly into several rows and columns; each Tile can be decoded or parsed independently, which facilitates parallel computing. WPP parallelizes an image by CTU rows: once the first two CTUs of a CTU row have been encoded or decoded, the encoding or decoding of the next CTU row can start in parallel. This increases the degree of parallelism while preserving the prediction and entropy coding context as much as possible, and thus maintains coding efficiency better than Tiles and Slices.
(3) Intra prediction adopts the Angular technique, increasing the number of prediction directions to 35 (including the DC and Planar modes), and improves the generation of intra prediction values with Mode Dependent Intra Smoothing (MDIS). HEVC adopts a spatial intra prediction technique similar to H.264, increasing the number of intra prediction directions to 35 to make prediction more fine-grained, so as to effectively capture image texture, eliminate spatial redundancy and improve compression efficiency. Meanwhile, when generating intra prediction values, HEVC applies different filtering to the predicted values according to the intra prediction block size and the intra prediction mode, i.e. the MDIS technique, which improves the accuracy of intra prediction and thus compression efficiency. Intra prediction of the chroma components exploits the correlation between chroma and luma: when the chroma texture is similar to that of the luma component, the chroma component can directly reuse the prediction direction of the luma component, avoiding the transmission of extra mode information and improving coding efficiency.
(4) Advanced motion vector prediction: the motion vectors of HEVC can be predicted both spatially and temporally. Compared with previous standards, HEVC's motion vector prediction exploits more spatially adjacent information. Instead of computing a fixed predicted motion vector with the median method, it generates a candidate set of predicted motion vectors, selects the optimal predictor through rate-distortion optimization, and transmits the index of the optimal predicted motion vector in the bitstream.
(5) The new inter prediction mode, Merge: since inter prediction blocks have strong spatial and temporal consistency, letting the block to be coded directly reuse the motion information of an adjacent block eliminates redundancy and improves coding efficiency. A block coded in Merge mode can directly copy the motion information of an adjacent block for motion compensation, saving the coding overhead of the motion information and improving compression efficiency.
(6) Asymmetric Motion Partitions (AMP) for inter prediction: in addition to block partitions similar to the variable blocks in H.264/AVC, namely 2Nx2N, 2NxN, Nx2N and NxN (N ∈ {32, 16, 8, 4}), HEVC also adopts the AMP mode, which can capture moving objects in images more flexibly and improve coding efficiency.
(7) DCT-based sub-pixel interpolation filters: the motion vector accuracy of HEVC remains 1/4-pel, the same as in H.264/AVC, but HEVC adopts separable one-dimensional DCT-based filters, further improving the accuracy of sub-pixel interpolation.
(8) Quadtree transform structure and multiple transform sizes: the transform coding of prediction residuals in HEVC adopts quadtree-organized transforms of sizes 32x32, 16x16, 8x8 and 4x4, with integer DCT as the transform kernel. In addition, for the prediction residual blocks of some intra prediction modes, a 4x4 Discrete Sine Transform (DST) can be used: in intra prediction, the farther a position is from the top-left corner, the worse the prediction and the larger the residual, and for such data blocks the DST outperforms the DCT.
(9) Besides the deblocking filter, the loop filtering stage adds Sample Adaptive Offset (SAO). The deblocking filtering technique of HEVC is similar to that of H.264/AVC, but because of the different block partitioning modes, the boundaries of CUs, PUs and TUs must be processed separately. After deblocking the reconstructed image, pixels are divided into different categories according to pixel value and pixel position, and different pixel offsets are introduced per category to reduce reconstruction distortion; this is the SAO technique.
In video data, the temporal redundancy between video frames is much larger than the spatial redundancy within a frame, so the motion-compensated prediction algorithms that compress inter-frame temporal redundancy occupy a large proportion of the whole video compression coding system. Therefore, the performance of motion-compensated prediction largely determines the compression performance of an encoder.
Skip/Merge mode, an important new coding tool introduced in HEVC inter coding, can further improve inter coding efficiency. Since video images are coded in block partitions, the correlation between adjacent blocks is very high, and their texture or motion is strongly similar. Based on this consideration, the Skip mode holds a very important position in the inter prediction techniques of previous video standards. The motion vector of a SKIP-mode block is derived at the decoder, and the reconstructed value is generated entirely from the reference frame, so only a little side information (usually one flag) is needed to express the information of a whole macroblock. HEVC adopts a similar idea in designing the Skip and Merge modes. To reduce the side-information overhead that inter prediction blocks must transmit, Skip and Merge fully exploit the similarity between adjacent blocks in the spatial and temporal domains, expressing the motion information, including motion vector, reference index and prediction direction, by transmitting an index; the decoder can recover the motion information of a Skip or Merge block, saving bit-rate overhead.
Skip operates in units of coding units (CU): each coding unit transmits the Skip mode flag and the motion information index but no prediction residual, and prediction takes the 2Nx2N block as its basic unit. Merge operates in units of prediction units (PU): the Merge mode flag and motion information index are transmitted per PU, all or some of the PUs in each CU may use Merge mode, and the prediction residual is transmitted. SKIP and Merge use the same motion information derivation, where the motion information to be derived includes: motion vector, reference index and prediction direction.
The motion information derivation of Skip/Merge mode uses four types of adjacent motion information as candidates to generate a candidate list, and the chosen candidate is expressed by transmitting its index into the list. The four classes of adjacent motion information are: spatial candidates, temporal candidates, combined bi-predictive candidates and the zero motion vector candidate; they are added to the candidate list in this order until the maximum candidate list size is reached.
In HEVC, the selection of Skip/Merge mode is based on an exhaustive rate-distortion-optimized search and comparison: for the coding of a CU at each depth, the rate-distortion costs of inter Skip, Merge, 2Nx2N, Nx2N, 2NxN, NxN and asymmetric AMP partitions, and of intra 2Nx2N, NxN and PCM modes, are compared, and Skip/Merge is used only when all modes have been evaluated and Skip/Merge has the minimum cost.
For non-Skip/Merge modes, the motion information must be transmitted. To save the bit-rate overhead of transmitting the MV, HEVC adopts a motion vector prediction technique similar to that of H.264: a motion vector predictor candidate list (of length 2) is formed from spatially and temporally adjacent information, the MV is predicted by transmitting the candidate list index, and the prediction residual MVD of the MV is transmitted.
However, in the existing inter prediction compensation process, only one reference block in a reference frame serves as the predictor of the current coding block, and for complex coding blocks, or coding blocks whose motion is inconsistent with their surroundings, a single block can hardly predict the current coding block accurately. Therefore, in some complex motion scenes or local regions, the accuracy of traditional motion-compensated prediction is unsatisfactory, which hinders further improvement of video compression efficiency.
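As described above, non-Skip/Merge blocks signal their motion vector as an index into a two-entry predictor candidate list plus a residual (MVD). The patent itself contains no code, so the following Python sketch is purely illustrative: the candidate values and the `mvd_bits` cost proxy are invented for the example, and real HEVC entropy coding of the MVD is far more involved.

```python
# Illustrative sketch of predictor-index + MVD signalling (not HEVC-conformant).

def mvd_bits(mvd):
    """Rough proxy for the rate of coding an MVD: sum of absolute components."""
    return abs(mvd[0]) + abs(mvd[1])

def encode_mv(mv, candidates):
    """Pick the candidate index minimizing the MVD cost; return (index, MVD)."""
    best = min(range(len(candidates)),
               key=lambda i: mvd_bits((mv[0] - candidates[i][0],
                                       mv[1] - candidates[i][1])))
    mvd = (mv[0] - candidates[best][0], mv[1] - candidates[best][1])
    return best, mvd

def decode_mv(index, mvd, candidates):
    """Decoder side: predictor plus residual recovers the motion vector."""
    px, py = candidates[index]
    return (px + mvd[0], py + mvd[1])

candidates = [(4, -2), (7, 0)]          # a two-entry candidate list (invented)
idx, mvd = encode_mv((6, -1), candidates)
print(idx, mvd)                          # second candidate gives the cheaper MVD
assert decode_mv(idx, mvd, candidates) == (6, -1)
```

The decoder needs only the index and the MVD to reconstruct the exact motion vector, which is why a short candidate list saves rate compared with coding the vector directly.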
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a motion prediction compensation method based on motion vector constraint and weighted motion vectors that is reasonably designed, highly accurate and fast.
The present invention solves its technical problem by adopting the following technical scheme:
A motion prediction compensation method based on motion vector constraint and weighted motion vectors, comprising the following steps:
Step 1: performing motion vector constraint analysis on the motion vectors of the coded blocks around the current coding block;
Step 2: directly coding in merge mode the coding blocks of uniform regions whose motion vectors satisfy the constraint;
Step 3: applying weighted advanced motion vector prediction to perform inter motion-compensated predictive coding on the coding blocks in complex regions whose motion vectors do not satisfy the constraint.
The specific steps of step 1 are as follows:
(1) The spatial motion vector reference set of the current coding block is divided into two modes: one mode takes 3 motion vectors of the coded blocks to the left of the current block, the other mode takes 3 motion vectors of the coded blocks above the current block;
(2) For each mode, the motion vector difference between each pair of the 3 motion vectors is computed as:

D = |v_x(A) - v_x(B)| + |v_y(A) - v_y(B)|

where D is the difference between any two motion vectors, A and B denote the two motion vectors, v_x denotes the horizontal component of a motion vector, and v_y denotes its vertical component;
(3) Whether the motion vector fields of the current coding block and the surrounding blocks are consistent is judged from the pairwise differences D.
Moreover, the method of judging in step (3) whether the current coding block is consistent with the motion vector field of the surrounding blocks is: for a current coding block of width W and height H, if in both modes the horizontal components of all pairwise differences D are less than W/8 and the vertical components of all D are less than H/8, the current motion vector field is continuous, and the current coding block is judged to be consistent in motion with the surrounding blocks.
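The constraint test above can be sketched in a few lines of Python. This is an illustration only, assuming each mode contributes three neighbouring motion vectors in the same units as the W/8 and H/8 thresholds; the neighbour values below are invented:

```python
# Illustrative sketch of the motion vector constraint test (steps (2)-(3)).
from itertools import combinations

def is_uniform(mode_one_mvs, mode_two_mvs, w, h):
    """True if every pairwise MV difference in both modes stays below the
    horizontal threshold W/8 and the vertical threshold H/8."""
    for mvs in (mode_one_mvs, mode_two_mvs):
        for (ax, ay), (bx, by) in combinations(mvs, 2):
            if abs(ax - bx) >= w / 8 or abs(ay - by) >= h / 8:
                return False
    return True

left  = [(8, 4), (9, 4), (8, 5)]   # MVs of the three left neighbours (invented)
above = [(8, 4), (8, 4), (9, 5)]   # MVs of the three above neighbours (invented)
print(is_uniform(left, above, w=16, h=16))   # small differences: uniform region
```

When any single component exceeds its threshold, the block is classed as complex and falls through to the weighted AMVP path of step 3.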
Moreover, the merge mode coding method of step 2 comprises the following steps:
(1) In the merge reference set, the optimal merge reference motion vector is obtained by computing the sum of absolute differences:

MV_merge = argmin_{d ∈ VSMC} SAD(d),  SAD(d) = Σ_s |F_n(s) - F'_{n-1}(s + d)|

where MV_merge denotes the optimal merge-mode reference motion vector, d denotes a motion vector, VSMC denotes the merge-mode reference set, SAD is the sum of absolute differences over all pixels between the current block and the reference block, representing the matching degree of the two blocks, F_n(s) denotes the pixel value of the current frame, and F'_{n-1} denotes the pixel value of the previous frame, i.e. the reference frame;
(2) If the sum of the coefficients obtained after transform and quantization of the residual data compensated by the optimal merge-mode reference motion vector is zero, skip mode is used for coding, and the skip mode flag and the index of the optimal merge-mode reference motion vector are transmitted to the decoder;
(3) If the sum of the coefficients obtained after transform and quantization of the residual data compensated by the optimal merge-mode reference motion vector is not zero, the index of the merge-mode reference motion vector and the transformed and quantized residual coefficients are coded and transmitted to the decoder.
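Step (1) above, the minimum-SAD search over the merge reference set, can be sketched as follows. The frames, block geometry and candidate set are invented for illustration, and frames are plain 2-D lists rather than a real picture buffer:

```python
# Illustrative sketch of MV_merge = argmin over VSMC of SAD (step (1)).

def sad(cur, ref, x, y, w, h, d):
    """Sum of absolute differences between the current block at (x, y)
    and the reference block displaced by motion vector d = (dx, dy)."""
    dx, dy = d
    return sum(abs(cur[y + j][x + i] - ref[y + dy + j][x + dx + i])
               for j in range(h) for i in range(w))

def best_merge_mv(cur, ref, x, y, w, h, vsmc):
    """Return the merge candidate with minimum SAD."""
    return min(vsmc, key=lambda d: sad(cur, ref, x, y, w, h, d))

ref = [[(i + 2 * j) % 16 for i in range(8)] for j in range(8)]  # reference frame
cur = [row[1:] + row[:1] for row in ref]    # current frame: content shifted left
vsmc = [(0, 0), (1, 0), (0, 1)]             # merge reference set (invented)
print(best_merge_mv(cur, ref, x=2, y=2, w=4, h=4, vsmc=vsmc))   # -> (1, 0)
```

Because the current frame is the reference shifted one pixel left, the candidate (1, 0) matches the block exactly (SAD of zero) and is selected.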
Moreover, the specific steps of using weighted advanced motion vector prediction for inter coding in step 3 include:
(1) Selecting 2 final AMVP predictors from the advanced motion vector prediction reference set and averaging them:

MV_mean = (MV_AMVP1 + MV_AMVP2) / 2

where MV_mean denotes the mean AMVP prediction vector, and MV_AMVP1 and MV_AMVP2 denote the 2 final AMVP predictors;
(2) When the 2 final AMVP predictors do not include zero motion, the zero motion vector is weighted in, yielding the weighted AMVP predictor:

A(s) = w_m · MV_mean + w_zero · MV_zero

where A(s) denotes the weighted AMVP predictor, w_m denotes the weight given to the mean AMVP prediction vector, and w_zero denotes the weight given to the zero motion vector;
(3) After obtaining the weighted AMVP predictor, motion search is performed on the basis of this predictor, and the final motion vector is obtained according to the minimum-SAD criterion:

MV_weighted = w_AMVP · A(s) + w_search · MV_search

where MV_weighted denotes the motion vector finally obtained after weighted AMVP prediction and search, MV_search denotes the best among all possible motion vectors d within the specified motion search window, w_AMVP denotes the weight given to the weighted AMVP predictor, and w_search denotes the weight given to the motion vector obtained by motion search;
(4) For the final MV_weighted, the residual motion vector is obtained by subtracting the motion-compensated AMVP predictor with minimum SAD:

MVD = MV_weighted - MV_candidate

where MVD denotes the residual motion vector and MV_candidate denotes the motion vector predictor of the current coding block.
Moreover, in step (2), when the 2 final AMVP predictors include zero motion, w_m is set to 1 and w_zero to 0; when the 2 final AMVP predictors do not include zero motion, w_m is set to 0.5 and w_zero to 0.5.
Moreover, in step (3), w_AMVP is set to 0.2 and w_search is set to 0.8.
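The weighting scheme of steps (1)-(3) can be sketched numerically as follows. This is an illustration of the stated weights only (w_m/w_zero of 1/0 or 0.5/0.5, w_AMVP = 0.2, w_search = 0.8); the motion search itself is stubbed out as a given vector, and all candidate values are invented:

```python
# Illustrative sketch of the weighted AMVP predictor and the final blended MV.

def weighted_amvp(p1, p2):
    """A(s): mean of the two final AMVP candidates, blended with the zero
    vector when neither candidate is already zero motion."""
    mean = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    if p1 == (0, 0) or p2 == (0, 0):
        w_m = 1.0                     # a zero MV is already among the candidates
    else:
        w_m = 0.5                     # w_zero = 0.5, but the zero MV adds nothing
    return (w_m * mean[0], w_m * mean[1])

def final_mv(a, searched, w_amvp=0.2, w_search=0.8):
    """MV_weighted: blend the weighted predictor with the searched MV."""
    return (w_amvp * a[0] + w_search * searched[0],
            w_amvp * a[1] + w_search * searched[1])

a = weighted_amvp((4, 2), (6, 2))     # neither candidate is zero motion
mv = final_mv(a, searched=(5, 3))     # searched MV stands in for motion search
print(a, mv)                          # a = (2.5, 1.0), mv close to (4.5, 2.6)
```

Blending in the zero vector pulls the predictor toward zero motion when the neighbours disagree with the true motion of the current block, which is exactly the case the text identifies as a large-prediction-error risk.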
The advantages and positive effects of the present invention are:
By means of the motion vector constraint, the present invention judges the consistency and complexity of the current coding block relative to the surrounding blocks and splits the inter motion-compensated prediction of coding blocks into two more precise flows, thereby increasing inter prediction accuracy. Merge mode or skip mode is used for coding blocks with simple or highly consistent motion, eliminating the motion search otherwise needed to obtain a motion vector and speeding up the decision. For coding blocks that are complex or whose motion is inconsistent with the surroundings, the more accurate weighted AMVP prediction method is used, which improves the prediction accuracy of complex scenes without introducing extra coding information (such as extra reference frames, motion vectors or mode flags).
Brief description of the drawings
Fig. 1 is the overall process flow chart of the present invention;
Fig. 2 is the candidate position map of AMVP prediction and Merge mode;
Fig. 3 is the flow chart for choosing the two final AMVP predictors;
Fig. 4 is a rate-distortion curve comparison between the present invention and the standard codec (BasketballDrive sequence);
Fig. 5 is a rate-distortion curve comparison between the present invention and the standard codec (RaceHorses sequence).
Detailed description of the invention
The embodiments of the present invention are further described below with reference to the accompanying drawings:
A motion prediction compensation method based on motion vector constraint and weighted motion vectors, as shown in Fig. 1, comprises the following steps:
Step 1: perform motion vector constraint analysis on the motion vectors of the coded blocks around the current coding block.
In this step, the motion information of the coded blocks temporally and spatially adjacent to the current coding block is first obtained, the available motion information is then analyzed, and finally it is judged whether the current coding block and the surrounding coded blocks belong to the same physical object. The specific steps are as follows:
(1) The spatial motion vector reference set of the current coding block is divided into two modes: mode one takes 3 motion vectors of the coded blocks to the left of the current block, and mode two takes 3 motion vectors of the coded blocks above the current block. When A0 on the left exists, A0 and B2 are chosen for mode one; when A0 does not exist, A1 and B2 are chosen for mode one. When B0 above exists, B0 and B2 are chosen for mode two; when B0 does not exist, B1 and B2 are chosen for mode two.
(2) The motion vector difference between each pair of the 3 motion vectors in each mode is computed as follows:

D = |v_x(A) - v_x(B)| + |v_y(A) - v_y(B)|

where D is the difference between any two motion vectors, A and B denote the two motion vectors, v_x denotes the horizontal component of a motion vector, and v_y denotes its vertical component.
(3) From the pairwise differences D it is judged whether the motion vector fields of the current coding block and the surrounding blocks are consistent, or whether the current block is complex. If the difference between two adjacent motion vectors is below a preset threshold, the motion of the two pixels is considered consistent; in other words, the two pixels belong to the same physical object.
Test results indicate that if the difference between two adjacent motion vectors is less than one eighth of a pixel, the motion of the pixels is considered consistent, i.e. the two pixels belong to the same physical object. Thus, for a current coding block of width W and height H, by analyzing the results obtained in step (2), if the horizontal components of all D in both modes are less than W/8 and the vertical components of all D are less than H/8, the current motion vector field is continuous, and it can be determined that the current coding block is consistent in motion with the surrounding blocks. Conversely, if any single component exceeds its threshold, the current coding block is considered inconsistent with the surrounding blocks.
Through the above 3 steps, it can be determined whether the current coding block and the surrounding coded blocks belong to the same physical object, i.e. whether their motion is consistent, which provides the decision basis for choosing between step 2 and step 3: the motion-compensated prediction process is classified according to the result of the motion vector constraint analysis. If the analysis concludes that the current coding block is consistent with the motion vector field of the surrounding blocks, merge mode prediction is used directly; the coding blocks in complex regions whose motion vectors do not satisfy the constraint are inter coded with weighted advanced motion vector prediction.
Step 2, the encoding block of the Uniform Domains that motion vector meets constraints directly uses fusion mode
Coding.
After motion vector constraint fractional analysis, if present encoding block is consistent with the motion vector field of surrounding block,
The most directly using fusion mode to carry out encoding (Merge pattern-coding), it specifically comprises the following steps that
(1) in merging reference set, by the calculating of SAD, obtain optimal fusional movement minimum for SAD and vow
Amount, can represent such as below equation.
Wherein MVmergeRepresenting optimal fusion mode reference motion vector, d represents motion vector, VSMCRepresent
Fusion mode reference set, SAD is the absolute difference sum of all pixels between current block and reference block, is used for representing
The matching degree of two blocks, FnS () represents the pixel value of present frame,Represent the former frame i.e. picture of reference frame
Element value.
(2) If the residual data after compensation by the optimal merge mode reference motion vector yields a coefficient sum of zero after transform and quantization, i.e. the coded block flag is zero, skip mode is used for coding, and the skip mode flag and the index of the optimal merge mode reference motion vector are transmitted to the decoder.
(3) If the residual data after compensation by the optimal merge mode reference motion vector yields a coefficient sum that is not zero after transform and quantization, the index of the merge mode reference motion vector and the transformed and quantized residual coefficients are encoded and transmitted to the decoder.
Step 3: coding blocks in complex regions whose motion vectors do not satisfy the constraint condition undergo inter motion compensation prediction coding using the weighted advanced motion vector prediction (AMVP) method.
After the motion vector constraint analysis, if the motion vector field of the current coding block is inconsistent with that of the surrounding blocks, weighted AMVP prediction is used. The specific steps are as follows:
(1) To stay consistent with the two AMVP predictors retained in the HEVC standard, this method also selects 2 final AMVP predictors from the advanced motion vector prediction (AMVP) reference set; the processing flow is shown in Fig. 3. First, 2 of the 5 spatial AMVP candidates are chosen, in the order A0, A1, B0, B1, B2, until 2 have been taken; then 1 of the 2 temporal AMVP candidates is chosen: H if it exists, otherwise C3. If fewer than 3 surrounding motion vectors are available, the missing ones may be omitted. The candidates are ordered spatial first, then temporal. Next, duplicates among the AMVP predictors are removed; if fewer than 2 motion vectors then remain, the zero motion vector (0, 0) is added; if 3 motion vectors still remain, the third is removed. This yields the final 2 AMVP predictors.
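The candidate-list construction just described can be sketched as follows. This is a minimal sketch assuming the spatial candidates arrive already ordered A0, A1, B0, B1, B2, the temporal candidates ordered H then C3, and None marking an unavailable candidate.

```python
def final_amvp_predictors(spatial, temporal):
    """Build the final 2 AMVP predictors: take up to 2 spatial candidates
    in order, then 1 temporal candidate, drop duplicates, pad with the
    zero vector, and keep at most 2 (spatial ordered before temporal)."""
    picked = [mv for mv in spatial if mv is not None][:2]
    for mv in temporal:          # H if it exists, otherwise C3
        if mv is not None:
            picked.append(mv)
            break
    unique = []                  # remove duplicates, preserving order
    for mv in picked:
        if mv not in unique:
            unique.append(mv)
    while len(unique) < 2:       # pad with the zero motion vector
        unique.append((0, 0))
    return unique[:2]            # if 3 remain, the third is removed

# A0 unavailable; A1 and B0 duplicate each other and the temporal H.
print(final_amvp_predictors([None, (3, 1), (3, 1), None, None], [(3, 1), None]))
# [(3, 1), (0, 0)]
```

Note how deduplication plus zero-vector padding guarantees exactly 2 predictors, matching the HEVC AMVP list size.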
(2) The 2 final AMVP predictors are then averaged as follows:
MV_mean = (MV_1 + MV_2) / 2
where MV_mean denotes the AMVP mean prediction vector, and MV_1 and MV_2 respectively denote the 2 final AMVP predictors.
(3) When the 2 final AMVP predictors do not include zero motion, the zero motion vector is added with its own weight; this avoids excessive prediction error when the motion vectors of the surrounding blocks differ greatly from the true motion vector of the current block. The weighted AMVP predictor is obtained as follows:
A(s) = w_m · MV_mean + w_zero · (0, 0)
where A(s) denotes the weighted AMVP predictor, w_m denotes the weight given to the AMVP mean prediction vector, and w_zero denotes the weight given to the zero motion vector. Test results indicate that taking w_m = 1 and w_zero = 0 when the 2 final AMVP predictors include zero motion, and w_m = 0.5 and w_zero = 0.5 when they do not, gives better prediction.
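Steps (2) and (3) — averaging the two final predictors and, when neither is the zero vector, blending the average with zero motion — can be sketched as follows, using the weights (1, 0) and (0.5, 0.5) reported in the text:

```python
def weighted_amvp(mv1, mv2):
    """Average the 2 final AMVP predictors; when neither is the zero
    vector, blend the average with (0, 0) using w_m = w_zero = 0.5."""
    mean = ((mv1[0] + mv2[0]) / 2.0, (mv1[1] + mv2[1]) / 2.0)
    if mv1 == (0, 0) or mv2 == (0, 0):
        w_m, w_zero = 1.0, 0.0   # zero motion is already represented
    else:
        w_m, w_zero = 0.5, 0.5   # pull the predictor towards zero motion
    return (w_m * mean[0] + w_zero * 0.0,
            w_m * mean[1] + w_zero * 0.0)

print(weighted_amvp((4, 2), (8, 6)))   # neither is zero -> halved mean: (3.0, 2.0)
print(weighted_amvp((4, 2), (0, 0)))   # zero present -> plain mean: (2.0, 1.0)
```

The pull towards (0, 0) damps the predictor when neighbouring motion is a poor guide to the current block, which is exactly the complex-region case this branch handles.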
(4) After the weighted AMVP predictor is obtained, motion search is performed around this predictor, and the final motion vector is obtained according to the minimum SAD criterion, computed as follows:
MV_weighted = w_AMVP · A(s) + w_search · argmin_{d} SAD(d)
where MV_weighted denotes the motion vector obtained by search after weighted AMVP prediction, d ranges over all possible motion vectors in the specified search window, w_AMVP denotes the weight given to the weighted AMVP predictor, and w_search denotes the weight given to the motion vector obtained by motion search. Experimental results show that w_AMVP = 0.2 and w_search = 0.8 give the best prediction.
(5) For the final MV_weighted, the AMVP predictor with the minimum SAD after motion compensation is subtracted from it to obtain the motion vector difference (MVD); the aim is to further remove redundancy and to conform to the bitstream specification of mainstream encoders. This is computed as follows:
MVD = MV_weighted − MV_candidate
where MV_candidate denotes the motion vector predictor of the current coding block, namely the one of the two final AMVP predictors obtained in (1) that gives the smallest MVD with MV_weighted.
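Step (5) can be sketched as follows. This is a minimal sketch that measures the MVD magnitude as the sum of absolute component differences; that cost measure is an assumption, since the text only says the candidate minimizing the MVD is chosen.

```python
def motion_vector_difference(mv_weighted, candidates):
    """Choose the candidate predictor minimising the MVD magnitude and
    return (MVD, chosen candidate), where MVD = MV_weighted - MV_candidate."""
    best = min(candidates,
               key=lambda c: abs(mv_weighted[0] - c[0]) + abs(mv_weighted[1] - c[1]))
    mvd = (mv_weighted[0] - best[0], mv_weighted[1] - best[1])
    return mvd, best

mvd, chosen = motion_vector_difference((7, 3), [(6, 3), (0, 0)])
print(mvd, chosen)  # (1, 0) (6, 3)
```

Only the small MVD and the predictor index are signalled, which is how the scheme adds accuracy without extra coded side information beyond what standard AMVP already transmits.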
In summary, the method of the invention is characterized by first performing the motion vector constraint analysis of Part I; when the current coding block is consistent with the motion vector field of the surrounding blocks, the method of step 2 is used and merge mode coding is performed; when the motion vector field of the current coding block is inconsistent with that of the surrounding blocks, the method of step 3, weighted AMVP prediction, is used.
To illustrate the effect of this motion prediction compensation method, the following verification was carried out:
The experiments use the HEVC official standard video test sequences as test objects, and experimental simulations compare the method of the invention with the HEVC official code. The coding quantization parameter QP is set to 22, 27, 32 and 37. The peak signal-to-noise ratio (BD-PSNR) and bit rate (BD-rate) performance of this method compared with the HEVC official verification code is shown in the table below. Fig. 4 compares the rate-distortion curves of the method of the invention and standard coding for the BasketballDrive sequence, and Fig. 5 compares them for the RaceHorses sequence.
Table 1 BD-PSNR and BD-rate performance comparison
BD-PSNR represents the difference in peak signal-to-noise ratio at the same video bit rate; a positive value means the PSNR of the invention is higher than that of standard coding; the unit is dB. BD-rate represents the difference in video bit rate of the invention compared with standard coding at the same peak signal-to-noise ratio; a negative value means the video bit rate of the invention is lower than that of standard coding; the unit is percent. For the test sequence Vidyo1, the motion of the three people in the video is very simple and slow, and the large background area of the video remains static, so the weighted advanced motion vector prediction proposed by the invention is triggered relatively rarely and the coding gain is small. For the test sequence BasketballDrill, the motion of the basketball players in the video is complex, intense and fast, and a significant portion of the motion vectors are discontinuous, so the weighted advanced motion vector prediction proposed by the method of the invention is triggered more frequently and the coding gain is larger.
It should be emphasized that the embodiments of the present invention are illustrative rather than restrictive; therefore the present invention is not limited to the embodiments described in the detailed description, and other embodiments derived by those skilled in the art from the technical solution of the invention also fall within the scope of protection of the invention.
Claims (6)
1. A motion prediction compensation method based on motion vector constraint and weighted motion vector, characterized in that it comprises the following steps:
Step 1: performing motion vector constraint analysis on the motion vectors of the coded blocks around the current coding block;
Step 2: directly coding, in merge mode, coding blocks in uniform regions whose motion vectors satisfy the constraint condition;
Step 3: performing inter motion compensation prediction coding, using the weighted advanced motion vector prediction method, on coding blocks in complex regions whose motion vectors do not satisfy the constraint condition;
the specific steps of said step 1 being:
(1) dividing the spatial motion vector reference set of the current coding block into two patterns, one pattern taking 3 motion vectors of the coding blocks to the left of the current block, the other pattern taking 3 motion vectors of the coding blocks above the current block;
(2) computing, in each pattern, the motion vector difference between every 2 of the 3 motion vectors, by the following formula:
D = |v_x(A) − v_x(B)| + |v_y(A) − v_y(B)|
where D is the difference between any two motion vectors, A and B respectively denote the two motion vectors, v_x denotes the horizontal component of a motion vector, and v_y denotes its vertical component;
(3) judging, from the results of the pairwise motion vector differences D, whether the motion vector field of the current coding block is consistent with that of the surrounding blocks.
2. The motion prediction compensation method based on motion vector constraint and weighted motion vector according to claim 1, characterized in that the method of judging in said step (3) whether the motion vector field of the current coding block is consistent with that of the surrounding blocks is: for a current coding block of length W and width H, when the horizontal components |v_x(A) − v_x(B)| of all the differences D in the two patterns are each less than W/8, and the vertical components |v_y(A) − v_y(B)| of all the D are each less than H/8, the current motion vector field is continuous, and the current coding block is judged to be consistent in motion with the surrounding blocks.
3. The motion prediction compensation method based on motion vector constraint and weighted motion vector according to claim 1, characterized in that the merge mode coding method of said step 2 comprises the following steps:
(1) in the merge reference set, obtaining the optimal merge reference motion vector by computing the sum of absolute differences, by the following formula:
MV_merge = argmin_{d ∈ VS_MC} SAD(d)
SAD(d) = Σ_s | F_n(s) − F′_{n−1}(s + d) |
where MV_merge denotes the optimal merge mode reference motion vector, d denotes a motion vector, VS_MC denotes the merge mode reference set, SAD is the sum of absolute differences over all pixels between the current block and the reference block, representing how well the two blocks match, F_n(s) denotes the pixel values of the current frame, and F′_{n−1}(s) denotes the pixel values of the previous frame, i.e. the reference frame;
(2) if the residual data after compensation by the optimal merge mode reference motion vector yields a coefficient sum of zero after transform and quantization, coding in skip mode, and transmitting the skip mode flag and the index of the optimal merge mode reference motion vector to the decoder;
(3) if the residual data after compensation by the optimal merge mode reference motion vector yields a coefficient sum that is not zero after transform and quantization, encoding and transmitting to the decoder the index of the merge mode reference motion vector and the transformed and quantized residual coefficients.
4. The motion prediction compensation method based on motion vector constraint and weighted motion vector according to claim 1, characterized in that the specific steps of inter coding in said step 3 using the weighted advanced motion vector prediction method include:
(1) selecting 2 final AMVP predictors from the advanced motion vector prediction reference set, then averaging them, by the following formula:
MV_mean = (MV_1 + MV_2) / 2
where MV_mean denotes the AMVP mean prediction vector, and MV_1 and MV_2 respectively denote the 2 final AMVP predictors;
(2) when the 2 final AMVP predictors do not include zero motion, adding the zero motion vector with its own weight to obtain the weighted AMVP predictor, by the following formula:
A(s) = w_m · MV_mean + w_zero · (0, 0)
where A(s) denotes the weighted AMVP predictor, w_m denotes the weight given to the AMVP mean prediction vector, and w_zero denotes the weight given to the zero motion vector;
(3) after obtaining the weighted AMVP predictor, performing motion search around this predictor and obtaining the final motion vector according to the minimum sum of absolute differences criterion, by the following formula:
MV_weighted = w_AMVP · A(s) + w_search · argmin_{d} SAD(d)
where MV_weighted denotes the motion vector obtained by search after weighted AMVP prediction, d ranges over all possible motion vectors in the specified search window, w_AMVP denotes the weight given to the weighted AMVP predictor, and w_search denotes the weight given to the motion vector obtained by motion search;
(4) for the final MV_weighted, subtracting from it the AMVP predictor with the minimum sum of absolute differences after motion compensation to obtain the motion vector difference, by the following formula:
MVD = MV_weighted − MV_candidate
where MVD denotes the motion vector difference and MV_candidate denotes the motion vector predictor of the current coding block.
5. The motion prediction compensation method based on motion vector constraint and weighted motion vector according to claim 4, characterized in that in said step (2), when the 2 final AMVP predictors include zero motion, w_m is taken as 1 and w_zero as 0; when the 2 final AMVP predictors do not include zero motion, w_m is taken as 0.5 and w_zero as 0.5.
6. The motion prediction compensation method based on motion vector constraint and weighted motion vector according to claim 4, characterized in that in said step (3), w_AMVP is taken as 0.2 and w_search as 0.8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310547287.5A CN103561263B (en) | 2013-11-06 | 2013-11-06 | Based on motion vector constraint and the motion prediction compensation method of weighted motion vector |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103561263A CN103561263A (en) | 2014-02-05 |
CN103561263B true CN103561263B (en) | 2016-08-24 |
Family
ID=50015400
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310547287.5A Active CN103561263B (en) | 2013-11-06 | 2013-11-06 | Based on motion vector constraint and the motion prediction compensation method of weighted motion vector |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103561263B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20230101932A (en) * | 2016-04-29 | 2023-07-06 | 인텔렉추얼디스커버리 주식회사 | Image signal encoding/decoding method and appratus |
CN109905714B (en) | 2017-12-08 | 2022-12-27 | 华为技术有限公司 | Inter-frame prediction method and device and terminal equipment |
WO2020065520A2 (en) | 2018-09-24 | 2020-04-02 | Beijing Bytedance Network Technology Co., Ltd. | Extended merge prediction |
CN108270945B (en) * | 2018-02-06 | 2020-10-30 | 上海通途半导体科技有限公司 | Motion compensation denoising method and device |
EP3788787A1 (en) | 2018-06-05 | 2021-03-10 | Beijing Bytedance Network Technology Co. Ltd. | Interaction between ibc and atmvp |
WO2019244055A1 (en) * | 2018-06-19 | 2019-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Mode dependent mvd precision set |
WO2019244118A1 (en) | 2018-06-21 | 2019-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Component-dependent sub-block dividing |
CN110662046B (en) * | 2018-06-29 | 2022-03-25 | 北京字节跳动网络技术有限公司 | Video processing method, device and readable storage medium |
CN115695791A (en) * | 2018-07-02 | 2023-02-03 | 华为技术有限公司 | Video image encoding method and apparatus for encoding video data |
CN110876059B (en) | 2018-09-03 | 2022-06-10 | 华为技术有限公司 | Method and device for acquiring motion vector, computer equipment and storage medium |
KR102635047B1 (en) | 2018-09-19 | 2024-02-07 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Syntax reuse for affine modes with adaptive motion vector resolution |
CN112913247B (en) | 2018-10-23 | 2023-04-28 | 北京字节跳动网络技术有限公司 | Video processing using local illumination compensation |
CN112868238B (en) | 2018-10-23 | 2023-04-21 | 北京字节跳动网络技术有限公司 | Juxtaposition between local illumination compensation and inter-prediction codec |
CN109618227B (en) * | 2018-10-26 | 2021-04-20 | 深圳市野生动物园有限公司 | Video data storage method and system |
WO2020094151A1 (en) | 2018-11-10 | 2020-05-14 | Beijing Bytedance Network Technology Co., Ltd. | Rounding in pairwise average candidate calculations |
WO2020135368A1 (en) * | 2018-12-24 | 2020-07-02 | 华为技术有限公司 | Inter-frame prediction method and apparatus |
CN111355961B (en) | 2018-12-24 | 2023-11-03 | 华为技术有限公司 | Inter-frame prediction method and device |
CN111385575A (en) | 2018-12-29 | 2020-07-07 | 华为技术有限公司 | Inter-frame prediction method and device and corresponding encoder and decoder |
WO2020143832A1 (en) * | 2019-01-12 | 2020-07-16 | Beijing Bytedance Network Technology Co., Ltd. | Bi-prediction constraints |
WO2020147747A1 (en) | 2019-01-15 | 2020-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Weighted prediction in video coding |
WO2020147805A1 (en) | 2019-01-17 | 2020-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Deblocking filtering using motion prediction |
CN113424537A (en) * | 2019-02-08 | 2021-09-21 | 北京达佳互联信息技术有限公司 | Method and apparatus for video encoding and decoding selectively applying bi-directional optical flow and decoder-side motion vector refinement |
US11616966B2 (en) * | 2019-04-03 | 2023-03-28 | Mediatek Inc. | Interaction between core transform and secondary transform |
CN117676134A (en) | 2019-04-25 | 2024-03-08 | 北京字节跳动网络技术有限公司 | Constraint on motion vector differences |
JP7332721B2 (en) * | 2019-05-25 | 2023-08-23 | 北京字節跳動網絡技術有限公司 | Encoding Block Vectors in Intra-Block Copy Encoding Blocks |
CN113163206B (en) * | 2019-06-24 | 2022-11-01 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method, device and equipment |
KR20220023338A (en) * | 2019-06-25 | 2022-03-02 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Constraints on motion vector difference |
CN110636311B (en) * | 2019-09-18 | 2021-10-15 | 浙江大华技术股份有限公司 | Motion vector acquisition method and related prediction method and device |
CN111050182B (en) * | 2019-12-27 | 2022-02-18 | 浙江大华技术股份有限公司 | Motion vector prediction method, video coding method, related equipment and device |
CN111901590B (en) * | 2020-06-29 | 2023-04-18 | 北京大学 | Refined motion vector storage method and device for inter-frame prediction |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1976467A (en) * | 2002-01-09 | 2007-06-06 | 松下电器产业株式会社 | Motion vector coding method and motion vector decoding method |
CN102263947A (en) * | 2010-05-27 | 2011-11-30 | 香港科技大学 | Method and system for motion estimation of images |
WO2013062174A1 (en) * | 2011-10-26 | 2013-05-02 | 경희대학교 산학협력단 | Method for managing memory, and device for decoding video using same |
CN103379322A (en) * | 2012-04-16 | 2013-10-30 | 乐金电子(中国)研究开发中心有限公司 | Parallel implementation method, device and system for advanced motion vector prediction AMVP |
Non-Patent Citations (2)
Title |
---|
A new motion compensation method using superimposed inter-frame signals; Ka-Ho, et al.; 2012 IEEE International Conference on Acoustics, Speech and Signal Processing; 2012-03-30; main text, p. 1214 right column line 25 to p. 1215 right column line 26 *
Fast mode selection algorithm based on motion complexity classification; Wei Geng, et al.; Computer Engineering and Science (《计算机工程与科学》); 2008-02-15; vol. 30, no. 2; main text, p. 59 left column line 6 to right column line 28 *
Also Published As
Publication number | Publication date |
---|---|
CN103561263A (en) | 2014-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103561263B (en) | Based on motion vector constraint and the motion prediction compensation method of weighted motion vector | |
JP6753979B2 (en) | Motion information decoding method and coding method | |
CN105874797B (en) | Decoding method, device, equipment and the storage media of video data | |
CN107211156B (en) | A kind of method, apparatus and computer-readable storage medium of coded video data | |
CN103503460B (en) | The method and apparatus of coded video data | |
CN103141100B (en) | Smoothing filter in the frame of video coding | |
CN109644272A (en) | Geometric type priority for construction candidate list | |
CN104303502B (en) | The method, apparatus and computer readable storage medium that multi-view video data is encoded, decoded and is decoded | |
CN104081774B (en) | The method decoded using decoding apparatus to vision signal | |
CN107409225A (en) | Movable information derivation pattern (DM) determines in video coding | |
CN104756499B (en) | Method, equipment and computer-readable storage medium for video coding | |
CN110463201A (en) | Use the prediction technique and device of reference block | |
CN109076218A (en) | Multiple filters in video coding in adaptive loop filter are obscured | |
CN109792516A (en) | Method and device in image and coding and decoding video for coloration encoding and decoding in frame | |
CN108605136A (en) | Motion vector based on picture order count is simplified | |
CN109644276A (en) | Image coding/decoding method | |
CN109691106A (en) | The offset vector identification of temporal motion vector prediction symbol | |
CN109076235A (en) | Consistency constraint for the juxtaposition reference key in video coding | |
CN107979756A (en) | Method for video coding and device used in a kind of video coding system | |
CN108141605A (en) | Intra block replicates merging patterns and unavailable intra block replicates the filling of reference zone | |
CN109891890A (en) | Bi directional motion compensation based on sub- PU in video coding | |
CN103188496B (en) | Based on the method for coding quick movement estimation video of motion vector distribution prediction | |
CN107211125A (en) | The flexible partition of predicting unit | |
CN107710764A (en) | It is determined that the system and method for the illumination compensation state for video coding | |
CN103248895B (en) | A kind of quick mode method of estimation for HEVC intraframe coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |