CN1323555C - Quick selecting method for H.264/AVC multiple reference frame - Google Patents

Quick selecting method for H.264/AVC multiple reference frame

Info

Publication number
CN1323555C
CN1323555C CNB2005100235775A CN200510023577A
Authority
CN
China
Prior art keywords
reference frame
frame
optimal
rate distortion
patterns
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005100235775A
Other languages
Chinese (zh)
Other versions
CN1649413A (en)
Inventor
张兆扬
滕国伟
赵海武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CNB2005100235775A priority Critical patent/CN1323555C/en
Publication of CN1649413A publication Critical patent/CN1649413A/en
Application granted granted Critical
Publication of CN1323555C publication Critical patent/CN1323555C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The present invention relates to a fast selection method for H.264/AVC multiple reference frames. The number of reference frames needed is determined adaptively from the texture characteristics of the current coding sequence, the motion information of the current block, and the correlation between the reference frames selected by different modes. The rate-distortion costs of the current block over this small range of reference frames are then compared, and the frame with the minimum cost is taken as the optimal reference frame. Tests show that the method saves a large amount of encoding time while the bit rate and signal-to-noise ratio remain almost unchanged.

Description

Fast selection method for H.264/AVC multiple reference frames
Technical field
The present invention relates to a video coding method in information processing, and in particular to a fast selection method for H.264/AVC multiple reference frames.
Background technology
Since ITU-T issued H.261, the block-based hybrid coding method has been continuously developed and generally adopted as the coding framework. The formulation of international standards such as H.261, H.263, MPEG-1, MPEG-2 and MPEG-4 has greatly promoted the application of multimedia technology. However, with the rapid growth of new services and the imminent popularization of high-definition television, and given the low transmission bit rates of existing media such as cable modems, xDSL and UMTS, there is an urgent need to improve coding efficiency; at the same time, video coding standards are required to adapt to existing and future networks. These demands form the practical basis for the new video coding standard H.264/AVC.
These video coding standards all adopt a hybrid coding framework: the input video image is segmented into macroblocks, which are then predicted, transformed, quantized and entropy coded. Prediction includes intra-frame prediction, which removes spatial redundancy, and inter-frame prediction, which removes temporal redundancy.
H.264/AVC still follows this block-based hybrid coding scheme, but includes many new features: separation of the VCL and NAL layers, spatial-domain intra prediction, quarter-pixel-accuracy motion estimation, motion compensation with adaptive block sizes, multiple-reference-frame motion-compensated prediction, the generalized B-frame concept, a low-complexity 16-bit integer transform and quantization, an in-loop deblocking filter, and efficient entropy coding. On the one hand these features significantly improve coding efficiency; on the other they make the standard network-friendly, so it can be used effectively over diverse networks and application environments.
To further improve inter-frame coding efficiency, H.264/AVC divides the macroblock for inter prediction into seven block modes (16×16, 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4) and performs multiple-reference-frame motion estimation and compensation for the blocks of each mode, so as to remove temporal redundancy as far as possible. Motion estimation and reference-frame selection for these block modes use the rate-distortion optimization (RDO) cost as the decision criterion, as shown in Equation (1). In addition to these block modes, the P frame also supports the SKIP mode (and the B frame the Direct mode) as well as the intra prediction modes (INTRA). In practice, forward inter prediction usually uses five reference frames.
J_MOTION(mv, ref | λ_MOTION) = SAD(s, r(ref, mv)) + λ_MOTION · (R(mv − pred) + R(ref))    (1)
Here s is the pixel values of the current block, ref is the selected reference frame, mv is the motion vector into ref, pred is the predicted motion vector, r(ref, mv) is the pixel values of the reference block, SAD (sum of absolute differences) is the sum of the absolute pixel-value differences between the current block and the reference block, and the rate R comprises the bits for differentially coding the motion vector against its prediction plus the bits for coding the reference-frame index.
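For illustration, the cost of Equation (1) can be sketched in Python. The flat-list block layout and the bit-cost callable `rate_bits` are simplifying assumptions, not details from the patent (real encoders use exp-Golomb code lengths for the rate terms):

```python
def sad(current_block, reference_block):
    """Sum of absolute pixel differences between two equally sized blocks,
    given here as flat lists of pixel values."""
    return sum(abs(c - r) for c, r in zip(current_block, reference_block))

def j_motion(current_block, reference_block, mv, pred, ref_idx, lam, rate_bits):
    """Rate-distortion motion cost of Equation (1):
    J = SAD(s, r(ref, mv)) + lambda * (R(mv - pred) + R(ref)).
    `rate_bits` is a hypothetical stand-in for the encoder's bit-count
    estimate of a motion-vector-difference component or reference index."""
    mvd_bits = rate_bits(mv[0] - pred[0]) + rate_bits(mv[1] - pred[1])
    ref_bits = rate_bits(ref_idx)
    return sad(current_block, reference_block) + lam * (mvd_bits + ref_bits)
```

With a toy bit model such as `bits(v) = 1 + 2*abs(v)`, the cost of any candidate (mv, ref) pair can be evaluated and compared across reference frames.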
The verification model adopts an exhaustive search, as shown in Fig. 1: the current block first performs motion estimation in the nearest reference frame, where J_MOTION is computed by Equation (1); motion estimation is then performed in each further reference frame in turn and the costs are compared, and the reference frame with the minimum J_MOTION is finally taken as the optimal reference frame. As Fig. 1 shows in detail, in the 16×16, 16×8, 8×16 and 8×8 modes each sub-block performs motion estimation in the different reference frames independently, whereas within an 8×8 block all sub-blocks of the 8×4, 4×8 and 4×4 modes first perform motion estimation in the same reference frame and accumulate J_MOTION, and only then are the different reference frames compared.
This exhaustive search over multiple reference frames achieves the best coding performance, but significantly increases encoder complexity, with processing time growing linearly in the number of reference frames. In practice, the residual reduction obtained by multi-reference-frame prediction is sequence-dependent: for many sequences the coding-efficiency gain is not obvious, and the extra computation is wasted.
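The exhaustive baseline described above can be sketched as follows; `j_motion_of_ref` is a hypothetical callable mapping a reference-frame index to the block's J_MOTION in that frame, and the linear growth in `num_refs` is exactly the complexity the invention targets:

```python
def full_search_best_ref(num_refs, j_motion_of_ref):
    """Exhaustive reference-frame selection as in the verification model:
    evaluate J_MOTION in every reference frame and keep the minimum.
    Work grows linearly with num_refs."""
    costs = [(j_motion_of_ref(i), i) for i in range(num_refs)]
    best_j, best_ref = min(costs)
    return best_ref, best_j
```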
Summary of the invention
The purpose of the present invention is to provide a fast multiple-reference-frame selection method for a real-time H.264/AVC encoder which, compared with exhaustive search, saves a large amount of encoding time while the signal-to-noise ratio and bit rate change very little.
To achieve the above object, the idea of the present invention is to determine the number of reference frames to search according to the characteristics of the sequence, and then to search only within that range. Reducing the number of reference frames actually searched effectively reduces computational complexity, while the changes in signal-to-noise ratio and bit rate are negligible.
According to the above idea, the present invention adopts the following technical solution:
A fast selection method for H.264/AVC multiple reference frames, characterized in that the number of reference frames required is determined adaptively using the texture characteristics of the current sequence, the motion information of the current block, and the correlation between the reference frames selected by different modes; the rate-distortion costs of the current block over this smaller range of reference frames are compared, and the frame with the minimum cost is taken as the optimal reference frame.
The steps of the fast selection are as follows:
a) Collect statistics on the distribution of the optimal modes of the macroblocks of coded frames; use them to measure the texture characteristics of the current sequence and determine the number of reference frames the current frame may need.
b) Compare the rate-distortion costs of the current block within the range of reference frames determined in step a. If the cost increases monotonically, the nearest reference frame is usually the optimal one; otherwise, in the non-monotonic case, further combine the motion information to determine the number of reference frames the current block requires, compare the costs within that range, and finally take the frame with the minimum cost as the optimal reference frame.
c) Use the methods of steps a and b to determine quickly the optimal reference frames of the 16×16, 8×8 and 4×4 modes.
d) Use the correlation with the reference frames selected by the 8×8 and 4×4 modes to determine the reference frames that each block may select in the 8×4 and 4×8 modes. If only one candidate frame is possible, that frame is the optimal reference frame; if there are two, compare their rate-distortion costs and so determine the optimal reference frame of the 8×4 and 4×8 modes.
e) Use the correlation with the reference frames selected by the 16×16 and 8×8 modes to determine the distribution of reference frames that each block may select in the 16×8 and 8×16 modes, and compare the rate-distortion costs within this range to obtain the optimal reference frame of the 16×8 and 8×16 modes.
Compared with the prior art, the present invention has the following obvious substantive features and remarkable advantages: it uses the texture characteristics of the current coding sequence, the motion information of the current block, and the correlation between the reference frames selected by different modes to determine adaptively the number of reference frames required. Experiments show that the method saves a large amount of encoding time while keeping the bit rate and signal-to-noise ratio almost unchanged.
Description of drawings
Fig. 1 shows the multiple-reference-frame selection of the different modes in the verification model
Fig. 2 is the processing flow chart of the present invention
Embodiment
A specific embodiment of the fast multiple-reference-frame selection method for a real-time H.264/AVC encoder is described below with reference to the accompanying drawings:
The method was implemented on the H.264/AVC verification-model JM7.2 encoder. The operating environment was a Hewlett-Packard workstation with a 3.06 GHz CPU; the test sequence structure was IBBP; the main encoder parameters were: CABAC entropy coding; full-search motion estimation with a search range of 16 and quarter-pixel precision; and quantization parameters of 24, 28 and 32. The test set comprised five CIF sequences: Foreman, Flower, Paris, Bus and Mobile.
First, the full-search method on the JM7.2 encoder was used to test how these five sequences code with different numbers of reference frames. The data in Table 1 compare the coding efficiency (signal-to-noise-ratio change and bit-rate change) when 1 to 4 reference frames are used against the case of 5 reference frames; for example, ΔPSNR = 0.00 means the signal-to-noise ratio is identical to that obtained with 5 reference frames.
Table 1  Encoder performance with different numbers of reference frames

  Sequence                   Foreman      Paris        Flower       Bus          Mobile
  4 ref: ΔPSNR(dB)|ΔR(%)     0.01|0.2     0.00|0.2     0.00|0.4     0.01|0.2     -0.02|0.5
  3 ref: ΔPSNR(dB)|ΔR(%)     -0.01|0.0    -0.01|0.2    0.00|1.0     0.02|-0.2    -0.05|2.4
  2 ref: ΔPSNR(dB)|ΔR(%)     -0.03|0.5    -0.03|0.8    -0.03|2.6    0.03|0.0     -0.06|6.0
  1 ref: ΔPSNR(dB)|ΔR(%)     -0.05|16.7   0.03|18.2    -0.11|25.9   -0.10|18.0   -0.19|30.2

Note: QP = 28; ΔPSNR is the change in signal-to-noise ratio and ΔR the change in bit rate, both relative to the 5-reference-frame case.
Table 1 shows that reference frames at different positions contribute differently to coding efficiency: the first two reference frames contribute the most, and as the distance increases the contribution of the farther reference frames gradually decreases. In general, then, the closer a reference frame is to the current frame, the stronger its correlation and the higher its selection priority. In some special cases, however, a distant reference frame can clearly improve coding efficiency: for example, when object edges are occluded and uncovered, or when the scene switches rapidly, a particular frame among the multiple reference frames may better match the current macroblock. The table also shows that for some sequences the coded signal-to-noise ratio and bit rate do not change completely monotonically, which illustrates that multiple reference frames are not effective for all sequences; these variations are closely related to the texture characteristics and motion tendency of the sequence. It follows that the coding gain from multiple reference frames is related to the texture characteristics and motion information of the sequence.
The results of Table 1 also show that for sequences with fine texture and obvious scene change, such as Mobile, coding efficiency rises as the number of reference frames increases. The present invention describes the texture variation of a sequence by collecting statistics of the mode distribution of coded frames: if the proportion P(sub-MB) of the smaller block modes chosen (8×8, 8×4, 4×8 and 4×4) is large, the texture is fine; if the proportion P(INTRA) of recently chosen intra prediction modes is large, the scene may be switching, and the number of reference frames need not be large; if P(SKIP) is large, the scene changes very little and the nearest reference frame alone suffices; if the proportion P(MB) of the larger block modes (16×16, 16×8, 8×16) is high, the texture is relatively smooth and the scene changes slowly, so many reference frames are not needed. In addition, the degree of motion change can be determined by comparing the motion vectors MV_i of the current block in different reference frames: if |MV_i − MV_{i−1}| is large, the motion of the current block changes noticeably across reference frames and more frames need to be searched; otherwise the motion change is small and too many reference frames need not be examined.
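The mode statistics above feed the frame-count decision given in step 1 of the embodiment below (P(INTRA)+P(SKIP) against T0 = 0.4, P(sub-MB) against P(MB)). A minimal sketch, in which the mode-label keys of the histogram are illustrative naming choices:

```python
def frames_to_search(mode_counts, t0=0.4, total_refs=5):
    """Adaptive reference-frame count from the optimal-mode histogram of the
    previously coded frame, following step 1 of the embodiment (T0 = 0.4 as
    stated in the text; total_refs = 5 is the usual forward-prediction depth)."""
    total = sum(mode_counts.values())
    def p(*modes):
        return sum(mode_counts.get(m, 0) for m in modes) / total
    if p('INTRA') + p('SKIP') > t0:
        return 2                      # scene cut or near-static content
    if p('8x8', '8x4', '4x8', '4x4') < p('16x16', '16x8', '8x16'):
        return 4                      # smoother texture, slow scene change
    return total_refs                 # fine texture: check all frames
```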
The final judgment of the optimal reference frame takes the rate-distortion cost J_MOTION as the criterion. Combining the above analysis of the sequence's texture characteristics and motion information, the optimal reference frame can therefore be judged quickly from the monotonicity of the current block's J_MOTION over the different reference frames. First, the texture statistics are collected to determine the number of reference frames needed; then the monotonicity of the current block's J_MOTION is examined within that range. If the cost increases monotonically, the nearest reference frame is usually optimal; otherwise, in the non-monotonic case, the number of required reference frames is determined together with the motion information, so that J_MOTION need only be compared over a limited number of reference frames before the optimal one is determined.
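The monotonicity walk can be sketched as follows; `motion_changing(i)` is a hypothetical predicate standing in for the |MV_i − MV_{i−1}| motion test described above:

```python
def select_ref_by_monotonicity(j_of_ref, max_refs, motion_changing):
    """Walk outward from the nearest reference frame, tracking the minimum
    J_MOTION.  Stop early once the cost stops decreasing and the motion is
    stable across frames (the monotonic case of the text)."""
    best_ref = 0
    best_j = prev_j = j_of_ref(0)
    for i in range(1, max_refs):
        j = j_of_ref(i)
        if j < best_j:
            best_ref, best_j = i, j
        if j >= prev_j and not motion_changing(i):
            break                      # cost non-decreasing, motion stable
        prev_j = j
    return best_ref
```

In the strictly increasing case this returns the nearest frame after one extra cost evaluation, which is where the bulk of the saving over exhaustive search comes from.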
In the present invention, the above fast method is applied only to the 16×16, 8×8 and 4×4 modes; the other modes can be judged quickly during reference-frame selection by exploiting their correlation with these three modes. The detailed process is as follows:
The fast method above is used to perform multiple-reference-frame motion estimation for the 16×16 mode of the current macroblock, obtaining the optimal reference frame Ref_16×16 and the corresponding SAD_16×16. Likewise, for each of the four 8×8 blocks of the current macroblock, the fast method yields the optimal reference frame Ref_8×8(i) of the 8×8 mode with its corresponding SAD_8×8(i), and the optimal reference frame Ref_4×4(i) of the 4×4 mode, where i is the index of the 8×8 block. For the whole macroblock, if the influence of the motion-vector and reference-frame coding bits in Equation (1) is excluded, a relational expression (2) usually holds between these quantities.
It can therefore be inferred that the optimal reference frame of each block in the 16×8 and 8×16 modes is most likely distributed between the optimal reference frames of the 16×16 mode and the 8×8 mode; the interval can be expressed as:
Ref_16×8(i), Ref_8×16(i) ∈ [min(Ref_16×16, minRef_8×8), max(Ref_16×16, maxRef_8×8)],  i ∈ [0, 1]    (3)

where minRef_8×8 = min(Ref_8×8(i)) and maxRef_8×8 = max(Ref_8×8(i)) for i ∈ [0, 3]. Only the reference frames within this interval therefore need to be checked.
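The candidate interval of Equation (3) can be sketched directly:

```python
def candidate_interval_16x8_8x16(ref_16x16, refs_8x8):
    """Candidate reference frames for the 16x8 / 8x16 partitions per Eq. (3):
    the interval spanned by the 16x16-mode optimum and the optima of the
    four 8x8 blocks.  Only these frames need their J_MOTION evaluated."""
    lo = min(ref_16x16, min(refs_8x8))
    hi = max(ref_16x16, max(refs_8x8))
    return range(lo, hi + 1)
```

When all five optima agree, the interval collapses to a single frame and no further comparison is needed.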
For an 8×8 block, as shown in Fig. 1, all its sub-blocks in the 8×4, 4×8 and 4×4 modes perform motion estimation within the same reference frame and accumulate J_MOTION before comparing across frames. It can therefore be inferred that the optimal reference frames of the 8×8 and 4×4 modes within an 8×8 block are the most likely optimal reference frames for the 8×4 and 4×8 modes. The detailed process is as follows: for the i-th 8×8 block, let the optimal reference frame of the 8×8 mode be Ref_8×8(i) and that of the 4×4 mode be Ref_4×4(i). If Ref_8×8(i) equals Ref_4×4(i), that frame is the optimal reference frame of the 8×4 and 4×8 modes; otherwise both Ref_8×8(i) and Ref_4×4(i) are candidates, and only the J_MOTION of these two frames need be compared.
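The two-candidate rule for the 8×4 and 4×8 modes can be sketched as:

```python
def best_ref_8x4_4x8(ref_8x8_i, ref_4x4_i, j_motion_of_ref):
    """Reference frame for the 8x4 / 4x8 partitions of one 8x8 block: reuse
    the common optimum of the 8x8 and 4x4 modes if they agree, otherwise
    compare J_MOTION of just those two candidate frames."""
    if ref_8x8_i == ref_4x4_i:
        return ref_8x8_i
    return min((ref_8x8_i, ref_4x4_i), key=j_motion_of_ref)
```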
The present invention makes full use of the above properties and combines them organically to obtain the fast multiple-reference-frame selection method, whose flow is shown in Fig. 2. The concrete steps are implemented as follows:
1. Perform mode statistics and analysis on the coded frame and determine the number of reference frames the current frame requires: if the sum of P(INTRA) and P(SKIP) is greater than a threshold T0, only the two nearest reference frames are needed; if P(sub-MB) is less than P(MB), four reference frames are needed; otherwise check all reference frames.
2. Perform motion estimation in the first reference frame; if the SAD is less than a threshold T1, terminate early and skip motion estimation in the other reference frames.
3. Within the determined reference-frame range, successively compare the J_MOTION of adjacent reference frames: if it increases monotonically, the nearest reference frame is best; otherwise decide from the motion information whether to examine the next reference frame.
4. Repeat steps 1-3 to determine in turn the optimal reference frames of the 16×16, 8×8 and 4×4 modes.
5. Use the correlation between the reference-frame selections of the 8×4 and 4×8 modes and those of the 8×8 and 4×4 modes to determine the possible reference frames of these two modes. If only one candidate frame is possible, it is the optimal reference frame; if there are two, compare their rate-distortion costs and so determine the optimal reference frame.
6. Use the correlation with the 16×16 and 8×8 modes to determine the possible reference-frame distribution of the 16×8 and 8×16 modes, compare the J_MOTION of the reference frames within this range, and determine the optimal reference frame.
It can be observed that texture characteristics and motion information are used in steps 1 and 3. In step 1, the effect is better if the whole frame is further divided into smaller regions for the statistics. The threshold T0 is assigned according to actual conditions; it is 0.4 in the experiments of the present invention. The threshold T1 in step 2 is related to the quantization parameter and is determined according to whether the DC coefficient of the quantized 4×4 matrix is equal to or less than 1.
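The text ties T1 to the quantization parameter without giving a formula. One plausible sketch, under the assumption that T1 should scale with the H.264 quantization step size (which doubles every 6 QP steps); the scale factor of 16 samples per 4×4 block is an illustrative guess, not a value from the patent:

```python
def t1_threshold(qp):
    """Hypothetical QP-dependent early-termination threshold T1: scale with
    the H.264 quantizer step so that a residual whose SAD is below T1 tends
    to quantize to a DC coefficient <= 1, as the text describes."""
    qstep = 0.625 * 2 ** (qp / 6.0)   # H.264 step size, doubling every 6 QP
    return 16 * qstep                  # 16 residual samples per 4x4 block
```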
Experimental results show that, compared with the exhaustive search of the verification model, the fast multiple-reference-frame selection method of the present invention improves coding speed by about 30%, while the changes in signal-to-noise ratio and bit rate are negligible.

Claims (1)

1. A fast selection method for H.264/AVC multiple reference frames, characterized in that the number of reference frames required is determined adaptively using the texture characteristics of the current coding sequence, the motion information of the current block, and the correlation between the reference frames selected by different modes; the rate-distortion costs of the current block over this smaller range of reference frames are compared, and the frame with the minimum cost is taken as the optimal reference frame; the steps of said fast selection being as follows:
a. Collect statistics on the distribution of the optimal modes of the macroblocks of coded frames; use them to measure the texture characteristics of the current sequence and determine the number of reference frames the current frame may need.
b. Compare the rate-distortion costs of the current block within the range of reference frames determined in step a. If the cost increases monotonically, the reference frame nearest in distance is the optimal one; otherwise, in the non-monotonic case, further combine the motion information to determine the number of reference frames the current block requires, compare the costs within that range, and finally take the frame with the minimum cost as the optimal reference frame.
c. Use the methods of steps a and b to determine quickly the optimal reference frames of the 16×16, 8×8 and 4×4 modes.
d. Use the correlation with the reference frames selected by the 8×8 and 4×4 modes to determine the reference frames that each block may select in the 8×4 and 4×8 modes. If only one candidate frame is possible, that frame is the optimal reference frame; if there are two, compare their rate-distortion costs and so determine the optimal reference frame of the 8×4 and 4×8 modes.
e. Use the correlation with the reference frames selected by the 16×16 and 8×8 modes to determine the distribution of reference frames that each block may select in the 16×8 and 8×16 modes, and compare the rate-distortion costs within this range to obtain the optimal reference frame of the 16×8 and 8×16 modes.
CNB2005100235775A 2005-01-26 2005-01-26 Quick selecting method for H.264/AVC multiple reference frame Expired - Fee Related CN1323555C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100235775A CN1323555C (en) 2005-01-26 2005-01-26 Quick selecting method for H.264/AVC multiple reference frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005100235775A CN1323555C (en) 2005-01-26 2005-01-26 Quick selecting method for H.264/AVC multiple reference frame

Publications (2)

Publication Number Publication Date
CN1649413A CN1649413A (en) 2005-08-03
CN1323555C true CN1323555C (en) 2007-06-27

Family

ID=34875913

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100235775A Expired - Fee Related CN1323555C (en) 2005-01-26 2005-01-26 Quick selecting method for H.264/AVC multiple reference frame

Country Status (1)

Country Link
CN (1) CN1323555C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387361A (en) * 2010-09-02 2012-03-21 乐金电子(中国)研究开发中心有限公司 Reference frame processing method of video coding-decoding and video coder-decoder

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100362869C (en) * 2005-09-14 2008-01-16 南京大学 Adaptive reference frame selecting method based on mode inheritance in multiframe movement estimation
CN102075756B (en) * 2011-01-27 2012-10-24 北京视博云科技有限公司 Video multiframe prediction encoding and decoding method and device
CN102843554A (en) * 2011-06-21 2012-12-26 乐金电子(中国)研究开发中心有限公司 Interframe image prediction encoding and decoding methods and video encoding and decoding device
CN102404570B (en) * 2011-11-16 2014-06-04 浙江工业大学 Method for rapidly selecting multi-view video coding modes based on rate distortion sensitivity
CN102833535B (en) * 2012-07-03 2017-08-25 深圳市云宙多媒体技术有限公司 A kind of reference frame screening technique, device based on macro block statistical information
CN102801996B (en) * 2012-07-11 2015-07-01 上海大学 Rapid depth map coding mode selection method based on JNDD (Just Noticeable Depth Difference) model
CN103813166B (en) * 2014-01-28 2017-01-25 浙江大学 Low-complexity method for selecting HEVC coding multiple reference frames
CN106303570B (en) * 2016-08-22 2019-07-05 北京奇艺世纪科技有限公司 A kind of Video coding reference frame selecting method and device
CN111510727B (en) * 2020-04-14 2022-07-15 腾讯科技(深圳)有限公司 Motion estimation method and device
CN112351278B (en) * 2020-11-04 2023-07-07 北京金山云网络技术有限公司 Video encoding method and device and video decoding method and device
CN112738522A (en) * 2020-12-17 2021-04-30 腾讯科技(深圳)有限公司 Video coding method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1333634A (en) * 2001-01-12 2002-01-30 北京航空航天大学 Quick video motion estimating method
US20030163281A1 (en) * 2002-02-23 2003-08-28 Samsung Electronics Co., Ltd. Adaptive motion estimation apparatus and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1333634A (en) * 2001-01-12 2002-01-30 北京航空航天大学 Quick video motion estimating method
US20030163281A1 (en) * 2002-02-23 2003-08-28 Samsung Electronics Co., Ltd. Adaptive motion estimation apparatus and method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Andy Chang, Oscar C. Au, Y. M. Yeung, "A Novel Approach to Fast Multi-frame Selection for H.264 Video Coding", ISCAS, No. 2, 2003 *
Yu-Wen Huang, Bing-Yu Hsieh, Tu-Chih Wang, et al., "Analysis and Reduction of Reference Frames for Motion Estimation in MPEG-4 AVC/JVT/H.264", ICASSP 2003, IEEE, ISBN 0-7803-7663-3, 2003 *
Thomas Wiegand, Heiko Schwarz, Anthony Joch, et al., "Rate-Constrained Coder Control and Comparison of Video Coding Standards", IEEE Transactions on Circuits and Systems for Video Technology, ISSN 1051-8215, Vol. 13, No. 7, 2003 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387361A (en) * 2010-09-02 2012-03-21 乐金电子(中国)研究开发中心有限公司 Reference frame processing method of video coding-decoding and video coder-decoder
CN102387361B (en) * 2010-09-02 2016-06-01 乐金电子(中国)研究开发中心有限公司 The reference frame processing method of coding and decoding video and Video Codec

Also Published As

Publication number Publication date
CN1649413A (en) 2005-08-03

Similar Documents

Publication Publication Date Title
CN1323555C (en) Quick selecting method for H.264/AVC multiple reference frame
CN100401789C (en) Quick selection of prediction modes in H.264/AVC frame
US7764740B2 (en) Fast block mode determining method for motion estimation, and apparatus thereof
CN101247525B (en) Method for improving image intraframe coding velocity
WO2007117711A2 (en) Dynamic selection of motion estimation search ranges and extended motion vector ranges
KR20050089090A (en) Fast mode decision making for interframe encoding
Saha et al. A neighborhood elimination approach for block matching in motion estimation
CN111901597B (en) CU (CU) level QP (quantization parameter) allocation algorithm based on video complexity
CN101141647A (en) AVS video coding based fast intraframe predicting mode selecting method
CN103384326A (en) Quick intra-frame prediction mode selection method for AVS-M video coding
KR100856392B1 (en) Video Encoding and Decoding Apparatus and Method referencing Reconstructed Blocks of a Current Frame
KR100757208B1 (en) Fast multiple reference frame selection method in motion estimation for H.264 video encoding
CN1457196A (en) Video encoding method based on prediction time and space domain conerent movement vectors
WO2007089916A2 (en) Dynamic reference frame decision method and system
CN101150722A (en) Quick mode identification method and device in video coding
Milicevic et al. H. 264/AVC standard: A proposal for selective intra-and optimized inter-prediction
CN100586185C (en) Mode selection method for transcoding 264 video to reduce resolving capability
Cai et al. Fast motion estimation for H. 264
KR100978596B1 (en) Motion estimation procedure by fast multiple reference frame selection procedure
CN101277449A (en) Method for transferring code of pixel field capable of reducing resolution with random proportion for 264 video
Abdelazim et al. Low complexity hierarchical prediction algorithm for H. 264/SVC
Yang et al. H. 264 fast inter-mode selection based on coded block patterns
CN100588255C (en) Self-adapting movement vector synthesis method
Wang et al. A fast multiple reference frame selection algorithm based on H. 264/AVC
KR20070031752A (en) An MPEG2-to-H.264 Transcoding Method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070627

Termination date: 20110126