CN100471277C - A method for quickly implementing flexible time domain coding of the dual frame reference video stream - Google Patents

A method for quickly implementing flexible time domain coding of the dual frame reference video stream

Info

Publication number
CN100471277C
CN100471277C · CN200710051542A · CN 200710051542
Authority
CN
China
Prior art keywords
frame
time domain
coding
picture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200710051542
Other languages
Chinese (zh)
Other versions
CN101018334A (en)
Inventor
胡瑞敏
刘琼
王启军
夏洋
路依沙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN 200710051542 priority Critical patent/CN100471277C/en
Publication of CN101018334A publication Critical patent/CN101018334A/en
Application granted granted Critical
Publication of CN100471277C publication Critical patent/CN100471277C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The disclosed method quickly implements temporally scalable (flexible time-domain) coding of a dual-frame-reference video stream based on B frames. The base layer is coded with I or P pictures and the enhancement layer with B pictures; every coded frame in a group of pictures is marked with a temporal level, and during B-frame coding the two reference frames of the current B frame, which may be I, P, or B frames, are found by searching according to the temporal hierarchy. The frame rate of the bitstream can thus be scaled by factors that are integer powers of 2, and the coding efficiency is greatly improved.

Description

A method for quickly implementing flexible time-domain (temporally scalable) coding of a dual-frame-reference video stream
Technical field
The invention belongs to the field of scalable video coding, and in particular relates to techniques for implementing temporal scalability for video streams coded with dual-frame-reference video coding standards such as AVS and MPEG-2.
Background technology
With the continuous development of Internet technology, the now ubiquitous Internet provides a broad platform for video services. However, the heterogeneity of the network, the fluctuation of bandwidth and the unreliability of transmission that characterize the Internet pose new challenges to video coding technology. Scalable Video Coding (SVC) is a good way to overcome these shortcomings of the Internet. The scalability offered by SVC generally includes temporal scalability, spatial scalability, quality scalability, and combinations of these options (see reference 1).
Temporal scalability requires that the frame rate of the bitstream be variable, so as to satisfy the decoding and display needs of different network conditions and different terminal devices. The main technology currently used to achieve temporal scalability is the interframe wavelet technique, i.e., Motion-Compensated Temporal Filtering (MCTF). By introducing a wavelet decomposition along the temporal axis, MCTF obtains a multi-resolution analysis of the video in the temporal domain and thereby makes the video temporally scalable. Two implementations of MCTF have gradually formed during its development: block-displacement-based MCTF and lifting-based MCTF. Block-displacement-based MCTF cannot obtain the motion-field information of the coded picture well, so that a certain number of pixels between the coded picture and the reference frame are marked as "unconnected", which hurts coding efficiency; moreover, sub-pixel motion estimation and compensation, and wavelets other than the Haar wavelet, are difficult to realize within its coding framework, which greatly limits the flexibility and efficiency of the coding. Lifting-based MCTF is now used in the scalable extension of the H.264 international video coding standard. Researchers first adopted the complete MCTF process to achieve temporal scalability, but the computational complexity of MCTF itself is rather high, and its open-loop coder structure makes the reference pictures at the encoder and decoder inconsistent, causing error "drift" and reducing coding efficiency. The hierarchical B-picture method was therefore gradually adopted to achieve temporal scalability. Hierarchical B pictures are essentially MCTF without the update step, i.e., a method that uses motion-compensated prediction to achieve temporal scalability simply by discarding a certain number of B frames. However, the computational cost of achieving temporal scalability with the hierarchical B-picture method in the scalable extension of H.264 is still very high, and the reference frame management of the coding process must buffer multiple forward and backward reference frames, so the resulting video stream is a multi-frame-reference video stream (see reference 2).
In current industry, apart from H.264, the B frames of most coding standards perform motion estimation and motion compensation with dual-frame reference, i.e., one forward reference frame and one backward reference frame. In particular, in AVS, the audio/video coding standard for which China holds independent intellectual property rights, not only do B frames use two reference frames, but even P frames adopt dual-frame reference. In the present invention, a video coding standard whose B frames use dual-frame reference is called a dual-frame-reference video coding standard, and a video stream based on such a standard is called a dual-frame-reference video stream.
Dual-frame-reference video coding standards widely adopted in current industry include AVS, MPEG-2, H.261, H.263, and so on. Among these standards, except for MPEG-2, which defines a hierarchical bitstream syntax in its scalable profile and therefore offers scalability, all the others are non-scalable coding standards. To extend these non-scalable dual-frame-reference video coding standards with scalable functionality while remaining compatible with the original standards, a fast and effective method for implementing temporal scalability is highly significant. According to the documents of the Audio Video Coding Standard Workgroup of China (download site: http://www.avs.org.cn), extending these dual-frame-reference video coding standards with temporal scalability requires solving three problems: (1) compatibility with the non-scalable coding standard; (2) reference frame management; (3) coding efficiency.
Summary of the invention
The technical problem to be solved by this invention is to provide, for dual-frame-reference video streams, a fast method for implementing temporally scalable coding that is compatible with the non-scalable video coding standard and can noticeably improve the coding efficiency.
The present invention solves this technical problem with the following technical scheme:
A dual-frame-reference video stream is based on a video coding standard in which the B frame uses dual-frame reference. The temporally scalable coding method is: the base layer is coded with I-frame or P-frame picture types, while the enhancement layer is coded with B-frame picture types; each coded frame in a group of pictures is marked with a temporal level, and in the process of coding a B frame the two reference frames of the currently coded B frame are obtained by searching according to the temporal hierarchy, where a reference frame may be an I frame, a P frame, or a B frame. In this way the frame rate of the bitstream can be scaled by factors that are integer powers of 2.
The present invention can quickly implement temporally scalable coding of a dual-frame-reference video stream, so that the frame rate of the bitstream can be scaled by factors that are integer powers of 2, and compared with the original coding standard it can improve the coding efficiency by a considerable margin.
Description of drawings
Fig. 1 is a schematic diagram of the temporal-level structure of B frames in the present invention;
Fig. 2 is a schematic diagram of the relation between temporally scalable coding and reference pictures in the present invention;
Fig. 3 is a schematic diagram of the relation between coded frames and reference pictures in non-temporally-scalable coding;
Fig. 4 shows the computation of the temporal level of each picture in a group of pictures according to the present invention;
Fig. 5 shows the temporally scalable coding process and the reference frame search of the present invention;
Fig. 6 shows the coding-efficiency test results of the present invention on the foreman.qcif test sequence.
Embodiment
The present invention provides a method for implementing temporal scalability for video streams coded with dual-frame-reference video coding standards such as AVS and MPEG-2. Its theoretical basis is as follows: the B frames of the current temporal level serve as reference frames for the B frames of the next temporal level, so that the generation of the B frames within a group of pictures forms a level-by-level iterative structure (see Fig. 1), and a fast search algorithm is adopted to obtain the reference frames of the current coded frame. Compared with non-scalable video coding, the temporal distance between the reference frame and the coded frame is shortened, so the correlation between reference frame and coded frame can be exploited better and the coding efficiency is improved effectively (see Fig. 2 and Fig. 3).
The method provided by the invention is as follows: the bitstream is layered into a base layer and an enhancement layer; the base layer is compatible with the non-scalable video coding standard, and its frames are called the key frames of the bitstream and are used to extend the frames of the enhancement layer. All coded frames in the current group of pictures are assigned and marked with temporal levels. Starting from the base layer, the B frame of temporal level 1 is coded first, with the key frame of the current group of pictures and the key frame of the previous group of pictures as references. Then, with these two key frames and the reconstructed level-1 B frame as references, the B frames of temporal level 2 are extended, and so on, extending frames in powers of 2 so that the frame rate of the bitstream doubles at each step, until the required temporal level (i.e., the target frame rate) is reached. For example, with a group of 8 pictures between two key frames, frame 4 is coded at level 1 with the two key frames as references, frames 2 and 6 are coded at level 2, and frames 1, 3, 5 and 7 at level 3; discarding the level-3 B frames halves the frame rate. In the reference frame management, the reconstructed frame that is nearest to the coded picture and whose temporal level is lower than that of the coded picture is used as the reference picture of that coded picture.
1. The method provided by the invention comprises the following steps:
(1) Layer the bitstream:
The bitstream is divided into a base layer and an enhancement layer. The base layer is coded with the non-scalable video coding standard using an IPP...P structure and corresponds to the minimum temporal resolution for video transmission and for display at the decoding terminal; the enhancement layer corresponds to the B frames, and temporal scalability is achieved by flexibly selecting B frames. When coding a group of pictures, the base layer of the group, i.e., the I frame or P frame, must be coded first.
(2) Check the legality of the temporal scalability parameters in the configuration file:
Specifically, check the size of the group of pictures, the number of temporal scalability levels, and whether the size of the group of pictures is an integer power of 2. If the parameter settings are found to be illegal, the program exits and the coding process fails.
(3) Compute the temporal level of each coded frame in the current group of pictures, mark each coded frame with its temporal level, and update the coding configuration parameters.
In this process, the temporal level of the I frame and P frame of the base layer is set to 0, and the temporal levels of the remaining B frames are computed level by level according to the temporal level computation algorithm.
Updating the original coding configuration parameters means: the picture coding type is set to frame coding, the frame-skip frequency is set, and the number of B frames to be inserted between an I frame and a P frame, or between two P frames, is set to the size of the group of pictures minus 1.
(4) Obtain the reference frames of the current coded picture:
If the current frame is a B frame, the reference frames comprise a forward reference frame and a backward reference frame; if the current frame is not a B frame, the reference pictures are obtained as specified by the non-scalable video coding standard for the corresponding frame type (for example, AVS specifies that a P frame needs two reference frames and an I frame needs none). If the coded frame is a B frame, then, taking the current coded frame as the starting point and center, the GOP reconstructed-picture array is searched forward for the image that is nearest to the current coded frame and whose temporal level is lower than that of the current coded frame, and this image is used as the forward reference frame; as soon as such a forward reference frame is found, the forward reference frame search ends. The search for the backward reference frame is similar to the search for the forward reference frame. In this way the reference frames of the current coded frame are obtained. Then, if needed, sub-pixel interpolation is performed on the obtained reference frames.
(5) Perform motion prediction and motion compensation, discrete cosine transform and quantization on the current coded picture, and entropy-code the residual information, reference frame indices and motion vectors; this process is the same as in non-scalable video coding.
(6) Save the reconstruction of the current frame into the temporary coded-reconstruction picture array; this array can hold the reconstructions of all frames of one group of pictures whose temporal level is lower than the highest temporal level, together with the reconstructed I frame or P frame of the previous group of pictures, so that step (4) can obtain the reference frames correctly.
(7) Repeat steps (4) to (6) until the last picture of the required temporal level is reached.
(8) Save the reconstructed pictures:
In this process, the condition for writing reconstructed frames to the reconstructed-picture file must be checked specially. If the condition is satisfied, all reconstructed frames in the GOP reconstructed-picture array whose temporal level is lower than the highest temporal level are output, the coding of this group of pictures ends, and the coding of the next group of pictures begins; if the condition is not satisfied, the coding of the current group of pictures continues.
2. Specific implementation of the method provided by the invention:
(1) Corresponding to step (1) above, this is consistent with the non-scalable video coding process.
(2) Check the size of the group of pictures. If the size of the group of pictures is gop_size, this parameter must satisfy the following condition when the temporal scalability extension is performed:
gop_size = 2^x, 0 ≤ x ≤ max_temporal_level    (1)
In the above formula, max_temporal_level is the maximum number of temporal levels, and x must be an integer.
Let current_temporal_level be the temporal level of the current coded picture, and num_frames the number of coded pictures at the current temporal level in the current group of pictures. A minimal sketch of this legality check is given below.
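The following C sketch illustrates the legality check of step (2). The function name check_temporal_config and the way the parameters are passed are illustrative assumptions rather than the literal reference code:

    /* Sketch of the configuration legality check in step (2):
       gop_size must be an integer power of 2 and must satisfy
       formula (1) with respect to max_temporal_level. */
    static int check_temporal_config(int gop_size, int max_temporal_level)
    {
        int x = 0, v = gop_size;

        if (v < 1)
            return 0;                  /* illegal: GOP must contain at least one frame */
        while ((v & 1) == 0) {         /* count the factors of 2 in gop_size */
            v >>= 1;
            x++;
        }
        if (v != 1)
            return 0;                  /* illegal: gop_size is not a power of 2 */
        if (x > max_temporal_level)
            return 0;                  /* illegal: formula (1) requires 0 <= x <= max_temporal_level */
        return 1;                      /* configuration is legal */
    }

If the check fails, the program exits and the coding process fails, as described in step (2).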
(3) Compute the temporal level of each coded picture in the current group of pictures; this is one of the cores of the present invention. In this process, let increment be the difference in display order between two adjacent coded pictures of the same temporal level, gop_size the size of the group of pictures, iLevel the temporal level, and array[] the array that stores the temporal level of each coded picture in the group of pictures. Then:
increment = gop_size / 2^iLevel    (2)
The flow chart of the algorithm used in this process is shown in Fig. 4.
In this process, some coding parameters of the non-scalable video coding standard also need to be updated. Specifically, the frame-skip frequency is set to gop_size, the number of B frames to be inserted between two key frames is set to gop_size minus 1, and the picture coding type of the current frame is set to frame coding. A sketch of the temporal-level computation is given after this paragraph.
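As an illustration of the computation that Fig. 4 describes, the following C sketch fills array[] according to formula (2); the function name and loop structure are assumptions reconstructed from the description, not the literal reference code. For gop_size = 8 it marks display position 4 with level 1, positions 2 and 6 with level 2, and positions 1, 3, 5 and 7 with level 3, while position 0 (the key frame) keeps level 0:

    /* Sketch of the temporal-level computation of Fig. 4 (formula (2)).
       array[i] receives the temporal level of the picture at display
       position i inside the group of pictures; position 0 is the key
       frame of the base layer and keeps level 0. */
    static void compute_temporal_levels(int *array, int gop_size, int max_temporal_level)
    {
        int iLevel, i;

        for (i = 0; i < gop_size; i++)
            array[i] = 0;                              /* key frame / not yet marked */

        for (iLevel = 1; iLevel <= max_temporal_level; iLevel++) {
            int increment = gop_size / (1 << iLevel);  /* formula (2) */
            if (increment == 0)
                break;
            for (i = increment; i < gop_size; i += increment) {
                /* odd multiples of increment are the B frames newly added at this level */
                if (i % (2 * increment) != 0)
                    array[i] = iLevel;
            }
        }
    }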
(4) Obtain the reference frames of the coded picture; this process is also one of the cores of the present invention. In this process, a structure CodedPicture is first defined:
struct CodedPicture
{
    int            level;     /* temporal level of this reconstructed frame */
    unsigned char  **imgY;    /* luma component of the reconstruction */
    unsigned char  ***imgUV;  /* chroma components of the reconstruction */
};
When global variable memory is allocated in the main coding program, an array PicList[gop_size] of CodedPicture elements is allocated; this array, also called the GOP reconstructed-picture array, is used to store the reconstruction of each coded frame of the group of pictures during coding. When a B frame is coded, the invention adopts a nearest-first search algorithm to obtain the two required reference frames: taking the current coded frame as the starting point and center, it searches forward and backward for the nearest reconstructed pictures whose temporal level is lower than that of the current coded frame and uses them as the two reference frames of the current coded frame; as soon as a qualified reference frame is found in a direction, the search in that direction ends immediately. After the two reference frames are found, sub-pixel interpolation is performed on them if required. If the current coded frame is not a B frame, the reference pictures are obtained as specified by the non-scalable coding standard for the corresponding frame type. The program flow of this process is shown in Fig. 5, and a sketch of the search is given after this paragraph.
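The following C sketch illustrates the nearest-first search of Fig. 5 over the PicList array and the CodedPicture structure defined above. The function name, the argument list and the display-position indexing (gop_size + 1 slots, with position 0 holding the key frame of the previous group of pictures) are illustrative assumptions, not the literal reference code:

    /* Sketch of the nearest-first reference frame search of Fig. 5.
       Reconstructions are assumed to be indexed by display position
       0..gop_size, where position 0 is the key frame of the previous
       group of pictures. Starting from the current B frame, scan
       outward in one direction and stop at the first reconstruction
       whose temporal level is lower than the current level. */
    static struct CodedPicture *search_reference(struct CodedPicture *PicList,
                                                 int gop_size,
                                                 int cur_pos,    /* display position of the current B frame */
                                                 int cur_level,  /* temporal level of the current B frame */
                                                 int direction)  /* -1: forward reference, +1: backward reference */
    {
        int pos;

        for (pos = cur_pos + direction; pos >= 0 && pos <= gop_size; pos += direction) {
            if (PicList[pos].level < cur_level)
                return &PicList[pos];   /* nearest lower-level reconstruction: this direction ends at once */
        }
        return NULL;                    /* no candidate; does not occur for a legal configuration */
    }

Calling search_reference() with direction -1 and then +1 yields the forward and the backward reference frame of the current B frame; if sub-pixel interpolation is required, it is applied to both afterwards.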
(5) This process is identical to the coding process of the non-scalable video coding standard: using the two reference frames in the two directions obtained in step (4), coding proceeds according to the flow of the non-scalable video coding scheme.
(6) The reconstructed picture obtained in step (5) is saved into the PicList array, so that the reference frames of the coded pictures of the next level can be obtained.
(7) Repeat the processes of steps (3) to (5) until the last coded picture of the temporal level to be reached.
(8) Save the reconstructed pictures. In this process, the condition for outputting reconstructed pictures must be checked specially; the concrete output condition is:
current_temporal_level == max_temporal_level && img->type == B_IMG && img->b_frame_to_code + increment(max_temporal_level) == gop_size
When the above condition is satisfied, the reconstructed frames whose temporal level is less than or equal to the highest temporal level are output to the reconstruction file in the order of PicList[]. An illustrative sketch of this check is given below.
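As an illustration only, the output check and flush of step (8) might look like the sketch below; img and its fields, the increment value at the highest level, and write_reconstruction() stand in for the corresponding entities of the reference code and are assumptions:

    /* Sketch of the reconstruction output check of step (8).
       When the last B frame of the highest temporal level has been coded,
       flush the reconstructions of the group of pictures in display order. */
    if (current_temporal_level == max_temporal_level &&
        img->type == B_IMG &&
        img->b_frame_to_code + increment == gop_size) {  /* increment evaluated at max_temporal_level */
        int i;
        for (i = 0; i <= gop_size; i++) {
            if (PicList[i].level <= max_temporal_level)
                write_reconstruction(&PicList[i]);       /* hypothetical writer to the reconstruction file */
        }
    }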
3. realization effect
In an implementation example of the present invention, AVS, the audio/video coding standard for which China holds independent intellectual property rights and which is also mentioned in the background art, was adopted; AVS is a typical non-scalable dual-frame-reference video coding standard. The method of the present invention was adopted in the temporal scalability extension of AVS, and the foreman.qcif sequence was coded as a test; Fig. 6 shows the temporal scalability results. In the coding process the base layer is coded with the non-scalable coding standard, and the reference frame management is fast and effective, so the coding efficiency can be improved by a considerable margin: as can be seen from Fig. 6, at the same bit rate the method of the present invention improves PSNR-Y by more than 1 dB.
List of references
1. Applications and Requirements for Scalable Video Coding. ISO/IEC JTC1/SC29/WG11 N6880, January 2005, Hong Kong, China.
2. J. R. Ohm, "Three-dimensional subband coding with motion compensation," IEEE Transactions on Image Processing, vol. 3, no. 5, pp. 559-571, September 1994.
3. A. Secker and D. Taubman, "Lifting-based invertible motion adaptive transform (LIMAT) framework for highly scalable video compression," IEEE Transactions on Image Processing, vol. 12, no. 12, December 2003.
4. H. Schwarz, D. Marpe, and T. Wiegand, "Analysis of Hierarchical B Pictures and MCTF," in Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 1929-1932, July 2006, Toronto, Canada.

Claims (4)

1. A method for quickly implementing flexible time-domain (temporally scalable) coding of a dual-frame-reference video stream, characterized in that the dual-frame-reference video stream is based on a video coding standard in which the B frame uses dual-frame reference, and the temporally scalable coding method is: the base layer is coded with I-frame or P-frame picture types, while the enhancement layer is coded with B-frame picture types; each coded frame in a group of pictures is marked with a temporal level, and in the process of coding a B frame the two reference frames of the currently coded B frame are obtained by searching according to the temporal hierarchy, a reference frame being an I frame, a P frame, or a B frame, so that the frame rate of the bitstream can be scaled by factors that are integer powers of 2.
2. The method according to claim 1, characterized in that it comprises the following steps:
(1) Layer the bitstream:
The bitstream is divided into a base layer and an enhancement layer; the base layer is coded with the non-scalable video coding standard using an IPP...P structure and corresponds to the minimum temporal resolution for video transmission and for display at the decoding terminal; the enhancement layer corresponds to the B frames, and temporal scalability is achieved by flexibly selecting B frames; when coding a group of pictures, the base layer of the group, i.e., the I frame or P frame, must be coded first;
(2) check the legality of the temporal scalability parameters in the configuration file:
specifically, check the size of the group of pictures, the number of temporal scalability levels, and whether the size of the group of pictures is an integer power of 2; if the parameter settings are found to be illegal, the program exits and the coding process fails;
(3) compute the temporal level of each coded frame in the current group of pictures, mark each coded frame with its temporal level, and update the coding configuration parameters,
in this process, the temporal level of the I frame and P frame of the base layer is set to 0, and the temporal levels of the remaining B frames are computed level by level according to the temporal level computation algorithm,
updating the coding configuration parameters means: the picture coding type is set to frame coding, the frame-skip frequency is set, and the number of B frames to be inserted between an I frame and a P frame, or between two P frames, is set to the size of the group of pictures minus 1;
(4) obtain the reference frames of the current coded picture:
if the current coded frame is a B frame, the reference frames comprise a forward reference frame and a backward reference frame; if the current coded frame is not a B frame, the reference pictures are obtained as specified by the non-scalable video coding standard for the corresponding frame type; if the current coded frame is a B frame, then, taking the current coded frame as the starting point and center, the coded-reconstruction picture array is searched forward for the image that is nearest to the current coded frame and whose temporal level is lower than that of the current coded frame, and this image is used as the forward reference frame; as soon as such a forward reference frame is found, the forward reference frame search ends; the search for the backward reference frame is similar to the search for the forward reference frame; in this way the reference frames of the current coded frame are obtained;
(5) perform motion prediction and motion compensation, discrete cosine transform and quantization on the current coded picture, and entropy-code the residual information, reference frame indices and motion vectors; this process is the same as in non-scalable video coding;
(6) save the reconstruction of the current frame into the temporary coded-reconstruction picture array; this array can hold the reconstructions of all frames of one group of pictures whose temporal level is lower than the highest temporal level, together with the reconstructed I frame or P frame of the previous group of pictures, so that step (4) can obtain the reference frames correctly;
(7) repeat the processes of steps (4) to (6) until the last picture of the required temporal level is reached;
(8) save the reconstructed pictures:
in this process, the condition for writing reconstructed frames to the reconstructed-picture file must be checked specially; if the condition is satisfied, all reconstructed frames in the GOP reconstructed-picture array whose temporal level is lower than the highest temporal level are output, the coding of this group of pictures ends, and the coding of the next group of pictures begins; if the condition is not satisfied, the coding of the current group of pictures continues.
3. The method according to claim 2, characterized in that the temporal level computation algorithm in step (2) first determines the spacing between two adjacent coded frames having the same temporal level, and this spacing is computed according to the following formula:
increment = gop_size / 2^iLevel,
where increment is the difference in display order between two adjacent coded pictures of the same temporal level, gop_size is the size of the group of pictures, and iLevel is the temporal level.
4. The method according to claim 2, characterized in that in the process of obtaining the reference frames of the current coded picture in step (4), a structure CodedPicture is defined, that is:
{
int level;
unsigned char **imgY;
unsigned char ***imgUV;
}
The structure stores the temporal level and the picture data of a reconstructed picture, wherein: level is the temporal level of the frame, imgY stores the luma component of the frame, and imgUV stores the chroma components of the frame.
CN 200710051542 2007-02-13 2007-02-13 A method for quickly implementing flexible time domain coding of the dual frame reference video stream Expired - Fee Related CN100471277C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710051542 CN100471277C (en) 2007-02-13 2007-02-13 A method for quickly implementing flexible time domain coding of the dual frame reference video stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200710051542 CN100471277C (en) 2007-02-13 2007-02-13 A method for quickly implementing flexible time domain coding of the dual frame reference video stream

Publications (2)

Publication Number Publication Date
CN101018334A CN101018334A (en) 2007-08-15
CN100471277C true CN100471277C (en) 2009-03-18

Family

ID=38727054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710051542 Expired - Fee Related CN100471277C (en) 2007-02-13 2007-02-13 A method for quickly implementing flexible time domain coding of the dual frame reference video stream

Country Status (1)

Country Link
CN (1) CN100471277C (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9654792B2 (en) 2009-07-03 2017-05-16 Intel Corporation Methods and systems for motion vector derivation at a video decoder
US8917769B2 (en) 2009-07-03 2014-12-23 Intel Corporation Methods and systems to estimate motion based on reconstructed reference frames at a video decoder
CN101710987B (en) * 2009-12-29 2011-06-15 浙江大学 Configuration method of layered B forecasting structure with high compression performance
US20110191679A1 (en) * 2010-02-02 2011-08-04 Futurewei Technologies, Inc. System and Method for Online Media Preview
JP5583439B2 (en) * 2010-03-17 2014-09-03 パナソニック株式会社 Image encoding apparatus and camera system
CN102300087A (en) * 2010-06-24 2011-12-28 北京大学 SVC (Switching Virtual Circuit) coding method and coder
EP2656610A4 (en) 2010-12-21 2015-05-20 Intel Corp System and method for enhanced dmvd processing
CN104469369B (en) * 2014-11-17 2017-10-31 何震宇 It is a kind of to utilize the method for decoding client information raising SVC performances
CN105898328A (en) * 2015-12-14 2016-08-24 乐视云计算有限公司 Self-reference coding included setting method and device for reference frame set
CN107436599A (en) * 2016-05-26 2017-12-05 北京空间技术研制试验中心 The closely quick motion planning method of in-orbit operation spacecraft
CN113596457A (en) * 2019-09-23 2021-11-02 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN112351285B (en) * 2020-11-04 2024-04-05 北京金山云网络技术有限公司 Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN101018334A (en) 2007-08-15

Similar Documents

Publication Publication Date Title
CN100471277C (en) A method for quickly implementing flexible time domain coding of the dual frame reference video stream
CN101222630B (en) Time-domain gradable video encoding method for implementing real-time double-frame reference
KR102032268B1 (en) Method for predicting motion vectors in a video codec that allows multiple referencing, motion vector encoding/decoding apparatus using the same
Li et al. A new three-step search algorithm for block motion estimation
CN102474622B (en) Method and device for video coding
Bottreau et al. A fully scalable 3D subband video codec
CN103733620A (en) Three-dimensional video with asymmetric spatial resolution
CN104429076B (en) For scalable video coding and the vague generalization residual prediction of 3D video codings
CN101946515A (en) Two pass quantization for cabac coders
CN102625102B (en) H.264/scalable video coding medius-grain scalability (SVC MGS) coding-oriented rate distortion mode selection method
CN101247525A (en) Method for improving image intraframe coding velocity
Bozinovic et al. Modeling motion for spatial scalability
Xiong et al. Barbell lifting wavelet transform for highly scalable video coding
CN101980536B (en) Object and fractal-based multi-ocular three-dimensional video compression encoding and decoding method
CN107343202B (en) Feedback-free distributed video coding and decoding method based on additional code rate
CN105611301A (en) Distributed video coding and decoding method based on wavelet domain residual errors
Al-Muscati et al. Temporal transcoding of H. 264/AVC video to the scalable format
Nogues et al. A modified HEVC decoder for low power decoding
CN101980539A (en) Fractal-based multi-view three-dimensional video compression coding and decoding method
Kim et al. An optimal framework of video adaptation and its application to rate adaptation transcoding
KR101691380B1 (en) Dct based subpixel accuracy motion estimation utilizing shifting matrix
Lin et al. SNR scalability based on bitplane coding of matching pursuit atoms at low bit rates: Fine-grained and two-layer
Kao et al. A fully scalable motion model for scalable video coding
Yang et al. Motion-compensated wavelet transform coder for very low bit-rate visual telephony
Edirisinghe et al. A wavelet implementation of the pioneering block-based disparity compensated predictive coding algorithm for stereo image pair compression

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090318

Termination date: 20150213

EXPY Termination of patent right or utility model