CN101263513A - Filtered and warped motion compensation - Google Patents

Filtered and warped motion compensation

Info

Publication number: CN101263513A
Application number: CNA2006800338308A
Authority: CN (China)
Prior art keywords: previous picture, warping, blur, filtering
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: M. Budagavi, M. Zhou, A. U. Batur
Current assignee: Texas Instruments Inc (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Texas Instruments Inc
Application filed by Texas Instruments Inc
Publication of CN101263513A

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Video compression utilizes filtered and/or warped versions of reference frames for motion estimation and motion compensation. The presence of affine motion, fade, or blur is detected; if present, filterings and/or warpings are applied to the reference frames, and motion is estimated using the reference frames plus any filtered/warped reference frames.

Description

Filtered and warped motion compensation
Technical field
[0001] The present invention relates to digital video signal processing, and more specifically to apparatus and methods for video compression.
Background
[0002] There are currently many applications of digital video communication and storage, and corresponding international standards have been, and continue to be, developed. Low-bit-rate communication, such as video telephony and video conferencing, led to the H.261 standard, with bit rates that are multiples of 64 kbps. Demand for even lower bit rates produced the H.263 standard.
[0003] H.264/AVC is a recent video codec standard that employs several advanced video coding tools to provide better compression performance than existing video coding standards such as MPEG-2, MPEG-4, and H.263. At the core of the H.264/AVC standard is the hybrid video coding technique of block motion compensation (BMC) plus transform coding. Block motion compensation is used to remove temporal redundancy in a video sequence, and transform coding is used to remove spatial redundancy. Traditional block motion compensation schemes basically assume that objects in a scene undergo a displacement in the x and y directions. This simple assumption works out satisfactorily in most cases, and thus block motion compensation has become the most widely used technique for temporal redundancy removal in video coding standards. Figs. 2a-2b illustrate H.264/AVC encoding and decoding with block motion compensation, and Fig. 2c shows multiple reference frames.
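Block motion compensation as described above amounts to block matching against a reference frame. A minimal full-search sketch in Python/NumPy (our illustration only, not the H.264/AVC implementation, which adds fast-search strategies, sub-pixel refinement, and rate-distortion costs):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search_bmc(cur, ref, bx, by, bsize=8, srange=4):
    """Find the motion vector (dx, dy) for the bsize x bsize block of `cur`
    at (bx, by) by exhaustively testing displacements within +/-srange in `ref`."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best_mv = (0, 0)
    best_sad = sad(block, ref[by:by + bsize, bx:bx + bsize])
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            d = sad(block, ref[y:y + bsize, x:x + bsize])
            if d < best_sad:
                best_sad, best_mv = d, (dx, dy)
    return best_mv, best_sad

# A current frame that is the reference frame translated down 1 and right 2,
# so the matching block in `ref` lies at displacement (dx, dy) = (-2, -1).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
cur = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)
mv, dist = full_search_bmc(cur, ref, bx=8, by=8)
```

For a purely translated interior block as above, the search recovers the displacement exactly with zero residual.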
[0004] However, the traditional block motion compensation model fails to capture the temporal redundancy in video sequences in several scenarios, as listed below.
[0005] Affine motion: objects in a scene undergo affine motion, such as during zoom and rotation. Several techniques in the literature modify motion compensation schemes to handle affine motion; see Y. T. Tse and R. L. Baker, "Global zoom/pan estimation and compensation for video compression," Proc. IEEE ICASSP '91 (Toronto, Ont., Canada), May 1991, pp. 2725-2728; Y. Nakaya and H. Harashima, "Motion compensation based on spatial transformations," IEEE Trans. Circuits Syst. Video Technol., vol. 4, no. 3, pp. 339-356, 366-367, June 1994; and T. Wiegand et al., "Affine multipicture motion-compensated prediction," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 2, pp. 197-209, February 2005.
[0006] Illumination change, fading, and blending: another scenario in which the traditional block motion compensation model fails to capture temporal redundancy is when there is a brightness change within a scene (see K. Kamikura et al., "Global brightness-variation compensation for video coding," IEEE Trans. Circuits Syst. Video Technol., vol. 8, no. 8, pp. 988-1000, Dec. 1998), or when multiple scenes fade in a video sequence (see J. M. Boyce, "Weighted prediction in the H.264/MPEG AVC video coding standard," Proc. IEEE ISCAS, pp. 789-792, 2004). Fades (e.g., fade to black, fade to white) are sometimes used to transition between scenes in a video sequence. The H.264/AVC standard introduced a new video coding tool known as weighted prediction to code such fades effectively. Traditional block motion compensation also fails when multiple frames are blended together during a scene transition.
[0007] Blur: yet another scenario the traditional block motion compensation model cannot capture is when blur is present in the video sequence. Blur typically occurs when the relative motion between the camera and the scene being captured is fast compared to the camera exposure time. Blur also occurs when objects at different depths of field in a scene come into and go out of focus, as when the focus is pulled between different actors in a shot of a movie.
[0008] Thus, when affine motion, illumination change, or blur occurs in a video sequence, traditional block-based motion compensation techniques, such as those used in H.264/AVC, become ineffective.
Summary of the invention
[0009] The present invention provides video encoder structures and methods which use filtered/warped versions of reference frames as additional reference frames for motion compensation.
[0010] This allows effective compression in scenes with affine motion, illumination change, and blur.
Description of drawings
[0011] Figs. 1a-1d illustrate preferred embodiments, including encoding and decoding.
[0012] Figs. 2a-2c illustrate motion compensation in H.264/AVC.
[0013] Figs. 3a-3b show a processor and network communication.
Embodiment
1. Overview
[0014] Preferred embodiment video compression methods include filtering and/or warping one or more reference frames (or just portions of them) to provide additional reference frames for motion compensation. This allows accurate prediction of scenes with affine motion, illumination/fade change, and/or blur. Fig. 1a is a flow diagram of a preferred embodiment detection method; Fig. 1b shows a set of reference frames generated from reconstructed frames plus filtered and/or warped versions of those frames; and Figs. 1c-1d illustrate encoders implementing the preferred embodiment methods. The set of reference frames (or portions of frames) generated could be in response to detection of blur, fade, and/or affine motion in the current frame, or could be the result of a predefined set of filterings and warpings, possibly adapted to the application.
[0015] Preferred embodiment systems (e.g., cell phones, personal digital assistants (PDAs), digital cameras, notebook computers, etc.) perform preferred embodiment methods with any of several types of hardware: digital signal processors (DSPs), general-purpose programmable processors, application-specific circuits, or systems on a chip (SoC), such as multicore processor arrays or combinations of a DSP and a RISC processor together with various specialized programmable accelerators (e.g., Fig. 3a). A stored program in an onboard or external (flash EEPROM) read-only memory or ferroelectric RAM (FRAM) could implement the signal processing methods. Analog-to-digital and digital-to-analog converters can provide coupling to the analog world; modulators and demodulators (plus antennas for air interfaces, such as for video on cell phones) can provide coupling for transmission waveforms; and packetizers can provide formats for transmission over networks, such as the Internet illustrated in Fig. 3b.
2. Video compression with filtered/warped reference frames
[0016] First consider Fig. 2a, which shows the traditional multiframe block motion compensation (BMC) method of H.264/AVC. The current video frame f(n) is predicted, by using block motion compensation, from a set of M frames, f(n-1), f(n-2), ..., f(n-M), some of which could be future frames.
[0017] In contrast, Fig. 1c shows a first preferred embodiment of generalized filtered and warped multiframe block motion compensation, and Fig. 1b heuristically shows a resulting set of reference frames. Given a reference frame f(n-i), the preferred embodiment methods generate a set of k_i additional reference frames, f_{n-i}(1), f_{n-i}(2), ..., f_{n-i}(k_i), each of which is a filtered version of frame f(n-i), a warped version of frame f(n-i), or both. This expanded set of filtered and warped reference frames is then used in block motion compensation (BMC) to predict the current frame f(n).
[0018] More explicitly, let H_{n-i,k}{·} denote the operator which yields f_{n-i}(k) from f(n-i); that is, f_{n-i}(k) = H_{n-i,k}{f(n-i)}. First preferred embodiment filterings and warpings are as follows.
(a) Filtered reference frames
[0019] Filtered reference frames are obtained by linear or nonlinear filtering. The filtering operations apply to some of the scenarios in video sequences that are not effectively captured by block motion compensation:
(i) Blur: H_{n-i,k}{·} will be a linear shift-invariant filter with impulse response h_{n-i,k}(p,q). For example, a motion blur that is four pixels long in the horizontal direction is captured by the impulse response h_{n-i,k}(p,q) = 1/4 at the points (p,q) = (0,0), (0,1), (0,2), (0,3), and h_{n-i,k}(p,q) = 0 at all other points. Section 4 has more details on blur.
(ii) Fade: H_{n-i,k}{·} will be an operator acting on one pixel at a time; for example, H_{n-i,k}{·} can be defined by the operation f_{n-i}(k) = α f(n-i) + β, with parameters α and β.
(iii) Global motion: global motion can also be captured by a linear shift-invariant filter. Let (mvx, mvy) be the global motion vector. Then H_{n-i,k}{·} is defined by the linear shift-invariant filter h_{n-i,k}(p,q) which is zero everywhere except at the position (mvx, mvy), where h_{n-i,k}(mvx, mvy) = 1; that is, the impulse response is a δ function.
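The three filter types above can be sketched as operations on a reconstructed frame. A minimal NumPy illustration (function names and parameter values are ours, and the edge handling is simplified for illustration):

```python
import numpy as np

def blur_h4(f):
    """(i) Horizontal 4-pixel motion blur: impulse response 1/4 at
    (p, q) = (0,0), (0,1), (0,2), (0,3), zero elsewhere (edge-padded)."""
    pad = np.pad(f, ((0, 0), (0, 3)), mode="edge")
    return (pad[:, 0:-3] + pad[:, 1:-2] + pad[:, 2:-1] + pad[:, 3:]) / 4.0

def fade(f, alpha, beta):
    """(ii) Fade: the pixel-wise operator f_{n-i}(k) = alpha * f(n-i) + beta."""
    return alpha * f + beta

def global_shift(f, mvx, mvy):
    """(iii) Global motion: delta-function impulse response at (mvx, mvy),
    i.e., a pure translation (wrap-around used here for simplicity)."""
    return np.roll(np.roll(f, mvy, axis=0), mvx, axis=1)

f = np.full((8, 8), 10.0)     # a flat reconstructed frame
blurred = blur_h4(f)          # blurring a constant frame leaves it unchanged
faded = fade(f, 0.5, 3.0)     # 0.5 * 10 + 3 = 8 everywhere
```

Each output would be stored in the reference frame buffer as an additional reference frame f_{n-i}(k).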
(b) Warped reference frames
[0020] Warped reference frames are derived as follows. Let (x, y) denote the coordinate system of the input reference picture f(n-i), and let (x', y') denote the coordinate system of f_{n-i}(k). Then H_{n-i,k}{·} is defined by the geometric transformation [x', y', 1] = [x, y, 1] T, where the 3x3 matrix T is given by:

          | t11  t12  0 |
      T = | t21  t22  0 |
          | t31  t32  1 |
The above transformation applies to some of the scenarios in video sequences that are not effectively captured by block motion compensation (BMC). Some specific examples of the transformation are given below:
(i) Zoom:

          | s_x   0   0 |
      T = |  0   s_y  0 |
          |  0    0   1 |

(ii) Rotation:

          |  cos θ  sin θ  0 |
      T = | -sin θ  cos θ  0 |
          |    0      0    1 |
For localized transformations, the coordinates are taken relative to the center of the region being transformed. Note that transforming an image involves image interpolation; nearest-neighbor, bilinear, or other image interpolation techniques can be used for this purpose. Also, in the case of zoom-in, since the resulting transformed picture size is greater than the input picture size, the transformed reference picture can be made larger than the input picture. Likewise, when the transformed picture size turns out smaller than the input picture size, the transformed image is padded by extending the edge pixels; padding is also used when the image is rotated.
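The warping of section (b) — the row-vector transformation [x', y', 1] = [x, y, 1] T with coordinates relative to the center, bilinear interpolation, and edge-pixel padding — can be sketched as follows (our illustration; a production warper would handle the resizing cases discussed above):

```python
import numpy as np

def warp_affine(img, T):
    """Warp `img` by the 3x3 matrix T of section 2(b): [x', y', 1] = [x, y, 1] T,
    coordinates relative to the picture center, bilinear interpolation, and
    samples outside the source clamped to the edge (edge-pixel padding)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]          # destination pixel grid
    xd, yd = xx - cx, yy - cy            # centered destination coordinates
    Tinv = np.linalg.inv(T)              # map each destination pixel back to source
    xs = xd * Tinv[0, 0] + yd * Tinv[1, 0] + Tinv[2, 0] + cx
    ys = xd * Tinv[0, 1] + yd * Tinv[1, 1] + Tinv[2, 1] + cy
    # Bilinear interpolation with edge clamping.
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    fx = np.clip(xs, 0, w - 1) - x0
    fy = np.clip(ys, 0, h - 1) - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

img = np.arange(64, dtype=float).reshape(8, 8)
same = warp_affine(img, np.eye(3))                 # identity T: picture unchanged
zoomed = warp_affine(img, np.diag([2.0, 2.0, 1.0]))  # zoom with s_x = s_y = 2
```

The identity matrix reproduces the input exactly, and the rotation matrix of (ii) with θ = 90° reproduces a 90-degree rotation of the picture about its center.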
[0021] Of course, any two or all three of the blur, fade, and warp operations can be applied to a frame. Similarly, note that translation (global motion) can be represented by a matrix T with nonzero t31, t32.
3. Encoder options
[0022] The encoder needs to signal to the decoder which type of filtering/warping is to be applied to the reference frames. Moreover, the filtering/warping can be at the frame level or confined to the macroblock level.
(a) Frame level
[0023] Two parameters, op_type and op_parameters, encode this information. For example, in the case of motion blur, op_type would be motion blur and op_parameters would be the filter response coefficients. In the case of warping, op_type would be affine transformation and op_parameters would be the transformation matrix T. The same holds for fades and for combinations of operations.
(b) Macroblock level
[0024] Filtering and warping can also be applied at the macroblock level. This is especially useful when different portions of the input frame undergo different kinds of transformation. For example, one person in a scene may walk toward the camera, which leads to a zoom of that person, while another person in the scene may start to run horizontally, which produces motion blur in that part of the video frame. Since the entire frame does not undergo the same type of filtering/warping, generating whole filtered/warped reference frames is not computationally efficient. In the case of macroblock-level coding, the macroblock type parameter mb_type is extended to include the filtering and warping cases, so that the mode information is signaled. In this case, to reduce the overhead of signaling mb_type, a list of the possible filter and warping parameters for the whole video sequence can be signaled a priori, and the mb_type parameter then indexes into this list of possible filter and warping parameters.
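The macroblock-level signaling of [0024] can be sketched as an a-priori sequence-level list of operations that each macroblock indexes into with a small number. The op_type/op_parameters names come from [0023]; the list structure and values below are hypothetical illustrations, not H.264/AVC syntax:

```python
# Hypothetical sequence-level list, signaled once (e.g., in a header):
op_list = [
    {"op_type": "none"},
    {"op_type": "motion_blur", "op_parameters": [0.25, 0.25, 0.25, 0.25]},
    {"op_type": "fade", "op_parameters": {"alpha": 0.9, "beta": 4.0}},
    {"op_type": "affine",
     "op_parameters": [[1.1, 0.0, 0.0], [0.0, 1.1, 0.0], [0.0, 0.0, 1.0]]},
]

def decode_mb_op(mb_type_index):
    """Per macroblock, only a small index is carried (here as part of an
    extended mb_type) instead of the full filter/warp parameters."""
    return op_list[mb_type_index]

op = decode_mb_op(1)   # this macroblock predicts from a motion-blurred reference
```

The index costs a few bits per macroblock, while the full parameter sets are transmitted only once per sequence.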
4. Blur
[0025] The blur filter used to generate a blurred frame is signaled as side information from the encoder to the decoder, e.g., as part of the supplemental enhancement information (SEI) in H.264/AVC. The decoder uses this information to generate the blurred frames from the previously reconstructed frames in its reference frame buffer.
[0026] The encoder loops over a set of predefined blur filters to find the best blur filter in a rate-distortion sense. For example, two types of blur filters can be considered:
(a) averaging filters: b_aK(·,·), which average over a block of size K x K;
(b) motion blur filters: b_m_r_θ(·,·), where r denotes the motion magnitude and θ denotes the motion direction. In particular, let ones(m, n) denote the m x n matrix with all entries equal to 1. The following set of seven simple predefined blur filters can be used in the encoder; the first three are averaging filters, and the last four are motion blur filters.
b a4=ones(4,4)/16
b a8=ones(8,8)/64
b a16=ones(16,16)/256
b m_4_0=ones(1,4)/4
b m_4_90=ones(4,1)/4
b m_6_0=ones(1,6)/6
b m_6_90=ones(6,1)/6
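The seven filters above transcribe directly into code; each kernel is normalized so its coefficients sum to one (so a blurred version of a constant frame is unchanged). A NumPy transcription:

```python
import numpy as np

def ones(m, n):
    """ones(m, n): the m-by-n matrix with all entries equal to 1."""
    return np.ones((m, n))

# The seven predefined blur filters of [0026]:
blur_filters = {
    "b_a4":     ones(4, 4) / 16,     # 4x4 averaging
    "b_a8":     ones(8, 8) / 64,     # 8x8 averaging
    "b_a16":    ones(16, 16) / 256,  # 16x16 averaging
    "b_m_4_0":  ones(1, 4) / 4,      # motion blur, length 4, 0 degrees (horizontal)
    "b_m_4_90": ones(4, 1) / 4,      # motion blur, length 4, 90 degrees (vertical)
    "b_m_6_0":  ones(1, 6) / 6,      # motion blur, length 6, horizontal
    "b_m_6_90": ones(6, 1) / 6,      # motion blur, length 6, vertical
}
```

Restricting the set to these seven fixed kernels is what keeps the encoder search in [0026] cheap: the loop has only seven candidates.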
[0027] The blur compensation technique is useful only in blurred regions. The use of this video coding tool is similar to the use of the H.264/AVC weighted prediction tool, which is mainly useful in regions with fades or dissolves.
[0028] In motion estimation/compensation, blur compensation can also be done at the block level by using an additional mode. Current video encoders search over a set of modes (INTRA, INTER, INTER+4MV, etc.) to find the best coding option. Blur mode would be one such additional mode in the set of modes over which the encoder searches. Block-level blur compensation reduces computational complexity in scenes where only some of the video frames have blur. It is also useful in blurred scenes where different objects experience blur in different directions; for example, the camera may pan to the left while a main object of interest moves to the right.
[0029] The complexity of the brute-force preferred embodiment blur-compensation encoder described above is high. Therefore, to reduce computational complexity, preferred embodiment methods can run the blur compensation method only in regions that have blur. Such regions are detected by using video camera autofocus techniques (see, e.g., K.-S. Choi et al., "New autofocusing technique using the frequency selective weighted median filter for video cameras," IEEE Trans. Consumer Electronics, vol. 45, p. 820 (1999), and the references cited therein). In the encoder, a loop over the set of predefined blur filters finds the best blur filter in a rate-distortion sense. Estimating the blur, such as by transform-domain processing, improves compression performance and reduces complexity.
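The encoder-side search of [0026] and [0029] can be sketched as a loop over the predefined blur filters that keeps the blurred reference closest to the current frame; here a plain SAD criterion stands in for the full rate-distortion cost, and the convolution is a simplified edge-padded direct form (our illustration):

```python
import numpy as np

def blur2d(frame, kernel):
    """Blur `frame` with a small kernel (edge-padded direct convolution)."""
    kh, kw = kernel.shape
    padded = np.pad(frame, ((0, kh - 1), (0, kw - 1)), mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for a in range(kh):
        for b in range(kw):
            out += kernel[a, b] * padded[a:a + frame.shape[0], b:b + frame.shape[1]]
    return out

def best_blur_filter(cur, ref, filters):
    """Return the name of the filter whose blurred reference is closest to
    the current frame in SAD, or None if the unblurred reference is best."""
    best_name, best_sad = None, np.abs(cur - ref).sum()
    for name, k in filters.items():
        d = np.abs(cur - blur2d(ref, k)).sum()
        if d < best_sad:
            best_name, best_sad = name, d
    return best_name, best_sad

filters = {
    "b_m_4_0":  np.ones((1, 4)) / 4,   # horizontal motion blur, length 4
    "b_m_4_90": np.ones((4, 1)) / 4,   # vertical motion blur, length 4
}
rng = np.random.default_rng(1)
ref = rng.random((16, 16))
cur = blur2d(ref, filters["b_m_4_0"])  # current frame is a blurred reference
name, residual = best_blur_filter(cur, ref, filters)
```

When the current frame really is a horizontally blurred copy of the reference, the loop picks the horizontal filter with essentially zero residual.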
5. Reference frame buffer
[0030] The frames in the multiframe buffer (composed of the reference frames and their filtered/warped versions) can be long-term or short-term reference frames. Thus the long-term and short-term reference frame management of H.264/AVC extends to the preferred embodiment multiframe buffer.
6. Parameter determination
[0031] Parameter values are needed for the filterings/warpings applied to the reconstructed reference frame(s) to generate the full set of reference frames as in Fig. 1b. They can be a selected set of predefined parameter values (with some possible refinements), or parameter values estimated directly in various ways. Parameter values can also be predicted from one frame to the next. For example, a zoom in a video sequence typically takes place over several frames, so the zoom factor computed for one frame can be used to predict the initial zoom parameter for the next frame. Note that parameters may be roughly constant only over a region, and thus the estimation and filtering/warping would be done only on a region basis. For blur, horizontal panning blur and vertical falling-object blur are the common blurs, so a predefined set of blur filters could have blur only in the 0-degree and 90-degree directions, as in section 4. Parameter value adaptation (e.g., the vertical blur of a falling object increases as the object accelerates) can be used to reduce complexity. In any case, translation of an object (or region) or of the background (a camera panning shot) would be estimated by the conventional motion vectors obtained from the unfiltered/unwarped reference frames. Similarly, balanced zoom (s_x = s_y) plus rotation is a typical affine motion and needs only two parameters, so, as in section 3 with the coordinates relative to the center of the object or region, a predefined set of filters plus parameter adaptation would likewise reduce complexity.
[0032] Various methods can be used for direct parameter estimation (replacing the search over a set of predefined initial parameter values plus refinement and adaptation). For blur or dissolve, autofocus and local frequency-domain analysis provide information; for fades and illumination change, local illumination can be used; and for affine motion, methods such as refining initial motion vectors by error minimization, as in Wiegand et al. cited in the background, can be used.
7. Modifications
[0033] The preferred embodiments can be modified in various ways while retaining the feature of filtered and/or warped reference frames (or portions of them) for motion estimation.
[0034] For example, fields (top and bottom) could be used in place of frames; that is, the methods apply to pictures generally, with the corresponding adjustments. The detection of possible blur, fade, and/or affine motion could be invoked when the distortion (e.g., SAD) of a conventional block-motion-compensation motion vector prediction exceeds a threshold; in this case, filterings/warpings are applied locally about the reference block, and the corresponding distortions (SADs) are compared with the original SAD to find possible blur, fade, and/or affine motion and to refine the parameter values; the coding decision then trades off the distortion against the rate change due to the extra coding information.
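The threshold-triggered modification of [0034] can be sketched as follows: only when the plain-BMC residual is large does the encoder try candidate filterings/warpings of the reference block. The threshold value and the candidate operators below are hypothetical illustrations:

```python
import numpy as np

def refine_with_operators(cur_block, ref_block, candidates, threshold=64.0):
    """If the plain-BMC SAD exceeds `threshold`, try each candidate
    filtering/warping of the reference block and keep the best; otherwise
    return the conventional prediction unchanged."""
    base_sad = np.abs(cur_block - ref_block).sum()
    if base_sad <= threshold:
        return "plain", base_sad          # conventional BMC is good enough
    best_name, best_sad = "plain", base_sad
    for name, op in candidates.items():
        d = np.abs(cur_block - op(ref_block)).sum()
        if d < best_sad:
            best_name, best_sad = name, d
    return best_name, best_sad

candidates = {
    "fade": lambda b: 0.5 * b + 16.0,     # alpha = 0.5, beta = 16 (hypothetical)
    "brighten": lambda b: b + 8.0,
}
ref_block = np.full((8, 8), 100.0)
cur_block = 0.5 * ref_block + 16.0        # this block actually faded
mode, sad_val = refine_with_operators(cur_block, ref_block, candidates)
```

The final mode decision would also weigh the rate cost of signaling the extra operator, as [0034] notes; that term is omitted here for brevity.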

Claims (8)

1. A method of video compression of the block motion compensation type, which uses one or more reference pictures to find motion vectors for blocks of pixels in a current picture, the improvement comprising the step of:
(a) said one or more reference pictures including a set of filtered/warped versions of a prior picture, wherein said filtered/warped versions of said prior picture are selected from: (i) a blurred version of said prior picture plus a warped version of said prior picture, (ii) a blurred version of said prior picture plus a faded version of said prior picture, (iii) a faded version of said prior picture plus a warped version of said prior picture, and (iv) a blurred version of said prior picture plus a faded version of said prior picture plus a warped version of said prior picture.
2. The method of claim 1, wherein said filtered/warped versions are selected from a finite set.
3. The method of claim 1, wherein said filtered/warped versions are determined from estimations of blur parameters, fade parameters, and/or warp parameters.
4. A video encoder of the block-motion-compensation type, which uses one or more reference pictures in a memory to find motion vectors for blocks of pixels in a current picture, the improvement comprising:
(a) a filter/warper coupled to said memory and operable to generate, and store in said memory for use as reference pictures, a set of filtered/warped versions of a prior picture, wherein said filtered/warped versions of said prior picture are selected from: (i) a blurred version of said prior picture plus a warped version of said prior picture, (ii) a blurred version of said prior picture plus a faded version of said prior picture, (iii) a faded version of said prior picture plus a warped version of said prior picture, and (iv) a blurred version of said prior picture plus a faded version of said prior picture plus a warped version of said prior picture.
5. The encoder of claim 4, wherein said filter/warper includes a blur detector.
6. The encoder of claim 4, wherein information about the filtering/warping of a prior picture used to determine a motion vector is associated with said motion vector.
7. A video decoder of the block-motion-compensation type, which uses motion vectors plus one or more reference pictures from a memory to predict blocks of pixels in a current picture, the improvement comprising:
(a) a filter/warper coupled to said memory and operable to generate, and store in said memory, filtered/warped versions of previously reconstructed pictures, wherein said filtered/warped versions of a prior picture are selected from: (i) a blurred version of said prior picture plus a warped version of said prior picture, (ii) a blurred version of said prior picture plus a faded version of said prior picture, (iii) a faded version of said prior picture plus a warped version of said prior picture, and (iv) a blurred version of said prior picture plus a faded version of said prior picture plus a warped version of said prior picture.
8. The decoder of claim 7, wherein information about the filtering/warping of a prior picture, associated with a motion vector, is used to determine which of said filtered/warped versions to use for block prediction.
CNA2006800338308A 2005-07-15 2006-07-17 Filtered and warped motion compensation Pending CN101263513A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US70000305P 2005-07-15 2005-07-15
US60/700,003 2005-07-15
US11/327,904 2006-01-09

Publications (1)

Publication Number Publication Date
CN101263513A (en) 2008-09-10

Family

ID=39963024

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2006800338308A Pending CN101263513A (en) 2005-07-15 2006-07-17 Filtered and warped motion compensation

Country Status (1)

Country Link
CN (1) CN101263513A (en)


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055979A (en) * 2010-12-28 2011-05-11 深圳市融创天下科技发展有限公司 Methods and devices for coding and decoding frame motion compensation
CN102055978A (en) * 2010-12-28 2011-05-11 深圳市融创天下科技发展有限公司 Methods and devices for coding and decoding frame motion compensation
WO2012088812A1 (en) * 2010-12-28 2012-07-05 深圳市融创天下科技股份有限公司 Method and device for frame motion compensation encoding and decoding
CN102055978B (en) * 2010-12-28 2014-04-30 深圳市融创天下科技股份有限公司 Methods and devices for coding and decoding frame motion compensation
CN102055979B (en) * 2010-12-28 2014-09-03 深圳市云宙多媒体技术有限公司 Methods and devices for coding and decoding frame motion compensation
CN104769945B (en) * 2012-09-27 2018-06-22 奥林奇公司 It codes and decodes the method for image, code and decode device, mechanized data medium
CN104769945A (en) * 2012-09-27 2015-07-08 奥林奇公司 Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
US9762929B2 (en) 2012-11-13 2017-09-12 Intel Corporation Content adaptive, characteristics compensated prediction for next generation video
CN104854866A (en) * 2012-11-13 2015-08-19 英特尔公司 Content adaptive, characteristics compensated prediction for next generation video
CN104854866B (en) * 2012-11-13 2019-05-31 英特尔公司 The content-adaptive of next-generation video, characteristic compensation prediction
CN104885455A (en) * 2013-01-30 2015-09-02 英特尔公司 Content adaptive bitrate and quality control by using frame hierarchy sensitive quantization for high efficiency next generation video coding
CN105556964A (en) * 2013-01-30 2016-05-04 英特尔公司 Content adaptive bi-directional or functionally predictive multi-pass pictures for high efficiency next generation video coding
US9973757B2 (en) 2013-01-30 2018-05-15 Intel Corporation Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding
CN104718756A (en) * 2013-01-30 2015-06-17 英特尔公司 Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding
CN104718756B (en) * 2013-01-30 2019-02-15 英特尔公司 Next-generation video coding is carried out using the content-adaptive predictive pictures and function prediction picture of modified reference
CN104885455B (en) * 2013-01-30 2019-02-22 英特尔公司 A kind of computer implemented method and device for Video coding
CN105637863A (en) * 2013-10-14 2016-06-01 高通股份有限公司 Device and method for scalable coding of video information
CN104869399A (en) * 2014-02-24 2015-08-26 联想(北京)有限公司 Information processing method and electronic equipment.

Similar Documents

Publication Publication Date Title
US9253504B2 (en) Methods and apparatus for adaptive reference filtering
CN104363451B (en) Image prediction method and relevant apparatus
EP3941056A1 (en) Encoding and decoding method and device, encoder side apparatus and decoder side apparatus
JP4723025B2 (en) Image encoding method and image encoding apparatus
JP5606625B2 (en) Reference processing using advanced motion models for video coding
KR20150055005A (en) Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding
CN103141097B (en) The de-blocking filter optimized
JP2012502590A (en) Video coding system and method using configuration reference frame
CN101263513A (en) Filtered and warpped motion compensation
RU2684193C1 (en) Device and method for motion compensation in video content
JP5328936B2 (en) Motion estimation method
WO2007011851A2 (en) Filtered and warped motion compensation
JP2010232734A (en) Image encoding apparatus, and image encoding method
Lan et al. Exploiting non-local correlation via signal-dependent transform (SDT)
WO2012010023A1 (en) Method and apparatus for image motion estimation
Paul et al. McFIS in hierarchical bipredictve pictures-based video coding for referencing the stable area in a scene
Yan Noise reduction for MPEG type of codec
JP2018182435A (en) Motion vector prediction device and computer program
Kesrarat et al. Investigation of performance trade off in motion estimation algorithms on sub-pixel displacement
JP2007013398A (en) Post filter, post filtering program and electronic information equipment
JP2003125411A (en) Image coder, image decoder, its method, image coding program, and image decoding program
Meng Research on Video Coding Technology Based on Neural Network
Jang et al. Enhanced motion estimation algorithm with prefiltering in video compression
KR101590875B1 (en) Method and appratus for encoding images using motion prediction by multiple reference, and method and apparatus for decoding images using motion prediction by multiple reference
Esche Temporal pixel trajectories for frame denoising in a hybrid video codec

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080910