CN102204256B - Image prediction method and system - Google Patents

Image prediction method and system

Info

Publication number
CN102204256B
CN102204256B (application CN200980143556.3A)
Authority
CN
China
Prior art keywords
block
pixels
pixel
frame
reference frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200980143556.3A
Other languages
Chinese (zh)
Other versions
CN102204256A (en)
Inventor
王荣刚
张永兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA filed Critical France Telecom SA
Priority to CN200980143556.3A priority Critical patent/CN102204256B/en
Priority claimed from PCT/IB2009/055216 external-priority patent/WO2010049916A1/en
Publication of CN102204256A publication Critical patent/CN102204256A/en
Application granted granted Critical
Publication of CN102204256B publication Critical patent/CN102204256B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method for computing a predicted frame from first and second reference frames, the method comprising, for each block of pixels in the predicted frame, the acts of: defining a first block of pixels in the first reference frame collocated with a third block of pixels, the third block being the block of pixels of the predicted frame; defining a second block of pixels in the second reference frame corresponding to the first block of pixels along the motion vector of said first block from the first to the second reference frame; computing a first set of coefficients allowing the transformation of the pixels of the first block into the pixels of the second block; and computing the pixels of the third block using the first set of coefficients and the pixels of a fourth block collocated, in the first reference frame, with the second block of pixels.

Description

Image prediction method and system
Technical field
The present invention relates generally to image processing and, more specifically, to image prediction.
Background technology
Prediction is a statistical estimation process in which one or more random variables are estimated from observations of other random variables. When the variables to be estimated are associated with the "future" and the observed variables with the "past", this is, in some sense, called prediction. One of the simplest and most general prediction techniques is linear prediction, in which, for example, one vector is predicted from another vector. The most common use of prediction is to estimate a stationary random process (i.e., a random process whose joint probability distribution does not change when shifted in time or space) from observations of several previous samples. Another application of prediction arises in image/video compression, when a block of pixels is estimated from previously observed blocks of pixels contained in a reference picture (also referred to as the forward picture). In that case, each predicted image (or picture, or frame) is divided into non-overlapping rectangular blocks. Motion estimation (ME) derives, for each block, a motion vector into the reference picture (i.e., a vector giving the offset from the coordinates in the predicted picture to the coordinates in the reference picture used for the prediction). Each block is then predicted using motion compensation (MC) with reference to the corresponding block of the reference frame pointed to by the derived motion vector. Both ME and MC are methods known to the person skilled in the art. This approach helps to eliminate redundant information and, as a result, fewer bits are needed to describe the residual (the difference between the original and the predicted block). However, this ME/MC prediction is in practice not the ultimate solution for predicting future frames, because it relies on the assumption that the captured moving objects undergo translational motion, which is not always true. Moreover, for image estimation involving non-Gaussian processes, ME/MC techniques cannot fully extract all the information about the past that would contribute to predicting the future frame.
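For illustration only, the following minimal NumPy sketch shows the conventional block-based ME/MC prediction described above (an exhaustive full-pel SAD search followed by a block copy); the function names, the search window and the SAD criterion are choices made for this example and are not taken from any standard.

```python
import numpy as np

def motion_estimate(block, ref, top, left, search=8):
    """Full-search ME: find the (dy, dx) offset in `ref` minimizing the SAD with `block`."""
    h, w = block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def motion_compensate(ref, top, left, mv, h, w):
    """MC: copy the block of `ref` pointed to by the motion vector."""
    dy, dx = mv
    return ref[top + dy:top + dy + h, left + dx:left + dx + w]
```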
There is nowadays a need for an image prediction solution that overcomes the shortcomings of the prior art and that can easily be implemented in existing communication architectures.
Summary of the invention
The object of the invention is to overcome these deficiencies and/or to improve upon the prior art.
To that end, the invention proposes a method for computing a predicted frame from first and second reference frames, the method comprising, for each block of pixels in the predicted frame, the following acts:
a) defining a first block of pixels in the first reference frame collocated with the block of pixels of the predicted frame;
b) defining, along the motion vector of said collocated block from said first reference frame to the second reference frame, a second block of pixels of the second reference frame corresponding to the first collocated block of pixels;
c1) computing a first set of coefficients allowing the transformation of the pixels of the collocated block into the pixels of the second block;
d) computing the pixels of the block of the predicted frame using the first set of coefficients and the pixels of a fourth block collocated, in the first reference frame, with the second block of pixels.
The invention also relates to a system according to claim 7.
The invention also relates to a device according to claim 4.
The invention also relates to a computer program according to claim 10.
An advantage of the proposed method is that it can make full use of redundancy adaptively, adjusting the motion information derived between successive frames according to the characteristics of the pixels in a local spatio-temporal region.
Compared with existing solutions, another advantage of the proposed method is that it can adaptively adjust the interpolation coefficients (used to predict pixels from the existing pixels of previous frames) so as to match the non-stationary statistical properties of the video signal. The interpolation coefficients play a key role in prediction accuracy: the more accurate the coefficients, the more reliable the predicted frame. Transmitting these coefficients could, however, impose a heavy burden on the bit rate of the video compression. The method according to the invention therefore proposes an algorithm for deriving more accurate coefficients for the first block by making full use of the high similarity between the same objects in consecutive frames, thereby relieving the heavy burden of transmitting these coefficients.
Accompanying drawing explanation
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which corresponding reference numerals denote corresponding parts, and in which:
Fig. 1 schematically illustrates an example of predicting a pixel of a predicted image from pixels of a reference image according to an embodiment of the invention;
Fig. 2 schematically illustrates the blocks and frames used in the method according to an embodiment of the invention;
Fig. 3A schematically illustrates the method according to an embodiment of the invention;
Fig. 3B schematically illustrates the method according to an embodiment of the invention;
Fig. 3C schematically illustrates the method according to a further embodiment of the invention;
Fig. 4A schematically illustrates an example of the integer and fractional sample positions used for quarter-sample luma interpolation in a conventional interpolation method;
Fig. 4B schematically illustrates an example of the spatial neighborhood used to interpolate a fractional pixel according to an embodiment of the invention;
Fig. 5 schematically illustrates the transformation from backward to forward coefficients according to an embodiment of the invention;
Fig. 6 compares the side information generated by the method according to the invention with that generated by the existing motion-extrapolation-based scheme in DVC coding for the "Foreman" sequence;
Fig. 7 compares the side information generated by the method according to the invention with that generated by the existing motion-extrapolation-based scheme in DVC coding for the "Mobile" sequence;
Fig. 8 compares the reconstructed WZ frames generated by the method according to the invention with those generated by the existing motion-extrapolation-based scheme in DVC coding for the "Foreman" sequence;
Fig. 9 compares the reconstructed WZ frames generated by the method according to the invention with those generated by the existing motion-extrapolation-based scheme in DVC coding for the "Mobile" sequence;
Fig. 10 compares the performance obtained when the method according to the invention replaces the skip mode for the sequence "Mobile"; and
Fig. 11 compares the performance obtained when the method according to the invention replaces the skip mode for the sequence "Tempete".
Embodiment
The following is a description of example embodiments which, taken in combination with the accompanying drawings, demonstrates the features and advantages mentioned above as well as further ones.
In the following description, specific details (such as architectures, interfaces, techniques, devices, etc.) are set forth for the purpose of explanation rather than limitation. It will, however, be apparent to those skilled in the art that other embodiments departing from these details are still understood to be within the scope of the appended claims.
Moreover, for the sake of clarity, detailed descriptions of well-known devices, systems and methods are omitted so as not to obscure the description of the present invention. In addition, routers, servers, nodes, base stations, gateways and other entities of a communication network are not described in detail, since their implementation is beyond the scope of the present system and method.
Furthermore, it should be clearly understood that the drawings are included for illustrative purposes and do not represent the scope of the present system.
The method according to the invention proposes a model for predicting an image (the so-called predicted or current image/frame) based on observations made in previous images. In the method according to the invention, the prediction is carried out block of pixels by block of pixels, and can be carried out for each block of the predicted image. In the present text, an image may be assimilated to a (pixel) block. By collocated blocks of a first image and a second image, one will understand blocks located at the same position in the two images. For example, in Fig. 2, the blocks B_t(k, l) and B_{t-1}(k, l) are collocated blocks.
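As a small illustration of this collocation convention (assuming, purely for the sake of the example, 8 × 8 blocks indexed by (k, l) and frames stored as NumPy arrays):

```python
import numpy as np

B = 8  # block size assumed for this example

def block(frame, k, l, size=B):
    """Return the (k, l)-th size x size block of a frame."""
    return frame[k * size:(k + 1) * size, l * size:(l + 1) * size]

# B_t(k, l) in the predicted frame and B_{t-1}(k, l) in the reference frame
# are collocated: same (k, l) indices, hence the same spatial position.
# b_t  = block(Y_t,  k, l)
# b_t1 = block(X_t1, k, l)
```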
Fig. 3A depicts an illustrative embodiment of the method according to the invention, in which the predicted frame 260 (i.e., the frame to be predicted) is computed from a first reference frame 200 and a second reference frame 210, both of which are known and precede the predicted frame 260. All of these images may belong to an image stream.
In this illustrative embodiment, for each block of pixels to be predicted in the predicted frame 260, the method comprises an act 220 allowing the definition of a first block of pixels in the first reference frame collocated with the block of pixels to be predicted (i.e., the third block) of the predicted frame. An act 230 then allows the definition, along the motion vector of the first collocated block of pixels from said first reference frame to the second reference frame, of a second block of pixels of the second reference frame corresponding to said collocated block. A motion vector is a vector used for inter prediction, which gives the offset from the coordinates in the predicted picture to the coordinates in the reference picture; it indicates the position, in the reference picture, of the macroblock or pixel (or of a similar macroblock or pixel) on which a macroblock or pixel of the predicted picture is based. Since the first and second reference pictures are known pictures, the motion vector of the collocated block from the first reference frame to the second reference frame can be derived with techniques readily available to those skilled in the art, and the second block can therefore be defined. Subsequently, a first set of coefficients allowing the transformation of the pixels of the collocated block into the pixels of the second block can be computed in an act 240. Finally, an act 250 allows the pixels of the block of the predicted frame to be computed using the first set of coefficients and the pixels of a fourth block, collocated in the first reference frame with the second block of pixels.
In the method according to the invention, the block of pixels to be predicted is thus derived from the fourth block of pixels of the first reference frame using the first set of coefficients. Since the second and fourth blocks are collocated in the second and first reference frames respectively, deriving the block to be predicted of the predicted frame from the fourth block implies that the motion vector used to define the second block is the same as the motion vector used to establish the relation between the fourth block and the block to be predicted.
Fig. 1 schematically depicts the prediction of a pixel 111 of the predicted frame 110 from pixels of the reference frame 120 along a motion vector 130, which links the pixel 111 to be predicted with its corresponding pixel 122 of the reference frame 120.
As shown in Fig. 1, for each pixel 111 of the predicted frame 110, the corresponding pixel 121 of the reference frame 120 is derived along the motion trajectory (represented in Fig. 1 by the motion vector 130). A square spatial neighborhood 125 centered on the corresponding pixel 121 is defined in the reference frame 120. The pixel 111 of the predicted frame is then approximated as a linear combination of the pixels 122 of the corresponding spatial neighborhood 125 of the reference frame 120. This interpolation process can be expressed as:
\hat{Y}_t(m,n) = \sum_{-r \le (i,j) \le r} X_{t-1}(\tilde{m}+i, \tilde{n}+j) \cdot \alpha_{i,j} + n_t(m,n)    (1)
where:
- \hat{Y}_t(m,n) denotes the predicted pixel 111 located at coordinates (m, n) in the predicted frame 110;
- X_{t-1} denotes the corresponding pixels of the reference frame 120;
- (\tilde{m}, \tilde{n}) denotes the position of the corresponding pixel 121 of the reference frame 120 pointed to by the motion vector 130 of the predicted pixel 111 located at (m, n) in the predicted frame 110;
- \alpha_{i,j} are the interpolation coefficients;
- n_t(m, n) is an additive white Gaussian noise.
The interpolation coefficients can be derived using methods known to those skilled in the art, such as the mean-square-error (MSE) criterion and the least-mean-squares (LMS) method explained further below.
The radius r of the interpolation set, or interpolation filter (i.e., the set of interpolation coefficients for the square spatial neighborhood 125), defines the size of the interpolation filter as (2r+1) × (2r+1). For example, in Fig. 1, the radius is r = 1 and the interpolation filter is of size 3 × 3.
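As a minimal sketch of the interpolation process of equation (1), assuming integer-pel motion, a coefficient matrix alpha of size (2r+1) × (2r+1) and no boundary handling (and ignoring the noise term), one pixel could be predicted as follows:

```python
import numpy as np

def interpolate_pixel(x_prev, m_ref, n_ref, alpha):
    """Equation (1) without the noise term: the predicted pixel is the weighted
    sum of the (2r+1)x(2r+1) neighborhood of X_{t-1} centered on the position
    (m_ref, n_ref) pointed to by the motion vector; alpha has shape (2r+1, 2r+1)."""
    r = alpha.shape[0] // 2
    patch = x_prev[m_ref - r:m_ref + r + 1, n_ref - r:n_ref + r + 1].astype(float)
    return float(np.sum(patch * alpha))
```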
Fig. 2 depicts the frames used in the method according to the invention. The first reference frame X_{t-1} 211 and the second reference frame X_{t-2} 222 are used to derive the interpolation coefficients of each block of pixels of the predicted frame Y_t 205.
The first block B_{t-1}(k, l) 212 of the first reference frame X_{t-1} 211 is defined as the block collocated with the block of pixels to be predicted (the third block) B_t(k, l) 201 of the predicted frame Y_t 205.
Since the first and second reference frames are both known and defined, existing methods well known to those skilled in the art allow the second block to be defined along the motion vector v_{t-1,t-2}(k, l) of the collocated (or first) block B_{t-1}(k, l) 212 from the first reference frame X_{t-1} to the second reference frame X_{t-2}.
A first set of interpolation coefficients can thus be defined from the known pixels of the first and second reference frames, as described with reference to Fig. 1.
The first set of coefficients previously obtained in this way is then used together with the block of the first reference frame collocated with this second block of the second reference frame 222 (also referred to as the fourth block, with reference to Fig. 3A), in order to derive the predicted pixels of the block of pixels to be predicted (or third block) B_t(k, l) 201 of the predicted frame 205.
Fig. 3B depicts an illustrative embodiment of the method according to the invention.
In an act 300, the block of pixels B_t(k, l) to be predicted is selected in the predicted frame Y_t. In an act 310, the block B_{t-1}(k, l) collocated with the block of pixels to be predicted of Y_t is then defined. Since X_{t-1} and X_{t-2} are both known and defined, the second block of X_{t-2} corresponding to B_{t-1}(k, l) can be defined in an act 320 along the motion vector v_{t-1,t-2}(k, l) of the collocated block B_{t-1}(k, l) 212 from the first reference frame X_{t-1} to the second reference frame X_{t-2}. An act 330 defines the fourth block of X_{t-1} collocated with the second block.
When the method described with reference to Fig. 1 is applied, the first set of interpolation coefficients can be obtained in an act 340 by approximating the pixels of the first block B_{t-1}(k, l) from the pixels of the second block. In other words, each pixel of B_{t-1}(k, l) is assumed to be approximated as a linear combination of the square spatial neighborhood, within the second block, centered on the corresponding pixel pointed to by the motion vector v_{t-1,t-2}(k, l):
\hat{Y}_{t-1}(m,n) = \sum_{-r \le (i,j) \le r} X_{t-2}(\tilde{m}+i, \tilde{n}+j) \cdot \alpha_{i,j} + n_{t-1}(m,n)    (2)
The quality of this pixel approximation depends on the definition of the interpolation coefficients; in practice, these interpolation coefficients should be chosen to be the optimal interpolation coefficients.
In equation (2), the pixels of X_{t-2} are known. Moreover, since the pixels of Y_{t-1} are also known, the pixel approximation given by equation (2) can be compared with the corresponding actual pixels of Y_{t-1} in order to derive the interpolation coefficients \alpha_{i,j}. In the illustrative embodiment of the method according to the invention, as mentioned above, this comparison is made using the mean square error (MSE), the resulting mean square error being defined as:
\epsilon^2(k,l) = \sum_{(m,n) \in B_{t-1}(k,l)} E\left( \left\| Y_{t-1}(m,n) - \hat{Y}_{t-1}(m,n) \right\|^2 \right)    (3)
The MSE, as a performance criterion, can be regarded as a measure of how much the energy of a signal can be reduced by removing, from the observation of that signal, the information that is predictable. Since the goal of a predictor is to remove this predictable information, a better predictor corresponds to a smaller MSE.
The optimal interpolation coefficients can then be derived using the least-mean-squares (LMS) method.
Then, in an act 345, the following assumption is made: the same first set of coefficients can be used to approximate the pixels of the block to be predicted from the pixels of the fourth block, owing to the high degree of redundancy between the two reference frames and the predicted frame. Assuming that the optimal interpolation coefficients derived using equation (3) are \alpha_{i,j}, the prediction of B_t(k, l) can, as previously explained, be made with the same coefficients and with equation (1) as follows:
\hat{Y}_t(m,n) = \sum_{-r \le (i,j) \le r} X_{t-1}(\tilde{m}+i, \tilde{n}+j) \cdot \alpha_{i,j} + n_t(m,n)    (4)
where \alpha_{i,j} are the interpolation coefficients obtained from equations (2) and (3).
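As an illustrative, non-normative sketch of acts 340-350 (equations (2) to (4)): the coefficients α_{i,j} can for instance be obtained as the least-squares solution over the pixels of the block, and then reused with the same motion vector to predict B_t(k, l). The NumPy code below is a minimal example written for this description; the function names, the block indexing and the absence of boundary handling are assumptions of the example.

```python
import numpy as np

def neighborhood(frame, m, n, r):
    """(2r+1)x(2r+1) patch of `frame` centered on (m, n), flattened."""
    return frame[m - r:m + r + 1, n - r:n + r + 1].astype(float).ravel()

def derive_alpha(y_prev, x_prev2, top, left, dy, dx, size, r):
    """Least-squares estimate of the coefficients alpha of equation (2):
    each pixel of B_{t-1}(k,l) in y_prev is modelled as a linear combination
    of the neighborhood of its MV-shifted counterpart in x_prev2; the MSE of
    equation (3) is minimized in the least-squares sense (LMS solution)."""
    rows, targets = [], []
    for m in range(top, top + size):
        for n in range(left, left + size):
            rows.append(neighborhood(x_prev2, m + dy, n + dx, r))
            targets.append(float(y_prev[m, n]))
    A, b = np.array(rows), np.array(targets)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha.reshape(2 * r + 1, 2 * r + 1)

def predict_block(x_prev, top, left, dy, dx, size, r, alpha):
    """Equation (4): predict B_t(k,l) from the neighborhoods of the fourth
    block in x_prev, reusing the same motion vector and the same coefficients."""
    out = np.zeros((size, size))
    for m in range(size):
        for n in range(size):
            patch = x_prev[top + dy + m - r:top + dy + m + r + 1,
                           left + dx + n - r:left + dx + n + r + 1].astype(float)
            out[m, n] = np.sum(patch * alpha)
    return out
```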
It can be emphasized that the closer the frames are within the frame stream, the higher the redundancy, and hence the better this assumption holds. It can also be emphasized that the same motion vector, namely
v_{t,t-1}(k, l) = v_{t-1,t-2}(k, l),
is used both to derive the prediction of the pixels of B_t(k, l) from the pixels of the fourth block (act 350) and to derive the prediction of the pixels of B_{t-1}(k, l) from the pixels of the second block.
In a further embodiment of the invention, with reference to Fig. 3C, a second set of interpolation coefficients can be derived in order to increase the accuracy of the pixel prediction of the block of pixels to be predicted in the predicted frame.
Indeed, symmetrically, the pixels of the second block can, in an act 245, be approximated or expressed with the second set of interpolation coefficients as a linear combination of the pixels of B_{t-1}(k, l):
\hat{X}_{t-2}(\tilde{m},\tilde{n}) = \sum_{-r \le (i,j) \le r} X_{t-1}(m+i, n+j) \cdot \beta_{i,j} + n_{t-2}(\tilde{m},\tilde{n})    (5)
It can then be assumed that the same second set of coefficients can be used, in an act 255, to approximate or express the pixels of the fourth block from the pixels of B_t(k, l) (again under the assumption of a high degree of redundancy between the reference frames and the predicted frame, for example when they are chosen as consecutive frames of the frame stream):
\hat{X}_{t-1}(\tilde{m},\tilde{n}) = \sum_{-r \le (i,j) \le r} Y_t(m+i, n+j) \cdot \beta_{i,j} + n_{t-1}(\tilde{m},\tilde{n})    (6)
Here, however, since the pixels of B_t(k, l) are unknown (they are the pixels to be predicted), they cannot be expressed directly as a linear combination of the pixels of the fourth block. Nevertheless, because the mathematical expression is a linear combination, the pixels of B_t(k, l) can be expressed from the pixels of the fourth block using the symmetric interpolation coefficients of the second set:
\beta'_{i,j} = \beta_{-i,-j}    (7)
\hat{Y}_t(m,n) = \sum_{-r \le (i,j) \le r} X_{t-1}(\tilde{m}+i, \tilde{n}+j) \cdot \beta'_{i,j} + n_t(m,n)    (8)
where \beta'_{i,j} are the reversed coefficients corresponding to the coefficients derived in equation (5).
Finally, in this optional embodiment of the method according to the invention, two sets of interpolation coefficients are derived/obtained, which implies two expressions/approximations of the pixels of B_t(k, l) from the pixels of the fourth block. For each pixel, an optimal prediction can therefore be derived from these two approximations by averaging them:
\hat{Y}_t(m,n) = \left( \sum_{-r \le (i,j) \le r} X_{t-1}(\tilde{m}+i, \tilde{n}+j) \cdot \alpha_{i,j} + \sum_{-r \le (i,j) \le r} X_{t-1}(\tilde{m}+i, \tilde{n}+j) \cdot \beta'_{i,j} \right) / 2 + n_t(m,n)    (9)
Indeed, equations (4) and (8) allow the same pixel located at (m, n) of frame Y_t to be approximated in two different directions (forward and backward), which implies \alpha_{i,j} ≈ \beta'_{i,j} and thus allows the accuracy of the prediction to be improved.
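A minimal sketch of this optional bidirectional variant, assuming that a forward set alpha (equations (2)-(3)) and a backward set beta (equation (5)) have already been estimated in the least-squares manner shown above: the backward set is reversed according to equation (7) and the two predictions of the same pixel are averaged according to equation (9).

```python
import numpy as np

def reverse_coefficients(beta):
    """Equation (7): beta'_{i,j} = beta_{-i,-j}, i.e. a 180-degree flip of the filter."""
    return beta[::-1, ::-1]

def bidirectional_prediction(x_prev, m_ref, n_ref, alpha, beta):
    """Equation (9): average the forward prediction (alpha, eq. (4)) and the
    backward prediction (reversed beta, eq. (8)) of the same pixel."""
    r = alpha.shape[0] // 2
    patch = x_prev[m_ref - r:m_ref + r + 1, n_ref - r:n_ref + r + 1].astype(float)
    forward = np.sum(patch * alpha)
    backward = np.sum(patch * reverse_coefficients(beta))
    return (forward + backward) / 2.0
```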
In practice, for example in the case of an encoder/decoder system, the method according to the invention relies on the fact that the blocks of pixels of both the first and second reference frames are available to/known by the encoder and the decoder, which allows a predicted frame to be obtained using data derived from these reference frames. The method can also be implemented with an interpolation device for computing a predicted frame from the first and second reference frames of a video stream. Such an encoding, decoding or interpolation device may typically be an electronic device comprising a processor arranged to load executable instructions stored on a computer-readable medium, the executable instructions causing the processor to carry out the method. The interpolation device may also be the encoder/decoder part of a system for computing a predicted frame from the first and second reference frames of a video stream, the system comprising a transmitter for transmitting the video stream comprising said reference frames to the interpolation device for further computation of the predicted frame.
In the illustrative embodiment described above, the motion vectors have integer-pixel accuracy. The method according to the invention can, however, also be implemented with sub-pixel accuracy. Indeed, in the existing/known quarter-pixel interpolation method (depicted in Fig. 4A), each sub-pixel is interpolated along the horizontal and vertical directions with fixed filter taps (for example, a 6-tap filter means that the interpolation uses the 6 nearest integer pixels), using the integer pixels nearest to the sub-pixel to be interpolated.
Fig. 4A shows the integer samples (shaded blocks with upper-case letters) and the fractional sample positions (unshaded blocks with lower-case letters) for quarter-sample luma interpolation (i.e., interpolation at four times the original resolution of the luma samples in both the horizontal and vertical directions).
In this sub-pixel-accuracy case, the method according to the invention can be applied as follows, as shown in Fig. 4B (spatial neighborhood). For each sub-pixel, the nearest integer pixel is found, and the sub-pixel is then interpolated as a weighted linear combination of the integer pixels of the square neighborhood centered on that nearest integer pixel. With suitably chosen interpolation coefficients, this achieves the same result as the existing sub-pixel interpolation method. In real images, however, interpolation carried out only along the horizontal and vertical directions, as in the existing/known solution, may not always be accurate enough, in particular in complex regions. Moreover, in the conventional method the interpolation taps and filters are always fixed, which further limits the accuracy of the interpolation result. In the method according to the invention, by contrast, the interpolation can be carried out along any direction rather than being restricted to the horizontal or vertical directions, the coefficients along a specific direction being significantly larger than along the other directions. As an example, if an edge lies along a diagonal, the filter coefficients along the corresponding diagonal direction will, with the method according to the invention, be comparatively larger than those at the other positions, thereby improving the interpolation accuracy (whereas a conventional filter can only interpolate along the horizontal or vertical direction and therefore cannot adapt to the edge direction). In addition, the interpolation coefficients can be adjusted adaptively according to the characteristics of the pixels in the adjacent spatial neighborhood.
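A minimal sketch of this sub-pixel variant, assuming quarter-pel coordinates, one precomputed (2r+1) × (2r+1) coefficient matrix per sub-pel phase and no boundary handling; it only illustrates the square-neighborhood weighting around the nearest integer pixel and is not the standardized 6-tap interpolation.

```python
import numpy as np

def interpolate_subpel(frame, m_qpel, n_qpel, coeffs):
    """Interpolate the quarter-pel position (m_qpel, n_qpel), given in quarter-pel
    units, as a weighted combination of the integer pixels of the square
    neighborhood centered on the nearest integer pixel (cf. Fig. 4B).
    `coeffs` is a (2r+1)x(2r+1) weight matrix chosen for that sub-pel phase."""
    r = coeffs.shape[0] // 2
    m_int = int(round(m_qpel / 4.0))   # nearest integer pixel (vertical)
    n_int = int(round(n_qpel / 4.0))   # nearest integer pixel (horizontal)
    patch = frame[m_int - r:m_int + r + 1, n_int - r:n_int + r + 1].astype(float)
    return float(np.sum(patch * coeffs))
```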
In order to verify the prediction efficiency of the proposed model, an example implementation was made using distributed video coding (DVC) with extrapolation. In DVC, the finally reconstructed Wyner-Ziv (WZ) frame consists of the side information (SI) plus the errors corrected by parity bits. As a result, improving the SI prediction is one of the most critical aspects of improving DVC compression efficiency: if the SI is of high quality, the energy of the residual information (which needs to correct the errors between the SI and the original frame) is reduced, leading to fewer transmitted parity bits and thus to a lower bit rate. Since the method according to the invention is suited to predicting the current frame based only on information available from the past, it can be implemented in the extrapolation application of DVC and compared with the existing extrapolation-based scheme. In DVC, since the original pixels are not available at the decoder side, ME is performed on past frames. For example, as shown in Fig. 2, for each block B_t(k, l) of the predicted frame, the collocated block B_{t-1}(k, l) is first used as the current block and its MV is found in frame t-2; the prediction of B_t(k, l) is then obtained by performing MC in frame t-1 with this MV. In the illustrative embodiment of the method according to the invention, the same MVs as in the existing motion-extrapolation-based scheme are used, and the comparative results are depicted in Figs. 6-9. The key frames are coded with H.263+, with QP set to 8, and the WZ frames are then coded with a Turbo encoder. The SI comparison is depicted in Figs. 6 and 7. It can easily be observed that, compared with the existing motion-extrapolation method, the method according to the invention improves the PSNR of the SI significantly: for example, the gain on the Foreman QCIF sequence is greater than 1.5 dB, and the gain on the Mobile QCIF sequence is approximately 3 dB. This significant improvement is mainly attributable to the superior prediction ability of the method according to the invention. Figs. 8 and 9 present the WZ comparison between the method according to the invention and the existing motion-extrapolation method. It can be seen that on the Foreman QCIF sequence the gain is greater than 1 dB, and that on the Mobile QCIF sequence the gain is greater than 2.5 dB.
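For the DVC extrapolation setting just described, the generation of the side information for one block can be sketched as follows (NumPy, SAD full search; the 8 × 8 block size, the search range and the function name are choices of the example, and both frames are assumed to have the same size):

```python
import numpy as np

def extrapolate_side_info_block(x_t1, x_t2, top, left, size=8, search=8):
    """SI generation by motion extrapolation as described above: the block of
    X_{t-1} collocated with the block to predict is matched in X_{t-2} (SAD
    full search), and the resulting MV is reused to motion-compensate in X_{t-1}."""
    cur = x_t1[top:top + size, left:left + size].astype(int)
    best_sad, mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > x_t2.shape[0] or x + size > x_t2.shape[1]:
                continue
            sad = np.abs(cur - x_t2[y:y + size, x:x + size].astype(int)).sum()
            if sad < best_sad:
                best_sad, mv = sad, (dy, dx)
    dy, dx = mv
    # MC in frame t-1 with the same MV gives the conventional extrapolated block;
    # the method of the invention instead applies the derived coefficients there.
    return x_t1[top + dy:top + dy + size, left + dx:left + dx + size]
```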

Claims (9)

1. A method for computing a predicted frame from first and second reference frames, the method comprising, for each block of pixels in the predicted frame, the following acts:
a) defining a first block of pixels in the first reference frame collocated with a third block of pixels, the third block being the block of pixels of the predicted frame;
b) defining, along the motion vector of said first block of pixels from said first reference frame to the second reference frame, a second block of pixels of the second reference frame corresponding to the first block of pixels;
c1) computing a first set of coefficients allowing the transformation of the pixels of the first block of pixels into the pixels of the second block;
d) computing the pixels of the third block using the first set of coefficients and the pixels of a fourth block collocated, in the first reference frame, with the second block of pixels.
2. The method according to claim 1, further comprising an act c2) of computing a second set of coefficients allowing the transformation of the pixels of the second block into the pixels of the first block, said second set of coefficients also being used in act d).
3. The method according to one of the preceding claims, wherein said blocks of pixels are square blocks of n × n pixels, n being an integer greater than 1, wherein the first and second sets of coefficients correspond to first and second n × n matrices, and wherein, in act d), the computation takes into account said first matrix and the transpose of said second matrix.
4. An interpolation device for computing a predicted frame from first and second reference frames of a video stream, the device being arranged to select said first and second reference frames from the video stream and, for each block of pixels in the predicted frame, being further arranged to comprise:
a) means for defining a first block of pixels in the first reference frame collocated with a third block of pixels, the third block being the block of pixels of the predicted frame;
b) means for defining, along the motion vector of said first block of pixels from said first reference frame to the second reference frame, a second block of pixels of the second reference frame corresponding to the first block of pixels;
c1) means for computing a first set of coefficients allowing the transformation of the pixels of the first block of pixels into the pixels of the second block;
d1) means for computing the pixels of the third block using the first set of coefficients and the pixels of a fourth block collocated, in the first reference frame, with the second block of pixels.
5. The device according to claim 4, further arranged to comprise:
c2) means for computing a second set of coefficients allowing the transformation of the pixels of the second block into the pixels of the first block;
d2) means for computing the pixels of the predicted frame also using the second set of coefficients.
6. The device according to one of claims 4 and 5, wherein said blocks of pixels are square blocks of n × n pixels, n being an integer greater than 1, and the first and second sets of coefficients correspond to first and second n × n matrices, the device being further arranged to comprise means for computing the pixels of the predicted frame taking into account said first matrix and the transpose of said second matrix.
7. A system for computing a predicted frame from first and second reference frames of a video stream, the system comprising:
- a transmitter for transmitting the video stream;
- an interpolation device arranged to:
- receive the video stream from the transmitter;
- select said first and second reference frames from the video stream,
the interpolation device being further arranged, for each block of pixels in the predicted frame, to comprise:
a) means for defining a first block of pixels in the first reference frame collocated with a third block of pixels, the third block being the block of pixels of the predicted frame;
b) means for defining, along the motion vector of said first block of pixels from said first reference frame to the second reference frame, a second block of pixels of the second reference frame corresponding to the first block of pixels;
c1) means for computing a first set of coefficients allowing the transformation of the pixels of the first block of pixels into the pixels of the second block;
d1) means for computing the pixels of the third block using the first set of coefficients and the pixels of a fourth block collocated, in the first reference frame, with the second block of pixels.
8. The system according to claim 7, wherein the interpolation device is further arranged to comprise:
c2) means for computing a second set of coefficients allowing the transformation of the pixels of the second block into the pixels of the first block;
d2) means for computing the pixels of the predicted frame also using the second set of coefficients.
9. The system according to one of claims 7 and 8, wherein said blocks of pixels are square blocks of n × n pixels, n being an integer greater than 1, and the first and second sets of coefficients correspond to first and second n × n matrices, the device being further arranged to comprise means for computing the pixels of the predicted frame taking into account said first matrix and the transpose of said second matrix.
CN200980143556.3A 2008-10-31 2009-10-20 Image prediction method and system Active CN102204256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200980143556.3A CN102204256B (en) 2008-10-31 2009-10-20 Image prediction method and system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2008072901 2008-10-31
CNPCT/CN2008/072901 2008-10-31
PCT/IB2009/055216 WO2010049916A1 (en) 2008-10-31 2009-10-20 Image prediction method and system
CN200980143556.3A CN102204256B (en) 2008-10-31 2009-10-20 Image prediction method and system

Publications (2)

Publication Number Publication Date
CN102204256A CN102204256A (en) 2011-09-28
CN102204256B true CN102204256B (en) 2014-04-09

Family

ID=44662828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200980143556.3A Active CN102204256B (en) 2008-10-31 2009-10-20 Image prediction method and system

Country Status (1)

Country Link
CN (1) CN102204256B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101383775B1 (en) * 2011-05-20 2014-04-14 주식회사 케이티 Method And Apparatus For Intra Prediction
US10867375B2 (en) * 2019-01-30 2020-12-15 Siemens Healthcare Gmbh Forecasting images for image processing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1205153A (en) * 1995-10-20 1999-01-13 诺基亚流动电话有限公司 Motion vector field coding
CN101227601A (en) * 2007-01-15 2008-07-23 飞思卡尔半导体公司 Equipment and method for performing geometric transformation in video rendition
WO2008112072A2 (en) * 2007-03-09 2008-09-18 Dolby Laboratories Licensing Corporation Multi-frame motion extrapolation from a compressed video source

Also Published As

Publication number Publication date
CN102204256A (en) 2011-09-28

Similar Documents

Publication Publication Date Title
TWI617185B (en) Method and apparatus of video coding with affine motion compensation
US8018998B2 (en) Low complexity motion compensated frame interpolation method
CN103096080B (en) Apparatus for estimating motion vector of current block
CN102396230B (en) Image processing apparatus and method
CN104935938A (en) Inter-frame prediction method in hybrid video coding standard
JP4906864B2 (en) Scalable video coding method
JP2000175193A (en) Picture data interpolation method, frame speed-up converting method and method for deciding real motion vector associated with characteristic block in electronic digital picture sequence reproducing system
JPH08265780A (en) Method and apparatus for coding/decoding video signal
JP2004530367A (en) Motion vector prediction method and motion vector prediction device
JP2011114572A (en) Image encoding apparatus, image decoding apparatus, image encoding method, and image decoding method
JP4688462B2 (en) Method and apparatus for differential coding of image blocks using predictive coding
JP2009510869A5 (en)
KR100878536B1 (en) Method and apparatus for interpolating video
JPH08265765A (en) Image coding system and motion compensating device for use therein
CN101288310B (en) Motion estimation
CN102204256B (en) Image prediction method and system
KR100393063B1 (en) Video decoder having frame rate conversion and decoding method
JP6612721B2 (en) Predictive image generation method, predictive image generation apparatus, and computer program
Van et al. Fast motion estimation for closed-loop HEVC transrating
KR101638211B1 (en) Video coding based on global movement compensation
Zhang et al. A polynomial approximation motion estimation model for motion-compensated frame interpolation
EP2359601B1 (en) Image prediction method and system
JP2001224036A (en) Moving picture coder
KR20130009372A (en) Apparatus and method for estimating of motion in a moving picture
JPH0965342A (en) Video coder and video decoder

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant