AU2016202077B2 - Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding - Google Patents

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Info

Publication number
AU2016202077B2
Authority
AU
Australia
Prior art keywords
video
picture
video picture
encoded
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
AU2016202077A
Other versions
AU2016202077A1 (en)
Inventor
Adriana Dumitras
Barin G. Haskell
Atul Puri
David W. Singer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2011202000A (AU2011202000B2)
Application filed by Apple Inc
Priority to AU2016202077A
Publication of AU2016202077A1
Application granted
Publication of AU2016202077B2
Anticipated expiration
Legal status: Expired

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture (105) and a nearby video picture is determined. The display time difference is then encoded (180) into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

Description

Method and Apparatus for Variable Accuracy Inter-Picture Timing Specification for
Digital Video Encoding
RELATED APPLICATIONS
Incorporated herein by reference, in its entirety, is PCT/US2003/021714 (published as WO 2004/008654), filed on 11 July 2003.
FIELD OF THE INVENTION
The present invention relates to the field of multimedia compression systems. In particular the present invention discloses methods and systems for specifying variable accuracy inter-picture timing.
BACKGROUND OF THE INVENTION
Digital based electronic media formats are finally on the cusp of largely replacing analog electronic media formats. Digital compact discs (CDs) replaced analog vinyl records long ago. Analog magnetic cassette tapes are becoming increasingly rare. Second and third generation digital audio systems such as Mini-discs and MP3 (MPEG Audio - layer 3) are now taking market share from the first generation digital audio format of compact discs.
Video has been slower to move to digital storage and transmission formats than audio. This has been largely due to the massive amounts of digital information required to accurately represent video in digital form. The massive amounts of digital information needed to accurately represent video require very high-capacity digital storage systems and high-bandwidth transmission systems.
Reference to any prior art in the specification is not an acknowledgment or suggestion that this prior art forms part of the common general knowledge in any jurisdiction or that this prior art could reasonably be expected to be understood, regarded as relevant, and/or combined with other pieces of prior art by a skilled person in the art.
However, video is now rapidly moving to digital storage and transmission formats. Faster computer processors, high-density storage systems, and new efficient compression and encoding algorithms have finally made digital video practical at consumer price points. The DVD (Digital Versatile Disc), a digital video system, has been one of the fastest-selling consumer electronic products in years. DVDs have been rapidly supplanting Video-Cassette Recorders (VCRs) as the pre-recorded video playback system of choice due to their high video quality, very high audio quality, convenience, and extra features. The antiquated analog NTSC (National Television System Committee) video transmission system is currently being replaced with the digital ATSC (Advanced Television Systems Committee) video transmission system.
Computer systems have been using various digital video encoding formats for a number of years. Among the best digital video compression and encoding systems used by computer systems have been the digital video systems backed by the Moving Picture Experts Group, commonly known by the acronym MPEG. The three best-known and most widely used digital video formats from MPEG are known simply as MPEG-1, MPEG-2, and MPEG-4. VideoCDs (VCDs) and early consumer-grade digital video editing systems use the early MPEG-1 digital video encoding format. Digital Versatile Discs (DVDs) and the Dish Network brand Direct Broadcast Satellite (DBS) television broadcast system use the higher quality MPEG-2 digital video compression and encoding system.
The MPEG-4 encoding system is rapidly being adopted by the latest computer-based digital video encoders and associated digital video players.
The MPEG-2 and MPEG-4 standards compress a series of video frames or video fields and then encode the compressed frames or fields into a digital bitstream. When encoding a video frame or field with the MPEG-2 and MPEG-4 systems, the video frame or field is divided into a rectangular grid of macroblocks. Each macroblock is independently compressed and encoded.
When compressing a video frame or field, the MPEG-4 standard may compress the frame or field into one of three types of compressed frames or fields: Intra-frames (I-frames), Unidirectional Predicted frames (P-frames), or Bi-Directional Predicted frames (B-frames). Intra-frames encode a video frame independently, with no reference to other video frames. P-frames define a video frame with reference to a single previously displayed video frame. B-frames define a video frame with reference to both a video frame displayed before the current frame and a video frame to be displayed after the current frame. Due to their efficient usage of redundant video information, P-frames and B-frames generally provide the best compression.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method comprising: receiving a bitstream comprising an encoded first video picture, an encoded second video picture, and an encoded order value that is representative of a position of the second video picture with reference to the first video picture in a sequence of video pictures, the order value encoded in a slice header that is associated with the second video picture; and calculating a first motion vector for decoding the second video picture by using said order value.
According to a second aspect of the invention there is provided a method comprising: receiving a bitstream; extracting an encoded first video picture, an encoded second video picture, and more than one instance of an encoded order value for the second video picture from the bitstream, the order value for specifying a position of the second video picture with reference to the first video picture in a sequence of video pictures and for computing a motion vector for decoding the second video picture.
As used herein, except where the context requires otherwise, the term "comprise" and variations of the term, such as "comprising", "comprises" and "comprised", are not intended to exclude further features, components, integers or steps. A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture.
For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.
Other objects, features, and advantages of the present invention will be apparent from the accompanying drawings and from the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects, features, and advantages of the present invention will be apparent to one skilled in the art, in view of the following detailed description in which:
Figure 1 illustrates a high-level block diagram of one possible digital video encoder system.
Figure 2 illustrates a series of video pictures in the order that the pictures should be displayed wherein the arrows connecting different pictures indicate inter-picture dependency created using motion compensation.
Figure 3 illustrates the video pictures from Figure 2 listed in a preferred transmission order of pictures wherein the arrows connecting different pictures indicate inter-picture dependency created using motion compensation.
Figure 4 graphically illustrates a series of video pictures wherein the distances between video pictures that reference each other are chosen to be powers of two.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
A method and system for specifying Variable Accuracy Inter-Picture Timing in a multimedia compression and encoding system is disclosed. In the following description, for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the present invention. For example, the present invention has been described with reference to the MPEG-4 multimedia compression and encoding system. However, the same techniques can easily be applied to other types of compression and encoding systems.
Multimedia Compression and Encoding Overview
Figure 1 illustrates a high-level block diagram of a typical digital video encoder 100 as is well known in the art. The digital video encoder 100 receives an incoming video stream of video frames 105 at the left of the block diagram. Each video frame is processed by a Discrete Cosine Transformation (DCT) unit 110. The frame may be processed independently (an intra-frame) or with reference to information from other frames received from the motion compensation unit (an inter-frame). Next, a Quantizer (Q) unit 120 quantizes the information from the Discrete Cosine Transformation unit 110. Finally, the quantized video frame is then encoded with an entropy encoder (H) unit 180 to produce an encoded bitstream. The entropy encoder (H) unit 180 may use a variable length coding (VLC) system.
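The following is a minimal sketch, not the patent's encoder, of the transform and quantization stages just described, together with the inverse path that the next paragraph explains; the 8x8 block size and the uniform quantizer step size q are illustrative assumptions.

```python
# Minimal sketch: DCT unit (110), quantizer (120), and the inverse path (130, 140)
# that recreates what a decoder would see. Block size and step size are assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def forward_path(block, q=16):
    coeffs = dctn(block.astype(float), norm="ortho")  # Discrete Cosine Transformation
    return np.round(coeffs / q).astype(int)           # quantized levels for entropy coding

def reconstruction_path(levels, q=16):
    # Inverse quantization followed by inverse DCT; the result is the reference a
    # decoder would hold, which the encoder reuses for motion estimation.
    return idctn(levels.astype(float) * q, norm="ortho")

block = np.random.randint(0, 256, (8, 8))
reference = reconstruction_path(forward_path(block))
```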
Since an inter-frame encoded video frame is defined with reference to other nearby video frames, the digital video encoder 100 needs to create a copy of how each decoded frame will appear within a digital video decoder so that inter-frames may be encoded. Thus, the lower portion of the digital video encoder 100 is actually a digital video decoder system. Specifically, an inverse quantizer (Q⁻¹) unit 130 reverses the quantization of the video frame information and an inverse Discrete Cosine Transformation (DCT⁻¹) unit 140 reverses the Discrete Cosine Transformation of the video frame information. After all the DCT coefficients are reconstructed from the inverse DCT, the motion compensation unit will use the information, along with the motion vectors, to reconstruct the encoded frame, which is then used as the reference frame for the motion estimation of the next frame.
The decoded video frame may then be used to encode inter-frames (P-frames or B-frames) that are defined relative to information in the decoded video frame. Specifically, a motion compensation (MC) unit 150 and a motion estimation (ME) unit 160 are used to determine motion vectors and generate differential values used to encode inter-frames. A rate controller 190 receives information from many different components in a digital video encoder 100 and uses the information to allocate a bit budget for each video frame. The rate controller 190 should allocate the bit budget in a manner that will generate the highest quality digital video bit stream that complies with a specified set of restrictions. Specifically, the rate controller 190 attempts to generate the highest quality compressed video stream without overflowing buffers (exceeding the amount of available memory in a decoder by sending more information than can be stored) or underflowing buffers (not sending video frames fast enough such that a decoder runs out of video frames to display).
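As a rough illustration of the buffer constraint the rate controller must respect, the sketch below models the decoder buffer as a simple leaky bucket; the buffer size and per-picture arrival budget are assumed values, and the leaky-bucket model itself is a common simplification rather than anything specified in the patent.

```python
# Hypothetical leaky-bucket check for the rate controller (190): bits arrive from the
# channel at a fixed budget per picture interval and each decoded picture drains its
# own size from the buffer. All numbers are illustrative assumptions.
def check_decoder_buffer(frame_sizes_bits,
                         bits_arriving_per_picture=200_000,
                         buffer_size_bits=1_500_000):
    fullness = 0
    for size in frame_sizes_bits:
        fullness += bits_arriving_per_picture   # channel delivers bits
        if fullness > buffer_size_bits:
            return "overflow"                   # more data sent than the decoder can store
        fullness -= size                        # decoder consumes the next picture
        if fullness < 0:
            return "underflow"                  # decoder runs out of data to display
    return "ok"

print(check_decoder_buffer([180_000, 220_000, 900_000, 150_000]))
```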
In some video signals the time between successive video pictures (frames or fields) may not be constant. (Note: This document will use the term video pictures to generically refer to video frames or video fields.) For example, some video pictures may be dropped because of transmission bandwidth constraints. Furthermore, the video timing may also vary due to camera irregularity or special effects such as slow motion or fast motion. In some video streams, the original video source may simply have non-uniform inter-picture times by design. For example, synthesized video such as computer graphic animations may have non-uniform timing since the video timing is not created by a uniform video capture system such as a video camera system. A flexible digital video encoding system should be able to handle non-uniform timing.
Many digital video encoding systems divide video pictures into a rectangular grid of macroblocks. Each individual macroblock from the video picture is independently compressed and encoded. In some embodiments, subblocks of macroblocks known as 'pixelblocks' are used. Such pixelblocks may have their own motion vectors that may be interpolated. This document will refer to macroblocks, although the teachings of the present invention may be applied equally to both macroblocks and pixelblocks.
Some video coding standards, e.g., ISO MPEG standards or the ITU H.264 standard, use different types of predicted macroblocks to encode video pictures. In one scenario, a macroblock may be one of three types:
1. I-macroblock - An Intra (I) macroblock uses no information from any other video pictures in its coding (it is completely self-defined);
2. P-macroblock - A unidirectionally predicted (P) macroblock refers to picture information from one preceding video picture; or
3. B-macroblock - A bi-directionally predicted (B) macroblock uses information from one preceding picture and one future video picture.
If all the macroblocks in a video picture are Intra-macroblocks, then the video picture is an Intra-frame. If a video picture only includes unidirectionally predicted macroblocks or intra-macroblocks, then the video picture is known as a P-frame. If the video picture contains any bi-directionally predicted macroblocks, then the video picture is known as a B-frame. For simplicity, this document will consider the case where all macroblocks within a given picture are of the same type.
An example sequence of video pictures to be encoded might be represented as
I1 B2 B3 B4 P5 B6 B7 B8 B9 P10 B11 P12 B13 I14 ...
where the letter (I, P, or B) indicates whether the video picture is an I-frame, P-frame, or B-frame and the number represents the camera order of the video picture in the sequence of video pictures. The camera order is the order in which a camera recorded the video pictures and thus is also the order in which the video pictures should be displayed (the display order).
The previous example series of video pictures is graphically illustrated in Figure 2. Referring to Figure 2, the arrows indicate that macroblocks from a stored picture (I-frame or P-frame in this case) are used in the motion compensated prediction of other pictures.
In the scenario of Figure 2, no information from other pictures is used in the encoding of the intra-frame video picture I1. Video picture P5 is a P-frame that uses video information from previous video picture I1 in its coding such that an arrow is drawn from video picture I1 to video picture P5. Video pictures B2, B3, and B4 all use information from both video picture I1 and video picture P5 in their coding such that arrows are drawn from video picture I1 and video picture P5 to video pictures B2, B3, and B4. As stated above, the inter-picture times are, in general, not the same.
Since B-pictures use information from future pictures (pictures that will be displayed later), the transmission order is usually different than the display order. Specifically, video pictures that are needed to construct other video pictures should be transmitted first. For the above sequence, the transmission order might be
I1 P5 B2 B3 B4 P10 B6 B7 B8 B9 P12 B11 I14 B13 ...
Figure 3 graphically illustrates the above transmission order of the video pictures from Figure 2. Again, the arrows in the figure indicate that macroblocks from a stored video picture (I or P in this case) are used in the motion compensated prediction of other video pictures.
Referring to Figure 3, the system first transmits I-frame I1, which does not depend on any other frame. Next, the system transmits P-frame video picture P5, which depends upon video picture I1. Next, the system transmits B-frame video picture B2 after video picture P5 even though video picture B2 will be displayed before video picture P5. The reason for this is that when it comes time to decode B2, the decoder will have already received and stored the information in video pictures I1 and P5 necessary to decode video picture B2. Similarly, video pictures I1 and P5 are ready to be used to decode subsequent video pictures B3 and B4. The receiver/decoder reorders the video picture sequence for proper display. In this operation, I and P pictures are often referred to as stored pictures.
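As a small illustration of this reordering, a decoder that receives pictures in the transmission order above can sort them back into display order by picture number; this is a sketch only (the string labels stand in for decoded pictures, and a real decoder would use signaled ordering information rather than labels).

```python
# Minimal sketch: re-sort the example transmission order back into display order.
transmission_order = ["I1", "P5", "B2", "B3", "B4", "P10", "B6", "B7",
                      "B8", "B9", "P12", "B11", "I14", "B13"]
display_order = sorted(transmission_order, key=lambda p: int(p[1:]))
print(display_order)   # ['I1', 'B2', 'B3', 'B4', 'P5', 'B6', ..., 'B13', 'I14']
```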
The coding of the P-frame pictures typically utilizes Motion Compensation, wherein a Motion Vector is computed for each macroblock in the picture. Using the computed motion vector, a prediction macroblock (P-macroblock) can be formed by translation of pixels in the aforementioned previous picture. The difference between the actual macroblock in the P-frame picture and the prediction macroblock is then coded for transmission.
Each motion vector may also be transmitted via predictive coding. For example, a motion vector prediction may be formed using nearby motion vectors. In such a case, the difference between the actual motion vector and the motion vector prediction is coded for transmission. Each B-macroblock uses two motion vectors: a first motion vector referencing the aforementioned previous video picture and a second motion vector referencing the future video picture. From these two motion vectors, two prediction macroblocks are computed. The two predicted macroblocks are then combined together, using some function, to form a final predicted macroblock. As above, the difference between the actual macroblock in the B-frame picture and the final predicted macroblock is then encoded for transmission.
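A minimal sketch of the B-macroblock residual computation follows; the rounded average is just one possible combining function (weighted combinations are also used in practice), and the array shapes are assumptions.

```python
# Minimal sketch: combine the forward and backward prediction macroblocks and form
# the residual that would actually be transform-coded. Averaging is an assumed
# combining function, not the only one permitted.
import numpy as np

def b_macroblock_residual(actual, pred_from_past, pred_from_future):
    combined = (pred_from_past.astype(int) + pred_from_future.astype(int) + 1) // 2
    return actual.astype(int) - combined

actual = np.full((16, 16), 120)
residual = b_macroblock_residual(actual,
                                 np.full((16, 16), 118),
                                 np.full((16, 16), 124))   # residual is -1 everywhere
```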
As with P-macroblocks, each motion vector (MV) of a B-macroblock may be transmitted via predictive coding. Specifically, a predicted motion vector is formed using nearby motion vectors. Then, the difference between the actual motion vector and the predicted motion vector is coded for transmission.
However, with B-macroblocks the opportunity exists for interpolating motion vectors from motion vectors in the nearest stored picture macroblock. Such interpolation is carried out both in the digital video encoder and the digital video decoder.
This motion vector interpolation works particularly well on video pictures from a video sequence where a camera is slowly panning across a stationary background. In fact, such motion vector interpolation may be good enough to be used alone. Specifically, this means that no differential information needs to be calculated or transmitted for these B-macroblock motion vectors encoded using interpolation.
To illustrate further, in the above scenario let us represent the inter-picture display time between pictures i and j as Di,j; i.e., if the display times of the pictures are Ti and Tj, respectively, then
Di,j = Ti - Tj
from which it follows that
Di,k = Di,j + Dj,k
Di,k = -Dk,i
Note that Di,j may be negative in some cases.
Thus, if MV5,1 is a motion vector for a P5 macroblock as referenced to I1, then for the corresponding macroblocks in B2, B3 and B4 the motion vectors as referenced to I1 and P5, respectively, would be interpolated by
MV2,1 = MV5,1 * D2,1 / D5,1    MV5,2 = MV5,1 * D5,2 / D5,1
MV3,1 = MV5,1 * D3,1 / D5,1    MV5,3 = MV5,1 * D5,3 / D5,1
MV4,1 = MV5,1 * D4,1 / D5,1    MV5,4 = MV5,1 * D5,4 / D5,1
Note that since ratios of display times are used for motion vector prediction, absolute display times are not needed. Thus, relative display times may be used for the Di,j display time values.
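The interpolation above can be expressed directly as a scaling of MV5,1 by ratios of display-time differences. The sketch below uses assumed relative display times for I1, B2-B4 and P5 (uniform one-unit spacing) and an assumed example motion vector, purely for illustration.

```python
# Minimal sketch of the interpolation formulas above: scale MV5,1 by Di,j ratios.
# The display times and the example motion vector are illustrative assumptions;
# only ratios matter, so relative times suffice.
display_time = {"I1": 0, "B2": 1, "B3": 2, "B4": 3, "P5": 4}

def D(i, j):
    return display_time[i] - display_time[j]      # Di,j = Ti - Tj

def scale_mv(mv, num, den):
    return (mv[0] * num / den, mv[1] * num / den)

mv_5_1 = (8.0, -4.0)                              # MV of a P5 macroblock referenced to I1
mv_2_1 = scale_mv(mv_5_1, D("B2", "I1"), D("P5", "I1"))   # MV2,1 = MV5,1 * D2,1 / D5,1
mv_5_2 = scale_mv(mv_5_1, D("P5", "B2"), D("P5", "I1"))   # MV5,2 = MV5,1 * D5,2 / D5,1
```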
This scenario may be generalized, as for example in the H.264 standard. In the generalization, a P or B picture may use any previously transmitted picture for its motion vector prediction. Thus, in the above case picture B3 may use picture I1 and picture B2 in its prediction. Moreover, motion vectors may be extrapolated, not just interpolated. Thus, in this case we would have:
MV3,1 = MV2,1 * D3,1 / D2,1
Such motion vector extrapolation (or interpolation) may also be used in the prediction process for predictive coding of motion vectors.
In any event, the problem in the case of non-uniform inter-picture times is to transmit the relative display time values of Di,j to the receiver, and that is the subject of the present invention. In one embodiment of the present invention, for each picture after the first picture we transmit the display time difference between the current picture and the most recently transmitted stored picture. For error resilience, the transmission could be repeated several times within the picture, e.g., in the so-called slice headers of the MPEG or H.264 standards. If all slice headers are lost, then presumably other pictures that rely on the lost picture for decoding information cannot be decoded either.
Thus, in the above scenario we would transmit the following:
D5,1 D2,5 D3,5 D4,5 D10,5 D6,10 D7,10 D8,10 D9,10 D12,10 D11,12 D14,12 D13,14
For the purpose of motion vector estimation, the accuracy requirements for Di,j may vary from picture to picture. For example, if there is only a single B-frame picture B6 halfway between two P-frame pictures P5 and P7, then it suffices to send only:
D7,5 = 2 and D6,7 = -1
where the Di,j display time values are relative time values. If, instead, video picture B6 is only one quarter the distance between video picture P5 and video picture P7, then the appropriate Di,j display time values to send would be:
D7,5 = 4 and D6,7 = -1
Note that in both of the two preceding examples, the display time between video picture B6 and video picture P7 is being used as the display time "unit" and the display time difference between video picture P5 and video picture P7 is four display time "units".
In general, motion vector estimation is less complex if divisors are powers of two. This is easily achieved in our embodiment if Di,j (the inter-picture time) between two stored pictures is chosen to be a power of two as graphically illustrated in Figure 4. Alternatively, the estimation procedure could be defined to truncate or round all divisors to a power of two.
In the case where an inter-picture time is to be a power of two, the number of data bits can be reduced if only the integer power (of two) is transmitted instead of the full value of the inter-picture time. Figure 4 graphically illustrates a case wherein the distances between pictures are chosen to be powers of two. In such a case, the D3,1 display time value of 2 between video picture P1 and video picture P3 is transmitted as 1 (since 2¹ = 2) and the D7,3 display time value of 4 between video picture P7 and video picture P3 can be transmitted as 2 (since 2² = 4).
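A short sketch of this exponent-only signalling under the stated power-of-two assumption follows; the function names are illustrative.

```python
# Minimal sketch: transmit only the exponent when the inter-picture time is a power
# of two, e.g. 2 -> 1 and 4 -> 2 as in the Figure 4 example; division by such a
# value in the interpolation formulas then reduces to a right shift.
def encode_power_of_two_time(d):
    assert d > 0 and d & (d - 1) == 0, "inter-picture time must be a power of two"
    return d.bit_length() - 1          # 2 -> 1, 4 -> 2, 8 -> 3

def decode_power_of_two_time(exponent):
    return 1 << exponent               # 1 -> 2, 2 -> 4, 3 -> 8

assert encode_power_of_two_time(4) == 2 and decode_power_of_two_time(2) == 4
```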
In some cases, motion vector interpolation may not be used. However, it is still necessary to transmit the display order of the video pictures to the receiver/player system such that the receiver/player system will display the video pictures in the proper order. In this case, simple signed integer values for Di,j suffice irrespective of the actual display times. In some applications only the sign may be needed.
The inter-picture times Di,j may simply be transmitted as simple signed integer values. However, many methods may be used for encoding the Di,j values to achieve additional compression. For example, a sign bit followed by a variable length coded magnitude is relatively easy to implement and provides coding efficiency.
One such variable length coding system that may be used is known as UVLC (Universal Variable Length Code). The UVLC variable length coding system is given by the code words:
1 = 1
2 = 0 1 0
3 = 0 1 1
4 = 0 0 1 0 0
5 = 0 0 1 0 1
6 = 0 0 1 1 0
7 = 0 0 1 1 1
8 = 0 0 0 1 0 0 0
...
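The tabulated codewords follow a simple pattern: a value whose binary representation has k bits is sent as k - 1 zero bits followed by those k bits. The sketch below reproduces that pattern and, as one possible way to handle signed inter-picture times, prefixes a sign bit as suggested above; the particular sign convention is an assumption.

```python
# Minimal sketch of the UVLC codewords above and of a sign-bit-plus-magnitude
# encoding for signed Di,j values. Mapping negative values to a leading 1 bit is
# an illustrative assumption.
def uvlc(value):
    assert value >= 1
    bits = bin(value)[2:]                     # k-bit binary representation
    return "0" * (len(bits) - 1) + bits       # (k - 1) leading zeros, then the value

def signed_uvlc(d):
    return ("1" if d < 0 else "0") + uvlc(abs(d))   # sign bit, then coded magnitude

assert uvlc(1) == "1" and uvlc(2) == "010" and uvlc(8) == "0001000"
print(signed_uvlc(-4))   # "100100"
```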
Another method of encoding the inter-picture times may be to use arithmetic coding. Typically, arithmetic coding utilizes conditional probabilities to effect a very high compression of the data bits.
Thus, the present invention introduces a simple but powerful method of encoding and transmitting inter-picture display times. The encoding of inter-picture display times can be made very efficient by using variable length coding or arithmetic coding. Furthermore, a desired accuracy can be chosen to meet the needs of the video decoder, but no more.
The foregoing has described a system for specifying variable accuracy inter-picture timing in a multimedia compression and encoding system. It is contemplated that changes and modifications may be made by one of ordinary skill in the art to the materials and arrangements of elements of the present invention without departing from the scope of the invention.

Claims (12)

1. A method comprising: receiving a bitstream comprising an encoded first video picture, an encoded second video picture, and an encoded order value that is representative of a position of the second video picture with reference to the first video picture in a sequence of video pictures, the order value encoded in a slice header that is associated with the second video picture; and calculating a first motion vector for decoding the second video picture by using said order value.
2. The method of claim 1, wherein calculating the first motion vector comprises using said order value to perform an interpolation operation based on a second motion vector.
3. The method of claim 1, wherein the first video picture is decoded before the second video picture.
4. The method of claim 1, wherein said encoded order value is compressed in the bitstream.
5. The method of claim 4, wherein said compressed order value is compressed in the bitstream by variable length coding.
6. The method of claim 4, wherein said compressed order value is compressed in the bitstream by arithmetic coding.
7. The method of claim 1, wherein said order value comprises an integer that specifies a display order for the second video picture in the sequence of video pictures.
8. A method comprising: receiving a bitstream; extracting an encoded first video picture, an encoded second video picture, and more than one instance of an encoded order value for the second video picture from the bitstream, the order value for specifying a position of the second video picture with reference to the first video picture in a sequence of video pictures and for computing a motion vector for decoding the second video picture.
9. The method of claim 8, wherein the encoded order value is compressed in the bitstream.
10. The method of claim 9, wherein the encoded order value is compressed in the bitstream by using variable length coding.
11. The method of claim 9, wherein said encoded order value is compressed in said bitstream by using arithmetic coding.
12. The method of claim 8, wherein an order value specifies a display order for the second video picture in the sequence of video pictures.
AU2016202077A 2002-07-15 2016-04-04 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding Expired AU2016202077B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2016202077A AU2016202077B2 (en) 2002-07-15 2016-04-04 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US60/396,363 2002-07-14
US10/291,320 2002-11-08
AU2011202000A AU2011202000B2 (en) 2002-07-15 2011-05-02 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204743A AU2013204743B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2016202077A AU2016202077B2 (en) 2002-07-15 2016-04-04 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2013204743A Division AU2013204743B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Publications (2)

Publication Number Publication Date
AU2016202077A1 AU2016202077A1 (en) 2016-04-28
AU2016202077B2 true AU2016202077B2 (en) 2017-10-05

Family

ID=48239797

Family Applications (5)

Application Number Title Priority Date Filing Date
AU2013204760A Expired AU2013204760B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204690A Expired AU2013204690B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204743A Expired AU2013204743B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204651A Expired AU2013204651B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2016202077A Expired AU2016202077B2 (en) 2002-07-15 2016-04-04 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Family Applications Before (4)

Application Number Title Priority Date Filing Date
AU2013204760A Expired AU2013204760B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204690A Expired AU2013204690B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204743A Expired AU2013204743B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2013204651A Expired AU2013204651B2 (en) 2002-07-15 2013-04-12 Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Country Status (1)

Country Link
AU (5) AU2013204760B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7088776B2 (en) 2002-07-15 2006-08-08 Apple Computer, Inc. Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
US6728315B2 (en) 2002-07-24 2004-04-27 Apple Computer, Inc. Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding with reduced requirements for division operations

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6400768B1 (en) * 1998-06-19 2002-06-04 Sony Corporation Picture encoding apparatus, picture encoding method, picture decoding apparatus, picture decoding method and presentation medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0864228B1 (en) * 1996-07-05 2001-04-04 Matsushita Electric Industrial Co., Ltd. Method for display time stamping and synchronization of multiple video object planes
ID24586A (en) * 1998-12-21 2000-07-27 Matsushita Electric Ind Co Ltd DEVICE AND TIME ADJUSTMENT METHOD USING TIME BASE MODULE AND TIME IMPROVEMENT RESOLUTION
US6297852B1 (en) * 1998-12-30 2001-10-02 Ati International Srl Video display method and apparatus with synchronized video playback and weighted frame creation
US7088776B2 (en) * 2002-07-15 2006-08-08 Apple Computer, Inc. Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
US20080025408A1 (en) * 2006-07-31 2008-01-31 Sam Liu Video encoding

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6400768B1 (en) * 1998-06-19 2002-06-04 Sony Corporation Picture encoding apparatus, picture encoding method, picture decoding apparatus, picture decoding method and presentation medium

Also Published As

Publication number Publication date
AU2013204743B2 (en) 2016-05-19
AU2016202077A1 (en) 2016-04-28
AU2013204743A1 (en) 2013-05-09
AU2013204690B2 (en) 2015-11-19
AU2013204651B2 (en) 2015-12-24
AU2013204651A1 (en) 2013-05-09
AU2013204760A1 (en) 2013-05-09
AU2013204760B2 (en) 2015-12-03
AU2013204690A1 (en) 2013-05-09

Similar Documents

Publication Publication Date Title
US9838707B2 (en) Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2016202077B2 (en) Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding
AU2011202000B2 (en) Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired