WO2002030127A2 - Digital video decoding system and method - Google Patents

Digital video decoding system and method

Info

Publication number
WO2002030127A2
WO2002030127A2 (PCT/US2001/031538)
Authority
WO
WIPO (PCT)
Prior art keywords
data
bus
processors
decoder
interface
Prior art date
Application number
PCT/US2001/031538
Other languages
French (fr)
Other versions
WO2002030127A3 (en)
Inventor
Raymond J. Kolczynski
Original Assignee
Sarnoff Corporation
Priority date
Filing date
Publication date
Application filed by Sarnoff Corporation
Publication of WO2002030127A2
Publication of WO2002030127A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A decoder system suitable for use in a digital video reproduction system. The decoder system includes a removable data storage medium having sufficient storage capacity to store digital data indicative of a sequence of images to be displayed, first and second data buses, and a first interface coupled to the storage medium and to the first data bus for receiving the data from the storage medium. A plurality of computer processors, interconnected and communicable with one another via the second data bus, receive the data from the first interface via the first bus and decode the received digital data to form a sequence of frames. A second interface receives the sequence of frames from the plurality of processors via the first bus and converts the sequence of frames into a format suitable for transmission to a display device.

Description

DIGITAL CINEMA DECODER SYSTEM AND METHOD
Field of Invention
The present invention relates generally to multimedia presentation systems, and particularly to digital cinema systems and methods.
Background of the Invention
It is becoming increasingly popular to utilize digital representations of movies and other types of presentations in place of conventional analog copies and devices. The advantages of doing so are well known and include improved audio and visual quality as well as reduced reproduction cost and improved security, for example.
It is an object of the present invention to provide a system which can satisfy key specifications for a digital cinema theater system, which include: a high video resolution, film frame rate, full 10-bit color, and visually transparent compression. It is another object of the invention to support 1280 horizontal pixels by 1024 vertical pixels per frame, with future capability provided to utilize 1920 horizontal pixels by 1080 vertical pixels per frame. A desired frame rate is 24 frames per second. Each pixel preferably has 10 bits each for its red, green, and blue components. Compression is preferably used to provide a bitstream of approximately 45 Mb/s.
It is an object of the present invention to provide an improved system for decoding digital representations of cinema movies and other reproductions as well.
It is a further object of the present invention to meet all of these specifications, and provide the capability to reduce utilized resolution to match available projection systems.
Summary of the Invention
A decoder system suitable for use in a digital video reproduction system, the decoder system including: a removable data storage medium having a sufficient storage capacity to store digital data indicative of a sequence of images to be displayed; first and second data buses; a first interface for receiving the data from the storage medium and being coupled to the storage medium and the first data bus; a plurality of computer processors interconnected and communicable with one another via the second data bus, for receiving the received data from the interface via the first bus and for decoding the received digital data to form a sequence of frames; and, a second interface for receiving the sequence of frames from the plurality of processors via the first bus and converting the sequence of frames into a format suitable for transmission to a display device.
Brief Description of the Figures
Various other objects, features and advantages of the invention will become more apparent by reading the following detailed description in conjunction with the drawings, which are shown by way of example only, wherein:
Figure 1 illustrates a plan view of a system architecture according to a preferred form of the present invention;
Figure 2 illustrates a plan view of a preferred configuration for a platform utilized according to the preferred form of Figure 1;
Figure 3 illustrates a preferred method used for decoding; and,
Figure 4 illustrates a data flow according to a preferred form of the present invention.
Detailed Description of the Invention
The present invention will be discussed as it relates to cinemagraphic movies; however, it should be understood to be equally applicable to other types of presentations and display devices as well. In one embodiment of the present invention the decoder supports a resolution for each frame of 1280 pixels horizontally by 1024 pixels vertically. Such a resolution is suitable for use with a commercially available Texas Instruments digital projector. However, the decoder is preferably adapted to handle higher resolutions for other projectors. Further, the decoder preferably operates at real-time motion picture rates of twenty-four (24) frames per second.
According to one aspect of the invention, an encoded movie is delivered to a decoder on a removable hard disk drive which includes an Ultra-2 SCSI (Small Computer System Interface) interface. The movie is provided by the removable drive as a bitstream in a format such as an MPEG-2 transport stream or other similar format as will be described. The removable hard disk drive preferably used has a capacity of 50 GB and a throughput rate of 80 Mb/s. One suitable drive is a Barracuda 50, which is commercially available from Seagate Corp., although any disk drive that exhibits suitable capacity and transfer rates could of course be used.
The hard disk drive preferably plugs into a decoder hardware platform and data is provided on an as-needed basis. The movie or presentation to be reproduced is preferably digitally encoded using a rate of approximately 45 Mb/s. Therefore, the size of an encoded two-hour movie will be approximately 40 GB (≈ 45 Mb/s * 60 sec/min * 60 min/hr * 2 hr * 0.125 bytes/bit). This rate may change depending upon the quality of decoded movie desired and limitations on the size of the transport medium. It should be understood that the movie data stream provided to the decoder can be supplied using other suitable methods and apparatus as well, such as other types of portable media, for example compact disks, or electronic transmissions. Further, the movie data stream could either be downloaded in its entirety to a local storage medium accessible to the decoder via a direct dial-up connection over conventional telephone lines, or via the global interconnection of computers and computer networks commonly referred to as the Internet. For example, alternatively, by providing a communications medium having a sufficient throughput to provide the data stream in real-time, the bitstream can be provided to the decoder in an on-demand, real-time manner.
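As a quick illustration of the size estimate above, the following minimal sketch reproduces the arithmetic; the variable names are purely illustrative.

BITRATE_MBPS = 45        # encoding rate, megabits per second
RUNTIME_HOURS = 2        # feature length
BYTES_PER_BIT = 0.125    # 1/8 byte per bit

# 45 Mb/s * 3600 s/hr * 2 hr * 0.125 bytes/bit = 40,500 MB, i.e. roughly 40 GB,
# which fits comfortably on the 50 GB removable drive described above.
size_gb = BITRATE_MBPS * 3600 * RUNTIME_HOURS * BYTES_PER_BIT / 1000
print(f"Encoded two-hour movie: about {size_gb:.1f} GB")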
The decoder is preferably based on a commercially available Mercury Computer Systems platform product. This platform includes a PCI-64 chassis with 16 slots for processor boards, which are also available from Mercury Computer Systems. The processors, or compute nodes, utilized on these boards are each preferably a PowerPC 7400, also known as a G4 processor, commercially available from IBM. Each board preferably includes two compute nodes, or G4 processors. Also resident on each board is a block of memory for each compute node. In a particularly preferred form of the present invention, 64 MB of Random Access Memory (RAM) is provided per compute node.
Also preferably included on each board is an ASIC (Application Specific Integrated Circuit) for each compute node which interfaces with a crossbar switch on each processor board. The crossbar switch enables the processor boards to be networked together in a scalable interconnect fabric along a RACE++ bus. This permits the processors to communicate and share data with one another, enabling effective stream computing at realtime motion picture rates.
As is well understood by those possessing an ordinary skill in the pertinent art, the RACE++ bus was developed by Mercury Computer Systems and became an ANSI standard in 1995. It allows communication at a peak data rate of 266 Mb/s, with a sustained rate of 250 Mb/s. Additionally, this rate can be achieved simultaneously between any two processors in the network.
In one embodiment, each of the G4 processor boards occupies the space of two of the available PCI slots. Other models of the same board occupy only a single slot. The preferred form of the present invention will be described as it relates to the first possibility, i.e. each board occupying two slots; however, it should be understood that if each board occupies only one slot, the now-available slots can be used for additional compute nodes, or left empty, for example. The decoder platform provides as its output uncompressed frames of the movie. Each frame preferably includes 1280 pixels horizontally by 1024 pixels vertically. The frames are output on a PCI bus. These frames are provided, in the case of the aforementioned TI projector, to an SMPTE 292 interface. A display interface converts the data to the SMPTE 292 format.
It should be understood that while the present invention is described in relation to the Mercury Computer Systems hardware platform, which is a commercially available system with the required performance and bandwidth to support the demanding requirements of the present invention, other platforms which demonstrate similar capabilities can be utilized or designed as a matter of design choice. The specific capabilities of the current Mercury platform address the biggest need of the digital cinema problem: the data rate required to decode and display digitally encoded motion pictures in real-time. The bandwidth of the RACE++ bus allows large amounts of data to be moved throughout the system because of the simultaneous nature of the inter-processor communications.
More particularly, and referring now to the Figures, like references identify like elements of the illustrated preferred form of the present invention. Figure 1 illustrates a plan view of a decoder system 10 according to a preferred form of the present invention. The system 10 fits into the overall structure of the digital theater system and is preferably housed within a suitably robust enclosure 20 that secures the system 10 from tampering. The enclosure 20 can take the form of a locking cabinet, for example. As set forth, the system 10 receives data 12 from a removable hard disk drive 30 that acts as the transport medium. The system 10 reads bitstream data 12 from this disk 30, produces uncompressed pictures, and passes them as signal 62 from an output of the display interface 60 to an input of digital projector 40, which displays them as sequential images along with audio signal 42. The signal 62 is provided to the projector 40 using a suitable display interface 60 which formats the signal 62 appropriately, in the case of the aforementioned TI projector, according to the SMPTE 292 standard.
In a preferred embodiment, decoding is performed by one or more software modules implemented using the aforementioned Mercury, or decoder, platform 50. Data read from the drive 30, i.e. signal 12, is in a bitstream format, such as an MPEG-2 transport stream or other similar format. The bitstream signal 12 is demultiplexed by packet and transport module 70 in a conventional manner such that video data signal 72 is available for algorithmic processing. Although the module 70 is depicted as separate from the decoder platform 50, its function is, according to a preferred form of the present invention, actually performed by one of the processors within the decoder platform 50. Likewise, the display interface 60 is preferably PCI based and also resides within a slot of the platform 50.
The system 10 further preferably includes local storage device 80 coupled to the packet and transport module 70, audio interface 90 coupled to the decoder platform 50, and content manager 100 and security module 110 coupled to the packet and transport module 70.
Referring now also to Figure 2, therein is illustrated a block diagram of a preferred configuration 500 for the platform 50. The platform 50 preferably includes 16 slots which are utilized as follows: 1 - an Ultra-2 SCSI interface board 510 to interface to the drive 30; 14 - seven dual-slot G4 processor boards (2 slots per board, including empty slots 530); and 1 - a display interface card 540. More particularly, one of the 16 slots preferably contains a conventional Ultra-2 SCSI interface board 510. As set forth, the Seagate 50 GB hard drive preferably utilized as drive 30 has an Ultra-2 SCSI interface. Using one PCI slot for the interface board 510 leaves 15 remaining slots. Each processor board 520 utilized includes two PowerPC 7400 processors, also known as G4 processors, i.e. compute nodes 550. If dual-slot boards 520 are used, each board 520 occupies two of the available PCI slots. Single-slot G4 boards suitable for use with the Mercury platform 50 are also available. With 15 PCI slots remaining, 7 dual-slot-width boards per platform 50 can be used. Each G4 board 520 contains two processors, therefore a total of 14 G4 processors are provided to employ the software decoding algorithm. The G4 boards 520 occupy 14 PCI slots (7 boards x 2 slots/board), leaving a single available PCI slot for the output interface board 540, which acts as display interface 60 of Figure 1. The display interface board 540 fills this final PCI slot.

There are also preferably provided two audio cards 560 which perform the functions of the audio interface 90 of Figure 1. These are preferably short PCI cards which fit in empty PCI slots 530 between the dual-width G4 boards 520. It should be understood these audio boards 560 are not necessary if one or more of the boards 520 or the interface 540 are adapted to process audio as well as video.

Besides the two processors 550 on each G4 board 520, there are preferably two memory blocks. The dual-width boards 520 preferably include 64 MB RAM per processor 550. Each board 520 also includes a crossbar switch 570 that allows communication with other boards 520 via the RACE++ bus 580. The crossbar switch 570 is connected to an ASIC on each board 520 which allows communication along the RACE++ bus at a peak rate of 266 Mb/s with a sustained rate of 250 Mb/s according to one form of the present invention. This rate can be achieved simultaneously between any pair of processors 550 in the network 500. Because of this, the real-time requirements necessary for the decoder functionality can be realized using software executed by the processors 550.
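The configuration 500 described above can be summarized as a simple tally, sketched below for illustration only; the key names are descriptive and not taken from any platform software. The two audio cards 560 occupy otherwise-empty half-slots 530 and so are not counted against the 16 slots.

CONFIGURATION_500 = {
    "chassis_slots": 16,
    "ultra2_scsi_interface_boards_510": 1,
    "dual_slot_g4_boards_520": 7,          # 2 slots and 2 compute nodes 550 each
    "display_interface_boards_540": 1,
    "ram_mb_per_compute_node": 64,
    "race_peak_mbps": 266,
    "race_sustained_mbps": 250,
}

slots_used = (CONFIGURATION_500["ultra2_scsi_interface_boards_510"]
              + 2 * CONFIGURATION_500["dual_slot_g4_boards_520"]
              + CONFIGURATION_500["display_interface_boards_540"])
assert slots_used == CONFIGURATION_500["chassis_slots"]

compute_nodes = 2 * CONFIGURATION_500["dual_slot_g4_boards_520"]
print(compute_nodes)    # 14 G4 processors available to run the decoding software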
Referring again to Figure 1 also, the decoder platform 50 is communicable with the local storage device 80, such as a hard disk drive having suitable performance characteristics. Such a disk drive 80 can be used for storing trailers and other content which might not be created at the same time or location at which a movie is encoded and written to the transport medium 30, e.g. commercials or previews. The platform 50 preferably includes sufficient space for such a hard disk drive 80, which should have a suitable capacity and operational characteristics for its intended use, e.g. to provide additional video data and audio data for presentation.
In Figure 2, the output of the display interface 540, i.e. signal 62, includes a sequence of full-resolution, uncompressed frames. These frames are provided as signal 62 to the Texas Instruments digital projector 40 according to one aspect of the present invention. The decoded frames are transmitted from each processor board 520 via the PCI bus 590, which includes four 64-bit PCI bridges 592 and is controlled via a master processing unit 594, which can take the form of a Pentium III processor according to one form of the present invention and is commercially available from Intel Corporation. It should be recognized that the digital projector 40 interface board 540 is based on the SMPTE 292 specification. Both Viewgraphics of Mountain View, California, and DVS of Glendale, California, manufacture products which stream video through an SMPTE 292 interface. Appropriate drivers can be used to allow either of these products to reside in the network configuration 500 and convert the data output from the processor boards 520 to conform to the SMPTE 292 specification. This allows the board 540 to be used in the network 500 and connect to the G4 processor boards 520, and to output frames via the SMPTE 292 interface to the projector 40 as signal 62.

Audio is preferably in the form of eight channels of uncompressed, digital data. Audio data arrives at the system 10 multiplexed with the video and metadata in the transport stream 12 from the hard disk medium 30. After being read into the system 10 via the PCI bus 590, this transport stream data 12 is demultiplexed using one of the compute nodes 550. While the video data is distributed for decoding among the remaining G4 processors 550, the audio data is distributed to the appropriate audio device 560 for playback via the PCI bus 590. Synchronization between audio and video is also preferably accomplished using conventional audio SMPTE Linear Time Codes (LTCs). As also set forth, in one embodiment, two identical audio cards 560 are utilized to process the audio data. These cards are shorter PCI cards and fit between the G4 boards 520 in the spaces that remain empty because of the dual-slot boards 520. Each card 560 preferably provides four channels of audio. The cards 560 preferably synchronize with the display interface 540 to provide proper audio/video synchronization. In another embodiment of the present invention, audio support can be integrated with the video interface board 540. This eliminates the need for separate audio cards 560. The aforementioned and commercially available DVS board provides this functionality.
Referring again to Figure 1, the system 10 includes security module 110, which is communicable with the module 70 and is used for data decryption. The encryption of the encoded movie on the transport medium 30 is accomplished at the time of encoding the movie using a suitable device and method as are well understood. The decryption module 110 performs appropriate decryption to enable proper access to the bitstream 12. The module 110 can utilize a conventional keycard 112 to provide a decryption key or portion of a key as data 114. Further, a decryption key or portion of a key can be acquired by the module 110 via a conventional telephone line from signal 116. The module 110 can interact with and authenticate itself to a conventional key distribution system via the signals 116, and either way is provided with the necessary decryption data to enable proper access to the bitstream 12.
Also connected to the module 70 is content manager module 100. The content manager module 100 can be used for account tracking, i.e., to track the number of times a movie is played, for example. The content manager can be realized using the computer 594, for example, or one or more of the compute nodes 550.
Control of the system 10 is preferably accomplished through a Graphical User Interface (GUI) which allows a set of actions to be performed on the movie, e.g. start or stop play of the movie or insert a commercial or preview stored on the local device 80 for example. In an alternative embodiment, a command line interface can be used for control, or even a physical control to start and stop the movie, such as a switch or pushbutton for example. This interactivity is represented within Figure 1 as signal 102 being provided to the content manager 100.
According to the preferred embodiment, the system 10 platform 50 executes a suitable algorithm that decodes the bitstream 12 and creates a series of full-resolution frames displayed at 24 frames per second (FPS). As is conventionally understood, the decoder algorithm is essentially the inverse of the encoding process used to create the bitstream 12. The issue with porting the algorithm to the real-time platform involves distributing the necessary tasks among the available processors 550. There are two basic methods for porting to the multi-board 520 network: (1) having each processor 550 perform the entire decoding process on different sets of frame data, or (2) partitioning the algorithm steps across the available processors 550.

In one form of the present invention, the system uses parallel processing such that each processor 550 performs all of the decoding tasks for individual ones of the frames to be displayed. In this method, the bitstream 12 is divided into segments and distributed in parallel among the processors 550. The first processor 550 provided a segment of the bitstream 12 is expected to be the first to produce decoded frames for signal 52, which is provided to display interface card 540. Each processor 550 is also responsible for transmitting its decoded frames to the display interface 540, using DMA transfer on the PCI bus 590 for example. A disadvantage of this approach is the accessibility of necessary anchor frames among the different processors 550. Of course, anchor frames are frames that are used to decode other frames with the addition of motion vectors, as is conventionally understood. An advantage of this approach is the relative ease of porting the code directly to each processor 550. Also, the symmetry involved in doing this allows for easier debugging of code implementing the decoding algorithm.

A second form of the present invention involves partitioning the algorithm among the available processors 550. In this method the output of one part of the algorithm from one processor 550 is input to another of the processors 550. In this way, the algorithm is pipelined. The output of the last processor 550 performing the operations of the algorithm is decoded frames, which are used as input to the display interface board 540. The disadvantage of this approach is the sensitivity of the entire algorithm to its partitioning across processors 550. There exists a desire to keep the idle time of each processor 550 at a minimum while maintaining the delicate timing of data sharing between processors. One processor 550 should not be producing output for the next processor 550 until the subsequent processor 550 is ready to receive it. An advantage of this approach is a highly efficient structure which leads to fewer overall processors 550 in the system.

Using the parallel approach to decoding, as an exemplary embodiment of the present invention, and in order to solve the problem of transferring anchor frames from one processor to others that need them, a closed group of pictures (GOP) format, in which ten (10) frames are coded together and do not rely on the use of any pictures not in the GOP, is used to distribute the signal 12 across the processors 550. This method allows each processor 550 to perform the same tasks on different pieces of the encoded bitstream 12, as illustrated by the sketch below.
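The sketch below illustrates the closed-GOP parallel scheme in simplified form; the worker count, function names, and use of Python multiprocessing are illustrative assumptions only and do not reflect the actual Mercury software.

from multiprocessing import Pool

GOP_FRAMES = 10     # closed GOPs of ten frames, per the text
NUM_WORKERS = 13    # e.g. 14 G4 compute nodes minus the one used for demultiplexing

def decode_gop(gop_segment):
    """Placeholder for the full per-GOP decode (steps 1200-1600 of Figure 3)."""
    return [f"frame from a {len(gop_segment)}-byte GOP"] * GOP_FRAMES

def decode_in_parallel(gop_segments):
    # Each worker runs the same complete decoding task on a different closed
    # GOP; imap preserves GOP order so frames reach the display in sequence.
    with Pool(NUM_WORKERS) as pool:
        for decoded_frames in pool.imap(decode_gop, gop_segments):
            yield from decoded_frames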
Another level of software used in combination with the platform 50 deals with interfacing to the Mercury Computer Systems hardware platform, in a preferred form of the present invention. Mercury Computer Systems provides many low-level functions for communicating with its hardware; however, a layer is preferably provided on top of those functions which allows the ported algorithm to access the hardware at a high level. An example of this involves shared memory buffers as a means of passing data between the various processors. There are timing issues involved with this, such as writing into buffers which are not enabled, and reading from buffers with blocking, for example. Of course, this is simply a matter of design and well within the capabilities of those possessing an ordinary skill in the art.
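A minimal stand-in for the kind of higher-level buffer layer described above is sketched below, using an ordinary bounded queue whose reads and writes block; it only mirrors the described behaviour and is not the Mercury Computer Systems interface.

import queue

class SharedFrameBuffer:
    """Bounded buffer passed between two processing stages."""

    def __init__(self, capacity=4):
        self._q = queue.Queue(maxsize=capacity)

    def write(self, item):
        # Blocks while the buffer is full, so a stage does not produce output
        # until the next stage is ready to receive it.
        self._q.put(item)

    def read(self):
        # Blocks until the upstream stage has written something.
        return self._q.get()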
According to another aspect of the present invention, a higher quality output is provided since the picture is digitized from film, which has a higher dynamic range than traditional NTSC component video. To accomplish this, the frames of a movie are digitized using 10-bit values for each component as opposed to 8-bit values. Additionally, the components used are not Y, U, and V, but red (R), green (G), and blue (B). For each of these components, there is a 10-bit sample taken for every pixel in the picture. Because of this lack of subsampling, the format is known as 4:4:4. Film is shot at a fixed frame rate of 24 fps, progressively scanned. Therefore, the Digital Cinema Decoder uses 10-bit RGB 4:4:4 video at 24 fps.

In traditional MPEG-2 applications, the video signal is digitized with pixel values which can be represented by 8-bit values, i.e. a range from 0-255. The picture being digitized is broken up into a luminance component (Y) and chrominance components (U and V), which each have 8-bit values. The luminance component has a value for every pixel in the picture. However, the chrominance components are subsampled such that one U value and one V value exist for alternate luminance pixels on a line. This is done with the belief that the eye cannot discern such subsampling with regard to the color details of the signal. This format is commonly referred to as 4:2:2 component format. MPEG-2 specifies frame rates from approximately 24 frames per second (fps) to approximately 60 fps for progressively scanned video. Alternatively, it supports interlaced video at approximately 30 fps. Therefore, traditional MPEG-2 encoding uses 8-bit YUV 4:2:2 video at 24-60 fps.
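For comparison, the raw (uncompressed) data rates of the two formats at the 1280 x 1024 resolution used elsewhere in this description work out roughly as follows; the figures below are illustrative arithmetic only.

WIDTH, HEIGHT, FPS = 1280, 1024, 24

bits_rgb_444 = WIDTH * HEIGHT * 3 * 10      # 10 bits each for R, G and B per pixel
bits_yuv_422 = WIDTH * HEIGHT * (8 + 8)     # 8-bit Y per pixel plus 8 bits of shared U/V

print(bits_rgb_444 * FPS / 1e6)   # about 944 Mb/s of uncompressed 10-bit RGB 4:4:4
print(bits_yuv_422 * FPS / 1e6)   # about 503 Mb/s of uncompressed 8-bit YUV 4:2:2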
It should be understood that the display output 60 is based on the type of projector 40 that is available where the system 10 is to be implemented. Since the decoding algorithm preferably decodes 10-bit RGB 4:4:4 frames, if a projector is not available to display this type of input, format conversion to an appropriate type that can be displayed by the available projector 40 is performed. The most common format for this conversion is based on the aforementioned YUV 4:2:2 structure. To accomplish this, each full-resolution frame of decoded output can be transformed by means of a conversion matrix, as is well understood. However, it should be understood this can result in the loss of some dynamic range in chrominance otherwise present with the 10-bit RGB 4:4:4 format. The spatial resolution of an MPEG-2 signal can be anywhere from 640 pixels horizontally and 480 pixels vertically to 1920 pixels horizontally and 1080 pixels vertically. According to another aspect of the present invention, a resolution of 1280 pixels horizontally and 1024 pixels vertically is used. It should be noted that no part of the design precludes these resolutions from changing. In fact, it is assumed that the resolution of the picture will be increased as projector technology improves and equipment to support the higher definition resolutions becomes available.
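A sketch of the matrix-based conversion mentioned above is given below; the ITU-R BT.709 coefficients are an assumption chosen for illustration, since the description does not specify which conversion matrix is applied.

import numpy as np

# Y, U (Cb), V (Cr) rows; BT.709 luma coefficients assumed for illustration.
RGB_TO_YUV = np.array([
    [ 0.2126,  0.7152,  0.0722],
    [-0.1146, -0.3854,  0.5000],
    [ 0.5000, -0.4542, -0.0458],
])

def rgb444_to_yuv422(frame_rgb):
    """frame_rgb: (height, width, 3) array of normalized R, G, B values."""
    yuv = frame_rgb @ RGB_TO_YUV.T
    y = yuv[:, :, 0]            # luminance kept for every pixel
    u = yuv[:, :, 1][:, ::2]    # chrominance subsampled on alternate pixels,
    v = yuv[:, :, 2][:, ::2]    # which is where some detail can be lost
    return y, u, v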
Referring now to Figure 3, therein is illustrated a preferred form of a method 1000 used for decoding the signal 12. The method 1000, suitable for use by the configuration 500 of Figure 2, includes demultiplexing 1100, variable length decoding 1200, inverse quantization 1300, inverse discrete cosine transformation 1400, motion compensation 1500 and frame reconstruction 1600. As set forth, one or more of the processors 550 is used to demultiplex 1100 the received, and decrypted, bitstream 12. The task of demultiplexing is to break the bitstream up into relevant packets of video data, audio data, and other data, sometimes referred to as metadata. This is the first step that is necessary before performing decoding computations. Variable length decoding 1200 is used to provide prediction error DCT coefficients in quantized form for the demultiplexed video data. These coefficients are then dequantized 1300 and inverse transformed 1400 to obtain pixel values or prediction errors. Motion compensation 1500 is used to provide reconstructed pixel values therefrom, and frames are reconstructed 1600 from these pixel values. These steps are well understood by those possessing an ordinary skill in the art.
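The ordering of steps 1200-1600 can be summarized by the simplified sketch below; the stage functions are placeholders for the conventional operations named above rather than an actual MPEG-2 implementation.

def decode_video_segment(pictures, vld, dequantize, idct, motion_compensate, reconstruct):
    anchors = []     # previously reconstructed reference (anchor) frames
    frames = []
    for picture in pictures:
        coeffs_q, motion_vectors = vld(picture)         # 1200: variable length decoding
        coeffs = dequantize(coeffs_q)                   # 1300: inverse quantization
        residual = idct(coeffs)                         # 1400: pixel values or prediction errors
        prediction = motion_compensate(anchors, motion_vectors)   # 1500: motion compensation
        frame = reconstruct(prediction, residual)       # 1600: frame reconstruction
        anchors.append(frame)
        frames.append(frame)
    return frames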
Referring now also to Figure 4, according to a preferred form of the present invention, data in the system 10 flows from the transport medium 30, to the projector 40, where it is displayed on a screen for viewing.
While residing on the 50 GB hard drive 30, the data is essentially a transport stream. It is transmitted to the platform 50 as signal 12 via the Ultra-2 SCSI interface connected to the Ultra-2 SCSI board 510 in the platform 50 and the PCI bus 590. Demultiplexing 1100 of the transport stream 12 is performed by one of the processing boards 520, which sends video elementary stream segments (a Packetized Elementary Stream, or PES, portion) for one 10-frame GOP to individual ones of the various G4 processors 550 in the system via the RACE++ bus 580. Each of these processors 550 performs decompression and reconstruction of the frame data for that particular GOP, e.g. steps 1200-1600. The ten decoded frames are sent to the display interface board 540 via the PCI bus 590 by using DMA transfer. The display interface 540 reformats the received data appropriately for the SMPTE 292 interface that the digital projector 40 takes as input. It should be recognized that if the decoding algorithm is operating on each available processor 550 in the system other than those used for demultiplexing, it should be able to operate at least at the rate required to keep up with feeding the projector 40 at real-time rates, e.g. using a frame rate of 24 FPS, and that the bandwidth available on the RACE++ bus must be sufficient for driving these real-time requirements.
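Using only the figures quoted in this description, a rough check of those requirements is sketched below; the compressed GOP traffic carried on the RACE++ bus is small compared with the uncompressed frame traffic that the PCI side must carry.

INPUT_RATE_MBPS = 45          # encoded transport stream read from the drive
RACE_SUSTAINED_MBPS = 250     # sustained RACE++ rate between any processor pair

WIDTH, HEIGHT, BITS_PER_PIXEL, FPS = 1280, 1024, 30, 24
output_mbps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e6

print(INPUT_RATE_MBPS < RACE_SUSTAINED_MBPS)   # True: compressed GOP segments fit on RACE++
print(round(output_mbps))                      # about 944 Mb/s of decoded frames, carried
                                               # over the four 64-bit PCI bridges 592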
By utilizing the AltiVec vector processing unit native to the G4 processors 550, in combination with the G4 processors 550 themselves, the necessary decompression computations can be performed fast enough to keep up with the real-time needs of the projector 40. As set forth, the RACE++ bus 580 on the decoder platform 50 has a peak bandwidth of 266 Mb/s with a sustained bandwidth of 250 Mb/s. More importantly, these rates can be achieved simultaneously between processors 550. The PCI bus 590 makes use of PCI bridges 592 that extend the capability of the PCI bandwidth, preferably four 64-bit PCI bridges 592. With both bus structures 580, 590, the platform 50 is capable of performing real-time input, processing and output of the desired quality signal 62 to the projector 40. Metadata 1700 is supported in the system 10 by allowing private data to be multiplexed within the transport stream 12. Streams with metadata are treated identically to streams without metadata, and this metadata is essentially ignored after demultiplexing 1100, as it is separated out thereby. This metadata can be supplied to other systems, or to components of the system 10, depending upon the nature of the metadata and the desired functionality. The system can also help track movies viewed for accounting purposes. This enables a remote office to monitor the showing of movies with the system 10. The system 10 uses the content manager 100 and the telephone connection 116, or any other suitable means, to communicate data indicative of showings to the remote location, for example.
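To make the metadata handling concrete, the sketch below separates private-data (metadata) packets from video and audio packets while walking a 188-byte MPEG-2 transport stream. The PID values are hypothetical; a real demultiplexer obtains them from the stream's Program Association and Program Map Tables.

    TS_PACKET_SIZE = 188
    VIDEO_PID, AUDIO_PID, METADATA_PID = 0x100, 0x101, 0x1FF   # assumed PIDs

    def split_transport_stream(ts_bytes):
        """Sort transport packets into video, audio and private (metadata) lists."""
        video, audio, metadata = [], [], []
        for offset in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
            packet = ts_bytes[offset:offset + TS_PACKET_SIZE]
            if packet[0] != 0x47:            # transport packet sync byte
                continue
            pid = ((packet[1] & 0x1F) << 8) | packet[2]
            if pid == VIDEO_PID:
                video.append(packet)
            elif pid == AUDIO_PID:
                audio.append(packet)
            elif pid == METADATA_PID:
                metadata.append(packet)      # forwarded elsewhere or ignored
        return video, audio, metadata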
Although the invention has been described and pictured in a preferred form with a certain degree of particularity, it is understood that the present disclosure of the preferred form has been made only by way of example, and that numerous changes in the details of construction and combination and arrangement of parts may be made without departing from the spirit and scope of the invention as hereinafter claimed. It is intended that the patent shall cover, by suitable expression in the appended claims, whatever features of patentable novelty exist in the invention disclosed.

Claims

I Claim:
1. A decoder system suitable for use in a digital video reproduction system, said decoder system comprising: a removable data storage medium having a sufficient storage capacity to store digital data indicative of a sequence of images to be displayed; first and second data buses; a first interface for receiving said data from said storage medium and being coupled to said storage medium and said first data bus; a plurality of computer processors interconnected and communicable with one another via said second data bus, for receiving said received data from said interface via said first bus and for decoding said received digital data to form a sequence of frames; and, a second interface for receiving said sequence of frames from said plurality of processors via said first bus and converting said sequence of frames into a format suitable for transmission to a display device.
2. The decoder of Claim 1, wherein said second bus has a sustained data rate of at least approximately 250 Mb/s.
3. The decoder of Claim 1, wherein at least one of said processors receives said data using said first bus, serves to demultiplex a video portion of said received data from at least one other portion of said data, and sends said demultiplexed video portion of said data to at least one other of said processors using said second bus.
4. The decoder of Claim 3, wherein said at least one other portion of said received data comprises an audio portion of said data and said system further comprises at least one audio processor for decoding said audio portion of said data and being coupled to said demultiplexing at least one of said processors via said first bus.
5. The decoder of Claim 3, wherein said at least one other portion of said data comprises metadata.
6. The decoder of Claim 1, wherein said first bus is substantially PCI compliant, said second bus is a RACE++ bus, said first interface is a Small Computer System Interface (SCSI), said format is SMPTE 292 compliant, and said display device comprises a digital projector.
7. The decoder of Claim 1, wherein said images to be displayed are divisible into Groups of Pictures, and each Group of Pictures is processed by one of said processors.
8. A method for preparing an audio/visual presentation for real-time display using a display device, said method comprising: providing a data stream indicative of said audio/visual presentation at a bit rate sufficient to support real-time decoding of said data stream to enable real-time display of said presentation, wherein said data stream is divisible into groups of pictures; distributing at least a portion of said data stream across a plurality of parallel processors according to said groups of pictures; processing said at least portion of said data stream using said plurality of processors to provide data indicative of a plurality of frames; and, formatting said data indicative of a plurality of frames dependent upon said display device.
9. The method of Claim 8, further comprising demultiplexing said at least portion of said data stream from said data stream, wherein said data stream is provided using a first bus, and said distributing uses a second bus.
10. The method of Claim 8, wherein: at least one of said processors receives said data using a first bus, serves to demultiplex a video portion of said received data from at least one other portion of said data, and sends said demultiplexed video portion of said data to at least one other of said processors using a second bus; said at least one other portion of said received data comprises an audio portion of said data and said system further comprises at least one audio processor for decoding said audio portion of said data and being coupled to said demultiplexing at least one of said processors via said first bus; and, said first bus is substantially PCI compliant, said second bus is a RACE++ bus, said second interface is a Small Computer System Interface (SCSI), said formatting is SMPTE 292 compliant, and said display device comprises a digital projector.
PCT/US2001/031538 2000-10-06 2001-10-04 Digital video decoding system and method WO2002030127A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68422100A 2000-10-06 2000-10-06
US09/684,221 2000-10-06

Publications (2)

Publication Number Publication Date
WO2002030127A2 true WO2002030127A2 (en) 2002-04-11
WO2002030127A3 WO2002030127A3 (en) 2002-08-15

Family

ID=24747169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/031538 WO2002030127A2 (en) 2000-10-06 2001-10-04 Digital video decoding system and method

Country Status (1)

Country Link
WO (1) WO2002030127A2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5461679A (en) * 1991-05-24 1995-10-24 Apple Computer, Inc. Method and apparatus for encoding/decoding image data
US5532744A (en) * 1994-08-22 1996-07-02 Philips Electronics North America Corporation Method and apparatus for decoding digital video using parallel processing
EP0812113A2 (en) * 1996-06-05 1997-12-10 Matsushita Electric Industrial Co., Ltd. Method and apparatus for partitioning and decoding compressed digital video bitstreams by parallel decoders
EP0817501A2 (en) * 1996-07-04 1998-01-07 Matsushita Electric Industrial Co., Ltd. Management of multiple buffers and video decoders in progressive digital video decoder

Also Published As

Publication number Publication date
WO2002030127A3 (en) 2002-08-15


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP