EP1407606A1 - Digitales datenabrufrundsendesystem mit vorabrufdatenübertragung - Google Patents

Digital data-on-demand broadcast system with pre-fetch data transmission (Digitales Datenabrufrundsendesystem mit Vorabrufdatenübertragung)

Info

Publication number
EP1407606A1
Authority
EP
European Patent Office
Prior art keywords
data
dod
data blocks
recited
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02731483A
Other languages
English (en)
French (fr)
Inventor
Khoi Hoang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PrediWave Corp
Original Assignee
PrediWave Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/841,792 external-priority patent/US20020023267A1/en
Priority claimed from US09/892,017 external-priority patent/US20020026501A1/en
Priority claimed from US10/054,008 external-priority patent/US20020175998A1/en
Application filed by PrediWave Corp filed Critical PrediWave Corp
Publication of EP1407606A1 (de)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
    • H04N21/26216Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints involving the channel capacity, e.g. network bandwidth
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/16Arrangements for providing special services to substations
    • H04L12/18Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1881Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with schedule organisation, e.g. priority, sequence management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2385Channel allocation; Bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4436Power management, e.g. shutting down unused components of the receiver
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences

Definitions

  • Video-on-demand (VOD) systems are one type of data-on-demand (DOD) system.
  • In DOD systems, video data files are provided by a server or a network of servers to one or more clients on a demand basis. These systems will be well understood by those of skill in the art.
  • a server or a network of servers communicates with clients in a standard hierarchical client-server model.
  • a client sends a request to a server for a data file (e.g., a video data file).
  • the server sends the requested file to the client.
  • a client's request for a data file can be fulfilled by one or more servers.
  • the client may have the capability to store any received data file locally in non-volatile memory for later use.
  • the standard client-server model requires a two way communications infrastructure.
  • two-way communications require building new infrastructure because existing cables can only provide one-way communications. Examples of two-way communications infrastructure are hybrid fiber optic coaxial (HFC) cables or all-fiber infrastructure. Replacing existing cables is very costly and the resulting services may not be affordable to most users.
  • the standard client-server model has many limitations when a service provider (e.g., a cable company) attempts to provide VOD services to a large number of clients.
  • the service provider has to implement a mechanism to continuously listen and fulfill every request from each client within the network; thus, the number of clients who can receive service is dependent on the capacity of such a mechanism.
  • One mechanism uses massively-parallel computers having large and fast disk arrays as local servers. However, even the fastest existing local server can only deliver video data streams to about 1000 to 2000 clients at one time. Thus, in order to service more clients, the number of local servers must increase. Increasing local servers requires more upper level servers to maintain control of the local servers.
  • the delay required to download sufficient data in order to play a selected VOD can be significant even with very high download speeds.
  • Another limitation of the standard client-server model is that each client requires its own bandwidth. Thus, the total required bandwidth is directly proportional to the number of subscribing clients. Cache memory within local servers has been used to improve bandwidth limitation but using cache memory does not solve the problem because cache memory is also limited.
  • the present invention teaches systems and methods for providing data-on-demand (DOD) services with reduced access time over existing DOD systems.
  • the present invention also teaches systems and methods for providing DOD services over a decreased bandwidth.
  • a method for sending data to a client to provide data-on-demand services comprising the steps of: providing a decreased idle time linear sequence of data blocks containing data including a selected DOD service; removing a most frequently occurring data block from said decreased idle time sequence of data blocks; placing said removed most frequently occurring data block in a prefetch data stream such that said prefetch data stream includes prefetch data blocks corresponding to said selected DOD service; transmitting said prefetch data stream via said transmission medium; and transmitting said remaining decreased idle time sequence of data blocks via said transmission medium such that a receiving device may combine said remaining decreased idle time sequence of data blocks and said prefetch data blocks to create said selected DOD service, thereby decreasing the bandwidth necessary to transmit said DOD service.
  • the DOD broadcast server method may further comprise: removing a plurality of additional data blocks from said decreased idle time sequence of data blocks; placing at least one of said plurality of additional data blocks in said prefetch data stream such that said prefetch data stream includes said most frequently occurring data blocks and said additional data blocks corresponding to said selected DOD service; and transmitting said remaining decreased idle time sequence of data blocks via said transmission medium such that a receiving device may combine said remaining decreased idle time sequence of data blocks, and said prefetch data blocks to create said selected DOD service, thereby further decreasing the bandwidth necessary to transmit said DOD service.
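The server-side steps above can be sketched as follows. The list representation of the decreased idle time sequence and the helper name are hypothetical, not from the patent; the sketch only shows the idea of pulling the most frequently occurring block(s) out of the primary sequence into a prefetch stream:

```python
from collections import Counter

def split_prefetch(sequence, n_prefetch=1):
    """Split a decreased-idle-time block sequence into a prefetch stream
    (the n most frequently occurring blocks, delivered once in advance)
    and the remaining primary sequence. Hypothetical sketch.
    """
    counts = Counter(sequence)
    prefetch = [blk for blk, _ in counts.most_common(n_prefetch)]
    keep_out = set(prefetch)
    primary = [blk for blk in sequence if blk not in keep_out]
    return prefetch, primary
```

A receiving device that has already cached the prefetch blocks can reconstruct the full sequence, so the primary stream carries fewer blocks and needs less bandwidth.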
  • a method for processing data received from a server to provide data-on-demand services comprises the steps of: receiving a prefetch data stream containing prefetch data blocks corresponding to a selected DOD service; storing said prefetch data blocks in a memory location; receiving a primary data stream containing primary data blocks corresponding to said selected DOD service; and processing said primary data blocks and said prefetch data blocks in order to enable a user to access said selected DOD service.
  • The method may further include: receiving user input indicative of said selected DOD service; switching to a channel corresponding to said selected DOD service in response to said user input; and receiving said primary data stream from said channel corresponding to said selected DOD service.
  • the method for processing data received from a server is performed by a set-top box at the client side.
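The client-side steps above can be sketched as a minimal set-top-box model. The class and method names are hypothetical illustrations of the claimed steps (cache the prefetch blocks, then merge them with the primary stream), not the patent's implementation:

```python
class SetTopBox:
    """Client-side sketch of the pre-fetch method (names hypothetical)."""

    def __init__(self):
        self.cache = {}  # memory location for prefetch data blocks

    def receive_prefetch(self, blocks):
        # store prefetch data blocks keyed by block number
        for blk_id, data in blocks:
            self.cache[blk_id] = data

    def play(self, primary_blocks, total_blocks):
        # merge primary-stream blocks with cached prefetch blocks so the
        # selected DOD service is accessible in sequential order
        received = dict(primary_blocks)
        return [received.get(i, self.cache.get(i)) for i in range(total_blocks)]
```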
  • a data-on-demand system comprises a first set of channel servers, a central controlling server for controlling the first set of channel servers, a first set of up-converters coupled to the first set of channel servers, and a combiner/amplifier coupled to the first set of up-converters and adapted to transmit data via a transmission medium.
  • the data-on-demand system further comprises a channel monitoring module for monitoring the system, a switch matrix, a second set of channel servers, and a second set of up-converters. The channel monitoring module is configured to report to the central controlling server when system failure occurs.
  • the central controlling server, in response to a report from the channel monitoring module, instructs the switch matrix to replace a defective channel server in the first set of channel servers with a channel server in the second set of channel servers, and a defective up-converter in the first set of up-converters with an up-converter in the second set of up-converters.
  • Another embodiment of the present invention teaches a universal STB capable of receiving and handling a plurality of digital services such as VOD and digital broadcast.
  • This embodiment teaches a universal STB having a highly flexible architecture capable of sophisticated processing of received data.
  • This architecture includes a databus, a first communication device suitable for coupling to a digital broadcast communications medium, a memory typically including persistent and transient memory bi-directionally coupled to the databus, a digital data decoder bi-directionally coupled to the databus, and a central processing unit (CPU) bi-directionally coupled to the databus.
  • the CPU of this embodiment of the present invention implements a STB control process for controlling the memory, the digital decoder, and the demodulator.
  • the STB control process is operable to process digital data such as that received at the first communications device.
  • the STB control process should be capable of receiving data blocks derived from a decreased idle time scheduling matrix as well as parallel streaming of such data blocks.
  • the complex STB architectures allow a data-block optimized preloading data stream to be broadcast and loaded into an idle STB.
  • the pre-loading of specific data blocks into a STB allows bandwidth savings at critical times.
  • the pre-loading data stream (or "prefetch") can be programmed to pre-deliver different data sequences at different times of the day based on VOD preferences.
  • FIG. 1 A illustrates an exemplary DOD system in accordance with an embodiment of the invention.
  • FIG. 1B illustrates an exemplary DOD system in accordance with another embodiment of the invention.
  • FIG. 2 illustrates an exemplary channel server in accordance with an embodiment of the invention.
  • FIG. 3 illustrates an exemplary set-top box in accordance with an embodiment of the invention.
  • FIG. 4 illustrates an exemplary process for generating a scheduling matrix in accordance with an embodiment of the invention.
  • FIG. 5 graphically illustrates an example of a scheduling matrix of a six data block file.
  • FIG. 6 graphically illustrates how the data blocks of the scheduling matrix in FIG. 5 are moved up until all idle time slots are filled.
  • FIG. 7 graphically illustrates a new decreased idle time scheduling matrix.
  • FIG. 8 depicts the addition of the decreased idle time embodiment.
  • FIG. 9 is a flow chart diagram illustrating how the decreased idle time embodiment is accomplished.
  • FIG. 10 is a flow chart diagram illustrating a process for scheduling DOD data blocks for transmission on a primary data stream and a prefetch data stream in accordance with one embodiment of the present invention.
  • FIG. 11 is a flow chart diagram illustrating a process for scheduling DOD data blocks for transmission on a primary data stream and a prefetch data stream in accordance with an alternative embodiment of the present invention.
  • FIG. 12 is a flow chart diagram illustrating a set-top-box pre-loading process in accordance with one embodiment of the present invention.
  • FIG. 1A illustrates an exemplary DOD system 100 in accordance with an embodiment of the invention.
  • the DOD system 100 provides data files, such as video files, on demand.
  • the DOD system 100 is not limited to providing video files on demand but is also capable of providing other data files, for example, game files on demand.
  • the DOD system 100 includes a central controlling server 102, a central storage 103, a plurality of channel servers 104a-104n, a plurality of up-converters 106a-106n, and a combiner/amplifier 108.
  • the central controlling server 102 controls the channel servers 104.
  • the central storage 103 stores data files in digital format.
  • data files stored in the central storage 103 are accessible via a standard network interface (e.g., Ethernet connection) by any authorized computer, such as the central controlling server 102, connected to the network.
  • Each channel server 104 is assigned to a channel and is coupled to an up-converter 106.
  • the channel servers 104 provide data files that are retrieved from the central storage 103 in accordance with instructions from the central controlling server 102.
  • the output of each channel server 104 is a quadrature amplitude modulation (QAM) modulated intermediate frequency (IF) signal having a suitable frequency for the corresponding up-converter 106.
  • QAM-modulated IF signals are dependent upon adopted standards.
  • the current adopted standard in the United States is the data-over-cable-systems interface-specification (DOCSIS) standard, which requires an approximately 43.75MHz IF frequency.
  • the up-converters 106 convert IF signals received from the channel servers 104 to radio frequency signals (RF signals).
  • RF signals, which include frequency and bandwidth, are dependent on a desired channel and adopted standards. For example, under the current standard in the United States for cable television channel 80, the RF signal has a frequency of approximately 559.25MHz and a bandwidth of approximately 6MHz.
  • the outputs of the up-converters 106 are applied to the combiner/amplifier 108.
  • the combiner/amplifier 108 amplifies, conditions, and combines the received RF signals, then outputs them to a transmission medium 110.
  • the central controlling server 102 includes a graphics user interface (not shown) to enable a service provider to schedule data delivery by a drag-and-drop operation. Further, the central controlling server 102 authenticates and controls the channel servers 104 to start or stop according to delivery matrices. In an exemplary embodiment, the central controlling server 102 automatically selects a channel and calculates delivery matrices for transmitting data files in the selected channel. The central controlling server 102 provides offline addition, deletion, and update of data file information (e.g., duration, category, rating, and/or brief description). Further, the central controlling server 102 controls the central storage 103 by updating data files and databases stored therein.
  • an existing cable television system 120 may continue to feed signals into the combiner/amplifier 108 to provide non-DOD services to clients.
  • the DOD system 100 in accordance with the invention does not disrupt present cable television services.
  • FIG. IB illustrates another exemplary embodiment of the DOD system 100 in accordance with the invention.
  • the DOD system 100 includes a switch matrix 112, a channel monitoring module 114, a set of back-up channel servers 116a-116b, and a set of back-up up-converters 118a-118b.
  • the switch matrix 112 is physically located between the up-converters 106 and the combiner/amplifier 108.
  • the switch matrix 112 is controlled by the central controlling server 102.
  • the channel monitoring module 114 comprises a plurality of configured set-top boxes, which simulate potential clients, for monitoring the health of the DOD system 100.
  • Monitoring results are communicated by the channel monitoring module 114 to the central controlling server 102.
  • the central controlling server 102 through the switch matrix 112 disengages the malfunctioning component and engages a healthy backup component 116 and/or 118 to resume service.
  • data files being broadcast from the DOD system 100 are contained in Moving Picture Experts Group (MPEG) files.
  • Each MPEG file is dynamically divided into data blocks and sub-blocks mapping to a particular portion of a data file along a time axis. These data blocks and sub-blocks are sent during a pre-determined time in accordance with three- dimensional delivery matrices provided by the central controlling server 102.
  • a feedback channel is not necessary for the DOD system 100 to provide DOD services. However, if a feedback channel is available, it can be used for other purposes, such as billing or providing Internet services.
  • FIG. 2 illustrates an exemplary channel server 104 in accordance with an embodiment of the invention.
  • the channel server 104 comprises a server controller 202, a CPU 204, a QAM modulator 206, a local memory 208, and a network interface 210.
  • the server controller 202 controls the overall operation of the channel server 104 by instructing the CPU 204 to divide data files into blocks (further into sub-blocks and data packets), select data blocks for transmission in accordance with a delivery matrix provided by the central controlling server 102, encode selected data, compress encoded data, then deliver compressed data to the QAM modulator 206.
  • the QAM modulator 206 receives data to be transmitted via a bus (e.g., PCI or CPU local bus) or an Ethernet connection.
  • the QAM modulator 206 may include a downstream QAM modulator, an upstream quadrature amplitude modulation/quadrature phase shift keying (QAM/QPSK) burst demodulator with forward error correction decoder, and/or an upstream tuner.
  • the output of the QAM modulator 206 is an IF signal that can be applied directly to an up-converter 106.
  • the network interface 210 connects the channel server 104 to other channel servers 104 and to the central controlling server 102 in order to execute scheduling and controlling instructions from the central controlling server 102, report status back to the central controlling server 102, and receive data files from the central storage 103. Any data file retrieved from the central storage 103 can be stored in the local memory 208 of the channel server 104 before the data file is processed in accordance with instructions from the server controller 202.
  • the channel server 104 may send one or more DOD data streams depending on the bandwidth of a cable channel (e.g., 6, 6.5, or 8MHz), the QAM modulation (e.g., QAM 64 or QAM 256), and the compression standard/bit rate of the DOD data stream (e.g., MPEG-1 or MPEG-2).
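As a rough illustration of how these parameters interact, the number of parallel streams is simply the channel payload divided by the per-stream bit rate. The throughput figures below are commonly cited ballpark values (QAM-64 in a 6 MHz channel carries on the order of 27 Mbit/s), not numbers taken from the patent:

```python
def streams_per_channel(channel_mbps: float, stream_mbps: float) -> int:
    """Whole DOD data streams that fit in one cable channel.

    Illustrative sketch only; throughput values are assumptions,
    not figures stated in the patent.
    """
    return int(channel_mbps // stream_mbps)

# e.g. QAM-64 in a 6 MHz channel (~27 Mbit/s payload) with ~4 Mbit/s
# MPEG-2 streams allows roughly six parallel DOD data streams
```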
  • FIG. 3 illustrates a universal set-top box (STB) 300 in accordance with one embodiment of the invention.
  • the STB 300 comprises a QAM demodulator 302, a CPU 304, a local memory 308, a buffer memory 310, a decoder 312 having video and audio decoding capabilities, a graphics overlay module 314, a user interface 318, a communications link 320, and a fast data bus 322 coupling these devices as illustrated.
  • the CPU 304 controls overall operation of the universal STB 300 in order to select data in response to a client's request, decode selected data, decompress decoded data, reassemble decoded data, store decoded data in the local memory 308 or the buffer memory 310, and deliver stored data to the decoder 312.
  • the local memory 308 comprises non-volatile memory (e.g., a hard drive) and the buffer memory 310 comprises volatile memory.
  • the QAM demodulator 302 comprises transmitter and receiver modules and one or more of the following: privacy encryption/decryption module, forward error correction decoder/encoder, tuner control, downstream and upstream processors, CPU and memory interface circuits.
  • the QAM demodulator 302 receives modulated IF signals, samples and demodulates the signals to restore data.
  • the decoder 312 when access is granted, decodes at least one data block to transform the data block into images displayable on an output screen.
  • the decoder 312 supports commands from a subscribing client, such as play, stop, pause, step, rewind, forward, etc.
  • the decoder 312 provides decoded data to an output device 324 for use by the client.
  • the output device 324 may be any suitable device such as a television, computer, any appropriate display monitor, a VCR, or the like.
  • the graphics overlay module 314 enhances displayed graphics quality by, for example, providing alpha blending or picture-in-picture capabilities.
  • the graphics overlay module 314 can be used for graphics acceleration during game playing mode, for example, when the service provider provides games-on-demand services using the system in accordance with the invention.
  • the user interface 318 enables user control of the STB 300, and may be any suitable device such as a remote control device, a keyboard, a smartcard, etc.
  • the communications link 320 provides an additional communications connection. This may be coupled to another computer, or may be used to implement bi-directional communication.
  • the data bus 322 is preferably a commercially available "fast" data bus suitable for performing data communications in a real-time manner as required by the present invention. Suitable examples are USB, FireWire, etc.
  • although data files are broadcast to all cable television subscribers, only DOD subscribers who have a compatible STB 300 will be able to decode and enjoy data-on-demand services.
  • permission to obtain data files on demand can be obtained via a smart card system in the user interface 318.
  • a smart card may be rechargeable at a local store or vending machine set up by a service provider.
  • a flat fee system provides a subscriber unlimited access to all available data files.
  • data-on-demand interactive features permit a client to select any available data file at any time. The amount of time between when a client presses a select button and when the selected data file begins playing is referred to as the response time. As more resources are allocated, the response time gets shorter.
  • a response time can be determined based on an evaluation of resource allocation and desired quality of service.
  • the number of data blocks (NUM_OF_BLKS) for each data file can be calculated as follows:
  • Estimated_BLK_Size = (DataFile_Size * TS) / DataFile_Length (1)
  • BLK_SIZE = (Estimated_BLK_Size + CLUSTER_SIZE - 1 Byte) / CLUSTER_SIZE (2)
  • BLK_SIZE_BYTES = BLK_SIZE * CLUSTER_SIZE (3)
  • NUM_OF_BLKS = (DataFile_Size + BLK_SIZE_BYTES - 1 Byte) / BLK_SIZE_BYTES (4)
  • the Estimated_BLK_Size is an estimated block size (in Bytes); the DataFile_Size is the data file size (in Bytes); TS represents the duration of a time slot (in seconds); DataFile_Length is the duration of the data file (in seconds); BLK_SIZE is the number of clusters needed for each data block; CLUSTER_SIZE is the size of a cluster in the local memory 208 of each channel server 104 (e.g., 64KBytes); BLK_SIZE_BYTES is the block size in Bytes.
  • the number of blocks (NUM_OF_BLKS) is equal to the data file size (in Bytes), plus the data block size in Bytes minus 1 Byte, divided by the data block size in Bytes. Equations (1) to (4) illustrate one specific embodiment. A person of skill in the art would recognize that other methods are available to calculate the number of data blocks for a data file. For example, dividing a data file into a number of data blocks is primarily a function of an estimated block size and the cluster size of the local memory 208 of a channel server 104. Thus, the invention should not be limited to the specific embodiment presented above.
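Equations (1) through (4) can be checked with a short sketch; the `(a + b - 1) // b` idiom implements the "plus size minus 1 Byte, divided by size" ceiling divisions. The function name and sample numbers below are illustrative, not from the patent:

```python
def num_of_blks(datafile_size, datafile_length, ts, cluster_size=64 * 1024):
    """Number of data blocks for a data file, per equations (1)-(4).

    Sizes are in Bytes, durations in seconds; cluster_size defaults
    to the 64 KByte cluster size given for the channel server's
    local memory 208.
    """
    estimated_blk_size = (datafile_size * ts) // datafile_length          # (1)
    blk_size = (estimated_blk_size + cluster_size - 1) // cluster_size    # (2) clusters per block
    blk_size_bytes = blk_size * cluster_size                              # (3)
    return (datafile_size + blk_size_bytes - 1) // blk_size_bytes         # (4)
```

For example, a 600 MByte, one-hour file with 6-second time slots yields 16-cluster (1,048,576-Byte) blocks and 573 data blocks.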
  • FIG. 4 illustrates an exemplary process for generating a scheduling matrix for sending a data file in accordance with an embodiment of the invention.
  • this invention uses time division multiplexing (TDM) and frequency division multiplexing (FDM) technology to compress and schedule data delivery at the server side.
  • a scheduling matrix is generated for each data file.
  • each data file is divided into a number of data blocks and the scheduling matrix is generated based on the number of data blocks.
  • a scheduling matrix provides a send order for sending data blocks of a data file from a server to clients, such that the data blocks are accessible in sequential order by any client who wishes to access the data file at a random time.
  • a number of data blocks (x) for a data file is received.
  • a first variable, j is set to zero (step 404).
  • a reference array is cleared (step 406). The reference array keeps track of data blocks for internal management purposes.
  • j is compared to x (step 408). If j is less than x, a second variable, i, is set to zero (step 412).
  • i is compared to x (step 414). If i is less than x, data blocks stored in column [(i+j) modulo (x)] of the scheduling matrix are written into the reference array (step 418); if the reference array already has such data block(s), a duplicate copy is not written. If the reference array does not contain data block i, data block i is written into position [(i+j) modulo (x), j] of the scheduling matrix and into the reference array (steps 420 and 422).
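The loop of steps 404 through 422 can be condensed into a short sketch. The Python representation is hypothetical (the patent describes the matrix abstractly): each column of the scheduling matrix SM is a time slot, and block i is scheduled into column (i + j) mod x only when the reference array shows it is not already reachable from slot j:

```python
def build_scheduling_matrix(x):
    """Generate a send order for x data blocks (sketch of FIG. 4).

    sm[col] lists the data blocks sent in time slot `col`. By
    construction, a client starting at any slot s receives block b
    no later than slot s + b, so the file plays back sequentially
    from a random start time.
    """
    sm = [[] for _ in range(x)]
    for j in range(x):                 # steps 404/408: outer loop over start slots
        ra = set()                     # step 406: clear the reference array
        for i in range(x):             # steps 412/414: inner loop over blocks
            col = (i + j) % x
            ra.update(sm[col])         # step 418: copy column into RA, no duplicates
            if i not in ra:            # step 420: is block i already covered?
                sm[col].append(i)      # step 422: write block i into SM and RA
                ra.add(i)
    return sm
```

For the six-block example traced below, this reproduces the walkthrough: blk0 through blk5 fill row 0, then j = 1 adds blk0 to column 1, j = 2 adds blk0 to column 2, and so on.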
  • the scheduling matrix and the reference arrays are as follows:
  • Step 418) i is less than x (0 < 6).
  • Step 420) Does RA contain data block i, or blk0?
  • Step 422) RA does not contain anything because it is empty. Write blk0 into position [0, 0] in SM and the RA.
  • Step 418) i is less than x (1 < 6).
  • Step 420) Does RA contain data block i, or blk1?
  • Step 422) RA does not contain blk1. Write blk1 into position [1, 0] in SM and the RA.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (2 < 6).
  • Step 422) RA does not contain blk2. Write blk2 into position [2, 0] in SM and the RA.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (3 < 6).
  • Step 422) RA does not contain blk3. Write blk3 into position [3, 0] in SM and the RA.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (4 < 6).
  • Read matrix positions of column [4] in the SM and write to RA; initially, the SM is empty so nothing is written into RA.
  • RA does not contain blk4. Write blk4 into position [4, 0] in SM and the RA.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (5 < 6).
  • RA does not contain blk5. Write blk5 into position [5, 0] in SM and the RA.
  • Step 414) Compare i to x.
  • Step 406) Clear the Reference Array (RA).
  • Step 408) Compare j to x.
  • Step 414) Compare i to x.
  • Position [1, 0] contains blk1; thus, blk1 is written into RA. All other positions are empty.
  • RA does not contain blk0.
  • RA now has blk1 and blk0.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (1 < 6). Read matrix positions of column [2] in the SM and write to RA.
  • Position [2, 0] contains blk2. All other positions are empty. RA now has blk1, blk0, and blk2.
  • Step 420) Does RA contain data block i, or blk1?
  • Step 414) Compare i to x.
  • Step 418) i is less than x (2 < 6). Read matrix positions of column [3] in the SM and write to RA.
  • Position [3, 0] contains blk3. All other positions are empty.
  • RA now has blk1, blk0, blk2, and blk3.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (3 < 6). Read matrix positions of column [4] in the SM and write to RA.
  • Position [4, 0] contains blk4. All other positions are empty.
  • RA now has blk1, blk0, blk2, blk3, and blk4.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (4 < 6). Read matrix positions of column [5] in the SM and write to RA.
  • Position [5, 0] contains blk5. All other positions are empty.
  • RA now has blk1, blk0, blk2, blk3, blk4, and blk5.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (5 < 6). Read matrix positions of column [0] in the SM and write to RA.
  • Position [0, 0] contains blk0. All other positions are empty. RA already contains blk0; thus, blk0 is discarded.
  • Step 414) Compare i to x.
  • Step 406) Clear the Reference Array (RA).
  • Step 408) Compare j to x.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (0 < 6). Read matrix positions of column [2] in the SM and write to RA.
  • Position [2, 0] contains blk2. All other positions are empty. RA now has blk2.
  • RA does not contain blk0. Write blk0 into position [2, 2] in the SM and the RA. RA now has blk2 and blk0.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (1 < 6). Read matrix positions of column [3] in the SM and write to RA.
  • Position [3, 0] contains blk3. All other positions are empty.
  • RA now has blk2, blk0, and blk3.
  • RA does not contain blk1. Write blk1 into position [3, 2] in the SM and the RA.
  • RA now has blk2, blk0, blk3, and blk1.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (2 < 6). Read matrix positions of column [4] in the SM and write to RA.
  • Position [4, 0] contains blk4. All other positions are empty.
  • RA now has blk2, blk0, blk3, blk1, and blk4.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (3 < 6). Read matrix positions of column [5] in the SM and write to RA.
  • Position [5, 0] contains blk5. All other positions are empty.
  • RA now has blk2, blk0, blk3, blk1, blk4, and blk5.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (4 < 6). Read matrix positions of column [0] in the SM and write to RA.
  • Position [0, 0] contains blk0. All other positions are empty. RA already contains blk0; thus, blk0 is discarded.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (5 < 6). Read matrix positions of column [1] in the SM and write to RA.
  • Position [1, 0] contains blk1 and position [1, 1] contains blk0.
  • RA already contains blk1 and blk0; thus, blk1 and blk0 are discarded. All other positions are empty.
  • Step 414) Compare i to x.
  • Step 414) Compare i to x.
  • Step 406) Clear the Reference Array (RA).
  • Step 408) Compare j to x.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (0 < 6). Read matrix positions of column [3] in the SM and write to RA.
  • Position [3, 0] contains blk3 and position [3, 2] contains blk1.
  • Blk3 and blk1 are written into RA.
  • Step 420) Does RA contain data block i, or blk0?
  • Step 422) RA does not contain blk0. Write blk0 into position [3, 3] in the SM and the RA. RA now has blk3, blk1, and blk0.
  • Step 424) Add 1 to i (i+1) to derive the value for position [4, 3]. Go back to Step 414.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (1 < 6). Read matrix positions of column [4] in the SM and write to RA.
  • Position [4, 0] contains blk4. All other positions are empty.
  • RA now has blk3, blk1, blk0, and blk4.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (2 < 6). Read matrix positions of column [5] in the SM and write to RA.
  • Position [5, 0] contains blk5. All other positions are empty.
  • RA now has blk3, blk1, blk0, blk4, and blk5.
  • RA does not contain blk2. Write blk2 into position [5, 3] in the SM and the RA.
  • RA now has blk3, blk1, blk0, blk4, blk5, and blk2.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (3 < 6). Read matrix positions of column [0] in the SM and write to RA.
  • Position [0, 0] contains blk0. All other positions are empty. RA already contains blk0; thus, discard blk0.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (4 < 6). Read matrix positions of column [1] in the SM and write to RA.
  • Position [1, 0] contains blk1 and position [1, 1] contains blk0. All other positions are empty.
  • RA already contains blk1 and blk0; do not write a duplicate copy.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (5 < 6). Read matrix positions of column [2] in the SM and write to RA.
  • Position [2, 0] contains blk2 and position [2, 2] contains blk0. All other positions are empty.
  • RA already contains blk2 and blk0; do not write a duplicate copy.
  • Step 414) Compare i to x.
  • Step 406) Clear the Reference Array (RA).
  • Step 408) Compare j to x.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (0 < 6). Read matrix positions of column [4] in the SM and write to RA.
  • Position [4, 0] contains blk4. Blk4 is written into RA. All other positions are empty.
  • RA does not contain blk0. Write blk0 into position [4, 4] in the SM and the RA.
  • RA now has blk4 and blk0.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (1 < 6). Read matrix positions of column [5] in the SM and write to RA.
  • Position [5, 0] contains blk5 and position [5, 3] contains blk2. All other positions are empty.
  • RA now has blk4, blk0, blk5, and blk2.
  • RA does not contain blk1. Write blk1 into position [5, 4] in the SM and the RA.
  • RA now has blk4, blk0, blk5, blk2, and blk1.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (2 < 6). Read matrix positions of column [0] in the SM and write to RA.
  • Position [0, 0] contains blk0. All other positions are empty. RA already contains blk0; thus, do not write a duplicate copy.
  • Step 420) Does RA contain data block i, or blk2?
  • Step 414) Compare i to x.
  • Step 418) i is less than x (3 < 6). Read matrix positions of column [1] in the SM and write to RA.
  • Position [1, 0] contains blk1 and position [1, 1] contains blk0. All other positions are empty.
  • RA already contains blk1 and blk0; do not write a duplicate copy.
  • RA does not contain blk3. Write blk3 into position [1, 4] in the SM and the RA.
  • RA now has blk4, blk0, blk5, blk2, blk1, and blk3.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (4 < 6). Read matrix positions of column [2] in the SM and write to RA.
  • Position [2, 0] contains blk2 and position [2, 2] contains blk0. All other positions are empty.
  • RA already contains blk2 and blk0; do not write a duplicate copy.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (5 < 6). Read matrix positions of column [3] in the SM and write to RA.
  • Position [3, 0] contains blk3, position [3, 2] contains blk1, and position [3, 3] contains blk0. All other positions are empty.
  • RA already contains blk3, blk1, and blk0; do not write a duplicate copy.
  • Step 414) Compare i to x.
  • Step 406) Clear the Reference Array (RA).
  • Step 408) Compare j to x.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (0 < 6). Read matrix positions of column [5] in the SM and write to RA.
  • Position [5, 0] contains blk5, position [5, 3] contains blk2, and position [5, 4] contains blk1.
  • Blk5, blk2, and blk1 are written into RA. All other positions are empty.
  • RA does not contain blk0. Write blk0 into position [5, 5] in the SM and the RA.
  • RA now has blk5, blk2, blk1, and blk0.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (1 < 6). Read matrix positions of column [0] in the SM and write to RA.
  • Position [0, 0] contains blk0 and all other positions are empty.
  • RA now has blk5, blk2, blk1, and blk0.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (2 < 6). Read matrix positions of column [1] in the SM and write to RA.
  • Position [1, 0] contains blk1, position [1, 1] contains blk0, and position [1, 4] contains blk3. All other positions are empty.
  • RA already contains blk0 and blk1; thus, do not write a duplicate copy.
  • RA now has blk5, blk2, blk1, blk0, and blk3.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (3 < 6). Read matrix positions of column [2] in the SM and write to RA.
  • Position [2, 0] contains blk2 and position [2, 2] contains blk0. All other positions are empty.
  • RA already contains blk2 and blk0; do not write a duplicate copy.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (4 < 6). Read matrix positions of column [3] in the SM and write to RA.
  • Position [3, 0] contains blk3, position [3, 2] contains blk1, and position [3, 3] contains blk0. All other positions are empty.
  • RA already contains blk3, blk1, and blk0; do not write a duplicate copy. Step 420) Does RA contain data block i, or blk4?
  • Step 422) RA does not contain blk4. Write blk4 into position [3, 5] of the SM and the RA.
  • RA now has blk5, blk2, blk1, blk0, blk3, and blk4.
  • Step 414) Compare i to x.
  • Step 418) i is less than x (5 < 6). Read matrix positions of column [4] in the SM and write to RA.
  • Position [4, 0] contains blk4 and position [4, 4] contains blk0. All other positions are empty. RA already contains blk4 and blk0; do not write a duplicate copy.
  • Step 420) RA does contain blk5. Thus, nothing is written into position [4, 5].
  • Step 414) Compare i to x.
  • Step 414) Compare i to x.
  • Step 406) Clear the Reference Array (RA).
  • Step 408) Compare j to x.
  • Step 410) j is equal to x (6 = 6); END.
  • the six data blocks of the data file are sent in the following sequence:
  • a look-ahead process can be used to calculate a look-ahead scheduling matrix to send a predetermined number of data blocks of a data file prior to a predicted access time. For example, if a predetermined look-ahead time is the duration of one time slot, then for any time slot greater than or equal to time slot number four, data block 4 (blk4) of a data file should be received by an STB 300 at a subscribing client at or before TS3, but blk4 would not be played until TS4.
  • the process steps for generating a look-ahead scheduling matrix are substantially similar to the process steps described above for FIG.
  • look-ahead scheduling matrix schedules an earlier sending sequence based on a look-ahead time.
  • an exemplary sending sequence based on a look-ahead scheduling matrix, having a look-ahead time of the duration of two time slots can be represented as follows:
  • a three-dimensional delivery matrix for sending a set of data files is generated based on the scheduling matrices for each data file of the set of data files.
  • a third dimension containing IDs for each data file in the set of data files is generated.
  • the three-dimensional delivery matrix is calculated to efficiently utilize available bandwidth in each channel to deliver multiple data streams.
  • a convolution method which is well known in the art, is used to generate a three-dimensional delivery matrix to schedule an efficient delivery of a set of data files.
  • a convolution method may include the following policies: (1) the total number of data blocks sent in the duration of any time slot (TS) should be kept at the smallest possible number; and (2) if multiple partial solutions satisfy policy (1), the preferred solution is the one with the smallest sum of the data blocks to be sent during the duration of any reference time slot, the data blocks to be sent during the duration of the previous time slot (with respect to the reference time slot), and the data blocks to be sent during the duration of the next time slot (with respect to the reference time slot).
  • the sending sequence based on a scheduling matrix is as follows:
  • possible combinations of delivery matrices are as follows: Option 1: Send video file N at shift 0. TS Total Data Blocks
  • N0 2 TS1 M0, M1, M3, N0, N1, N3 6
  • M1, M3, M4, N0, N1, N3, N4 8 TS4 M0
  • Option 2: Send video file N at shift 1. TS Total Data Blocks
  • Option 3: Send video file N at shift 2. TS Total Data Blocks
  • TS3 M0, M1, M3, M4, N0, N1, N2, N5 8
  • TS5 M0, M1, M2, M5, N0, N1, N3 7
  • Option 4: Send video file N at shift 3. TS Total Data Blocks
  • TS5 M0, M1, M2, M5, N0, N1, N2 6
  • Option 5: Send video file N at shift 4. TS Total Data Blocks
  • TS1 M0, M1, M3, N0, N1, N2, N5 7
  • options 2, 4, and 6 have the smallest maximum number of data blocks (i.e., 6 data blocks) sent during any time slot.
  • the optimal delivery matrix in this exemplary embodiment is option 4 because option 4 has the smallest sum of data blocks of any reference time slot plus data blocks of neighboring time slots (i.e., 16 data blocks).
  • the sending sequence of the data file N should be shifted by three time slots.
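The two convolution policies can be sketched in Python (an illustrative reconstruction; the per-slot block counts below are taken from the six-block scheduling matrix example and are assumed, for illustration, to describe both files M and N):

```python
def best_shift(cm, cn):
    """Pick the cyclic shift of file N's sending sequence that packs
    best with file M's, per policies (1) and (2) of the text."""
    x = len(cm)
    def metrics(s):
        # combined blocks per slot when N is shifted by s slots
        comb = [cm[t] + cn[(t - s) % x] for t in range(x)]
        peak = max(comb)                                   # policy (1)
        window = max(comb[t - 1] + comb[t] + comb[(t + 1) % x]
                     for t in range(x))                    # policy (2)
        return peak, window
    return min(range(x), key=metrics)

# Per-slot counts of the six-block example matrix, used for both files:
counts = [1, 3, 2, 4, 2, 4]
print(best_shift(counts, counts))  # → 3
```

Under these assumed inputs, shifts 1, 3, and 5 tie at a maximum of six blocks per slot, and shift 3 wins the tie-break with a three-slot sum of 16 blocks, matching the selection of option 4 above.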
  • a three-dimensional delivery matrix is generated for each channel server 104.
  • the DOD system 100 sends data blocks for data files M and N in accordance with the optimal delivery matrix (i.e., shift delivery sequence of data file N by three time slots) in the following manner:
  • the STB 300 at client B receives, stores, plays, and rejects data blocks as follows:
  • the STB 300 at the client D receives, stores, plays, and rejects data blocks as follows:
  • any combination of clients can at a random time independently select and begin playing any data file provided by the service provider.
  • the above denotation of “Receive” is slightly misleading: the system is always receiving a continuous stream of data blocks determined by the time slot, but at any given point the receiving STB may only require certain data blocks, having already received and stored the other data blocks. This need is referred to as “receive” above, but may be more accurately referred to as “not rejected.” Therefore, “receive M4” could be termed “reject all but M4,” and “receive none” could better be termed “reject all.”
  • a goal of this invention is to decrease idle time as much as possible; therefore, one embodiment of the present invention performs another step after the scheduling matrix is determined, producing what is referred to herein as a decreased idle time scheduling matrix.
  • the scheduling matrix clearly has unused bandwidth in the form of idle time during most time slots.
  • the present invention teaches reduction of this idle time by utilizing constant bandwidth from time slot to time slot.
  • the key to accomplishing decreased idle transmission time through constant bandwidth utilization is an understanding that the delivery sequence of the data blocks must be adhered to, while the exact time slot in which a data block is delivered is not relevant except that the data block must be received prior to or at the time in which it must be accessed. Accordingly, constant bandwidth utilization is accomplished by transmitting a constant number of data blocks within each time slot according to the delivery sequence set forth by the scheduling matrix and with disregard to the time slot assigned by the scheduling matrix.
  • the idle time is decreased by moving forward data blocks until four data blocks are scheduled for transmission during each time slot.
  • the procedure for this is to take the next data block in sequence and move it to the empty space. So for this example, the first block in TS1, blk0, is moved to TS0. The next block in TS1, blk1, is also moved up. Then, since TS0 still has an empty data block space, blk3 from TS1 is also moved up. TS0 then has all of its spaces filled, and now looks like:
  • TS2 blk3, blk4, blk0, blk4
  • FIG. 7 graphically depicts this new repeating matrix created by filling up idle time.
  • this additional step is a relatively simple step 510, performed at what was the end of the procedure 410.
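The fill-forward of step 510 can be sketched as follows (illustrative; `sm` is a scheduling matrix like the six-block example traced above, and `per_slot` is the maximum bandwidth of the original matrix — note the concrete figures in the text come from a different example matrix, so the slot contents shown here follow the six-block trace):

```python
def constant_bandwidth(sm, per_slot):
    """Repack a scheduling matrix so every time slot carries exactly
    `per_slot` blocks, preserving the delivery order (step 510 sketch)."""
    seq = [blk for slot in sm for blk in slot]   # only the order must be kept
    return [seq[k:k + per_slot] for k in range(0, len(seq), per_slot)]

# With the six-block matrix and its maximum bandwidth of four blocks per
# slot, six partially idle slots compress into four full slots:
constant_bandwidth([[0], [1, 0, 3], [2, 0], [3, 1, 0, 4], [4, 0], [5, 2, 1, 0]], 4)
# → [[0, 1, 0, 3], [2, 0, 3, 1], [0, 4, 4, 0], [5, 2, 1, 0]]
```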
  • FIGS. 4-10 dealt with an instance where the selected bandwidth was set to a constant equal to an integer number of data blocks.
  • the constant bandwidth need not be equal to an integer number of data blocks.
  • the delivery sequence adheres to the sequence developed in FIG. 8.
  • the data stream generated by the delivery sequence developed in FIG. 8 is then provided to a lower level hardware device (e.g., a network card or the channel server) which controls broadcast of the digital data. Rather than broadcasting an integer number of data blocks, the lower level hardware device will transmit as much data as possible within the bandwidth allocated to the file.
  • the delivery matrix provides the sequence and the lower level hardware device controls broadcast of data utilizing the allocated bandwidth.
  • an allocated bandwidth which includes a fraction of a data block size can be fully utilized.
  • the lower level device will pause broadcast of this particular data file until bandwidth is again available.
  • the maximum bandwidth used in the original scheduling matrix is used in the decreased idle time matrix. This is so that, no matter at what point a user begins receiving data, the maximum wait time has not changed.
  • bandwidth may be adjusted, at either a cost or a benefit in time. For a data-on-demand service, it is important that the maximum time taken does not exceed the time it takes to execute the file. If this were to happen, the result would be a scheduling matrix with a total number of time slots greater than the number of time slots in the original scheduling matrix. If this were a two-hour movie, for example, it might take three hours to play, leaving gaps in the middle of the movie. Some applications, however, might be able to use or even desire this behavior, such as streams of data that can be downloaded without being used immediately.
  • decreasing the idle time is very useful for calculating a scheduling matrix for a single stream of data.
  • an assigned amount of bandwidth may be fully used when transmitting a stream of data.
  • an aspect of the invention is creating a three-dimensional delivery matrix, with which a decreased idle time delivery matrix may also be calculated and implemented in exactly the same fashion. However, once the decreased idle time delivery matrix is applied to a single stream, and multiple fully optimized single streams are then combined together, the result will likely outperform the three-dimensional matrix system in most circumstances.
  • a computer implemented method for transmission of an on-demand data file comprising an act of preparing a delivery matrix defining a repeating data transmission sequence suitable for broadcast over a medium to a plurality of clients in a non-specific manner.
  • This act of preparing the delivery matrix further comprises reducing a data file into data blocks having at least a first block, and ordering the data blocks into said repeating data transmission sequence. A user may therefore receive the repeating data transmission sequence and begin using the data file in an uninterrupted manner as soon as the first block is received.
  • This repeating data transmission sequence requires a pre-determined bandwidth, and there is de minimis idle time in transmission of the repeating data transmission sequence. Transmission of the data-on-demand file also requires an amount of transmission bandwidth that is independent of the number of clients.
  • FIG. 9 summarizes in flow chart form how a decreased idle time scheduling sequence is determined.
  • First 520 an original scheduling matrix is generated for a data file.
  • This original scheduling matrix is simply a non-decreased idle time scheduling matrix in accordance with the present invention. It is referred to as "original" for clarity purposes. What becomes apparent about the original scheduling matrix is not the structured matrix itself, but the order in which the data blocks are derived. The order of the data blocks in the original matrix is therefore of primary importance in determining a decreased idle time scheduling matrix.
  • the original matrix is therefore treated as a scheduling sequence 530. As shown above, this scheduling sequence can be used to fill in the idle time in the original matrix. However, at this point it might be desirable to first adjust the amount of bandwidth assigned to the data file 540.
  • the difference between a scheduling sequence and a decreased idle time scheduling matrix is mainly cognitive.
  • a method for further reducing the bandwidth necessary for broadcasting scheduling matrices of DOD data blocks is to broadcast frequently occurring data blocks on a dedicated channel.
  • selected data blocks occur more frequently than other data blocks within the stream.
  • the original delivery matrix with idle time appeared as follows:
  • the delivery matrix with decreased idle time created by the process of FIG. 9 appears as follows:
  • TS1 blk0, blk2, blk0, blk1
  • the exemplary decreased idle time matrix represented as a linear repeating stream of data would appear as the following:
  • blk0 is transmitted more frequently than blocks 1-5.
  • Different delivery matrices may result in different data blocks occurring more or less frequently within a given stream of data blocks. In any delivery matrix, data blocks occurring earlier in the sequence will occur more frequently, with block 0 always occurring most frequently.
  • the bandwidth required to transmit the primary data stream is reduced by 37.5%. This reduction is due to blk0 comprising 6 of 16 total data blocks in the primary data stream, such that removal of blk0 effectively reduces the number of data blocks transmitted in the primary data stream by 6.
  • the additional bandwidth required to transmit the prefetch data stream containing blk0 is small in comparison, being dependent on the amount of buffering performed by a receiving STB. This buffering will be discussed in more detail with respect to FIG. 12 below.
  • a significant advantage to this method of transmission is decreased time required to access a selected data-on-demand service. As soon as blk0 of a selected DOD service is received, a user may begin using the selected service, whereas without the prefetch stream a user would have to wait until blk0 occurred in the primary data stream. Because blk0 is transmitted continuously over an independent stream, a service may be used without waiting for the next blk0 to be received from a stream containing many different data blocks.
  • FIG. 10 illustrates a process at 600 for scheduling DOD data blocks for transmission on a primary data stream and a prefetch data stream in accordance with one embodiment of the present invention.
  • a decreased idle time schedule is represented as a linear sequence of data blocks arranged in the order the data blocks would be transmitted.
  • the most frequently occurring data block is removed from the sequence leaving a shorter sequence of data blocks requiring a much narrower bandwidth.
  • Blk0 always occurs most frequently.
  • the data block removed from the decreased idle time sequence is placed in a prefetch data stream comprised entirely of blk0 data blocks.
  • the shortened sequence of data blocks is placed in a primary data stream.
  • the primary data stream and the prefetch data stream are transmitted as two separate repeating data streams on separate bandwidths to receiving set-top-boxes via the transmission medium 110 (FIG. 1B).
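The split described by FIG. 10 can be sketched in Python (an illustrative reconstruction using the decreased idle time sequence from the six-block example):

```python
from collections import Counter

def split_prefetch(seq):
    """FIG. 10 sketch: move the most frequently occurring block (always
    blk0) out of the repeating sequence into a dedicated prefetch stream."""
    top = Counter(seq).most_common(1)[0][0]     # most frequent block
    primary = [b for b in seq if b != top]      # shortened primary stream
    return primary, [top]

seq = [0, 1, 0, 3, 2, 0, 3, 1, 0, 4, 4, 0, 5, 2, 1, 0]
primary, prefetch = split_prefetch(seq)
# blk0 was 6 of 16 blocks, so the primary stream shrinks by 6/16 = 37.5%
```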
  • FIG. 11 illustrates a process at 650 for scheduling DOD data blocks for transmission on a primary data stream and a prefetch data stream in accordance with one embodiment, wherein multiple DOD data blocks from a DOD delivery matrix are transmitted on a prefetch data stream thereby minimizing the total necessary transmission bandwidth.
  • a decreased idle time schedule is represented as a linear sequence of data blocks arranged in the order the data blocks would be transmitted.
  • the most frequently occurring data block is removed from the sequence leaving a shorter primary sequence of data blocks requiring a much narrower bandwidth.
  • the data block removed from the decreased idle time sequence is placed in a prefetch data stream.
  • In step 658 a determination is made as to whether the bandwidth required to transmit the shorter primary stream of data blocks has been reduced below a predetermined threshold value. This step may require many complex sub-processes that would be apparent to one skilled in the art. If the bandwidth required for the primary data stream is below the threshold, the process continues to step 660. At step 660 the primary data stream and the prefetch data stream are transmitted as two separate repeating data streams on separate bandwidths to receiving set-top-boxes via the transmission medium 110 (FIG. 1B).
  • Otherwise, the process returns to step 654, wherein the data block occurring with the greatest frequency among the remaining data blocks is removed. Then in step 656 the removed data block is added to the prefetch data stream. The process continues to repeat until the required transmission bandwidth is determined to be below the threshold value in step 658, at which time the process concludes at step 660.
  • other criteria may be used instead of a threshold value, such as minimizing the combined bandwidth requirement for both the primary and prefetch data streams.
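The iterative removal of FIG. 11 can be sketched as follows (illustrative; a simple count of blocks per repetition stands in for the bandwidth test of step 658, which in practice would model the channel bandwidth directly — the `threshold` parameter is an assumption for illustration):

```python
from collections import Counter

def split_until_threshold(seq, threshold):
    """FIG. 11 sketch: repeatedly move the most frequent remaining block
    into the prefetch stream until the primary stream carries no more
    than `threshold` blocks per repetition."""
    primary, prefetch = list(seq), []
    while len(primary) > threshold:                    # step 658 stand-in
        blk = Counter(primary).most_common(1)[0][0]    # step 654
        prefetch.append(blk)                           # step 656
        primary = [b for b in primary if b != blk]
    return primary, prefetch

# With the 16-block example sequence and a threshold of 8 blocks,
# blk0 and then blk1 migrate to the prefetch stream:
split_until_threshold([0, 1, 0, 3, 2, 0, 3, 1, 0, 4, 4, 0, 5, 2, 1, 0], 8)
# → ([3, 2, 3, 4, 4, 5, 2], [0, 1])
```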
  • In order to have immediate access to a DOD service transmitted by the above method, a receiving STB must pre-load and store some or all of the data contained in the prefetch data stream. Because the STB must first load and store a blk0 of any DOD service it is to display, there is a tradeoff between the transmission bandwidth saved and the access delay time. By pre-loading the prefetch data stream and storing the prefetch data blocks, a client STB may access a selected DOD service with minimal delay. This requires a receiving STB to have a memory sufficient to store the data contained in the prefetch data stream.
  • FIG. 12 illustrates a set-top-box pre-loading process at 700 in accordance with one embodiment of the present invention.
  • the set top box is set to an idle or passive mode. Typically this mode would be a default mode for all client set-top-boxes.
  • the set-top-box receives the prefetch data stream on a dedicated channel. In an exemplary embodiment the prefetch data stream is received on the electronic programming guide channel.
  • the set top box determines whether the prefetch data blocks received are more recent than prefetch data that had been stored earlier. An identifier contained within each prefetch data block indicates to the set top box how recent a data block is.
  • In step 708 the set top box stores the latest prefetch data blocks in an internal memory 308 (FIG. 3), typically a hard disk drive magnetic storage device. Typically, earlier data blocks are overwritten with more recent prefetch data blocks, though older versions may be retained for various reasons.
  • In step 710 the set top box is switched to active mode either by a timer or by user command. In an exemplary embodiment the set top box automatically switches to active mode whenever any user command is received by the set top box. In such an embodiment the set top box would remain in the active mode for some time period, after which it would return to a passive mode.
  • In step 712 the set top box receives a command to play a selected DOD service. This may be accomplished by a user entering a code corresponding to the selected DOD service or by selecting the DOD service from a menu within the electronic program guide service.
  • In step 714 the set top box plays the first data block (blk0) of the selected DOD service from the prefetch data blocks stored in step 708.
  • In step 716 the set top box tunes into the appropriate channel and receives and stores in memory 308 (FIG. 3) the primary data stream corresponding to the selected DOD service. In an exemplary embodiment this step occurs in parallel with step 714 until the user stops playing the selected DOD service.
  • In step 718, if all prefetch data blocks corresponding to the selected DOD service have been played, the process continues to step 720.
  • In step 720 the remainder of the DOD service is played from the primary stream data blocks previously received in step 716. This allows the first data block (blk0), or the first few data blocks, of a DOD service to be played from the stored prefetch data blocks while the primary stream of data blocks corresponding to the selected DOD service is downloaded from a server.
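The handoff of steps 714-720 can be sketched as a generator (an illustrative reconstruction; timing is ignored, a real STB plays one block per time slot and stores block payloads rather than indices):

```python
def play_dod(prefetch_blocks, primary_stream, total):
    """Play stored prefetch blocks immediately (step 714), then continue
    in order from the primary stream (steps 716-720), buffering blocks
    that arrive out of order until they are due."""
    played = 0
    buffered = set()                     # a real STB stores block payloads
    for blk in prefetch_blocks:          # step 714: instant start from disk
        yield blk
        played += 1
    for blk in primary_stream:           # step 716: tune to primary stream
        if played >= total:
            break
        if blk >= played:                # keep only blocks still needed
            buffered.add(blk)
        while played in buffered:        # step 720: play in sequential order
            buffered.discard(played)
            yield played
            played += 1

# A client holding blk0 from the prefetch stream plays all six blocks
# in order while the blk0-free primary stream repeats:
list(play_dod([0], [1, 3, 2, 3, 1, 4, 4, 5, 2, 1], total=6))
# → [0, 1, 2, 3, 4, 5]
```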
  • Such a system allows seamless viewing of DOD services with minimal delay in access time and reduced bandwidth requirements.
  • sufficient prefetch data blocks must be stored and played to allow data blocks from the primary data stream to download in time for use. This could require increases in either the bandwidth of the prefetch stream or in the primary data stream.
  • the data configured into the prefetch data stream and eventually loaded into the idle set top box through the prefetch stream may change over the course of the day, week, or month to reflect preferences of users and create a maximum of bandwidth savings. For example, on a Monday night in the autumn, a particular sporting event may be made more prevalent in the prefetch data stream as opposed to a weekend night, when the newest family feature movie released has the beginning sequence sent in the prefetch data stream.
  • the set top box user can start the desired programming at any time without delay because of the preloaded data block sequence already stored on the STB while the box was in idle mode.
  • This pre-fetch data block sequence can be continually updated while the STB is in idle mode to reflect preprogrammed changes made by the user (such as an order for a movie that the user has not started watching) or by a preset sequence update based on anticipated DOD user requests.
EP02731483A 2001-04-24 2002-04-23 Digitales datenabrufrundsendesystem mit vorabrufdatenübertragung Withdrawn EP1407606A1 (de)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US09/841,792 US20020023267A1 (en) 2000-05-31 2001-04-24 Universal digital broadcast system and methods
US841792 2001-04-24
US892017 2001-06-25
US09/892,017 US20020026501A1 (en) 2000-05-31 2001-06-25 Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
US10/054,008 US20020175998A1 (en) 2000-05-31 2001-10-19 Data-on-demand digital broadcast system utilizing prefetch data transmission
US54008 2001-10-19
PCT/US2002/012930 WO2002087246A1 (en) 2001-04-24 2002-04-23 Data-on-demand digital broadcast system utilizing prefetch data transmission

Publications (1)

Publication Number Publication Date
EP1407606A1 true EP1407606A1 (de) 2004-04-14

Family

ID=27368541

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02731483A Withdrawn EP1407606A1 (de) 2001-04-24 2002-04-23 Digitales datenabrufrundsendesystem mit vorabrufdatenübertragung

Country Status (4)

Country Link
EP (1) EP1407606A1 (de)
JP (1) JP2004536491A (de)
KR (1) KR20030092105A (de)
WO (1) WO2002087246A1 (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100701003B1 (ko) * 2005-04-29 2007-03-29 한국전자통신연구원 클라이언트의 요구를 반영한 방송 스케쥴링 방법
US7584497B2 (en) * 2005-05-24 2009-09-01 Microsoft Corporation Strategies for scheduling bandwidth-consuming media events
KR101453131B1 (ko) 2007-12-14 2014-10-27 톰슨 라이센싱 가변 대역폭 채널을 통한 동시송출을 위한 장치 및 방법
EP2225840A1 (de) 2007-12-18 2010-09-08 Thomson Licensing Vorrichtung und verfahren zur dateigrössenschätzung über broadcast-netzwerke
JP6401546B2 (ja) * 2014-08-21 2018-10-10 日本放送協会 コンテンツ配信サーバ、及びコンテンツ配信プログラム

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044396A (en) * 1995-12-14 2000-03-28 Time Warner Cable, A Division Of Time Warner Entertainment Company, L.P. Method and apparatus for utilizing the available bit rate in a constrained variable bit rate channel
JPH09284745A (ja) * 1996-04-09 1997-10-31 Sony Corp Bidirectional information transmission system and bidirectional information transmission method
US5886995A (en) * 1996-09-05 1999-03-23 Hughes Electronics Corporation Dynamic mapping of broadcast resources

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO02087246A1 *

Also Published As

Publication number Publication date
KR20030092105A (ko) 2003-12-03
WO2002087246A1 (en) 2002-10-31
JP2004536491A (ja) 2004-12-02

Similar Documents

Publication Publication Date Title
US20020175998A1 (en) Data-on-demand digital broadcast system utilizing prefetch data transmission
US6557030B1 (en) Systems and methods for providing video-on-demand services for broadcasting systems
US6327421B1 (en) Multiple speed fast forward/rewind compressed video delivery system
Paris An interactive broadcasting protocol for video-on-demand
US20020170059A1 (en) Universal STB architectures and control methods
US20020026501A1 (en) Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
US20020026646A1 (en) Universal STB architectures and control methods
US20020138845A1 (en) Methods and systems for transmitting delayed access client generic data-on-demand services
WO2002087246A1 (en) Data-on-demand digital broadcast system utilizing prefetch data transmission
JP5038574B2 (ja) Method for providing video-on-demand services for a broadcast system
WO2002039744A1 (en) Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
WO2002086673A2 (en) Transmission of delayed access client data and demand
CN100484237C (zh) Data-on-demand digital broadcast system utilizing prefetch data transmission
TWI223563B (en) Methods and systems for transmitting delayed access client generic data-on-demand services
EP1402331A2 (de) Methods and systems for transmitting delayed access client generic data-on-demand services
TWI244869B (en) Data-on-demand digital broadcast system utilizing prefetch data transmission
KR20030051800A (ko) Decreased idle time and decreased bandwidth data-on-demand broadcast delivery matrices
KR20040063795A (ko) Transmission of delayed access client data and requests

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20031124

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20051103