EP1340376A1 - Matrices de distribution de diffusion de donnees sur demande a largeur de bande constante et a temps morts reduits - Google Patents

Matrices de distribution de diffusion de donnees sur demande a largeur de bande constante et a temps morts reduits

Info

Publication number
EP1340376A1
Authority
EP
European Patent Office
Prior art keywords
data
transmission
data blocks
matrix
bandwidth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01950629A
Other languages
German (de)
English (en)
Inventor
Khoi Hoang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PrediWave Corp
Original Assignee
PrediWave Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/709,948 external-priority patent/US6725267B1/en
Priority claimed from US09/841,792 external-priority patent/US20020023267A1/en
Priority claimed from US09/892,017 external-priority patent/US20020026501A1/en
Application filed by PrediWave Corp filed Critical PrediWave Corp
Publication of EP1340376A1 publication Critical patent/EP1340376A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
    • H04N21/26216Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints involving the channel capacity, e.g. network bandwidth
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • H04N21/4821End-user interface for program selection using a grid, e.g. sorted out by channel and broadcast time

Definitions

  • Video-on-demand (VOD) systems are one type of data-on-demand (DOD) system.
  • In VOD systems, video data files are provided by a server or a network of servers to one or more clients on a demand basis. These systems will be well understood by those of skill in the art.
  • a server or a network of servers communicates with clients in a standard hierarchical client-server model.
  • a client sends a request to a server for a data file (e.g., a video data file).
  • the server sends the requested file to the client.
  • a client's request for a data file can be fulfilled by one or more servers.
  • the client may have the capability to store any received data file locally in non-volatile memory for later use.
  • the standard client-server model requires a two-way communications infrastructure. Currently, two-way communications requires building new infrastructure because existing cables can only provide one-way communications.
  • HFC hybrid fiber optics coaxial cables
  • Replacing existing cables is very costly and the resulting services may not be affordable to most users.
  • the standard client-server model has many limitations when a service provider (e.g., a cable company) attempts to provide VOD services to a large number of clients.
  • a service provider e.g., a cable company
  • the service provider has to implement a mechanism to continuously listen and fulfill every request from each client within the network; thus, the number of clients who can receive service is dependent on the capacity of such a mechanism.
  • One mechanism uses massively-parallel computers having large and fast disk arrays as local servers. However, even the fastest existing local server can only deliver video data streams to about 1000 to 2000 clients at one time.
  • a method for sending data to a client to provide data-on-demand services comprising the steps of: receiving a data file, specifying a time interval, parsing the data file into a plurality of data blocks based on the time interval such that each data block is displayable during the time interval, determining a required number of time slots to send the data file, allocating to each time slot at least a first of the plurality of data blocks and optionally one or more additional data blocks, such that the plurality of data blocks is available in sequential order to a client accessing the data file during any time slot, and sending the plurality of data blocks based on the allocating step.
  • the parsing step includes the steps of: determining an estimated data block size, determining a cluster size of a memory in a channel server, and parsing the data file based on the estimated data block size and the cluster size.
  • the determining step includes the step of assessing resource allocation and bandwidth availability.
  • a method for processing data received from a server to provide data-on-demand services comprises the steps of: (a) receiving a selection of a data file during a first time slot; (b) receiving at least one data block of the data file during a second time slot; (c) during a next time slot: receiving any data block not already received, sequentially displaying a data block of the data file, and repeating step (c) until all data blocks of the data file have been received and displayed.
  • the method for processing data received from a server is performed by a set-top box at the client side.
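A minimal Python sketch of this client-side loop, assuming a helper blocks_in_slot(ts) that yields the block numbers broadcast during time slot ts; the helper and all names are illustrative and not taken from the patent:

    def client_playback_loop(num_blocks, blocks_in_slot):
        """Illustrative client loop for steps (a)-(c) above."""
        stored = set()        # data blocks already received by the STB
        next_to_play = 0      # next data block to display, in sequential order
        ts = 0
        while next_to_play < num_blocks:
            # receive any data block not already received during this slot
            stored.update(blocks_in_slot(ts))
            # sequentially display the next data block once it is available
            if next_to_play in stored:
                print(f"TS{ts}: display blk{next_to_play}")
                next_to_play += 1
            ts += 1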
  • a data file is divided into a number of data blocks and a scheduling matrix is generated based on the number of data blocks.
  • the scheduling matrix provides a send order for sending the data blocks, such that a client can access the data blocks in sequential order at a random time.
  • a method for generating a scheduling matrix for a data file comprises the steps of: (a) receiving a number of data blocks [x] for a data file; (b) setting a first variable [j] to zero; (c) setting a second variable [i] to zero; (d) clearing all entries in a reference array; (e) writing at least one data block stored in matrix positions of a column [(i+j) modulo x] in a matrix to a reference array, if the reference array does not already contain the data block; (f) writing a data block [i] into the reference array and a matrix position [(i+j) modulo x, j] of the matrix, if the reference array does not contain the data block [i]; (g) incrementing the second variable [i] by one and repeating step (e) until the second variable [i] is equal to the number of data blocks [x]; and (h) incrementing the first variable [j] by one and repeating step (c) until the first variable [j] is equal to the number of data blocks [x].
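A minimal Python sketch of steps (a) through (h), assuming the matrix is held as one list of scheduled blocks per time-slot column; function and variable names are illustrative, not taken from the patent:

    def build_scheduling_matrix(x):
        """Build the send order for a data file divided into x data blocks.

        columns[c] lists the data blocks scheduled for time slot c; block i is
        added to column (i + j) modulo x in pass j only if the reference array
        shows it is not already guaranteed to reach a client."""
        columns = [[] for _ in range(x)]
        for j in range(x):                       # steps (b) and (h)
            reference = set()                    # step (d): clear reference array
            for i in range(x):                   # steps (c) and (g)
                col = (i + j) % x
                reference.update(columns[col])   # step (e): note blocks in column
                if i not in reference:           # step (f): schedule block i here
                    reference.add(i)
                    columns[col].append(i)
        return columns

    # For x = 6 this reproduces the six data block send order discussed below,
    # e.g. time slot 1 carries blk0, blk1, and blk3.
    for ts, blocks in enumerate(build_scheduling_matrix(6)):
        print(f"TS{ts}:", sorted(blocks))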
  • a data-on-demand system comprises a first set of channel servers, a central controlling server for controlling the first set of channel servers, a first set of up-converters coupled to the first set of channel servers, a combiner/amplifier coupled to the first set of up-converters, and a combiner/amplifier adapted to transmit data via a transmission medium.
  • the data-on-demand system further comprises a channel monitoring module for monitoring the system, a switch matrix, a second set of channel servers, and a second set of up- converters. The channel monitoring module is configured to report to the central controlling server when system failure occurs.
  • the central controlling server in response to a report from the channel monitoring module, instructs the switch matrix to replace a defective channel server in the first set of channel servers with a channel server in the second set of channel servers and a defective up-converter in the first set of up-converters with an up-converter in the second set of up-converters.
  • a method for providing data-on-demand services comprises the steps of calculating a delivery matrix of a data file, sending the data file in accordance with the delivery matrix, such that a large number of clients is capable of viewing the data file on demand.
  • the data file includes a video file.
  • the idle time created in a delivery matrix can be decreased by moving up the next data blocks in the matrix until all time slots are full and the entire bandwidth is used.
  • the delivery matrix can be thought of more as a stream of data maintaining the order of the original matrix. At any time a user may join the stream and begin using data-on-demand services as soon as a starting block is received.
  • since the matrix can be thought of as a stream, a user can enter the stream at any given time and wait only as long as it takes to receive a starting block to begin using the data-on-demand services, which would be no longer than the predetermined time slots of the original delivery matrix.
  • Another embodiment of the present invention teaches a universal STB capable of receiving and handling a plurality of digital services such as VOD and digital broadcast.
  • This embodiment teaches a universal STB having a highly flexible architecture capable of sophisticated processing of received data.
  • This architecture includes a databus, a first communication device suitable for coupling to a digital broadcast communications medium, a memory typically including persistent and transient memory bi-directionally coupled to the databus, a digital data decoder bi- directionally coupled to the databus, and a central processing unit (CPU) bi-directionally coupled to the databus.
  • the CPU of this embodiment of the present invention implements a STB control process for controlling the memory, the digital decoder, and the demodulator.
  • the STB control process is operable to process digital data such as that received at the first communications device.
  • the STB control process should be capable of receiving data blocks derived from a decreased idle time scheduling matrix as well as parallel streaming of such data blocks.
  • FIGURE 1A illustrates an exemplary DOD system in accordance with an embodiment of the invention.
  • FIGURE 1B illustrates an exemplary DOD system in accordance with another embodiment of the invention.
  • FIGURE 2 illustrates an exemplary channel server in accordance with an embodiment of the invention.
  • FIGURE 3 illustrates an exemplary set-top box in accordance with an embodiment of the invention.
  • FIGURE 4 illustrates an exemplary process for generating a scheduling matrix in accordance with an embodiment of the invention.
  • FIGURE 5 graphically illustrates an example of a scheduling matrix of a six data block file.
  • FIGURE 6 graphically illustrates how the data blocks of the scheduling matrix in Figure 5 are moved up to fill idle time.
  • FIGURE 7 graphically illustrates a new decreased idle time scheduling matrix.
  • FIGURE 8 depicts the addition of the decreased idle time embodiment.
  • FIGURE 9 shows in flow chart form how the decreased idle time embodiment is accomplished.
  • FIGURE 10 graphically depicts multiple streams of repeating data being created from an original scheduling matrix.
  • FIG. 1A illustrates an exemplary DOD system 100 in accordance with an embodiment of the invention.
  • the DOD system 100 provides data files, such as video files, on demand.
  • the DOD system 100 is not limited to providing video files on demand but is also capable of providing other data files, for example, game files on demand.
  • the DOD system 100 includes a central controlling server 102, a central storage 103, a plurality of channel servers 104a- 104n, a plurality of up-converters 106a- 106n, and a combiner/amplifier 108.
  • the central controlling server 102 controls the channel servers 104.
  • the central storage 103 stores data files in digital format.
  • data files stored in the central storage 103 are accessible via a standard network interface (e.g., Ethernet connection) by any authorized computer, such as the central controlling server 102, connected to the network.
  • Each channel server 104 is assigned to a channel and is coupled to an up-converter 106.
  • the channel servers 104 provide data files that are retrieved from the central storage 103 in accordance with instructions from the central controlling server 102.
  • the output of each channel server 104 is a quadrature amplitude modulation (QAM) modulated intermediate frequency (IF) signal having a suitable frequency for the corresponding up-converter 106.
  • QAM- modulated IF signals are dependent upon adopted standards.
  • the current adopted standard in the United States is the data-over-cable-systems-interface-specification (DOCSIS) standard, which requires an approximately 43.75MHz IF frequency.
  • the up-converters 106 convert IF signals received from the channel servers 104 to radio frequency signals (RF signals).
  • the frequency and bandwidth of the RF signals depend on the desired channel and adopted standards. For example, under the current standard in the United States, for cable television channel 80 the RF signal has a frequency of approximately 559.25MHz and a bandwidth of approximately 6MHz.
  • the outputs of the up-converters 106 are applied to the combiner/amplifier 108.
  • the combiner/amplifier 108 amplifies, conditions, and combines the received RF signals, then outputs the combined signal to a transmission medium 110.
  • the central controlling server 102 includes a graphics user interface (not shown) to enable a service provider to schedule data delivery by a drag-and-drop operation. Further, the central controlling server 102 authenticates and controls the channel servers 104 to start or stop according to delivery matrices. In an exemplary embodiment, the central controlling server 102 automatically selects a channel and calculates delivery matrices for transmitting data files in the selected channel. The central controlling server 102 provides offline addition, deletion, and update of data file information (e.g., duration, category, rating, and/or brief description). Further, the central controlling server 102 controls the central storage 103 by updating data files and databases stored therein.
  • data file information e.g., duration, category, rating, and/or brief description
  • an existing cable television system 120 may continue to feed signals into the combiner/amplifier 108 to provide non-DOD services to clients.
  • the DOD system 100 in accordance with the invention does not disrupt present cable television services.
  • Figure 1B illustrates another exemplary embodiment of the DOD system 100 in accordance with the invention.
  • the DOD system 100 includes a switch matrix 112, a channel monitoring module 114, a set of back-up channel servers 116a-116b, and a set of back-up up-converters 118a-118b.
  • the switch matrix 112 is physically located between the up-converters 106 and the combiner/amplifier 108.
  • the switch matrix 112 is controlled by the central controlling server 102.
  • the channel monitoring module 114 comprises a plurality of configured set-top boxes, which simulate potential clients, for monitoring the health of the DOD system 100. Monitoring results are communicated by the channel monitoring module 114 to the central controlling server 102.
  • in case of a channel failure (i.e., a channel server failure, an up-converter failure, or a communication link failure), the central controlling server 102, through the switch matrix 112, disengages the malfunctioning component and engages a healthy backup component 116 and/or 118 to resume service.
  • data files being broadcasted from the DOD system 100 are contained in motion pictures expert group (MPEG) files.
  • MPEG motion pictures expert group
  • Each MPEG file is dynamically divided into data blocks and sub-blocks mapping to a particular portion of a data file along a time axis. These data blocks and sub-blocks are sent during a pre-determined time in accordance with three-dimensional delivery matrices provided by the central controlling server 102.
  • a feedback channel is not necessary for the DOD system 100 to provide DOD services. However, if a feedback channel is available, the feedback channel can be used for other purposes, such as billing or providing Internet services.
  • FIG. 2 illustrates an exemplary channel server 104 in accordance with an embodiment of the invention.
  • the channel server 104 comprises a server controller 202, a CPU 204, a QAM modulator 206, a local memory 208, and a network interface 210.
  • the server controller 202 controls the overall operation of the channel server 104 by instructing the CPU 204 to divide data files into blocks (further into sub-blocks and data packets), select data blocks for transmission in accordance with a delivery matrix provided by the central controlling server 102, encode selected data, compress encoded data, then deliver compressed data to the QAM modulator 206.
  • the QAM modulator 206 receives data to be transmitted via a bus (i.e., PCI, CPU local bus) or Ethernet connections.
  • a bus i.e., PCI, CPU local bus
  • the QAM modulator 206 may include a downstream QAM modulator, an upstream quadrature amplitude modulation/quadrature phase shift keying (QAM/QPSK) burst demodulator with forward error correction decoder, and/or an upstream tuner.
  • the output of the QAM modulator 206 is an IF signal that can be applied directly to an up-converter 106.
  • the network interface 210 connects the channel server 104 to other channel servers 104 and to the central controlling server 102 in order to execute scheduling and control instructions from the central controlling server 102, report status back to the central controlling server 102, and receive data files from the central storage 103.
  • any data file retrieved from the central storage 103 can be stored in the local memory 208 of the channel server 104 before the data file is processed in accordance with instructions from the server controller 202.
  • the channel server 104 may send one or more DOD data streams depending on the bandwidth of a cable channel (e.g., 6, 6.5, or 8MHz), the QAM modulation (e.g., QAM 64 or QAM 256), and the compression standard/bit rate of the DOD data stream (e.g., MPEG-1 or MPEG-2).
  • FIG. 3 illustrates a universal set-top box (STB) 300 in accordance with one embodiment of the invention.
  • the STB 300 comprises a QAM demodulator 302, a CPU 304, a local memory 308, a buffer memory 310, a decoder 312 having video and audio decoding capabilities, a graphics overlay module 314, a user interface 318, a communications link 320, and a fast data bus 322 coupling these devices as illustrated.
  • the CPU 304 controls overall operation of the universal STB 300 in order to select data in response to a client's request, decode selected data, decompress decoded data, re-assemble decoded data, store decoded data in the local memory 308 or the buffer memory 310, and deliver stored data to the decoder 312.
  • the local memory 308 comprises non-volatile memory (e.g., a hard drive) and the buffer memory 310 comprises volatile memory.
  • the QAM demodulator 302 comprises transmitter and receiver modules and one or more of the following: privacy encryption/decryption module, forward error correction decoder/encoder, tuner control, downstream and upstream processors, CPU and memory interface circuits.
  • the QAM demodulator 302 receives modulated IF signals, samples and demodulates the signals to restore data.
  • the decoder 312 decodes at least one data block to transform the data block into images displayable on an output screen.
  • the decoder 312 supports commands from a subscribing client, such as play, stop, pause, step, rewind, forward, etc.
  • the decoder 312 provides decoded data to an output device 324 for use by the client.
  • the output device 324 may be any suitable device such as a television, computer, any appropriate display monitor, a VCR, or the like.
  • the graphics overlay module 314 enhances displayed graphics quality by, for example, providing alpha blending or picture-in-picture capabilities.
  • the graphics overlay module 314 can be used for graphics acceleration during game playing mode, for example, when the service provider provides games-on-demand services using the system in accordance with the invention.
  • the user interface 318 enables user control of the STB 300, and may be any suitable device such as a remote control device, a keyboard, a smartcard, etc.
  • the communications link 320 provides an additional communications connection. This may be coupled to another computer, or may be used to implement bi-directional communication.
  • the data bus 322 is preferably a commercially available "fast" data bus suitable for performing data communications in a real time manner as required by the present invention. Suitable examples are USB, firewire, etc.
  • although data files are broadcast to all cable television subscribers, only a DOD subscriber who has a compatible STB 300 will be able to decode and enjoy data-on-demand services.
  • permission to obtain data files on demand can be obtained via a smart card system in the user interface 318.
  • a smart card may be rechargeable at a local store or vending machine set up by a service provider.
  • a flat fee system provides a subscriber unlimited access to all available data files.
  • data-on-demand interactive features permit a client to select any available data file at any time. The amount of time between when a client presses a select button and when the selected data file begins playing is referred to as the response time.
  • a response time gets shorter.
  • a response time can be determined based on an evaluation of resource allocation and desired quality of service.
  • the response time becomes a factor only of the time it takes to receive and process that first data block.
  • the number of data blocks (NUM_OF_BLKS) for each data file can be calculated as follows:
  • Estimated_BLK_Size = (DataFile_Size * TS) / DataFile_Length (1)
  • BLK_SIZE = (Estimated_BLK_Size + CLUSTER_SIZE - 1 Byte) / CLUSTER_SIZE (2)
  • BLK_SIZE_BYTES = BLK_SIZE * CLUSTER_SIZE (3)
  • NUM_OF_BLKS = (DataFile_Size + BLK_SIZE_BYTES - 1 Byte) / BLK_SIZE_BYTES (4)
  • the Estimated_BLK_Size is an estimated block size (in Bytes); the DataFile_Size is the data file size (in Bytes); TS represents the duration of a time slot (in seconds); DataFile_Length is the duration of the data file (in seconds); BLK_SIZE is the number of clusters needed for each data block; CLUSTER_SIZE is the size of a cluster in the local memory 208 of each channel server 104 (e.g., 64KBytes); BLK_SIZE_BYTES is a block size in Bytes.
  • the number of blocks (NUM_OF_BLKS) is equal to the data file size (in Bytes) plus a data block size in Bytes minus 1 Byte, divided by a data block size in Bytes.
  • Equations (1) to (4) illustrate one specific embodiment. A person of skill in the art would recognize that other methods are available to calculate a number of data blocks for a data file. For example, dividing a data file into a number of data blocks is primarily a function of an estimated block size and the cluster size of the local memory 208 of a channel server 104. Thus, the invention should not be limited to the specific embodiment presented above.
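As a worked illustration of equations (1) to (4) only, the following Python sketch computes the number of data blocks from the file size, file duration, time-slot duration, and cluster size; the example values are invented for illustration and are not from the patent:

    def num_data_blocks(datafile_size, datafile_length, ts, cluster_size=64 * 1024):
        """Equations (1)-(4): sizes in bytes, durations in seconds.

        Integer divisions round up so each data block spans whole clusters of
        the channel server's local memory 208."""
        estimated_blk_size = (datafile_size * ts) // datafile_length              # (1)
        blk_size = (estimated_blk_size + cluster_size - 1) // cluster_size        # (2)
        blk_size_bytes = blk_size * cluster_size                                  # (3)
        num_of_blks = (datafile_size + blk_size_bytes - 1) // blk_size_bytes      # (4)
        return num_of_blks, blk_size_bytes

    # e.g. a 4 GByte, two-hour file parsed into 30-second time slots
    blocks, block_bytes = num_data_blocks(4 * 1024**3, 2 * 3600, 30)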
  • Figure 4 illustrates an exemplary process for generating a scheduling matrix for sending a data file in accordance with an embodiment of the invention.
  • this invention uses time division multiplexing (TDM) and frequency division multiplexing (FDM) technology to compress and schedule data delivery at the server side.
  • TDM time division multiplexing
  • FDM frequency division multiplexing
  • a scheduling matrix is generated for each data file.
  • each data file is divided into a number of data blocks and the scheduling matrix is generated based on the number of data blocks.
  • a scheduling matrix provides a send order for sending data blocks of a data file from a server to clients, such that the data blocks are accessible in sequential order by any client who wishes to access the data file at a random time.
  • a number of data blocks (x) for a data file is received.
  • a first variable, j is set to zero (step 404).
  • a reference array is cleared (step 406). The reference array keeps track of data blocks for internal management purposes.
  • j is compared to x (step 408). If j is less than x, a second variable, i, is set to zero (step 412).
  • i is compared to x (step 414). If i is less than x, data blocks stored in the column [(i+j) modulo (x)] of a scheduling matrix are written into the reference array (step 418). If the reference array already has such data block(s), do not write a duplicate copy.
  • the scheduling matrix and the reference arrays are as follows:
  • Appendix A attached to this application describes a step-by-step process of the exemplary process illustrated in Figure 4 to generate the above scheduling matrix and reference arrays.
  • a look-ahead process can be used to calculate a look-ahead scheduling matrix to send a predetermined number of data blocks of a data file prior to a predicted access time. For example, if a predetermined look-ahead time is the duration of one time slot, for any time slot greater than or equal to time slot number four, data block 4 (blk4) of a data file should be received by a STB 300 at a subscribing client at or before TS3, but blk4 would not be played until TS4.
  • the process steps for generating a look-ahead scheduling matrix are substantially similar to the process steps described above for Figure 4, except that the look-ahead scheduling matrix in this embodiment schedules an earlier sending sequence based on a look-ahead time. Assuming a data file is divided into six data blocks, an exemplary sending sequence based on a look-ahead scheduling matrix, having a look-ahead time of the duration of two time slots, can be represented as follows:
  • a three-dimensional delivery matrix for sending a set of data files is generated based on the scheduling matrices for each data file of the set of data files.
  • a third dimension containing IDs for each data file in the set of data files is generated.
  • the three-dimensional delivery matrix is calculated to efficiently utilize available bandwidth in each channel to deliver multiple data streams.
  • a convolution method which is well known in the art, is used to generate a three-dimensional delivery matrix to schedule an efficient delivery of a set of data files.
  • a convolution method may include the following policies: (1) the total number of data blocks sent in the duration of any time slot (TS) should be kept at a smallest possible number; and (2) if multiple partial solutions are available with respect to policy (1), the preferred solution is the one which has a smallest sum of data blocks by adding the data blocks to be sent during the duration of any reference time slot, data blocks to be sent during the duration of a previous time slot (with respect to the reference time slot), and data blocks to be sent during the duration of a next time slot (with respect to the reference time slot).
  • assuming a data file is divided into six data blocks, the sending sequence based on a scheduling matrix is as follows:
  • TS0: blk0
  • TS1: blk0, blk1, blk3
  • TS2: blk0, blk2
  • TS3: blk0, blk1, blk3, blk4
  • TS4: blk0, blk4
  • TS5: blk0, blk1, blk2, blk5
  • Option 1: send video file N at shift 0 (data blocks sent per time slot, with totals)
  • TS0: M0, N0 (2)
  • TS1: M0, M1, M3, N0, N1, N3 (6)
  • TS2: M0, M2, N0, N2 (4)
  • TS3: M0, M1, M3, M4, N0, N1, N3, N4 (8)
  • TS4: M0, M4, N0, N4 (4)
  • TS5: M0, M1, M2, M5, N0, N1, N2, N5 (8)
  • Option 2: send video file N at shift 1 (data blocks sent per time slot, with totals)
  • TS0: M0, N0, N1, N3 (4)
  • TS1: M0, M1, M3, N0, N2 (5)
  • Option 3: send video file N at shift 2
  • Option 4: send video file N at shift 3
  • Option 5: send video file N at shift 4
  • options 2, 4, and 6 have the smallest maximum number of data blocks (i.e., 6 data blocks) sent during any time slot.
  • the optimal delivery matrix in this exemplary embodiment is option 4 because option 4 has the smallest sum of data blocks of any reference time slot plus data blocks of neighboring time slots (i.e., 16 data blocks).
  • the sending sequence of the data file N should be shifted by three time slots.
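The two policies can be read as the following Python sketch, which evaluates every cyclic shift of file N against file M. Policy (2) is interpreted here as minimising the largest reference-slot-plus-neighbouring-slots sum, which reproduces the 16-block figure quoted for option 4; that interpretation, and all names, are assumptions made for illustration:

    def best_shift(matrix_m, matrix_n):
        """Pick the shift of file N's send order relative to file M.

        matrix_m and matrix_n each list the data blocks sent per time slot
        (the scheduling matrices of the two data files)."""
        slots = len(matrix_m)
        candidates = []
        for shift in range(slots):
            # data blocks sent in each time slot with N cyclically shifted
            load = [len(matrix_m[ts]) + len(matrix_n[(ts + shift) % slots])
                    for ts in range(slots)]
            peak = max(load)                                       # policy (1)
            window = max(load[ts - 1] + load[ts] + load[(ts + 1) % slots]
                         for ts in range(slots))                   # policy (2)
            candidates.append((peak, window, shift))
        return min(candidates)[2]   # returns 3 for the six-block files M and N above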
  • a three-dimensional delivery matrix is generated for each channel server 104.
  • the DOD system 100 sends data blocks for data files M and N in accordance with the optimal delivery matrix (i.e., shift delivery sequence of data file N by three time slots) in the following manner:
  • the STB 300 at client A receives, stores, plays, and rejects data blocks as follows:
  • the STB 300 at the client D receives, stores, plays, and rejects data blocks as follows:
  • any combination of clients can at a random time independently select and begin playing any data file provided by the service provider.
  • the above denotation of "Receive” is slightly misleading as the system is always receiving a continuous stream of data blocks determined by the time slot, but at any given point, the receiving STB may only require certain data blocks, having already received and stored the other received data blocks. This need is referred to as “receive” above, but may be more accurately referred to as
  • This empty block space equates to bandwidth which is not being used, and therefore is wasted bandwidth.
  • a goal of this invention is to decrease as much idle time as possible, and therefore one embodiment of the current invention is to perform another step after the scheduling matrix is determined, referred to herein as a decreased idle time scheduling matrix.
  • An exemplary model of a decreased idle time scheduling matrix can be explained with reference to the six block scheduling matrix described above, repeated here for convenience. Idle time during which bandwidth could be utilized to transmit a data block is denoted "<-->" for clarity:
  • the scheduling matrix clearly has unused bandwidth in the form of idle time during most time slots.
  • the present invention teaches reduction of this idle time by utilizing constant bandwidth from time slot to time slot.
  • the key to accomplishing decreased idle transmission time through constant bandwidth utilization is an understanding that the delivery sequence of the data blocks must be adhered to, while the exact time slot in which a data block is delivered is not relevant except that the data block must be received prior to or at the time in which it must be accessed. Accordingly, constant bandwidth utilization is accomplished by transmitting a constant number of data blocks within each time slot according to the delivery sequence set forth by the scheduling matrix and with disregard to the time slot assigned by the scheduling matrix.
  • the idle time is decreased by moving forward data blocks until four data blocks are scheduled for transmission during each time slot.
  • the procedure for this is to take the next data block in sequence and move it to the empty space. So for this example, the first block in TS1, blk0, is moved to TS0. The next block in TS1, blk1, is also moved up. Then, since TS0 still has an empty data block space, blk3 from TS1 is also moved up. TS0 then has all of its spaces filled, and now looks like:
  • TS0: blk0, blk0, blk1, blk3
  • TS1: blk0, blk2, blk0, blk1
  • TS6 and TS7 would have the same data blocks as TS2 and TS3. Therefore, what is actually produced by this process is a new, shorter scheduling matrix, now only four time slots long.
  • Fig. 7 graphically depicts this new repeating matrix created by filling up idle time.
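A minimal Python sketch of this repacking step, assuming the scheduling matrix is given as a list of per-time-slot block lists (the six-block schedule used in the example above); names are illustrative:

    def decreased_idle_time_matrix(schedule, blocks_per_slot):
        """Flatten the scheduling matrix into one ordered block stream and
        refill the time slots so each carries a constant number of blocks."""
        stream = [blk for slot in schedule for blk in slot]   # keep the send order
        return [stream[i:i + blocks_per_slot]
                for i in range(0, len(stream), blocks_per_slot)]

    six_block_schedule = [[0], [0, 1, 3], [0, 2], [0, 1, 3, 4], [0, 4], [0, 1, 2, 5]]
    print(decreased_idle_time_matrix(six_block_schedule, 4))
    # -> [[0, 0, 1, 3], [0, 2, 0, 1], [3, 4, 0, 4], [0, 1, 2, 5]]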
  • FIGS. 4 - 10 dealt with an instance where the selected bandwidth was set to a constant equal to an integer number of data blocks.
  • the constant bandwidth need not be equal to an integer number of data blocks.
  • the delivery sequence adheres to the sequence developed as in FIG. 8.
  • the data stream generated by the delivery sequence developed in FIG. 8 is then provided to a lower level hardware device (e.g., a network card or the channel server) which controls broadcast of the digital data. Rather than broadcasting an integer number of data blocks, the lower level hardware device will transmit as much data as possible within the bandwidth allocated to the file.
  • a lower level hardware device e.g., a network card or the channel server
  • the delivery matrix provides the sequence and the lower level hardware device controls broadcast of data utilizing the allocated bandwidth.
  • an allocated bandwidth which includes a fraction of a data block size can be fully utilized.
  • the lower level device will pause broadcast of this particular data file until bandwidth is again available.
  • a service provider can schedule to send a number of data files (e.g., video files) to channel servers 104 prior to broadcasting.
  • the central controlling server 102 calculates and sends to the channel servers 104 three-dimensional delivery matrices (ID, time slot, and data block send order).
  • channel servers 104 consult the three-dimensional delivery matrices to send appropriate data blocks in an appropriate order.
  • Each data file is divided into data blocks so that a large number of subscribing clients can separately begin viewing a data file continuously and sequentially at a random time.
  • a data block size is adjusted to a next higher multiple of a memory cluster size in the local memory 208 of a channel server 104. For example, if a calculated data block length is 720Kbytes according to equation (1) above, then the resulting data block length should be 768Kbytes if the cluster size of the local memory 208 is 64Kbytes.
  • data blocks should be further divided into multiples of sub-blocks each having the same size as the cluster size. In this example, the data block has twelve sub-blocks of 64KBytes.
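A short illustration of this rounding, assuming the 64KByte cluster size used in the example; the helper name is illustrative:

    CLUSTER_SIZE = 64 * 1024                                      # bytes per cluster

    def round_block_to_clusters(calculated_block_bytes):
        """Round a calculated data block length up to whole clusters.
        720 KBytes rounds up to 12 sub-blocks, i.e. 768 KBytes."""
        sub_blocks = -(-calculated_block_bytes // CLUSTER_SIZE)   # ceiling division
        return sub_blocks * CLUSTER_SIZE, sub_blocks

    assert round_block_to_clusters(720 * 1024) == (768 * 1024, 12)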
  • a sub-block can be further broken down into data packets.
  • Each data packet contains a packet header and packet data.
  • the packet data length depends on the maximum transfer unit (MTU) of a physical layer where each channel server's CPU sends data.
  • MTU maximum transfer unit
  • the total size of the packet header and packet data should be less than the MTU. However, for maximum efficiency, the packet data length should be as long as possible.
  • data in a packet header contains information that permits the subscriber client's STB 300 to decode any received data and determine if the data packet belongs to a selected data file (e.g., protocol signature, version, ID, or packet type information).
  • the packet header may also contain other information, such as block/sub-block/packet number, packet length, cyclic redundancy check (CRC) and offset in a sub-block, and/or encoding information.
  • CRC cyclic redundancy check
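For illustration only, a packet header carrying these fields could be packed as below. The text names the fields but not their sizes, order, or signature, so this layout (including the "DOD1" signature, the field widths, and the helper name) is an assumption, not the patent's format:

    import struct
    import zlib

    # signature, version, packet type, file ID, block, sub-block, packet,
    # offset in sub-block, packet data length, CRC of the packet data
    HEADER_FMT = "!4sBBIIIIIHI"                 # network byte order, 32 bytes
    HEADER_SIZE = struct.calcsize(HEADER_FMT)

    def build_packet(payload, file_id, block, sub_block, packet, offset,
                     signature=b"DOD1", version=1, packet_type=0, mtu=1500):
        header = struct.pack(HEADER_FMT, signature, version, packet_type,
                             file_id, block, sub_block, packet, offset,
                             len(payload), zlib.crc32(payload))
        frame = header + payload
        # the total size of the packet header and packet data must stay below the MTU
        assert len(frame) <= mtu
        return frame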
  • data packets are sent to the QAM modulator 206 where another header is added to the data packet to generate a QAM modulated IF output signal.
  • the maximum bit rate output for the QAM modulator 206 is dependent on available bandwidth.
  • for example, for a 6MHz channel using QAM 64 modulation, the maximum bit rate is approximately 5.05 Msymbols/sec x 6 bits/symbol, or about 30.3 Mbits/sec.
  • the QAM-modulated IF signals are sent to the up-converters 106 to be converted to RF signals suitable for a specific channel (e.g., for CATV channel 80, 559.250MHz and 6MHz bandwidth). For example, if a cable network has high bandwidth (or bit rate), each channel can be used to provide more than one data stream, with each data stream occupying a virtual subchannel. For example, three MPEG1 data streams can fit into a 6MHz channel using QAM modulation.
  • the output of the up-converters 106 is applied to the combiner/amplifier 108, which sends the combined signal to the transmission medium 110.
  • BW = N x bw
  • where BW is the total bandwidth of a channel, N is the number of data streams carried in the channel, and bw is the required bandwidth per data stream.
  • three MPEG-1 data streams can be transmitted at the same time by a DOCSIS cable channel having a system bandwidth of 30.3 Mbits/sec, because each MPEG-1 data stream occupies 9 Mbits/sec of the system bandwidth.
  • bandwidth is consumed regardless of the number of subscribing clients actually accessing the DOD service. Thus, even if no subscribing client is using the DOD service, bandwidth is still consumed to ensure the on-demand capability of the system.
  • the STB 300, once turned on, continuously receives and updates a program guide stored in the local memory 308 of the STB 300.
  • the STB 300 displays data file information including the latest program guide on a TV screen.
  • Data file information such as video file information, may include movielD, movie title, description (in multiple languages), category (e.g., action, children), rating (e.g., R, PG13), cable company policy (e.g., price, length of free preview), subscription period, movie poster, and movie preview.
  • data file information is sent via a reserved physical channel, such as a channel reserved for firmware update, commercials, and/or emergency information.
  • information is sent in a physical channel shared by other data streams.
  • a subscribing client can view a list of available data files arranged by categories displayed on a television screen.
  • the STB 300 controls its hardware to tune into a corresponding physical channel and/or a virtual subchannel to start receiving data packets for that data file.
  • the STB 300 examines every data packet header, decodes data in the data packets, and determines if a received data packet should be retained. If the STB 300 determines that a data packet should not be retained, the data packet is discarded. Otherwise, the packet data is saved in the local memory 308 for later retrieval or is temporarily stored in the buffer memory 310 until it is sent to the decoder 312.
  • the STB 300 uses a "sliding window" anticipation technique to lock anticipated data blocks in the memory buffer 310 whenever possible. Data blocks are transferred to the decoder 312 directly out of the memory buffer 310 if a hit in an anticipation window occurs. If an anticipation miss occurs, data blocks are read from the local memory 308 into the memory buffer 310 before the data blocks are transferred to the decoder 312 from the memory buffer 310.
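A minimal sketch of this hit/miss behaviour, assuming the anticipation window is a set of block numbers kept locked in the buffer memory 310; all names are illustrative:

    def read_block_for_decoder(blk, window, buffer_memory, local_memory):
        """Return block data for the decoder 312, using the anticipation window."""
        if blk in window and blk in buffer_memory:
            return buffer_memory[blk]                 # anticipation hit
        # anticipation miss: read from the local memory 308 into the buffer first
        buffer_memory[blk] = local_memory[blk]
        return buffer_memory[blk]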
  • the STB 300 responds to subscribing client's commands via infrared (IR) remote control unit buttons, an IR keyboard, or front panel pushbuttons, including buttons to pause, play in slow motion, rewind, zoom and single step.
  • IR infrared
  • if a subscribing client does not input any action for a predetermined period of time (e.g., while scrolling the program menu, or selecting a category or movie), a scheduled commercial is played automatically.
  • the scheduled commercial is automatically stopped when the subscribing client provides an action (e.g., press a button in a remote control unit).
  • the STB 300 can automatically insert commercials while a video is being played.
  • the service provider e.g., a cable company
  • the service provider can set up a pricing policy that dictates how frequently commercials should interrupt the video being played.
  • the STB 300 pauses any data receiving operation and controls its hardware to tune into the channel reserved for receiving data file information to obtain and decode any emergency information to be displayed on an output screen.
  • the STB 300 when the STB 300 is idled, it is tuned to the channel reserved for receiving data file information and is always ready to receive and display any emergency information without delay.

Abstract

The invention concerns a method and system for implementing a decreased idle time scheduling matrix (520) for a data file that has been formed into data blocks. A scheduling matrix is generated, and idle time is filled with data blocks that appear later in the matrix while keeping the original sequence of the data blocks. This process is repeated (550), or a new decreased idle time scheduling matrix is created (560). Specially designed set-top boxes are able to receive these data blocks.
EP01950629A 2000-11-10 2001-06-27 Matrices de distribution de diffusion de donnees sur demande a largeur de bande constante et a temps morts reduits Withdrawn EP1340376A1 (fr)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US09/709,948 US6725267B1 (en) 2000-05-31 2000-11-10 Prefetched data in a digital broadcast system
US841792 2001-04-24
US09/841,792 US20020023267A1 (en) 2000-05-31 2001-04-24 Universal digital broadcast system and methods
US892017 2001-06-25
US09/892,017 US20020026501A1 (en) 2000-05-31 2001-06-25 Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
PCT/US2001/020679 WO2002039744A1 (fr) 2000-11-10 2001-06-27 Matrices de distribution de diffusion de donnees sur demande a largeur de bande constante et a temps morts reduits
US709948 2010-02-22

Publications (1)

Publication Number Publication Date
EP1340376A1 true EP1340376A1 (fr) 2003-09-03

Family

ID=27418871

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01950629A Withdrawn EP1340376A1 (fr) 2000-11-10 2001-06-27 Matrices de distribution de diffusion de donnees sur demande a largeur de bande constante et a temps morts reduits

Country Status (7)

Country Link
EP (1) EP1340376A1 (fr)
JP (1) JP2004514336A (fr)
CN (1) CN1203675C (fr)
AU (1) AU2001271600A1 (fr)
CA (1) CA2428829A1 (fr)
HK (1) HK1053402B (fr)
WO (1) WO2002039744A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100868820B1 (ko) * 2004-07-23 2008-11-14 비치 언리미티드 엘엘씨 데이터 스트림을 전달하는 방법 및 시스템과 데이터 저장 레벨을 제어하는 방법
CN101889425B (zh) 2007-12-14 2013-10-30 汤姆逊许可公司 通过可变带宽信道进行同播的设备和方法
CN101889409A (zh) 2007-12-18 2010-11-17 汤姆逊许可公司 基于广播网络的文件大小估计设备和方法
CN107707490B (zh) * 2017-09-26 2021-06-29 郑州云海信息技术有限公司 一种带宽控制方法、装置及可读存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6003071A (en) * 1994-01-21 1999-12-14 Sony Corporation Image data transmission apparatus using time slots
US5757415A (en) * 1994-05-26 1998-05-26 Sony Corporation On-demand data transmission by dividing input data into blocks and each block into sub-blocks such that the sub-blocks are re-arranged for storage to data storage means
US5930493A (en) * 1995-06-07 1999-07-27 International Business Machines Corporation Multimedia server system and method for communicating multimedia information
US5850218A (en) * 1997-02-19 1998-12-15 Time Warner Entertainment Company L.P. Inter-active program guide with default selection control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0239744A1 *

Also Published As

Publication number Publication date
CN1393108A (zh) 2003-01-22
WO2002039744A1 (fr) 2002-05-16
HK1053402A1 (en) 2003-10-17
CA2428829A1 (fr) 2002-05-16
JP2004514336A (ja) 2004-05-13
HK1053402B (zh) 2006-01-13
CN1203675C (zh) 2005-05-25
AU2001271600A1 (en) 2002-05-21

Similar Documents

Publication Publication Date Title
US6557030B1 (en) Systems and methods for providing video-on-demand services for broadcasting systems
US20020175998A1 (en) Data-on-demand digital broadcast system utilizing prefetch data transmission
US20020026501A1 (en) Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
US20020170059A1 (en) Universal STB architectures and control methods
US20020026646A1 (en) Universal STB architectures and control methods
US20020138845A1 (en) Methods and systems for transmitting delayed access client generic data-on demand services
AU2001266681A1 (en) Methods for providing video-on-demand services for broadcasting systems
EP1340376A1 (fr) Matrices de distribution de diffusion de donnees sur demande a largeur de bande constante et a temps morts reduits
WO2002087246A1 (fr) Systeme de diffusion numerique de donnees sur demande mettant en oeuvre une transmission de donnees preanalysee
WO2002086673A2 (fr) Procedes et systemes d'emission de services sur demande de donnees generiques client a acces differe
EP1402331A2 (fr) Procedes et systemes d'emission de services sur demande de donnees generiques client a acces differe
TWI223563B (en) Methods and systems for transmitting delayed access client generic data-on-demand services
KR20030051800A (ko) 감소된 공전 시간과 감소된 대역폭의 주문형 데이터 방송전달 매트릭스
KR20040063795A (ko) 지연된 억세스 클라이언트 데이터 및 요청의 전송
AU2001253797A1 (en) Universal digital broadcast system and methods

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030602

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PREDIWAVE CORP.

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20051231