EP1451709A1 - Systeme de serveur a large bande interactif - Google Patents

Systeme de serveur a large bande interactif

Info

Publication number
EP1451709A1
Authority
EP
European Patent Office
Prior art keywords
title
processors
data
server system
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02794098A
Other languages
German (de)
English (en)
Other versions
EP1451709A4 (fr)
Inventor
Steven W. Rose
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
INTERACTIVE CONTENT ENGINES LLC
Original Assignee
INTERACTIVE CONTENT ENGINES LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INTERACTIVE CONTENT ENGINES LLC filed Critical INTERACTIVE CONTENT ENGINES LLC
Publication of EP1451709A1
Publication of EP1451709A4

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
                    • H04L 65/1066 Session management
                        • H04L 65/1101 Session protocols
                    • H04L 65/60 Network streaming of media packets
                        • H04L 65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
                            • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
                • H04L 67/00 Network arrangements or protocols for supporting network services or applications
                    • H04L 67/01 Protocols
                        • H04L 67/10 Protocols in which an application is distributed across nodes in the network
                            • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
                                • H04L 67/1004 Server selection for load balancing
                                    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
                                    • H04L 67/1019 Random or heuristic server selection
                                • H04L 67/10015 Access to distributed or replicated servers, e.g. using brokers
                • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
                    • H04L 9/40 Network security protocols
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N 21/21 Server components or server architectures
                            • H04N 21/218 Source of audio or video content, e.g. local disk arrays
                                • H04N 21/21815 Source of audio or video content, e.g. local disk arrays, comprising local storage units
                                    • H04N 21/2182 Source of audio or video content comprising local storage units involving memory arrays, e.g. RAID disk arrays
                                    • H04N 21/21825 Source of audio or video content comprising local storage units involving removable storage units, e.g. tertiary storage such as magnetic tapes or optical disks
                        • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
                            • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
                                • H04N 21/2312 Data placement on disk arrays
                                    • H04N 21/2318 Data placement on disk arrays using striping
                            • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
                                • H04N 21/2405 Monitoring of the internal components or processes of the server, e.g. server load
                • H04N 7/00 Television systems
                    • H04N 7/16 Analogue secrecy systems; Analogue subscription systems
                        • H04N 7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
                            • H04N 7/17309 Transmission or handling of upstream communications
                                • H04N 7/17318 Direct or substantially direct transmission and handling of requests

Definitions

  • The present invention relates to interactive broadband server systems, and more particularly, to an interactive broadband server system that is capable of delivering many different kinds of data and services including a significantly high number of simultaneous isochronous data streams, such as could be used to deliver video on demand (VOD) services.
  • An Interactive broadband server (IBS) system is a device that delivers many different kinds of data and provides many simultaneous services. Such services may include video streams from pre-recorded video (ranging from clips to spots to movies), video streams from near-real-time events, two-way voice data, data transport downloads, database interactions, support for credit card transactions, interactive simulations and games, delivery of multimedia content, and any other services known or to be determined. It is desired that the IBS system provide thousands of simultaneous isochronous data streams, where "isochronous" refers to data streams that are time-sensitive and must be delivered continuously without interruption since the streams would otherwise become incoherent.
  • Examples of isochronous data streams include real-time video and audio which are transmitted as soon as they are received, such as a live television feed, training videos, movies, and individually requested advertising.
  • the IBS system must also accurately track, account for, store and bill for all services while providing for network management.
  • Other terms used to describe a fundamentally similar device include Video Server, Media Server, Interactive Broadband Server, Interactive Content Engine, Bandwidth Multiplier, and Metropolitan Media Server.
  • Attempts and proposals have been made to implement IBS systems, ranging from embodiments that feature one or more high-capacity central servers to those that employ distributed processing systems. The challenge is to provide an IBS solution that is capable of delivering high quality services to thousands of users while maintaining a cost-effective and practical design.
  • An interactive broadband server system includes a plurality of processors, a backbone switch, a plurality of storage devices and a plurality of user processes.
  • the backbone switch enables high speed communication between the processors.
  • the storage devices are coupled to and distributed across the processors to store at least one title, where each title is divided into data chunks that are distributed across two or more of the storage devices.
  • the user processes are configured for execution on the processors for interfacing a plurality of subscriber locations. Each user process is operative to retrieve a requested title from two or more of the processors via the backbone switch and to assemble a requested title for delivery to a requesting subscriber location.
  • The storage devices are organized into a plurality of RAID groups, where data chunks of each stored title are distributed across the RAID groups. In one configuration, for example, each data chunk is divided into a plurality of sub-chunks that are distributed across one of the RAID groups.
  • RAID group retrieval and assembly functionality may be distributed among the user processes.
  • an interactive broadband server system comprises a backbone switch including a plurality of bi-directional ports, a disk array, a plurality of processors, and a plurality of processes.
  • the disk array comprises a plurality of disk drives and stores a plurality of titles which are sub-divided into a plurality of data chunks. The data chunks are distributed across the disk array.
  • Each processor has a plurality of interfaces, including a first interface coupled to a port of the backbone switch, a second interface coupled to at least one disk drive of the drive array, and a third interface for coupling to a network for interfacing a plurality of subscriber devices.
  • the processes enable each processor to retrieve data chunks of a requested title from two or more of the processors, to assemble the requested title, and to transmit the requested title via the third interface.
  • a plurality of titles each comprise isochronous data content simultaneously delivered to a corresponding plurality of subscriber devices via corresponding third interfaces of the processors.
  • the system may include a plurality of media readers, each coupled to a corresponding one of the processors, and a library storage system.
  • the processors include an additional interface for coupling to a media reader.
  • the library storage system includes a plurality of storage media that collectively store a plurality of titles and is coupled to a port of the backbone switch.
  • the library storage system is configured to receive a title request via the backbone switch and to load a corresponding storage media on any available one of the plurality of media readers.
  • the plurality of processes may include at least one loading process configured to retrieve a title from a media reader, to divide the title into data chunks, and to distribute the data chunks across the disk array via the processors.
  • the loading process may create a title map that locates each data chunk of a title.
  • the plurality of processes may further include at least one user process executed on a processor that retrieves and uses the title map to retrieve each data chunk of the title.
  • the titles may be preprocessed and stored in a predetermined format to reduce loading and processing overhead. Examples of preprocessing include pre-encryption, pre-calculated redundancy information, pre-stored transport protocol, pointers to specific locations within stored title content for a variety of reasons, etc.
  • the pointers for example, may comprise time stamps.
  • An interactive content engine includes a backbone switch including a plurality of ports, processors, media readers, a library storage system, storage devices, and at least one process executed on the processors.
  • the at least one process collectively submits a title request, retrieves a requested title from an available media reader, stores the requested title on the storage devices, and delivers the requested title to one of the plurality of processors.
  • FIG. 1 is a simplified block diagram of a communication system including an interactive broadband server (IBS) system configured according to an embodiment of the present invention.
  • FIG. 2A is a block diagram of an exemplary embodiment of the IBS system of FIG. 1.
  • FIG. 2B is a block diagram of a portion of another exemplary embodiment of the IBS system of FIG. 1 employing optical disk drives distributed among the processors of FIG. 2 A.
  • FIG. 3 is a block diagram of an exemplary embodiment of each of the processors of FIG. 2.
  • FIG. 4 is a block diagram illustrating an exemplary RAID disk organization in accordance with an embodiment of the present invention.
  • FIG. 5A is a block diagram illustrating user title request (UTR), loading, storage and retrieval of a title initiated by a user process (UP) executing on a selected one of the processors of FIG. 2.
  • FIG. 5B is a block diagram illustrating an exemplary distributed loading process for accessing and storing requested titles.
  • FIG. 5C is a block diagram illustrating an exemplary coordinated loading process for accessing and storing requested titles using the distributed optical disk drives of FIG. 2.
  • FIG. 6 is a more detailed block diagram illustrating request and retrieval of a title by a user process on one processor and operation of a directory process executed on another processor of the processors of FIG. 2.
  • FIG. 7 is a more detailed block diagram of an exemplary title map employed by various processes, including user processes, directory processes, loading processes, etc.
  • FIG. 8 is a block diagram illustrating a caching and Least-Recently-Used (LRU) strategy used by the IBS system of FIG. 1 for storage and retrieval of titles and data.
  • FIG. 9 is a block diagram illustrating shadow processing according to an embodiment of the present invention.
  • An Interactive Broadband Server System overcomes the most objectionable characteristics of traditional servers by avoiding the necessity of redundant storage of content and the concomitant title by title proactive management of storage. Pre-knowledge of which titles will be most popular is not required, and a single copy of a title may serve any number of simultaneous users, each at a slightly different (and individually controlled) point in time in the title, up to the maximum stream output capacity of the server.
  • each chunk represents a fixed amount of storage or a fixed amount of time.
  • Each user process accessing a title is given the location of each chunk of the title, and is responsible for reassembling them into an independent and (in most cases) isochronous output stream.
  • Each chunk is stored on a redundant array of independent disks (RAID), such as in five "sub-chunks", one sub-chunk per drive. These five sub-chunks contain 20% redundant information, allowing the reconstruction of any missing sub-chunk in the case of a drive or processor failure.
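  • As a rough illustration of this scheme, the following Python sketch splits a chunk into four data sub-chunks plus one parity sub-chunk and rebuilds a missing one; the function names are invented and simple XOR parity merely stands in for whatever ECC the server actually uses.

```python
# A minimal sketch (assumed names, simple XOR parity) of the sub-chunk scheme:
# split a chunk into four data sub-chunks plus one parity sub-chunk (20%
# redundancy), then rebuild any one lost sub-chunk from the surviving four.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_chunk(chunk: bytes, data_drives: int = 4):
    """Pad and split a chunk across data_drives sub-chunks, appending XOR parity."""
    size = -(-len(chunk) // data_drives)            # ceiling division
    chunk = chunk.ljust(size * data_drives, b"\0")  # pad to an even multiple
    subs = [chunk[i * size:(i + 1) * size] for i in range(data_drives)]
    return subs + [reduce(xor_bytes, subs)]         # one sub-chunk per drive

def rebuild(sub_chunks, missing_index: int) -> bytes:
    """Recreate the sub-chunk of a failed drive by XOR-ing the surviving ones."""
    survivors = [s for i, s in enumerate(sub_chunks) if i != missing_index]
    return reduce(xor_bytes, survivors)

subs = split_chunk(b"about one second of video data ..." * 100)
assert len(subs) == 5                  # five sub-chunks, one per drive in the RAID
assert rebuild(subs, 2) == subs[2]     # any single missing sub-chunk is recoverable
assert rebuild(subs, 4) == subs[4]     # the parity sub-chunk itself is also recoverable
```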
  • This approach allows all output streams to be generated from one title, or each output stream to be generated from a different title, and results in a remarkably evenly distributed load on all processors and a backbone (or backplane) switch.
  • Management is automatic and prior knowledge of content popularity is not required.
  • Each sub-chunk is similarly cached in the memory of the processor on which it is stored, until it becomes the least recently used sub-chunk and is deleted to make room for a currently requested one.
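  • The least-recently-used behavior described here could be sketched as follows; the class, method names and key format are assumptions for illustration only.

```python
from collections import OrderedDict

class SubChunkCache:
    """LRU cache of sub-chunks held in a processor's memory (illustrative sketch)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()          # key -> sub-chunk bytes, oldest first

    def get(self, key):
        """Return a cached sub-chunk and mark it most recently used, or None."""
        if key not in self._items:
            return None
        self._items.move_to_end(key)
        return self._items[key]

    def put(self, key, sub_chunk: bytes):
        """Cache a sub-chunk, evicting the least recently used one when full."""
        self._items[key] = sub_chunk
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # drop the least recently used entry

cache = SubChunkCache(capacity=2)
cache.put(("titleA", 1, 3), b"...")          # hypothetical (title, chunk, sub-chunk) key
cache.put(("titleA", 2, 3), b"...")
cache.get(("titleA", 1, 3))                  # touching an entry keeps it resident
cache.put(("titleB", 7, 1), b"...")          # evicts ("titleA", 2, 3)
```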
  • Allocation of server resources is automatic and based on instantaneous demand.
  • FIG. 1 is a simplified block diagram of a communication system 100 including an interactive broadband server (IBS) system 109 configured according to an embodiment of the present invention.
  • the IBS system 109 is located at a convenient and exemplary point of distribution 101 and is coupled via a network communication system 107 and an exemplary distribution network 103 for distributing data and services to one or more subscriber locations 105.
  • the IBS system 109 incorporates a library storage system 201 (FIG. 2) incorporating stored (and typically encoded, encrypted and/or compressed) content for delivery to the subscriber locations 105.
  • Bi-directional communication is supported in which subscriber information from any one or more of the subscriber locations 105 is forwarded upstream to the point of distribution 101.
  • the network communication system 107 may be any type of data network which supports the transmission of isochronous data, such as an Asynchronous Transfer Mode (ATM) or Ethernet network, or a wireless, Digital Subscriber Line (DSL) or hybrid fiber coax (HFC) network which modulates data for downstream transport to the subscriber locations 105 and tunes to upstream channels for receiving and demodulating subscriber information.
  • The computer networks 111 may include any type of local area network (LAN), wide area network (WAN), or the like.
  • the other content systems 113 may include the public switched telephone network (PSTN) and/or may be employed for reception and delivery of any type of information, such as television broadcast content or the like.
  • the computer networks 111 and/or the other content systems 113 may be local to the point of distribution 101 (e.g. operating as a headend or the like) or may be located upstream and delivered via appropriate data transport mechanisms, such as fiber optic links or the like.
  • the computer networks 111 and/or the other content systems 113 may each be coupled directly to the network communication system 107 or coupled via the IBS system 109 for delivery to the subscriber locations 105.
  • the point of distribution 101 may include appropriate equipment for data transmission, such as, for example, internal servers, firewalls, Internet Protocol (IP) routers, signal combiners, channel re-mappers, etc.
  • the distribution network 103 is configured according to an HFC network in which the subscriber media includes coaxial cables that are distributed from local nodes (e.g., optical nodes or the like which provide conversion between optical and electrical formats) to the respective subscriber locations 105.
  • source information is distributed from a headend to each of several distribution hubs, which further distributes source information to one or more optical nodes or the like, which in turn distributes the source information to one or more subscriber locations 105 via corresponding subscriber media links, such as coaxial cables.
  • the point of distribution 101 may represent any one of the headend, the distribution hubs or the optical nodes.
  • Each point of distribution supports a successively smaller geographic area.
  • a headend may support a relatively large geographic area, such as an entire metropolitan area or the like, which is further divided into smaller areas, each supported by a distribution hub.
  • the area supported by each distribution hub is further divided into smaller areas, such as neighborhoods within the metropolitan area, each supported by a corresponding optical node.
  • Optical links may be employed, such as, for example, SONET (Synchronous Optical Network) rings or the like. It is understood that any known or future developed media is contemplated for each communication link in the network.
  • each optical node receives an optical signal from an upstream point of distribution, converts the optical signal to a combined electrical signal and distributes the combined electrical signal over a coaxial cable to each of several subscriber locations 105 of a corresponding geographic serving area. Subscriber information is forwarded in electrical format (e.g., radio frequency (RF) signals) and combined at each optical node, which forwards a combined optical signal upstream to a corresponding distribution hub.
  • Each subscriber location 105 includes customer premises equipment (CPE).
  • the CPE at each subscriber location 105 may include a modulating device or the like that encodes, modulates and up converts subscriber information into RF signals or the like.
  • the upstream RF signals from each of the subscriber locations 105 are transmitted on a suitable subscriber medium (or media) to a corresponding node, which converts the subscriber signals to an optical signal.
  • a laser may be used to convert the return signal to an optical signal and send the optical return signal to an optical receiver at a distribution hub over another fiber optic cable.
  • broadband network environments are contemplated, such as any of the broadband network technologies developed by the cable and telephone industries.
  • An example is Asymmetrical Digital Subscriber Line (ADSL) technology that trades reduced upstream bandwidth for greater downstream bandwidth.
  • The telephone industry Fiber-to-the-Curb (FTTC) architecture is contemplated, as well as various wireless infrastructures including multi-channel, multipoint distribution service (MMDS) or local multipoint distribution service (LMDS) using a cellular approach.
  • the source and subscriber information may include any combination of video, audio or other data signals and the like, which may be in any of many different formats.
  • the source information may originate as fixed- or variable-size frames, packets or cells, such as Internet protocol (IP) packets, Ethernet frames, ATM cells, etc., as provided to the distribution hubs.
  • Digital video compression techniques are contemplated, such as discrete cosine transform (DCT) and the family of standards developed by the Moving Pictures Experts Group (MPEG), such as MPEG-1, MPEG-2, MPEG-4, etc.
  • MPEG-2 supports a wide variety of audio/video formats, including legacy TV, High Definition TV (HDTV) and five-channel surround sound.
  • MPEG-2 provides broadcast-quality resolution that is used in DVD (Digital Versatile Disc or Digital Video Disc) movies, and requires from 4 to 20 megabits per second (Mbps) of bandwidth depending upon the desired quality of service (QoS).
  • the transmitted data and information may include one or more destination addresses or the like indicating any one or more specific subscriber devices at the subscriber locations 105.
  • the CPE at each subscriber location 105 includes the appropriate communication equipment to receive and demodulate received information, and decode address information to deliver the original content intended for the subscriber. Upstream subscriber information may be handled in a similar manner.
  • FIG. 2A is a block diagram of an exemplary embodiment of the IBS system 109.
  • The IBS system 109 includes the library storage system 201 coupled to a backbone or backplane switch 203, which is further coupled to each of a series of processors 205, individually labeled P1, P2, ..., Pn, where "n" is a positive integer.
  • Each processor 205 includes one or more hard disk drives configured as a disk array 207, and each processor 205 is further coupled to a corresponding one of a series of modulators/demodulators (MOD/DEMOD) 209, individually labeled MD1, MD2, ..., MDn.
  • An optional management processor 210 may also be coupled to the backbone switch 203 for executing an Operations Support System (OSS) 211 and a Business Support System (BSS) 213.
  • The disk drives of the disk array 207 are individually labeled PaDb, where "a" refers to the processor number and "b" refers to a disk number, which varies from 1 to "x". The number "x" is a positive integer denoting the number of disk drives per processor for the disk array 207. In one exemplary configuration described below, n = 100 processors and x = 8 disk drives per processor.
  • the disk array 207 is further configured into multiple RAIDs for distributing groups or chunks of data among multiple processors 205 and multiple disk drives.
  • The MOD/DEMODs 209 may be incorporated within the network communication system 107.
  • the library storage system 201 may be configured in any one of a variety of ways and its particular configuration and operation is beyond the scope of the present disclosure.
  • Each processor 205 is configured to submit a request for a "title" (e.g. video, movie, etc.) or a request for other content to the library storage system 201 via the backbone switch 203, and the library storage system 201 responds by forwarding the requested data or by accessing media incorporating the requested title, such as by loading a corresponding optical disk (e.g., DVD) or tape cartridge or the like. In one embodiment, the loaded data and information is forwarded to the requesting processor 205 via the backbone switch 203.
  • the library storage system 201 loads optical disks onto any selected or available one of a plurality of optical disk drives distributed among the processors 205.
  • the format and rate of the data provided depends upon the specific library and data storage configuration.
  • the data rate for video applications may range from 1 Mbps (for VHS video quality) to about 10 Mbps (for DVD quality) or more.
  • the particular format of the data may also vary depending upon the type of data or application. Audio/video data in MPEG-2 format is contemplated for movies and the like, delivered as groups of pictures (GOP) in the form of I, P and B MPEG frames delivered in bit-stream format.
  • the library storage system 201 includes a stack or library of DVD disks and/or tape cartridges that may be further configured in a robotic-based disk access system.
  • The library storage system 201 should include at least an equivalent number of titles as the video rental business and be adaptable to add new titles as they become available. In a Television On Demand application, the number of titles may be in the hundreds of thousands. Many titles will be infrequently requested and thus are stored in the automated library storage system 201, which is configured to deliver any title in an expedient manner upon request.
  • Robotic storage libraries have been developed for the computer industry, ranging in size from jukebox systems that hold a few hundred optical disks to room-sized robots that hold thousands of tape cartridges and optical disks. These libraries have traditionally been characterized as off-line due to overall operating speed. In contrast, in at least one exemplary configuration, the library storage system 201 is designed to offer no more than a 30 second latency from request to delivery, in part by incorporating media readers and read/write devices distributed among the processors 205. Mechanical components of the library storage system 201 are configured for redundant access to all discs, so that only one copy of a title is required in the library storage system 201, and the most likely mechanical failures do not block access to any title.
  • The IBS system 109 may be configured as a high stream capacity centralized server, or as a centralized library with distributed caching in smaller local areas, based on the economics of each system in which it is deployed.
  • the backbone switch 203 includes a plurality of ports, each for interfacing one of the processors 205, one for the management processor 210, if provided, and one or more for interfacing the library storage system 201. Additional or spare ports may be used for various purposes, such as, for example, one or more standby computers for replacing any of the processors 205 in the event of failure, malfunction, maintenance, upgrades, etc.
  • The backbone switch 203 is configured according to the Ethernet standard and includes a sufficient number of ports for coupling the processors 205, 210 and the library storage system 201. Off-the-shelf products are available, such as the chassis-based "BigIron" family of products manufactured by Foundry Networks.
  • One BigIron product includes at least 110 ports, where each bidirectional port is capable of a 1 Gbps data rate in each direction for 2 Gbps full duplex operation.
  • Each processor 205 may receive up to 1 Gbps of data from other processors or storage units for reassembly into output streams for users connected to that processor, or for storage on the local disk drives.
  • Each processor 205 is connected to one port of the backbone switch 203, so that each of the 100 processors P1-P100 may simultaneously receive up to 1 Gbps of data.
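  • Taking the figures above at face value (1 Gbps per port, 100 processors, MPEG-2 streams of 4 to 20 Mbps), a back-of-the-envelope stream-capacity calculation, ignoring protocol overhead, is sketched below.

```python
# Back-of-the-envelope stream capacity per processor port and for the whole array.
port_rate_bps = 1_000_000_000        # 1 Gbps per backbone port, per direction
processors = 100                     # exemplary n = 100 configuration

for stream_mbps in (4, 10, 20):      # MPEG-2 rates cited in the text
    per_port = port_rate_bps // (stream_mbps * 1_000_000)
    print(f"{stream_mbps} Mbps streams: ~{per_port} per port, "
          f"~{per_port * processors} system-wide (ignoring overhead)")
# 4 Mbps  -> ~250 per port, ~25000 system-wide
# 20 Mbps -> ~50 per port,  ~5000 system-wide
```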
  • the library storage system 201 is connected to multiple ports of the backbone switch 203, so that each processor 205 receives data from the library storage system 201 and from other processors.
  • data from requested titles from the library storage system 201 is forwarded from the library storage system 201 by the backbone switch 203.
  • Alternatively, the library storage system 201 is connected to one port primarily for receiving title requests. In this configuration, data that is specifically requested by (user processes on) the target processor is received from optical drives or the like connected to the other processors.
  • the data may include data that is to be stored on drives connected to the target processor coming from loading processes running on other processors.
  • The particular size and total data output capacity of the IBS system 109, as reflected in the number of processor/storage units (processors P1-P100) and the size of the backbone switch, may be scaled based on the number of subscriber locations 105 supported and the expected level of content demanded over time, including peak demand periods.
  • The 1-Gbps port rate is exemplary only and other data rates are contemplated, such as 2.5, 4, 10, 40 or more Gbps ports.
  • the OSS 211 executes network monitoring and management software and processes standard management information.
  • The OSS 211 enables an operator to flag error conditions and to access and control operation of the IBS system 109 to resolve problems. In one configuration, the OSS 211 is remotely accessible. If and when an error occurs, the remotely accessible OSS 211 enables remote diagnosis and solution. Such remote access avoids the necessity for an operator to go to the physical premises of the point of distribution 101, which is otherwise inconvenient, time-consuming and potentially very costly in terms of subscriber satisfaction or service interruption.
  • Remote access is enabled by a connection to an external network, such as a global network or the like (e.g., the Internet). If provided, the management processor 210 is coupled to an external computer network 111 and accessible via remote control software to enable remote operation of the management system.
  • the BSS 213 includes a control system 215 and a billing system 217.
  • the control system 215 manages content and directs normal operation of the IBS system 109 and directs the operation of the processors 205 and the backbone switch 203.
  • Billing information is sent from the control system 215 to the billing system 217.
  • Each of the processors 205 includes a software agent or application that monitors and tracks billing information for each subscriber location 105 based on a predetermined billing arrangement.
  • the BSS 213 collects the billing information and enables complex forms of billing as desired. For example, a flat fee with fixed monthly additions for additional services plus individual billing for separate events is contemplated.
  • a telephone billing arrangement may include a flat monthly charge plus billing on the basis of utilization on an incremental basis (e.g., minute-by-minute or second-by-second) plus billing on behalf of secondary companies (e.g., long distance providers). Telemarketing services require immediate credit verification and real-time interaction with financial and fulfillment (inventory and shipping) systems.
  • the BSS 213 enables monitoring and tracking of sales of associated businesses to enable billing on a percentage of revenue basis for virtual shopping centers or the like.
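  • A hedged sketch of the kind of composite bill described in the preceding paragraphs (flat monthly fee, incremental usage, per-event charges and a revenue-share percentage); every rate and field name here is hypothetical.

```python
# Illustrative only: a composite monthly bill combining the billing forms
# mentioned above. Every rate and field name here is hypothetical.

def monthly_bill(flat_fee, usage_minutes, per_minute_rate, event_charges,
                 partner_sales=0.0, revenue_share=0.0):
    """Flat fee + metered usage + individually billed events + revenue share."""
    usage = usage_minutes * per_minute_rate
    events = sum(event_charges)
    share = partner_sales * revenue_share
    return round(flat_fee + usage + events + share, 2)

total = monthly_bill(flat_fee=29.95, usage_minutes=340, per_minute_rate=0.02,
                     event_charges=[3.99, 4.99],   # e.g. two individually billed titles
                     partner_sales=120.00, revenue_share=0.05)
print(total)   # 51.73
```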
  • The OSS and BSS functionality may be provided by the IBS system 109, either by adding dedicated processors (e.g., the management processor 210), or by running the processes in a distributed manner on the existing processors 205.
  • the IBS is designed to be very reliable, taking full advantage of its implicit redundancy, and has excess processing capacity, so it is appropriate for running processes that demand high reliability.
  • Many of the titles stored in and sourced from the library storage system may be proprietary or otherwise include copyrighted information. In one embodiment, much of the title data may be pre-encrypted and remain encrypted while being processed through the IBS system 109 all the way to the subscriber locations 105.
  • CPE at each subscriber location 105 includes the appropriate decryption functions for decrypting the data for display or performance.
  • Such a configuration involves encryption on a title-by-title basis. It may be required that the title data be encrypted on a stream-by-stream basis so that each independent stream of data to each subscriber location 105 is separately encrypted even for the same title. For example, a given title distributed to one subscriber location 105 is separately encrypted with respect to the same title distributed in encrypted form to a different subscriber location 105 (or even to the same subscriber at a subsequent time).
  • the MOD/DEMOD 209 shown as MDn 227 illustrates an embodiment that enables stream-by-stream encryption.
  • each title is still encrypted while being processed through the IBS system 109 from the library storage system 201 to the MOD/DEMODs 209.
  • each MOD/DEMOD 209 includes a decryption function 229 for decrypting each title within each MOD/DEMOD 209.
  • The data is then delivered to an encryption function 231 for re-encrypting the data for delivery to the corresponding subscriber location 105. It is noted that even if similar encryption techniques are employed, separate and unique encryption keys may be employed so that the data is uniquely encrypted for each separate stream.
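  • The decrypt-then-re-encrypt step performed in the MOD/DEMOD might be sketched as follows, using the Python cryptography package's Fernet construction purely as a stand-in; the actual ciphers and key handling are not specified in the text.

```python
# Sketch (stand-in cipher): title content arrives encrypted under a per-title key
# and is re-encrypted under a unique per-stream key before delivery to the CPE.
from cryptography.fernet import Fernet

title_key = Fernet.generate_key()            # key the title was pre-encrypted with
stored_chunk = Fernet(title_key).encrypt(b"pre-encrypted title data ...")

def reencrypt_for_stream(encrypted_chunk: bytes, title_key: bytes, stream_key: bytes) -> bytes:
    """Decryption function 229 / encryption function 231, reduced to a sketch."""
    clear = Fernet(title_key).decrypt(encrypted_chunk)    # decrypt inside the MOD/DEMOD
    return Fernet(stream_key).encrypt(clear)              # unique ciphertext per stream

stream_key_a = Fernet.generate_key()         # one key per subscriber stream
stream_key_b = Fernet.generate_key()
out_a = reencrypt_for_stream(stored_chunk, title_key, stream_key_a)
out_b = reencrypt_for_stream(stored_chunk, title_key, stream_key_b)
assert out_a != out_b                        # same title, differently encrypted streams
```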
  • FIG. 2B is a block diagram of a portion of another exemplary embodiment of the IBS system 109 employing optical disk drives 219 distributed among the processors 205.
  • each of the optical disk drives 219 such as a DVD drive or any other type of media reader, is connected to a corresponding one of the processors 205.
  • the optical disk drives 219 are also physically located adjacent the library storage system 201 for access by an optical disk loading system 221.
  • The library storage system 201 also includes an optical disk library 223 accessible to the optical disk loading system 221, where the optical disk library 223 stores a plurality of titles and any other content for distribution.
  • the optical disk loading system 221 may comprise a robotic-based loading system or the like that includes an internal processor or control circuitry (not shown) coupled to the backbone switch 203 via at least one communication link 225 for interfacing any of the processors 205.
  • Any processor 205 submits a request for a title to the optical disk loading system 221, which retrieves a corresponding one or more disks from the optical disk library 223 and loads the retrieved disks on any selected or available ones of the optical disk drives 219. It is appreciated that the distributed nature of the IBS system 109 enables any of the processors 205 to access data from a disk loaded onto any of the optical disk drives 219.
  • Such distributed configuration allows the optical disk loading system 221 to load disks according to any sequential or random selection process and avoids loading or distribution latency.
  • the relatively large size of the storage of the disk array 207 results in a significant chance that a requested title is stored in the disk array 207, so that there is a relaxed bandwidth requirement between the library storage system 201 and the disk array 207.
  • the distributed optical disk drive embodiment has sufficient bandwidth to handle title requests.
  • Titles stored in the library storage system 201 may be stored in a proprietary format that may include several enhancements for fast loading with low processing overhead.
  • The content may be pre-encrypted, with RAID redundancy pre-calculated and stored, with transport protocol already applied to the resulting streams, and with pointers to specific locations within the content (for example, time stamps, transport headers) that may require further processing, or that may be required for further processing (e.g. groups of pictures (MPEG-2 GOPs) for fast forward, rewind, and splicing one stream to the next).
  • a recording and processing system 233 may be provided for converting data in standard or any other available formats (e.g., MPEG, DVD, etc.) into the desired proprietary format described above for storage on optical media in the optical disk library 223.
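  • What a preprocessed chunk record in such a proprietary format might carry (pre-computed redundancy, transport framing already applied, and time-stamp pointers into the content) is sketched below; the field names are assumptions, since the format itself is not defined here.

```python
# Hypothetical record layout for a preprocessed chunk in the library format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PreprocessedChunk:
    chunk_index: int                 # position of the chunk within the title
    timestamp_ms: int                # presentation time-stamp pointer for FF/RW/splicing
    gop_offsets: List[int]           # byte offsets of MPEG-2 GOP boundaries in payload
    payload: bytes                   # pre-encrypted content with transport protocol applied
    parity: bytes                    # pre-calculated RAID redundancy for the chunk

@dataclass
class PreprocessedTitle:
    title_id: str
    chunks: List[PreprocessedChunk] = field(default_factory=list)

    def seek(self, time_ms: int) -> Optional[PreprocessedChunk]:
        """Use the stored time-stamp pointers to jump without re-parsing the stream."""
        return max((c for c in self.chunks if c.timestamp_ms <= time_ms),
                   key=lambda c: c.timestamp_ms, default=None)
```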
  • FIG. 3 is a block diagram of an exemplary embodiment of each of the processors 205.
  • The exemplary configuration of the IBS system 109 shown in FIG. 2 illustrates a massively interconnected processor array (MIPA) configuration in which each of the processors 205 is configured in a substantially similar manner. In this manner, instead of a single or a small number of complex or high cost server systems, a large number of relatively simple, low-end and low-cost computer systems may be employed.
  • Each processor 205 may be implemented using relatively standard desktop or server personal computer (PC) system components or the like.
  • Each processor 205 includes a relatively standard bus structure or system 301 coupled to one or more central processing units (CPUs) 303, a memory system 305 including any combination of random access memory (RAM) and read-only memory (ROM) devices, an optional video interface 307 for interfacing a display, and one or more Input/Output (I/O) interfaces for interfacing corresponding I/O devices, such as a keyboard and mouse or the like.
  • the bus system 301 may include multiple buses, such as at least one host bus, one or more peripheral buses, one or more expansion buses, etc., each supported by bridge circuits or controllers as known to those skilled in the art.
  • the bus system 301 is also coupled to an Integrated Drive Electronics (IDE) controller 311 for coupling to typically two IDE disk drives 313, such as Disk 1 and Disk 2 (PaDl, PaD2) of the disk array 207 for a given processor Pa.
  • the bus system 301 includes or otherwise interfaces one or more Peripheral Component Interconnect (PCI) buses, such as a 32 bit, 33 megahertz (MHz) PCI bus 315.
  • PCI bus 315 is coupled to three PCI disk drive controllers 317 (shown as PCI 1, PCI 2 and PCI 3), each for interfacing at least two PCI disk drives 319.
  • the PCI disk drives 319 are shown implementing Disks 3-8 (PaD3 - PaD8) of the processor Pa.
  • the bus system 301 is also coupled to a high speed disk controller 329, such as a Small Computer System Interface (SCSI) adapter, a Firewire controller, a Universal Serial Bus version 2.0 (USB 2) controller, etc., for interfacing a corresponding one or more of the distributed optical disk drives 219.
  • the bus system 301 is also coupled to another PCI bus 321, which is a 64 bit, 66 MHz PCI bus in the embodiment shown.
  • The PCI bus 321 interfaces at least two 1-Gbps Ethernet network interface cards (NICs) 323 and 325, for interfacing the backplane switch 203 and a corresponding one of the MOD/DEMODs 209, respectively.
  • a 64 bit, 66 MHz PCI bus is capable of a raw data throughput of over 4 Gbps, so that it is sufficient for handling the full duplex data throughput of the two 2-Gbps full duplex NICs 323, 325.
  • A software application block ("APPS") 327 represents one or more application programs or the like loaded into the memory 305 and executed by the CPU 303 for performing the functions and processes of the processor 205 as described further below.
  • a user process may be executed for managing each user supported by the particular processor 205.
  • other programs are included for detecting title requests from subscriber locations 105 via the NIC 325, forwarding each title request to the library storage system 201 via the NIC 323, receiving partial title data for processing and storage in the disk drives 313, 319 via the NIC 323, retrieval of title data from the disk drives 313, 319 into memory 305, processing of title data, and delivery of data to a requesting subscriber location 105 via the NIC 325.
  • many other processing and functions may be defined for implementing the IBS system 109, such as billing applications, management applications, error detecting and correcting code (ECC) for RAID data storage, etc.
  • Each processor 205 may be configured with any suitable proprietary or public domain operating system (OS), such as a selected OS from among the Microsoft Windows family of operating systems or suitable versions and configurations of Linux. In one embodiment, a combination of the Linux OS along with Real Time (RT) Linux is contemplated.
  • Real Time Operating Systems (RTOS), Real Time Application Interface (RTAI) and Real Time Network (RT Net) are contemplated for handling real-time or isochronous operations directly and via networks for enabling real-time response.
  • Various protocols and interfaces may be employed, such as Lightweight Directory Access Protocol (LDAP) for file structuring, Real Time Transport (RTS), Real Time Streaming Protocol (RTSP), Message Passing Interface (MPI), etc.
  • a cluster configuration with a Cluster Message Passing (MP) layer is contemplated for executing billing, user interface and management operations.
  • FIG. 4 is a block diagram illustrating an exemplary organization of the disk array 207 into a RAID disk organization 401 in accordance with one embodiment of the present invention.
  • The first disk D1 of the first processor P1 (or disk drive P1D1) is numbered as the first disk drive 1,
  • the first disk D1 of the second processor P2 (or disk drive P2D1) is numbered as the second disk drive 2, and so on, so that the first disk drive of each of the processors P1-P100 forms the disk drives 1-100.
  • The next disk drive 101 is the second disk drive of the first processor P1,
  • the next disk drive 102 is the second disk drive of the second processor P2, and so on.
  • The 8 disk drives of the first processor P1 are numbered 1, 101, 201, 301, 401, 501, 601 and 701, respectively.
  • The 8 disk drives of the second processor P2 are numbered 2, 102, 202, 302, 402, 502, 602 and 702, respectively, and so on.
  • the disk drives 1-800 are organized into RAIDs of 5 disk drives each for a total of 160 RAIDs, where the first RAID 1 is formed by disk drives 1-5, the second RAID 2 is formed by disk drives 6-10 and so on until the last RAID 160 is formed by the disk drives 796-800.
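  • The drive-numbering arithmetic above, for the exemplary configuration of 100 processors with 8 drives each grouped into 160 five-drive RAIDs, can be reproduced with the short sketch below (function names are illustrative).

```python
# Reproduces the exemplary numbering: 100 processors (a = 1..100), 8 drives each
# (b = 1..8), global drive numbers 1..800, grouped into 160 five-drive RAIDs.
PROCESSORS = 100
DRIVES_PER_PROCESSOR = 8
RAID_WIDTH = 5

def drive_number(a: int, b: int) -> int:
    """Global number of drive PaDb (P1D1 -> 1, P2D1 -> 2, P1D2 -> 101, ...)."""
    return (b - 1) * PROCESSORS + a

def raid_group(drive: int) -> int:
    """Five-drive RAID group containing a global drive number (R1 holds drives 1-5)."""
    return (drive - 1) // RAID_WIDTH + 1

assert [drive_number(1, b) for b in range(1, 9)] == [1, 101, 201, 301, 401, 501, 601, 701]
assert raid_group(1) == 1 and raid_group(5) == 1        # drives 1-5 form RAID 1
assert raid_group(796) == 160 and raid_group(800) == 160
# Note that each RAID group spans five different processors, so no processor
# contributes more than one drive to a given group.
```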
  • Each RAID may be managed or controlled by a RAID controller, which is frequently implemented by a separate processor or a software process or any other known configuration. In the embodiment shown and described herein, however, each RAID group exists only conceptually so that there is no associated RAID controller. Instead, RAID control functionality and operation is distributed among the processors 205.
  • For example, content is written into a RAID group by a loading process, such as the loading process (LP) 509 (FIG. 5A), and different loading processes may be storing content into the same RAID group simultaneously.
  • A user process, such as the user process (UP) 503, which is reading a title requests sub-chunks as listed in the directory entry for that title, and then performs any necessary calculations to recreate a missing sub-chunk. If the management system is notified that a failed drive has been replaced, it launches a rebuild process on a lightly loaded or spare processor, which does not have to be directly controlling any of the affected drives.
  • Each RAID is illustrated as Rj, where "j" is a RAID index from 1 to 160. In this manner, there are 160 RAID groups labeled R1-R160, each controlled by multiple software processes which may be executing throughout the array of processors 1-100.
  • Each RAID group may comprise any combination of hardware, logic and processing, where processing may be incorporated in other processes described further below, such as user processes, retrieval processes, loading processes, etc. Because of this, it would be possible to organize RAID groups on single drive boundaries rather than five drive boundaries, but the exemplary embodiments illustrated herein are shown with five drive boundaries for conceptual simplicity.
  • each RAID may include any number of disk drives (less than or greater than 5) and each RAID may be controlled by a processor associated with any of the processors and not necessarily only one processor such as the processor associated with the first disk drive. It is nonetheless desired that each RAID group include only one disk drive of a given processor to maximize data distribution among processors.
  • the RAID configurations enable data streams to be subdivided into data chunks that are further subdivided and distributed or striped among the disk drives of the RAIDs. For example, a data chunk is processed to include redundant information (using ECC or the like), and the resulting processed data chunk is subdivided into five sub-chunks and distributed to each of the five disk drives in a RAID.
  • FIG. 5A is a block diagram illustrating user title request (UTR), loading, storage and retrieval of a title initiated by a user process (UP) 503 executing on a selected one of the processors 205, shown as processor Pa 501.
  • Each processor 205 executes a separate user process for each of the downstream subscriber locations 105 (users) that it supports.
  • the UP 503 illustrates exemplary operation of user processes for retrieving and sending a title to a subscriber location 105 in response to a user title request (UTR) for that title from that subscriber location 105.
  • the UP 503 forwards the UTR to a directory process (DP) 505 executed on a processor Pd 502, which represents any other of the processors 205 or the management processor 210.
  • The DP 505 first determines if the title is already stored in the disk array 207 by consulting a Master Directory (MD) 601 (FIG. 6). If the title is found not to be loaded in the disk array 207, the DP 505 allocates memory (or determines where the next disk space is available) and creates a Title Map (TM) 507 that identifies the location of each successive "chunk" of the title in the disk array 207. As described further below, the title data is divided into data chunks, which are further divided into data sub-chunks, which are distributed among the RAIDs formed by the disk array 207. The TM 507 is a data map that identifies the location of each chunk (and thus each sub-chunk) of the title.
  • If the title is already loaded, the TM 507 already exists for that title in the MD 601 and the DP 505 copies the TM 507 to the UP 503 via the backbone switch 203, where the UP 503 stores it as a local copy shown as TM 507'.
  • the UP 503 may optionally initialize the TM 507' as further described below to incorporate any parameters or variables associated with the particular user or subscriber location 105. If the title was found not to be loaded in the MD 601 and thus not stored in the disk array 207, then the DP 505 invokes a loading process (LP) 509 for accessing the title from the library storage system 201.
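  • The directory lookup just described might be modeled as in the following sketch, where the Master Directory maps a title to a Title Map listing the RAID group holding each chunk; the structure shown is an assumption consistent with, but not taken from, FIG. 7.

```python
# Hypothetical in-memory form of the Master Directory (MD 601) and a Title Map
# (TM 507): each entry locates one chunk of the title in a RAID group.
from typing import Dict, List, NamedTuple, Optional

class ChunkLocation(NamedTuple):
    raid_group: int          # which five-drive RAID group stores the chunk
    chunk_index: int         # position of the chunk within the title

TitleMap = List[ChunkLocation]
master_directory: Dict[str, TitleMap] = {}

def lookup_title(title_id: str) -> Optional[TitleMap]:
    """Directory process: return a copy of the title map, or None if not loaded."""
    tm = master_directory.get(title_id)
    return list(tm) if tm is not None else None

def register_title(title_id: str, locations: TitleMap) -> None:
    """Loading process: record where each chunk of a newly loaded title was stored."""
    master_directory[title_id] = locations

register_title("movie-1234", [ChunkLocation(raid_group=17, chunk_index=0),
                              ChunkLocation(raid_group=93, chunk_index=1)])
print(lookup_title("movie-1234"))      # copied to the requesting user process
print(lookup_title("not-loaded"))      # None -> directory invokes a loading process
```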
  • the LP 509 sends a request for the title via the backbone switch 203, and the library storage system 201 retrieves a source media 511 of the data, such as a DVD, tape cartridge, etc., and loads the selected media onto an appropriate reader (e.g. tape drive, DVD player, etc.).
  • the reader forwards the data for access by the LP 509 on the processor Pa 501 either via another of the processors 205 or via the backbone switch 203 depending upon the library configuration.
  • the data may be transferred in many different manners, such as a bit-stream of data or the like, and may come from distant as well as local library resources.
  • The LP 509 organizes the data into a successive array of data chunks C1, C2, C3, ..., Ci, as shown at 513.
  • Each data chunk corresponds to a selected parameter associated with the data, such as a predetermined timing interval or data size.
  • Each data chunk Ci corresponds to approximately one (1) second of video data.
  • Video data to be played at 4 Mbps may be divided into approximately 500 kilobyte (KB) chunks corresponding to one second of data.
  • the LP 509 is responsible for determining the appropriate divisions between chunks of data.
  • For MPEG-2 video, the data is organized into I, P and B frames that may further be provided in decoding order rather than presentation order.
  • In this case, the LP 509 determines groups of pictures (GOPs) and determines the appropriate divisions between GOPs to correspond with the selected size or timing interval, such as every 30 displayable frames or the like per second.
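  • The division of an already-parsed MPEG-2 stream into roughly one-second chunks on GOP boundaries might look like the sketch below; the function name and the representation of GOPs as (bytes, duration) pairs are assumptions.

```python
# Sketch: group already-parsed GOPs into chunks of roughly the target duration
# (about one second each), always splitting on a GOP boundary.

def divide_into_chunks(gops, target_ms: int = 1000):
    """gops: list of (gop_bytes, duration_ms) pairs. Returns a list of chunks (bytes)."""
    chunks, current, elapsed = [], bytearray(), 0
    for gop_bytes, duration_ms in gops:
        current += gop_bytes
        elapsed += duration_ms
        if elapsed >= target_ms:                 # close the chunk at this GOP boundary
            chunks.append(bytes(current))
            current, elapsed = bytearray(), 0
    if current:                                  # flush the final partial chunk
        chunks.append(bytes(current))
    return chunks

# 30-frame GOPs at ~33 ms per frame give ~1 second per GOP, i.e. one GOP per chunk.
gops = [(b"\x00" * 500_000, 1001)] * 5
print(len(divide_into_chunks(gops)))             # 5 chunks of roughly one second each
```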
  • the LP 509 may further perform additional processing on content of the chunks of data depending upon the particular configuration.
  • Processing may include, for example, insertion of bidirectional linked-lists or tags and timing information into the content for purposes of fast forward (FF), rewind (RW), and jump functionality.
  • Such functionality may be desired for implementing personal video recorder (PVR) capabilities for the user, so that the user may process a title in a similar manner as a VCR, such as being able to rewind, fast-forward, pause, record, etc. the content while viewing or for delayed viewing.
  • The data chunks C1-Ci are processed into data chunks C1'-Ci' denoting altered data depending upon the desired processing performed.
  • The data chunks C1'-Ci' may further be processed for RAID purposes, such as the insertion of ECC data or the like. As shown, for example, the data chunks C1' and C2' are processed into data chunks C1'ECC(1-5) and C2'ECC(1-5), respectively, where each data chunk includes consecutive sub-chunks indexed 1-5 denoting RAID data to be distributed among the five disks of a corresponding RAID.
  • The LP 509 consults the TM 507 to determine where each chunk of data is to be stored. The precise location in the disk array 207 need not necessarily be specified. In one embodiment, the LP 509 forwards the first chunk C1'ECC(1-5) to a processor 205 whose RAID group is selected according to a random algorithm.
  • The random algorithm may be pseudo-random, for example, to ensure that data is evenly distributed, in which the next RAID selected is one in which a data chunk of the title has not yet been stored, until all RAIDs are used, and the process is repeated.
  • Alternatively, maintaining a predetermined or sequential RAID order may provide predictability advantages or retrieval efficiencies.
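  • The pseudo-random placement policy described above, in which RAID groups are drawn without repetition until all have been used and the cycle then restarts, can be sketched as follows (illustrative names only).

```python
import random

def raid_selection(num_raids: int, seed: int = 0):
    """Yield RAID group numbers in random order, reshuffling after each full pass."""
    rng = random.Random(seed)
    while True:
        order = list(range(1, num_raids + 1))
        rng.shuffle(order)                    # a fresh permutation per pass
        for raid in order:                    # no repeats until every RAID is used
            yield raid

selector = raid_selection(num_raids=160)
placements = [next(selector) for _ in range(320)]         # place 320 chunks
assert sorted(placements[:160]) == list(range(1, 161))    # each RAID used exactly once
assert sorted(placements[160:]) == list(range(1, 161))    # then the cycle repeats
```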
  • The data chunk C1'ECC(1-5) is forwarded by the LP 509 to processor Pb 515, which distributes the sub-chunks 1-5 of the data chunk C1'ECC(1-5) among the disk drives of its RAID 519.
  • The five sub-chunks C1'ECC(1), C1'ECC(2), C1'ECC(3), C1'ECC(4) and C1'ECC(5) are distributed among the five disk drives of the RAID 519 as shown.
  • It is noted that the number of sub-chunks generated by the LP 509 equals the number of disk drives of the RAIDs, and that any suitable number of disk drives per RAID may be used.
  • The ECC process ensures that the data of any one disk drive may be reconstructed from the data of the other disk drives in the RAID. For example, a sub-chunk C1'ECC(3) may be reconstructed from sub-chunks C1'ECC(1,2,4,5).
  • The data chunk C2'ECC(1-5) is forwarded by the LP 509 to the processor Pc 521, which distributes the sub-chunks of the data chunk C2'ECC(1-5) among the disk drives of its RAID 525 in a similar manner as previously described for the RAID 519. This process is repeated for all of the data chunks Ci'ECC(1-5) of the title.
  • The UP 503 is informed or otherwise determines when data is available for retrieval and for forwarding to the requesting subscriber location 105 in response to the UTR. For example, the UP 503 may monitor the TM 507' or may be informed by the DP 505 that data is available, or may simply submit requests and wait for data to be delivered. In one embodiment, the UP 503 waits until the entire title is accessed, processed and stored in the RAIDs.
  • the UP 503 begins retrieving and forwarding data as soon as a predetermined amount of the title is stored or as soon as requested data is delivered. In any event, the UP 503 consults the TM 507' and requests each data chunk from the indicated processors 205 by submitting successive data requests (DRQx) and receiving corresponding data responses (DRSx), where "x" is an integer denoting a data chunk index (e.g., DRQ1, DRQ2, etc.).
  • each processor 205 executes a local Retrieval Process (RP) that interfaces the corresponding RAID for retrieving requested data chunks.
  • the RP receives a data request DRQx, retrieves the requested data from the local RAID, and forwards a corresponding DRSx.
  • the processor Pb 515 executes an RP 527 that receives a data request DRQ1, retrieves the data chunk C1'ECC(1-5) from the RAID 519, and responds with a corresponding data response DRS1.
  • the processor Pc 521 executes an RP 529 that receives a data request DRQ2, retrieves the data chunk C2'ECC(1-5) from the RAID 525, and responds with a corresponding data response DRS2.
  • the UP 503 receives the data responses and forwards the data chunks to the requesting subscriber location 105.
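A minimal sketch of the DRQx/DRSx exchange described above, assuming hypothetical helper objects: the user process walks its copy of the title map, asks the retrieval process on the owning processor for each chunk, and forwards each response to the subscriber. None of the names below come from the patent.

```python
def stream_title(title_map, retrieval_procs, send_to_subscriber):
    """Walk a title map in chunk order, request each chunk from the processor
    that owns it (its local Retrieval Process), and forward the response
    downstream.  `title_map` is assumed to be an ordered list of
    (chunk_index, processor_id) entries."""
    for chunk_index, processor_id in title_map:
        drq = {"type": "DRQ", "chunk": chunk_index}        # data request
        drs = retrieval_procs[processor_id].handle(drq)    # data response
        send_to_subscriber(drs["payload"])

class FakeRetrievalProcess:
    """Stand-in for the per-processor RP that reads from its local RAID."""
    def __init__(self, store):
        self.store = store
    def handle(self, drq):
        return {"type": "DRS", "chunk": drq["chunk"],
                "payload": self.store[drq["chunk"]]}

rps = {"Pb": FakeRetrievalProcess({1: b"C1'"}), "Pc": FakeRetrievalProcess({2: b"C2'"})}
stream_title([(1, "Pb"), (2, "Pc")], rps, send_to_subscriber=print)
```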
  • the UP 503 may perform further data processing depending upon the type of data being retrieved.
  • the UP 503 includes an MPEG decoder 613 (FIG. 6) that reorganizes MPEG-2 data from decode order into presentation order for consumption by the CPE at the subscriber location 105.
  • the access, storage and retrieval process outlined and described above is particularly suitable for isochronous data, such as audio/video data to be "performed" or otherwise consumed by a television or telephone or computer, etc., at the subscriber location 105 in real time.
  • the output data is not limited to isochronous data, but may also be bursty, asynchronous data such as data retrieved for display by a browser executed on a computer. Also, if the CPE at the subscriber location 105 has local storage, the process may operate in a similar manner except that the transmission may be asynchronous rather than isochronous and may occur at a data rate different from the presentation rate of the data.
  • FIG. 5B is a block diagram illustrating an exemplary distributed loading process for accessing and storing requested titles. In this embodiment, if the DP 505 finds in the MD 601 that the title is not loaded, it invokes a master loading process (MLP) 531.
  • the MLP 531 sends a request to the library storage system 201 and consults the TM 507 in a similar manner as previously described for determining where the data chunks are to be stored. Instead of retrieving the data to the processor Pd 502, the MLP 531 invokes local loading processes (LLPs) which communicate with the processors 205 that control the disk drives constituting each of the RAID groups in which the corresponding data sub-chunks are to be stored.
  • an LLP 533 is invoked on the processor Pb 515 and the data chunk C1 is forwarded directly from the source media 511 to the LLP 533.
  • an LLP 535 is invoked on the processor Pc 521 and the data chunk C2 is forwarded directly from the source media 511 to the LLP 535.
  • subsequent processing may be performed by the respective LLPs 533, 535 and the data stored in the corresponding RAIDs 519, 525 as directed by the MLP 531.
  • the respective data chunks are forwarded directly to, and processed directly by, the distributed processors, requiring less bandwidth of the backplane switch 203. It is noted that even though the data may be provided directly to a DVD player connected to a processor 205, the backplane switch 203 is still employed to transfer data to the appropriate ones of the processors 205 executing the LLPs.
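The distributed loading idea can be sketched as follows: the master loading process only decides where each chunk goes and directs the corresponding local loading process to pull the chunk, so the chunk payload never transits the MLP's own processor. All identifiers here are hypothetical stand-ins, not names from the patent.

```python
from collections import defaultdict

def master_loading_process(chunks, raid_selector, invoke_llp):
    """Direct each chunk of a newly requested title to the processor owning
    the selected RAID group; the LLP on that processor performs ECC and RAID
    writes locally.  `invoke_llp(processor, chunk)` stands in for the real
    dispatch call."""
    title_map = []
    per_processor = defaultdict(list)
    for index, chunk in enumerate(chunks, start=1):
        raid, processor = raid_selector()           # e.g. pseudo-random choice
        invoke_llp(processor, chunk)                # chunk goes straight to that LLP
        title_map.append((index, processor, raid))
        per_processor[processor].append(index)
    return title_map, per_processor

tm, load_plan = master_loading_process(
    chunks=[b"C1", b"C2", b"C3"],
    raid_selector=iter([("RAID519", "Pb"), ("RAID525", "Pc"), ("RAID519", "Pb")]).__next__,
    invoke_llp=lambda proc, chunk: print(f"load {chunk!r} via LLP on {proc}"),
)
print(tm)
```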
  • FIG. 5C is a block diagram illustrating an exemplary coordinated loading process for accessing and storing requested titles using the distributed optical disk drives 219.
  • the DP 505 invokes the MLP 531, which submits a request to the library storage system 201 in a similar manner as previously described.
  • the library storage system 201 selects a random or available one of the distributed optical disk drives 219, such as one associated with a processor Pe 537 as shown. In this manner, the source media 511 is local to the processor Pe 537.
  • the MLP 531 identifies the processor Pe 537 (such as being informed by the library storage system 201) and invokes an LLP 539 on the processor Pe 537 for loading and storing the title in a similar manner as previously described, such as distributing the data chunks to processors Pb 515, Pc 521, etc.
  • the RAID and ECC processing may optionally and conveniently be performed on the processor Pe 537 since all the data passes through that processor. In this manner, bandwidth usage on the backplane switch 203 is reduced.
  • any given process such as the loading process 509 or 531, the retrieval processes 529, etc., may be executing on any one or more of the processors 205.
  • although FIGs 5A-5C illustrate the loading and retrieval processes operating with entire chunks at a time through respective processors for clarity of illustration, it is understood that the data sub-chunks of each chunk are distributed within a RAID group, which spans several processors. Thus, any given processor may read or write only one sub-chunk at a time for a given storage or retrieval process.
  • for example, although FIG. 5A illustrates the chunk C1'(1-5) handled by the processor Pb 515, each individual data sub-chunk may be written or read by a separate processor associated with a corresponding disk drive of the given RAID group.
  • although each processor 205 may control data read from or written to its connected disk drives, RAID functionality may be handled by separate processes executed on other processors (e.g., RAID functionality may be virtual and implicit in system functionality).
  • FIG. 6 is a more detailed block diagram illustrating request and retrieval of a title by a user process on the processor Pa 501 and operation of the DP 505 executed on a processor Pd 502. Similar blocks or processes may assume identical reference numbers. In a similar manner as previously described, the user process (UP) 503 receives and forwards the UTR to the DP 505 executed on the processor Pd 502.
  • the DP 505 includes the MD 601, which further includes a title list 603 which lists all of the titles available in the library storage system 201 and the corresponding location(s) of each title. All titles remain stored and available in the library storage system 201, and may further be copied in the disk array 207.
  • the MD 601 also includes a storage file 604, which maps all of the storage in the disk array 207 including empty space and the location of every title. Titles that have been previously requested are retrieved and stored in the disk array 207 using any configuration of the loading process(es) previously described, and a title map is created by the DP 505 and stored within the MD 601.
  • the title maps currently stored in the MD 601 are shown as TMs 605, each having a respective title, shown as Title 1, Title 2, Title 3, etc.
  • An entry is made in the title list 603 associated with each title stored in the disk array 207 and having a TM 605 in the MD 601 for reference by the DP 505.
  • for each title stored in the disk array 207, a TM 605 exists in the MD 601 and is further reflected in the title list 603.
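A toy model of the directory process state described above (title list, storage file and per-title title maps); the field names, types and helper method are assumptions made for illustration, since the patent only names the structures themselves.

```python
from dataclasses import dataclass, field

@dataclass
class MasterDirectory:
    """Minimal model of what the directory process keeps: which titles exist,
    where each loaded title's chunks live, and how storage is being used."""
    title_list: dict = field(default_factory=dict)    # title -> {"library": True, "loaded": bool}
    storage_file: dict = field(default_factory=dict)  # storage slot -> title (or None if empty)
    title_maps: dict = field(default_factory=dict)    # title -> list of (chunk, processor, raid)

    def register_loaded_title(self, title, title_map, slots):
        self.title_maps[title] = title_map
        for slot in slots:
            self.storage_file[slot] = title
        self.title_list[title] = {"library": True, "loaded": True}

md = MasterDirectory()
md.register_loaded_title("Title 4",
                         [(1, "Pb", "RAID519"), (2, "Pc", "RAID525")],
                         slots=[0, 1])
print(md.title_list["Title 4"])        # {'library': True, 'loaded': True}
```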
  • the DP 505 is shown as a central process located on any one of the processors 205 or the management processor 210. In an alternative embodiment, the DP 505 may be a distributed process executed across the processors 205. It is desired, however, that the MD 601 be centrally located to maintain consistency and coherency of the titles and other data of the library storage system 201.
  • the titles are stored in the disk array 207 based on a least-recently-used (LRU) policy. In this manner, once an existing title is stored in the disk array 207 and referenced in the MD 601, it remains there until overwritten by a more recently requested title, when the storage space is needed for the new title and when the existing title is the oldest title referenced in the MD 601.
  • the MD 601 tracks the relative age of each title stored in the disk array 207 in a history file 607, where the age of a title is reset when that title is requested again.
  • any title newly loaded from the library storage system 201 or requested again from the MD 601 becomes the most recent title, and an age parameter is stored for each title in the MD 601 co ⁇ esponding to when it was last requested.
  • while empty storage space remains available, the empty storage is used and loaded titles remain stored in the disk array 207.
  • when storage space is needed for a new title, the DP 505 allocates space by overwriting one or more of the oldest titles referenced in the MD 601 to store the new title.
  • when a title is overwritten in the disk array 207, its associated TM 605 is removed from the MD 601 and the local reference is removed from the title list 603, so that the title list 603 indicates that the title is only located in the library storage system 201. If the overwritten and erased title is requested again, it is newly loaded from the library storage system 201.
  • the DP 505 consults the storage file 604 and the history file 607 to allocate storage space for the new title, and then creates a new TM within the MD 601.
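The LRU allocation policy described above might be sketched as follows, assuming titles occupy abstract, equal-sized "slots"; the class and its bookkeeping are illustrative only, but the final two calls reproduce the FIG. 8 example discussed later, in which F evicts A and G evicts D.

```python
import itertools

class LruTitleStore:
    """Disk-array allocation policy: use empty space first, then overwrite
    the least-recently-requested title."""
    def __init__(self, capacity_slots):
        self.capacity = capacity_slots
        self.slots_used = {}                       # title -> slots occupied
        self.last_requested = {}                   # title -> monotonically increasing tick
        self._tick = itertools.count()

    def request(self, title, size=1):
        if title in self.slots_used:               # already cached: just refresh its age
            self.last_requested[title] = next(self._tick)
            return []
        evicted = []
        while sum(self.slots_used.values()) + size > self.capacity:
            oldest = min(self.last_requested, key=self.last_requested.get)
            evicted.append(oldest)                 # its TM would be removed from the MD
            del self.slots_used[oldest], self.last_requested[oldest]
        self.slots_used[title] = size              # load from the library storage system
        self.last_requested[title] = next(self._tick)
        return evicted

store = LruTitleStore(capacity_slots=5)
for t in "ABCDE":
    store.request(t)
store.request("B"); store.request("C")             # B and C requested again
print(store.request("F"), store.request("G"))      # evicts ['A'] then ['D']
```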
  • if a requested title is already stored in the disk array 207, a TM (title map) already exists in the MD 601.
  • the UTR is forwarded to the DP 505, which locates or otherwise creates the TM 507 (shown as "Title 4").
  • the DP 505 then forwards a copy of the TM 507 to the requesting processor Pa 501, which stores a local copy as the TM 507' as previously described.
  • the UP 503 initializes header information of the TM 507' according to its own operating parameters and user contract.
  • the UP 503 then consults the TM 507' for the location of the data chunks and cooperates with the local Retrieval Processes (RPs) of the other processors 205 managing the data, representatively shown as 609, via the backbone switch 203 to retrieve the stored data chunks. In the embodiment shown, the UP 503 includes an ECC decoder 611 or the like for converting the ECC encoded format to the actual data without the incorporated redundant information. Recall that the data chunks are stored as sub-chunks on the disk drives of a RAID, so that the ECC decoder 611 is used to reconstruct the original data by removing the redundant data. It is noted that the multiple disk drives of any given RAID may have variable speeds or response times.
  • the local RPs 609 do not necessarily return the data as a single chunk but instead as a series of sub-chunks according to the response times of the individual disk drives of the particular RAID. Also, a given disk drive of a given RAID may be faulty or non-responsive for whatever reason. In either event, the UP 503 employing the ECC decoder 611 is able to decode the correct data using less than all of the defined sub-chunks.
  • the ECC decoder 611 is capable of reconstructing the original data using any 4 of the 5 sub-chunks of data. In one embodiment, the ECC decoder 611 automatically regenerates the original data rather than waiting for the last sub-chunk to arrive to speed up the overall process. In the embodiment shown, the UP 503 further includes an MPEG decoder 613 or the like for reconstructing the data into presentation order if desired. For example, if MPEG data is accessed and stored in decoding order, the MPEG decoder 613 reconstructs the data into presentation order and then sends the data to the requesting subscriber location 105.
  • the UP 503 operates in isochronous mode in one embodiment in which the data stream to the subscriber location 105 is maintained at the appropriate rate to ensure proper operation of the CPE at the subscriber location 105.
  • the UP 503 may operate in asynchronous mode and deliver each data chunk in sufficient time for proper presentation by the CPE at the subscriber location 105.
  • the UP 503 is further configured to detect additional commands from the subscriber location 105, such as rewind (RW), pause or fast forward (FF) commands, and if supported by the processor Pa 501 and if allowed according to the corresponding user agreement, alters the data stream accordingly. For example, the UP 503 may interrupt the current data stream and move backwards or forwards in the TM 507 by accessing previous or subsequent data in response to a RW command or a FF command, respectively.
  • a set of local agents or processes 615 is shown executing on the processor Pa 501; similar agents or processes are provided on each of the processors 205.
  • each of the processors 205 executes one or more local agents that interface corresponding management, OSS, BSS, control and billing functions and processes executed on the management processor 210.
  • the management processor 210 is not provided and the management, OSS, BSS, control and billing functions and processes are distributed among one or more of the processors 205.
  • the agents or processes 615 may include, for example, a local business process for tracking user activity associated with each of the subscriber locations 105 supported by the processor Pa 501. The local business process tracks the user activity of all users supported by the processor Pa 501 for purposes of billing or the like.
  • the local business agent interfaces with the BSS 213 for sending user information including billing information or any other desired information.
  • a local business agent may perform the functions of the software agent or application that monitors and tracks billing information for each subscriber location 105 based on a predetermined billing arrangement as previously described.
  • the billing information is sent to the control system 215 for forwarding to the billing system 217 of the BSS 213.
  • FIG. 7 is a more detailed block diagram of an exemplary title map TM that may be used as the TM 507 previously described.
  • the TM may include a title field 701 for storing the title and a Contract Parameters and Constraints field 703 for storing any information associated with the particular user and/or any applicable contract associated with that user or subscriber location 105. Some examples are shown, such as a View Time Period value 705, a Pause Timing value 707, a FF Count value 709 and a RW Count value 711.
  • the View Time Period value 705 may be included to represent a maximum amount of time that the user has to actively view the title after being requested.
  • the Pause Timing value 707 may be included to represent a maximum amount of time that the title will be available without further payment if the user interrupts their viewing.
  • the FF and RW Count values 709, 711 may be included to count or otherwise limit the number of times that the user is able to perform a fast-forward or rewind function, respectively. For example, a "no rewind" clause or a "rewind no more than three times" clause may be included in a studio contract, in which case the RW Count value 711 is used to prevent rewind or to limit the number of rewinds to three, respectively.
  • a Current Data Pointer 713 may be included to store an address or pointer to mark the location between data sent and data to be sent, so that the UP 503 is able to keep track of data progress.
  • a Data Section 715 is provided to store a list of data chunk addresses or pointers, such as a consecutive or linked list of data chunk fields 717 including information such as data chunk number, size, location (loc), etc.
  • the UP 503 initializes the TM according to the applicable user contract in place, such as the particular View and Pause timing parameters or FF/RW Counts as appropriate.
  • the Current Data Pointer 713 is reset to zero or otherwise to point to the first data chunk to initialize the viewing process at the beginning. Resetting the Current Data Pointer 713 might be necessary if the TM is copied from another processor 205 for a title already stored.
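A sketch of a title map record along the lines of FIG. 7, with the contract fields and Current Data Pointer discussed above; the default values, units and the rewind-limit helper are assumptions for illustration, not values from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TitleMap:
    """Illustrative title-map layout: contract constraints, progress pointer,
    and the list of data chunk descriptors."""
    title: str
    view_time_period_s: int = 48 * 3600       # how long the title stays viewable
    pause_timing_s: int = 2 * 3600            # how long a pause may last
    ff_count: int = 3                         # fast-forwards still allowed
    rw_count: int = 3                         # rewinds still allowed
    current_data_pointer: int = 0             # boundary between sent / unsent data
    data_chunks: List[Tuple[int, int, str]] = field(default_factory=list)  # (number, size, location)

    def allow_rewind(self):
        """Apply a 'rewind no more than N times' contract clause."""
        if self.rw_count <= 0:
            return False
        self.rw_count -= 1
        return True

tm = TitleMap("Title 4", data_chunks=[(1, 2 ** 20, "Pb/RAID519"), (2, 2 ** 20, "Pc/RAID525")])
print(tm.allow_rewind(), tm.rw_count)    # True 2
```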
  • FIG. 8 is a block diagram illustrating a caching and Least-Recently-Used (LRU) strategy according to an embodiment of the present invention.
  • the LRU strategy is employed in the disk array 207 forming the RAIDs, in that empty storage space is used first and, when there is no more empty space, the oldest data is overwritten first.
  • the caching strategy ensures that data is pulled from the fastest memory or storage location in which it is stored at any given time.
  • the caching strategy is hierarchical in that the disk array 207 serves as a data "cache" for the library storage system 201, and the respective memories 305 of the processors 205, shown collectively as a memory 823, serve as the data cache for the disk array 207.
  • a stack of title requests 801 is a simplified representation of UTRs initiated by users for titles, individually labeled as "A”, "B", "C”, etc.
  • a first block 803 represents the first five UTRs for titles A, B, C, D and E, respectively.
  • a storage representation 815 is a simplified representation of the collective storage capacity of the disk array 207, shown as capable of storing only up to five equal-sized titles. Of course, in an actual configuration, the storage capacity of the disk array 207 is substantially greater and the titles are of variable size.
  • the first five titles A-E are shown as stored in consecutive locations as shown by the storage representation 815. Again, this representation is simplified in that each title is actually subdivided and distributed among the RAIDs.
  • the disk array 207 is initially empty, so that each title A-E must be retrieved from the library storage system 201, resulting in the greatest data retrieval latency within the IBS system 109. It is also noted that each of the first set of titles retrieved is stored in empty data locations rather than overwriting existing titles, so that each title remains in the disk array 207 as long as possible according to the LRU strategy.
  • the next title requested, "B", shown at block 805, is already stored as shown by the storage representation 815 and thus may be retrieved from the RAIDs of the disk array 207 rather than the library storage system 201.
  • the "master" copy of the TM in the MD 601 is copied to any of the processors 205 requesting the same title.
  • the original TMs for titles B and C are copied and reused so that the titles B and C remain in the disk array 207 longer than if only requested once.
  • An additional partial data strategy may optionally be employed in that even partially stored titles in the disk array 207 are re-used rather than retrieving the entire title from the library storage system 201. For example, in this partial data configuration, even if the titles B or C were partially overwritten in the storage representation 815, the existing portion may be used while the erased portion is retrieved from the library storage system 201. Such a partial data strategy would require additional data tracking and management functions.
  • a new title F is next requested as shown at 809 at a time when empty storage space in the disk array 207 is no longer available.
  • the oldest data in the disk array 207 is the title A, so that the new title F replaces the old title A in accordance with the LRU strategy as shown by a new storage representation 817.
  • the titles B, C, D and E remain stored in the disk array 207 along with the new title F.
  • a new title G is next requested as shown at 811, which again must replace the oldest data in the disk array 207, which is the title D as shown by a new storage representation 819 including titles F, B, C, G and E.
  • the title A is requested again as shown at 813.
  • the LRU strategy keeps data in the disk array 207 as long as possible before allowing the data to be overwritten by new titles.
  • the CPUs 827 execute the processes 829 that process the title data including the user processes (UP) and local RP processes used to retrieve data from the disk a ⁇ ay 207.
  • the CPUs 827 generally operate using the memory 823, so that data retrieved from the disk array 207 is first stored within the local memory 823 prior to forwarding to the backbone switch 203 and/or to a subscriber location 105.
  • the CPUs 827 may include L2 caches 825 that enable faster and more efficient data retrieval.
  • the CPUs 827 automatically retrieve data requested by the processes 829 first from the L2 cache 825 if there, then from the memory 823 if there, and finally from the disk array 207. If the data is not already stored in the disk array 207, the DP 505 causes the title to be loaded from the library storage system 201 as previously described.
  • the memory 823 and the L2 caches 825 serve as cache layers above the disk array 207.
  • the memory 823 and the L2 caches 825 generally operate according to the LRU strategy, so that data remains in these memories as long as possible before being overwritten by new data. According to the LRU strategy, when the RAM cache area is full and new data is read, the oldest segment of data in the RAM is overwritten by the new content, which reuses that area of memory in a similar manner as the LRU strategy employed within the disk array 207.
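The hierarchical caching described above (L2 cache, processor memory, disk array, then library) can be illustrated with a simple tiered lookup; the dict-based tiers and promotion-on-read behavior are simplifying assumptions, and LRU eviction within each tier is omitted.

```python
def fetch(chunk_id, l2_cache, ram_cache, disk_array, load_from_library):
    """Look for a chunk in the fastest store that has it, falling back one
    level at a time, and promote what was found into the faster levels.
    The four stores are plain dicts here; the real tiers are the CPU L2
    cache, processor RAM, the RAID disk array and the library system."""
    for tier in (l2_cache, ram_cache, disk_array):
        if chunk_id in tier:
            data = tier[chunk_id]
            break
    else:
        data = load_from_library(chunk_id)    # slowest path: robotic library
        disk_array[chunk_id] = data
    ram_cache[chunk_id] = data                # promote (eviction not shown)
    l2_cache[chunk_id] = data
    return data

l2, ram, disks = {}, {}, {"C1": b"chunk-1"}
print(fetch("C1", l2, ram, disks, load_from_library=lambda c: b"from-library"))
print(fetch("C2", l2, ram, disks, load_from_library=lambda c: b"from-library"))
```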
  • FIG. 9 is a block diagram illustrating shadow processing according to an embodiment of the present invention.
  • a first processor Pa 901 executes a first user process UP1 903 which uses a first title map TM1 905.
  • a second processor Pb 907 executes a shadow user process for UP1, shown as UP1 Shadow 909, which uses a shadow title map TM1 Shadow 911.
  • as long as UP1 903 is operating normally, UP1 Shadow 909 merely mimics or mirrors UP1 903 and performs relatively minimal processing. In particular, UP1 Shadow 909 simply tracks the progress of UP1 903, such as shadowing the current data location within the TM1 905 by the TM1 Shadow 911, among other shadow functions. In this manner, the amount of overhead processing performed by the processor Pb 907 for implementing the shadow processing UP1 Shadow 909 and TM1 Shadow 911 is minimal.
  • the second processor Pb 907 executes a second user process UP2 913 which uses a second title map TM2 915.
  • the first processor Pa 901 executes a shadow user process for UP2, shown as UP2 Shadow 917, which uses a shadow title map TM2 Shadow 919. Again, as long as UP2 913 is operating normally, UP2 Shadow 917 merely mimics or mirrors or otherwise tracks the progress of UP2 913, among other shadow functions.
  • the processors Pa 901 and Pb 907 track each other with respect to at least one user process.
  • every user process executed on a given processor, such as the processor Pa 901, is shadowed by a corresponding shadow process executing on another one of the processors 205.
  • the processor Pa 901 may execute at least one shadow process for another user process executed on another processor 205, or otherwise may execute up to 250 or more shadow processes assuming each processor 205 generally handles up to 250 user processes.
  • all user processes executed on a given processor, such as the processor Pa 901, are shadowed by one other processor, such as the processor Pb 907, and vice-versa.
  • a heartbeat signal HB1 is provided from the processor Pa 901 to the processor Pb 907.
  • the HB1 signal may be generated by a software process, a hardware process, or a combination of both.
  • the HB1 signal may be directly associated with the user process UP1 903 executing on the processor Pa 901, or may be associated with all user processes executing on the processor Pa 901.
  • the HB1 signal may be hardware related and generated by the processor Pa 901 or its CPU.
  • the HB1 signal is a periodic or continuous signal that generally operates to indicate the status of the originating computer.
  • the processor Pb 907 assumes that the processor Pa 901 is operating normally and continues to shadow its progress. If, however, the HB1 signal indicates failure, such as when the processor Pa 901 fails to assert the HB1 signal for a predetermined period of time, or if/when the HB1 signal is negated or asserted (such as a ground signal or open circuit hardware signal), then the processor Pb 907 assumes failure of the associated processes or of the processor Pa 901, and the processor Pb 907 activates the user process UP1 Shadow 909 to take over for the primary user process UP1 903.
  • since UP1 Shadow 909 keeps track of UP1 903, UP1 Shadow 909 almost immediately takes over exactly where UP1 903 left off in a transparent manner, so that the end user at the corresponding subscriber location 105 does not experience an interruption in service.
  • the timing of the HB1 signal is designed to enable the shadow process UP1 Shadow 909 to assume control of service of the UP1 903 without interruption in service.
  • Another heartbeat signal HB2 asserted by the processor Pb 907 to the processor Pa 901 operates in a similar manner to enable the shadow process UP2 Shadow 917 to take over processing of the primary process UP2 913 in the event of a failure of one or more processes executing on the processor Pb 907 or a failure of the processor Pb 907 itself.
  • One format for the heartbeat signal is a numeric value which indicates the position of the master process in the file being displayed, thus serving as a heartbeat and status indicator.
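A sketch of the heartbeat-driven takeover described above, assuming the numeric-position heartbeat format mentioned in the preceding bullet; the timeout value, method names and polling structure are illustrative assumptions, not details from the patent.

```python
import time

class ShadowProcess:
    """Follow a primary user process via its heartbeat (here, the position of
    the master in the file being streamed) and take over from that position
    if the heartbeat stops for longer than `timeout_s`."""
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_position = 0
        self.last_seen = time.monotonic()
        self.active = False                     # becomes True on takeover

    def on_heartbeat(self, position):
        self.last_position = position           # shadow the current data pointer
        self.last_seen = time.monotonic()

    def poll(self):
        if not self.active and time.monotonic() - self.last_seen > self.timeout_s:
            self.active = True                  # resume streaming from last_position
        return self.active

shadow = ShadowProcess(timeout_s=0.05)
shadow.on_heartbeat(position=1024)
time.sleep(0.1)                                 # primary goes silent
print(shadow.poll(), shadow.last_position)      # True 1024 -> take over here
```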
  • all of the primary user processes executed on the processor Pa 901 are shadowed by corresponding shadow processes on the processor Pb 907 and vice-versa.
  • the processors 205 are all paired so that a first processor shadows a second processor of the pair and vice-versa. In this manner, the shadow processes of a shadowing processor assume primary processing responsibility in the event of failure of any of the processors 205.
  • a corresponding one of a series of similar switches 921 is provided at the outputs of each pair of processors 205 to enable transparent and automatic switching between primary and shadow processes.
  • the output of the processor Pa 901 is coupled to a port 927 and the output of the processor Pb 907 is coupled to a port 929 of the switch 921.
  • a third port 931 of the switch 921 is coupled to a modulator/demodulator MDa 923 for the processor Pa 901 and another port 933 is coupled to a modulator/demodulator MDb 925 for the processor Pb 907.
  • the switch 921 is address-based, such as a 4-port Ethernet switch or the like, where each port 927-933 operates at 1 Gbps or more.
  • the data asserted by the processor Pa 901 is addressed to the MDa 923 so that data entering the port 927 is forwarded by the switch 921 to the port 931.
  • the data asserted by the processor Pb 907 is addressed to the MDb 925 so that data entering the port 929 is forwarded by the switch 921 to the port 933. In the event of a failure of the processor Pb 907, the HB2 signal indicates such failure so that the shadow process UP2 Shadow 917 takes over for the primary process UP2 913. The shadow process UP2 Shadow 917 asserts data to the port 927 of the switch 921, yet addressed to the MDb 925, so that the switch 921 automatically forwards the data to its port 933 rather than port 931.
  • the data asserted by the process UP1 903 continues to be addressed to the MDa 923, so that this data entering port 927 is still forwarded by the switch 921 to the port 931. In this manner, failure of the processor Pb 907 is essentially transparent to the subscriber locations 105 associated with the processes UP1 903 and UP2 913.
  • likewise, in the event of a failure of the processor Pa 901, the shadow process UP1 Shadow 909 takes over for the failed primary process UP1 903 and addresses data to the MDa 923, which is forwarded by the switch 921 from port 929 to port 931. In this manner, failure of the processor Pa 901 is essentially transparent to the subscriber locations 105 associated with the processes UP1 903 and UP2 913.
  • the switch 921 automatically handles upstream subscriber data so that the activated shadow process receives any subscriber data sent from the correct subscriber location 105.
  • if the shadow process UP2 Shadow 917 is activated to take over for the primary process UP2 913, then data sent to the MDb 925 addressed to the processor Pb 907 for the failed primary process UP2 913 is automatically forwarded by the switch 921 from port 933 to port 927 and received by the shadow process UP2 Shadow 917.
  • similarly, if the shadow process UP1 Shadow 909 is activated to take over for the primary process UP1 903, then data sent to the MDa 923 addressed to the processor Pa 901 for the failed primary process UP1 903 is automatically forwarded by the switch 921 from port 931 to port 929 and received by the shadow process UP1 Shadow 909.
  • if the processor Pa 901 is shadow-paired with the processor Pb 907, then in the event of failure of the processor Pa 901 (or Pb 907), the other processor Pb 907 (or Pa 901) assumes responsibility for all user processes of the failed processor. In this manner, the shadowing processor immediately assumes all data processing responsibility for all of the users for both processors.
  • although the shadowing process has been described with respect to user processes, process shadowing is enabled for other processing functions associated with user service, such as, for example, RAID controllers, retrieval processes, loading processes, directory processes, business processes, etc. If such a failure occurs during high usage, such as during peak hours, and if the processors 205 are not designed to handle the combined total processing of such pairs of processors, then it is possible (or likely) that the remaining processor is unable to handle the entire processing capacity for both processors.
  • service degradation occurs gracefully in the event a given processor 205 is over-subscribed, such as in the event of a failure of another processor.
  • excess network bandwidth may be used for "Barker channels", previews, and available bit rate (ABR) asynchronous traffic. If a failure occurs and the shadowing process cannot assume all current processing, Barker channels or previews revert from videos to still images or are diverted to other services, such as infomercials or the like. ABR traffic may be greatly diminished. If a large portion of the bandwidth of a processor 205 is used for non-revenue services, such services are the first to be eliminated in the event of an emergency.
  • customer streams are interrupted on a priority basis, such as first-come, first-served (FCFS), or higher revenue streams are maintained, etc.
  • if the shadowing processor attempting to assume responsibility for both processors determines that it is unable to assume the entire processing capacity of the failed processor, then the shadowing processor selectively assumes responsibility for only those processes that it is able to handle at the time.
  • the shadowing processor may take on only a subset of user processes on an FCFS basis or the like. In this manner, many users would not experience an interruption in service although the remaining users would.
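One way the selective takeover might look, assuming a first-come, first-served ordering and per-stream bandwidth figures; the tuple layout and capacity accounting are illustrative assumptions, and a revenue-based sort key could be substituted for the FCFS ordering.

```python
def processes_to_adopt(failed_processes, spare_capacity_mbps):
    """Choose which of a failed processor's user processes the shadowing
    processor can actually take on when it lacks capacity for all of them.
    Each entry is (start_time, bandwidth_mbps, stream_id); earlier streams
    win (first-come, first-served)."""
    adopted, dropped = [], []
    for start_time, bandwidth, stream in sorted(failed_processes):
        if bandwidth <= spare_capacity_mbps:
            spare_capacity_mbps -= bandwidth
            adopted.append(stream)
        else:
            dropped.append(stream)
    return adopted, dropped

streams = [(10.0, 4, "user-17"), (12.5, 8, "user-3"), (11.0, 4, "user-42")]
print(processes_to_adopt(streams, spare_capacity_mbps=9))
# (['user-17', 'user-42'], ['user-3'])
```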
  • the titles and information may be stored in various formats in the library storage system 201 depending upon the type of data and depending upon the processing capabilities of the IBS system 109.
  • the data may be processed by one or more processes during the loading and storing processes or during the retrieval and delivery processes. Examples have been given herein, such as converting MPEG- 2 data from decoding order to display order, generating ECC information for RAID storage, adding tags and timing information for enabling PVR capabilities (e.g., fast- forward, rewind, pause), etc.
  • Other information and content may be added, such as splice content for adding commercials, meta-data or added information to be displayed or used during display, contractual obligations (e.g., expiration dates and the like), etc.
  • the titles may be pre-recorded into a chosen format for storage in the library storage system 201 to incorporate some or all of the added information and to reduce or otherwise eliminate post-processing when the title is requested and loaded into the disk array 207.
  • One exemplary format for metadata is the Extensible Markup Language (XML), as exemplified by the MPEG-7 standard. In one configuration, one or more of the processors 205 include processing and recording capabilities for storing received content into the desired format. Alternatively, separate recording stations (not shown) are provided for converting content from third parties from given or standard formats into the desired format for consumption by the IBS system 109.
  • the data may be re-organized and supplemented with the additional information listed above and may further be pre-processed in RAID format including ECC information and then stored in a desired format.
  • each title stored in the library storage system 201 may be associated with a corresponding bandwidth or data rate depending upon the type of data stored. In this manner, the IBS system 109 handles variable content with data rates varying from less than 1 Mbps to greater than 20 Mbps at any given time.
  • each component in the IBS system 109 has a predetermined maximum bandwidth capability that should not be exceeded at any given time.
  • the backbone switch 203 and each of the processors 205 have a given maximum bandwidth associated therewith. Since the data being processed has variable data rates, it is possible that the data stacks together at one point in the system causing an overload or bottleneck.
  • the random or pseudo random storage of data from the library storage system 201 into the disk array 207 should alleviate the bandwidth stacking problem, although the problem may not be completely solved using random algorithms.
  • the management function, centrally executed at the management processor 210 or distributed among the processors 205, manages the bandwidth to avoid bandwidth stacking and potential throughput bottlenecks that may cause an overload condition.
  • the management function tracks bandwidth usage at each component and further tracks the additional bandwidth associated with each requested title. As each title request is made, the management function adds the bandwidth required for the new title to the current bandwidth usage and compares the resulting bandwidth requirement with the maximum bandwidth parameters at any given point. Such bandwidth tracking also accounts for bandwidth needs over time to avoid or otherwise minimize bandwidth usage peaks that may potentially exceed the maximum bandwidth parameters. In one embodiment, the management function employs a latency or delay increment to avoid potential overload conditions.
  • if the management function identifies an overload condition, the new title request is not launched immediately and the management function re-calculates bandwidth usage after adding the delay increment. If an overload condition would still occur, the management function continues to re-calculate using additional delay increments until any determined overload conditions are eliminated or otherwise minimized. In this manner, the new title is launched after a calculated number of delay increments to eliminate or minimize overload conditions. In one configuration, a new title may be delayed indefinitely if excessive bandwidth usage cannot be avoided. Alternatively, after bandwidth usage peaks are minimized after a certain delay, the management function anticipates all excessive overload conditions and re-allocates bandwidth to spread the processing over time and eliminate the overload conditions. For example, additional pre-processing may be performed or distributed processing may be employed to avoid the overload conditions.
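The delay-increment scheduling described above can be sketched as a search for the smallest launch delay that keeps projected bandwidth under the maximum at every time slot; the slot granularity, bounds and data layout are assumptions for illustration only.

```python
def schedule_launch(existing_load, new_title_load, max_bandwidth,
                    delay_increment_s=1.0, max_delay_s=30.0):
    """Find the smallest launch delay (in whole delay increments) at which
    adding a new title never pushes total bandwidth over the limit.
    `existing_load` maps integer time slots to Mbps already committed;
    `new_title_load` is the per-slot Mbps profile of the new title.
    Returns the delay in seconds, or None if no acceptable delay is found."""
    slots_per_increment = 1                      # assume one slot per increment
    max_increments = int(max_delay_s / delay_increment_s)
    for n in range(max_increments + 1):
        offset = n * slots_per_increment
        overload = any(
            existing_load.get(offset + slot, 0) + demand > max_bandwidth
            for slot, demand in enumerate(new_title_load)
        )
        if not overload:
            return n * delay_increment_s
    return None                                  # delay indefinitely / re-plan

load = {0: 90, 1: 95, 2: 60, 3: 40}              # Mbps already in use per slot
print(schedule_launch(load, new_title_load=[20, 20, 20], max_bandwidth=100))
# 2.0 -> launching two increments later avoids the 95 Mbps peak
```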

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An interactive broadband server system (109) includes multiple processors (205), a backbone switch (203), multiple storage devices (207) and multiple user processes (503). The backbone switch enables high-speed communication among the processors. The storage devices are distributed among the processors for storing titles, each title being divided into data chunks (513) which are distributed among the storage devices. The user processes are configured for execution on the processors to interface multiple subscriber locations (105). Each user process is operable to locate a requested title from at least two of the processors via the backbone switch and to assemble the requested title for delivery to a requesting subscriber location. The storage devices may be organized into RAID groups (401). Distributed media drives (219) and a library storage system (201) may be provided. Multiple isochronous titles may thereby be delivered simultaneously to downstream subscribers. Titles may be pre-processed and stored in a predetermined format to reduce loading and processing overhead.
EP02794098A 2001-11-28 2002-11-27 Systeme de serveur a large bande interactif Withdrawn EP1451709A4 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US33385601P 2001-11-28 2001-11-28
US333856P 2001-11-28
US304378 2002-11-26
US10/304,378 US7437472B2 (en) 2001-11-28 2002-11-26 Interactive broadband server system
PCT/US2002/038346 WO2003046749A1 (fr) 2001-11-28 2002-11-27 Systeme de serveur a large bande interactif

Publications (2)

Publication Number Publication Date
EP1451709A1 true EP1451709A1 (fr) 2004-09-01
EP1451709A4 EP1451709A4 (fr) 2010-02-17

Family

ID=26973990

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02794098A Withdrawn EP1451709A4 (fr) 2001-11-28 2002-11-27 Systeme de serveur a large bande interactif

Country Status (8)

Country Link
US (1) US7437472B2 (fr)
EP (1) EP1451709A4 (fr)
JP (1) JP4328207B2 (fr)
CN (1) CN100430915C (fr)
AU (1) AU2002359552A1 (fr)
CA (1) CA2465909C (fr)
MX (1) MXPA04005061A (fr)
WO (1) WO2003046749A1 (fr)

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7031474B1 (en) 1999-10-04 2006-04-18 Srs Labs, Inc. Acoustic correction apparatus
US7277767B2 (en) 1999-12-10 2007-10-02 Srs Labs, Inc. System and method for enhanced streaming audio
AU2002247257A1 (en) * 2001-03-02 2002-09-19 Kasenna, Inc. Metadata enabled push-pull model for efficient low-latency video-content distribution over a network
KR100502106B1 (ko) * 2002-10-17 2005-07-20 한국전자통신연구원 스트라이핑 기법을 이용한 레이드 시스템에서의 데이터재구성 방법
US8923186B1 (en) * 2003-05-08 2014-12-30 Dynamic Mesh Networks, Inc. Chirp networks
US9172738B1 (en) 2003-05-08 2015-10-27 Dynamic Mesh Networks, Inc. Collaborative logistics ecosystem: an extensible framework for collaborative logistics
US10785316B2 (en) 2008-11-24 2020-09-22 MeshDynamics Evolutionary wireless networks
US9363651B1 (en) 2003-05-08 2016-06-07 Dynamic Mesh Networks, Inc. Chirp networks
US9258765B1 (en) 2003-05-08 2016-02-09 Dynamic Mesh Networks, Inc. Chirp networks
US7437458B1 (en) * 2003-06-13 2008-10-14 Juniper Networks, Inc. Systems and methods for providing quality assurance
SG145736A1 (en) * 2003-08-12 2008-09-29 Research In Motion Ltd System and method for processing encoded messages
JP4437650B2 (ja) 2003-08-25 2010-03-24 株式会社日立製作所 ストレージシステム
US7885186B2 (en) * 2003-10-03 2011-02-08 Ciena Corporation System and method of adaptively managing bandwidth on optical links shared by multiple-services using virtual concatenation and link capacity adjustment schemes
JP4257783B2 (ja) * 2003-10-23 2009-04-22 株式会社日立製作所 論理分割可能な記憶装置及び記憶装置システム
DE602004029925D1 (de) * 2003-12-02 2010-12-16 Interactive Content Engines Ll Synchronisiertes datentransfersystem
CN1332334C (zh) * 2004-01-17 2007-08-15 中国科学院计算技术研究所 一种多处理机通信装置及其通信方法
JP2005267008A (ja) * 2004-03-17 2005-09-29 Hitachi Ltd ストレージ管理方法およびストレージ管理システム
US7681105B1 (en) * 2004-08-09 2010-03-16 Bakbone Software, Inc. Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US7681104B1 (en) * 2004-08-09 2010-03-16 Bakbone Software, Inc. Method for erasure coding data across a plurality of data stores in a network
CA2584525C (fr) 2004-10-25 2012-09-25 Rick L. Orsini Systeme analyseur syntaxique de donnees securise et procede correspondant
US20070028010A1 (en) * 2005-08-01 2007-02-01 Texas Instruments, Inc. Peripheral device utilization monitoring
JP4771528B2 (ja) * 2005-10-26 2011-09-14 キヤノン株式会社 分散処理システムおよび分散処理方法
ES2658097T3 (es) 2005-11-18 2018-03-08 Security First Corporation Método y sistema de análisis de datos seguro
EP1811378A2 (fr) * 2006-01-23 2007-07-25 Xyratex Technology Limited Système informatique, ordinateur et procédé de stockage de fichiers de données
JP4829632B2 (ja) * 2006-02-10 2011-12-07 株式会社リコー データ暗号化装置、データ暗号化方法、データ暗号化プログラム、および記録媒体
US7930468B2 (en) * 2006-05-23 2011-04-19 Dataram, Inc. System for reading and writing on flash memory device having plural microprocessors
US7949820B2 (en) * 2006-05-23 2011-05-24 Dataram, Inc. Method for managing memory access and task distribution on a multi-processor storage device
US7882320B2 (en) * 2006-05-23 2011-02-01 Dataram, Inc. Multi-processor flash memory storage device and management system
EP2050002A2 (fr) * 2006-08-01 2009-04-22 Massachusetts Institute of Technology Mémoire virtuelle extrême
US8806227B2 (en) * 2006-08-04 2014-08-12 Lsi Corporation Data shredding RAID mode
KR101256225B1 (ko) * 2006-08-10 2013-04-17 주식회사 엘지씨엔에스 디바이스 인터페이스 방법 및 그 장치
US20110276657A1 (en) * 2007-12-20 2011-11-10 Chalk Media Service Corp. Method and system for the delivery of large content assets to a mobile device over a mobile network
KR101426270B1 (ko) * 2008-02-13 2014-08-05 삼성전자주식회사 소프트웨어의 전자 서명 생성 방법, 검증 방법, 그 장치,및 그 방법을 실행하기 위한 프로그램을 기록한 컴퓨터로읽을 수 있는 기록매체
WO2009151789A2 (fr) * 2008-04-17 2009-12-17 Sony Corporation Double lecture pour contenu multimédia
KR101496975B1 (ko) * 2008-05-28 2015-03-02 삼성전자주식회사 고체 상태 디스크 및 이에 대한 입출력방법
US8949695B2 (en) * 2009-08-27 2015-02-03 Cleversafe, Inc. Method and apparatus for nested dispersed storage
RU2622621C2 (ru) * 2009-11-04 2017-06-16 Амотек Ко., Лтд. Система и способ для потоковой передачи воспроизводимого контента
US20110179185A1 (en) * 2010-01-20 2011-07-21 Futurewei Technologies, Inc. System and Method for Adaptive Differentiated Streaming
CN102143347B (zh) * 2010-02-01 2013-09-11 广州市启天科技股份有限公司 一种多方远程互动系统
US8671265B2 (en) 2010-03-05 2014-03-11 Solidfire, Inc. Distributed data storage system providing de-duplication of data using block identifiers
US8514651B2 (en) 2010-11-22 2013-08-20 Marvell World Trade Ltd. Sharing access to a memory among clients
US9054992B2 (en) 2011-12-27 2015-06-09 Solidfire, Inc. Quality of service policy sets
US9838269B2 (en) * 2011-12-27 2017-12-05 Netapp, Inc. Proportional quality of service based on client usage and system metrics
US9195622B1 (en) 2012-07-11 2015-11-24 Marvell World Trade Ltd. Multi-port memory that supports multiple simultaneous write operations
CN105051750B (zh) 2013-02-13 2018-02-23 安全第一公司 用于加密文件系统层的系统和方法
US9258664B2 (en) 2013-05-23 2016-02-09 Comhear, Inc. Headphone audio enhancement system
US20150244795A1 (en) 2014-02-21 2015-08-27 Solidfire, Inc. Data syncing in a distributed system
US10135896B1 (en) * 2014-02-24 2018-11-20 Amazon Technologies, Inc. Systems and methods providing metadata for media streaming
US9798728B2 (en) 2014-07-24 2017-10-24 Netapp, Inc. System performing data deduplication using a dense tree data structure
US10133511B2 (en) 2014-09-12 2018-11-20 Netapp, Inc Optimized segment cleaning technique
US9671960B2 (en) 2014-09-12 2017-06-06 Netapp, Inc. Rate matching technique for balancing segment cleaning and I/O workload
US9836229B2 (en) 2014-11-18 2017-12-05 Netapp, Inc. N-way merge technique for updating volume metadata in a storage I/O stack
CN107209702B (zh) 2014-12-09 2020-11-03 马维尔以色列(M.I.S.L.)有限公司 用于在存储器中执行同时读取和写入操作的系统和方法
US9720601B2 (en) 2015-02-11 2017-08-01 Netapp, Inc. Load balancing technique for a storage array
US9762460B2 (en) 2015-03-24 2017-09-12 Netapp, Inc. Providing continuous context for operational information of a storage system
US9710317B2 (en) 2015-03-30 2017-07-18 Netapp, Inc. Methods to identify, handle and recover from suspect SSDS in a clustered flash array
US11099746B2 (en) 2015-04-29 2021-08-24 Marvell Israel (M.I.S.L) Ltd. Multi-bank memory with one read port and one or more write ports per cycle
US11403173B2 (en) 2015-04-30 2022-08-02 Marvell Israel (M.I.S.L) Ltd. Multiple read and write port memory
WO2016174521A1 (fr) 2015-04-30 2016-11-03 Marvell Israel (M-I.S.L.) Ltd. Mémoire à ports de lecture et une porte d'écriture
US10089018B2 (en) 2015-05-07 2018-10-02 Marvell Israel (M.I.S.L) Ltd. Multi-bank memory with multiple read ports and multiple write ports per cycle
US9740566B2 (en) 2015-07-31 2017-08-22 Netapp, Inc. Snapshot creation workflow
WO2017068928A1 (fr) * 2015-10-21 2017-04-27 ソニー株式会社 Dispositif de traitement d'informations, procédé de commande associé, et programme informatique
WO2017068926A1 (fr) * 2015-10-21 2017-04-27 ソニー株式会社 Dispositif de traitement d'informations, procédé de commande associé, et programme informatique
US10929022B2 (en) 2016-04-25 2021-02-23 Netapp. Inc. Space savings reporting for storage system supporting snapshot and clones
CN107526533B (zh) * 2016-06-21 2020-08-11 伊姆西Ip控股有限责任公司 存储管理方法及设备
US10642763B2 (en) 2016-09-20 2020-05-05 Netapp, Inc. Quality of service policy sets
US10802757B2 (en) * 2018-07-30 2020-10-13 EMC IP Holding Company LLC Automated management of write streams for multi-tenant storage
CN112672200B (zh) * 2020-12-14 2023-10-24 完美世界征奇(上海)多媒体科技有限公司 视频生成方法和装置、电子设备和存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974503A (en) * 1997-04-25 1999-10-26 Emc Corporation Storage and access of continuous media files indexed as lists of raid stripe sets associated with file names
WO2000058856A1 (fr) * 1999-03-31 2000-10-05 Diva Systems Corporation Serveur de stockage a disques-uct jumeles
EP1107582A2 (fr) * 1999-12-08 2001-06-13 Sony Corporation Enregistrement de données et méthodes et dispositif de reproduction

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55157181A (en) * 1979-05-25 1980-12-06 Nec Corp Buffer memory control system
US5421031A (en) * 1989-08-23 1995-05-30 Delta Beta Pty. Ltd. Program transmission optimisation
US5247347A (en) * 1991-09-27 1993-09-21 Bell Atlantic Network Services, Inc. Pstn architecture for video-on-demand services
JP2666033B2 (ja) * 1993-02-18 1997-10-22 日本アイ・ビー・エム株式会社 データ供給装置
EP0625857B1 (fr) * 1993-05-19 1998-06-24 ALCATEL BELL Naamloze Vennootschap Serveur vidéo
DE69317267T2 (de) * 1993-05-19 1998-06-25 Alsthom Cge Alcatel Netzwerk für Video auf Anfrage
US5581479A (en) * 1993-10-15 1996-12-03 Image Telecommunications Corp. Information service control point, which uses different types of storage devices, which retrieves information as blocks of data, and which uses a trunk processor for transmitting information
US5473362A (en) * 1993-11-30 1995-12-05 Microsoft Corporation Video on demand system comprising stripped data across plural storable devices with time multiplex scheduling
JP3617089B2 (ja) * 1993-12-27 2005-02-02 株式会社日立製作所 映像蓄積配送装置及び映像蓄積配送システム
US5732239A (en) * 1994-05-19 1998-03-24 Starlight Networks Method for operating a disk storage system which stores video data so as to maintain the continuity of a plurality of video streams
US5521631A (en) * 1994-05-25 1996-05-28 Spectravision, Inc. Interactive digital video services system with store and forward capabilities
US5606359A (en) * 1994-06-30 1997-02-25 Hewlett-Packard Company Video on demand system with multiple data sources configured to provide vcr-like services
US5671377A (en) * 1994-07-19 1997-09-23 David Sarnoff Research Center, Inc. System for supplying streams of data to multiple users by distributing a data stream to multiple processors and enabling each user to manipulate supplied data stream
DE69521374T2 (de) * 1994-08-24 2001-10-11 Hyundai Electronics America, Milpitas Videoserver und diesen verwendendes System
US5712976A (en) * 1994-09-08 1998-01-27 International Business Machines Corporation Video data streamer for simultaneously conveying same one or different ones of data blocks stored in storage node to each of plurality of communication nodes
WO1996017306A2 (fr) * 1994-11-21 1996-06-06 Oracle Corporation Serveur de media
US5729279A (en) * 1995-01-26 1998-03-17 Spectravision, Inc. Video distribution system
EP0727750B1 (fr) * 1995-02-17 2004-05-12 Kabushiki Kaisha Toshiba Serveur données continues et méthode de transfert de données permettant de multiples accès simultanés de données
US5608448A (en) 1995-04-10 1997-03-04 Lockheed Martin Corporation Hybrid architecture for video on demand server
US5742892A (en) 1995-04-18 1998-04-21 Sun Microsystems, Inc. Decoder for a software-implemented end-to-end scalable video delivery system
US5721815A (en) * 1995-06-07 1998-02-24 International Business Machines Corporation Media-on-demand communication system and method employing direct access storage device
JP3088268B2 (ja) * 1995-06-21 2000-09-18 日本電気株式会社 ビデオ・オン・デマンドシステムにおけるビデオサーバ
US5678061A (en) * 1995-07-19 1997-10-14 Lucent Technologies Inc. Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks
US5790794A (en) 1995-08-11 1998-08-04 Symbios, Inc. Video storage unit architecture
US6049823A (en) * 1995-10-04 2000-04-11 Hwang; Ivan Chung-Shung Multi server, interactive, video-on-demand television system utilizing a direct-access-on-demand workgroup
US5862312A (en) * 1995-10-24 1999-01-19 Seachange Technology, Inc. Loosely coupled mass storage computer cluster
US5978843A (en) * 1995-12-06 1999-11-02 Industrial Technology Research Institute Scalable architecture for media-on-demand servers
US6128467A (en) * 1996-03-21 2000-10-03 Compaq Computer Corporation Crosspoint switched multimedia system
US6032200A (en) * 1996-09-30 2000-02-29 Apple Computer, Inc. Process scheduling for streaming data through scheduling of disk jobs and network jobs and the relationship of the scheduling between these types of jobs
US5892915A (en) * 1997-04-25 1999-04-06 Emc Corporation System having client sending edit commands to server during transmission of continuous media from one clip in play list for editing the play list
US6230200B1 (en) * 1997-09-08 2001-05-08 Emc Corporation Dynamic modeling for resource allocation in a file server
US6134596A (en) * 1997-09-18 2000-10-17 Microsoft Corporation Continuous media file server system and method for scheduling network resources to play multiple files having different data transmission rates
US6415373B1 (en) * 1997-12-24 2002-07-02 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US6374336B1 (en) * 1997-12-24 2002-04-16 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US6182128B1 (en) * 1998-03-05 2001-01-30 Touchmusic Entertainment Llc Real-time music distribution systems
US6101547A (en) 1998-07-14 2000-08-08 Panasonic Technologies, Inc. Inexpensive, scalable and open-architecture media server
US6370579B1 (en) * 1998-10-21 2002-04-09 Genuity Inc. Method and apparatus for striping packets over parallel communication links
US6289383B1 (en) * 1998-11-30 2001-09-11 Hewlett-Packard Company System and method for managing data retrieval bandwidth
US6401126B1 (en) * 1999-03-10 2002-06-04 Microsoft Corporation File server system and method for scheduling data streams according to a distributed scheduling policy
US6275898B1 (en) * 1999-05-13 2001-08-14 Lsi Logic Corporation Methods and structure for RAID level migration within a logical unit
US6604155B1 (en) * 1999-11-09 2003-08-05 Sun Microsystems, Inc. Storage architecture employing a transfer node to achieve scalable performance
WO2001043438A1 (fr) * 1999-12-10 2001-06-14 Diva Systems Corporation Procede et appareil permettant de stocker du contenu dans un environnement de video a la demande
US6898285B1 (en) * 2000-06-02 2005-05-24 General Instrument Corporation System to deliver encrypted access control information to support interoperability between digital information processing/control equipment
US20020157113A1 (en) * 2001-04-20 2002-10-24 Fred Allegrezza System and method for retrieving and storing multimedia data
US20030046704A1 (en) * 2001-09-05 2003-03-06 Indra Laksono Method and apparatus for pay-per-quality of service for bandwidth consumption in a video system
US6907466B2 (en) * 2001-11-08 2005-06-14 Extreme Networks, Inc. Methods and systems for efficiently delivering data to a plurality of destinations in a computer network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974503A (en) * 1997-04-25 1999-10-26 Emc Corporation Storage and access of continuous media files indexed as lists of raid stripe sets associated with file names
WO2000058856A1 (fr) * 1999-03-31 2000-10-05 Diva Systems Corporation Serveur de stockage a disques-uct jumeles
EP1107582A2 (fr) * 1999-12-08 2001-06-13 Sony Corporation Enregistrement de données et méthodes et dispositif de reproduction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PATTERSON D A ET AL: "A CASE FOR REDUNDANT ARRAYS OF INEXPENSIVE DISKS (RAID)" SIGMOD RECORD, ACM, NEW YORK, NY, US, 1 January 1988 (1988-01-01), pages 109-116, XP000577756 ISSN: 0163-5808 *
See also references of WO03046749A1 *

Also Published As

Publication number Publication date
JP2005527130A (ja) 2005-09-08
EP1451709A4 (fr) 2010-02-17
CA2465909C (fr) 2009-09-15
MXPA04005061A (es) 2004-08-19
AU2002359552A1 (en) 2003-06-10
US20030115282A1 (en) 2003-06-19
CN100430915C (zh) 2008-11-05
WO2003046749A1 (fr) 2003-06-05
US7437472B2 (en) 2008-10-14
CA2465909A1 (fr) 2003-06-05
JP4328207B2 (ja) 2009-09-09
CN1596404A (zh) 2005-03-16

Similar Documents

Publication Publication Date Title
US7437472B2 (en) Interactive broadband server system
KR100231220B1 (ko) 다수의 디스크상에 저장된 스트라이프들로 분할되어 있는 데이타 유닛들중 요청된 데이타 유닛을 검색하는 방법 및 장치(A disk access method for delivering multimedia and video information on demand over wide area networks)
US6442599B1 (en) Video storage unit architecture
EP1168845B1 (fr) Système de serveur vidéo
KR100192723B1 (ko) 매체 스트리머
WO1996017306A9 (fr) Serveur de media
US20020157113A1 (en) System and method for retrieving and storing multimedia data
US20030204856A1 (en) Distributed server video-on-demand system
JPH0887385A (ja) キャッシュ管理を有するビデオ用に最適化された媒体ストリーマ
Shenoy et al. Issues in multimedia server design
JPH08154233A (ja) ビデオ用に最適化された媒体ストリーマ
JPH08130714A (ja) ビデオ用に最適化された媒体ストリーマ・ユーザ・インタフェース
JPH08154234A (ja) 等時性データ・ストリームを生成するビデオ用に最適化された媒体ストリーマ
WO2007127741A2 (fr) Système de serveur multimédia
US20030154246A1 (en) Server for storing files
US20020073172A1 (en) Method and apparatus for storing content within a video on demand environment
KR19990028246A (ko) 데이터 기억 장치
Gafsi et al. Design and implementation of a scalable, reliable, and distributed VOD-server
Halvorsen et al. The INSTANCE Project: Operating System Enhancements to Support Multimedia Servers
Murphy et al. Supporting video on demand
Pittas The New Storage Paradigm for Multichannel Video Transmission
Vin et al. Storage Architectures for Digital Imagery
Ford An IEEE 1394-Based Architecture for Media Storage and Networking

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040628

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1068993

Country of ref document: HK

A4 Supplementary search report drawn up and despatched

Effective date: 20100119

17Q First examination report despatched

Effective date: 20100426

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20120601

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1068993

Country of ref document: HK