CA2451901A1 - System for delivering data over a network - Google Patents
- Publication number
- CA2451901A1
- Authority
- CA
- Canada
- Prior art keywords
- data
- latency
- client
- data streams
- interactive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4722—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/61—Network physical structure; Signal processing
- H04N21/6156—Network physical structure; Signal processing specially adapted to the upstream path of the transmission network
- H04N21/6181—Network physical structure; Signal processing specially adapted to the upstream path of the transmission network involving transmission via a mobile phone network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6581—Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17336—Handling of requests in head-ends
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N2007/1739—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal the upstream communication being transmitted via a separate link, e.g. telephone line
Abstract
This invention describes a new method and system for delivering data over a network to a large number of clients, which may be suitable for building large-scale Video-on-Demand (VOD) systems. In current VOD systems, the client may suffer from a long latency before starting to receive the requested data that is capable of providing sufficient interactive functions, or the reverse, without significantly increasing the network load. The method utilizes two groups of data streams: one responsible for minimizing latency while the other provides the required interactive functions. In the anti-latency data group, uniform, non-uniform or hierarchical staggered stream intervals may be used. The system of this invention may have a relatively small startup latency while users may enjoy most of the interactive functions that are typical of video recorders. Furthermore, this invention may also be able to maintain the number of data streams, or the bandwidth, required.
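The trade-off the abstract describes can be sketched numerically. With conventional staggered broadcasting, M full copies of a K-segment video (T seconds per segment) give a worst-case startup wait of K·T/M, so driving the wait down to one segment time would need roughly M = K full streams. The two-group scheme instead repeats only a short leading portion at short intervals. The sketch below is illustrative only; the function names and numbers are assumptions, not taken from the patent.

```python
def staggered_latency(K: int, T: float, M: int) -> float:
    """Worst-case startup wait when M full copies of a K-segment video
    (T seconds per segment) are broadcast with evenly staggered starts."""
    return K * T / M

def two_group_latency(T: float) -> float:
    """With an anti-latency group repeating the leading segments every
    stagger interval T, a client waits at most one interval."""
    return T

# Hypothetical 100-segment video, 1 s per segment:
print(staggered_latency(100, 1.0, 10))  # 10.0 s wait with 10 full streams
print(two_group_latency(1.0))           # 1.0 s wait with a small anti-latency group
```

The point of the comparison: the anti-latency group achieves the short wait without multiplying the number of full-length streams.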
Claims (141)
1. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including:
- at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of data for receipt by a client; and
- at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
2. The system of Claim 1, wherein:
- said data is fragmented into K segments, each requiring a time T to transmit over the network;
- the anti-latency data streams include M anti-latency data streams; and
- the interactive data streams include N interactive data streams.
3. The system of Claim 1, wherein:
- the anti-latency data streams contain the leading portion of said data only; and
- the interactive data streams contain a whole set of said data.
4. The system of Claim 2, wherein:
- each of the M anti-latency data streams contains substantially identical data repeated continuously within said anti-latency data stream, and wherein each successive anti-latency data stream is staggered by an anti-latency time interval; and
- each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
5. The system of Claim 4, wherein:
- each of the M anti-latency data streams has J segments; and
- the anti-latency time interval >= T.
6. The system of Claim 4, wherein the interactive time interval >= JT.
7. The system of Claim 5, wherein M >= J.
8. The system of Claim 7, wherein M = J.
9. The system of Claim 6, wherein N >=
10. The system of Claim 9, wherein N = .
11. The system of Claim 8 or 10, wherein M = N = J = .
12. The system of Claim 4, wherein each of the N interactive data streams contains the whole set of said data having K segments.
13. The system of Claim 4, wherein each of the N interactive data streams contains the remaining portion of said data only.
14. The system of Claim 4, wherein:
- the client is connected to any one of the M anti-latency data streams when the client raises a request for said data; and
- the client is connected to any one of the N interactive data streams.
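Claims 4-8 and 14 describe a uniform scheme: M anti-latency streams, each repeating J leading segments, staggered by one segment time T, with M = J. Under that arrangement some stream restarts segment 1 within every interval T, which bounds the client's wait. The sketch below is my own rendering of that timing argument; variable names are assumptions, not the patent's.

```python
import math

def wait_for_leading_segment(t: float, J: int, T: float, M: int) -> float:
    """Time a client arriving at t waits until some anti-latency stream
    restarts segment 1. Stream m (0-based) restarts segment 1 at times
    m*T + k*(J*T) for integer k >= 0. With M == J these restarts are
    spaced T apart across the streams, so the wait never exceeds T."""
    period = J * T
    waits = []
    for m in range(M):
        # next restart of stream m at or after arrival time t
        k = math.ceil((t - m * T) / period)
        waits.append(m * T + k * period - t)
    return min(waits)

# A client arriving at t = 3.7 with J = M = 4, T = 1.0 catches the
# stream that restarts at t = 4.0:
print(wait_for_leading_segment(3.7, 4, 1.0, 4))  # 0.3
```

A client arriving exactly at a restart waits 0; in no case does the wait exceed the stagger interval T.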
15. The system of Claim 2, wherein:
- the anti-latency data streams include:
I. a leading data stream containing at least one leading segment of the leading portion of said data, being repeated continuously within the leading data stream; and
II. a plurality of finishing data streams, each of the finishing data streams:
• containing the rest of the leading portion of said data; and
• being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval;
- each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
16. The system of Claim 15, wherein:
- each of the finishing data streams has J segments; and
- the anti-latency time interval >= T.
17. The system of Claim 15, wherein the interactive time interval >= JT.
18. The system of Claim 16, wherein M >= .
19. The system of Claim 18, wherein M = .
20. The system of Claim 17, wherein N >= .
21. The system of Claim 20, wherein N = .
22. The system of Claim 19 or 21, wherein J = .
23. The system of Claim 15, wherein each of the N interactive data streams contains the whole set of said data having K segments.
24. The system of Claim 15, wherein each of the N interactive data streams contains the remaining portion of said data only.
25. The system of Claim 15, wherein:
- the client is connected to the leading data stream when the client raises a request for said data;
- the client is subsequently connected to any one of the finishing data streams; and
- the client is connected to any one of the N interactive data streams.
26. The system of Claim 2, wherein:
- each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval = ;
- the M anti-latency data streams [1 to M] are generated such that
• an m-th anti-latency data stream has F_m segments, wherein F_m is an m-th Fibonacci number; and
• the F_m segments are repeated continuously within the m-th anti-latency data stream.
27. The system of Claim 26, wherein:
- the client is connected to at least the m-th and (m+1)-th anti-latency data streams when the client raises a request for said data;
- the data in at least the m-th and (m+1)-th anti-latency data streams is buffered in the client; and
- the client is subsequently connected to successive anti-latency data streams, until all data in the leading portion is received by the client.
28. The system of Claim 27, wherein:
- the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client.
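Claims 26-28 size the m-th anti-latency stream at F_m segments, where F_m is the m-th Fibonacci number. The appeal of Fibonacci sizing is coverage: the segment counts grow quickly, so a few streams span many leading segments. The sketch below only generates those sizes to show the growth; the stream layout itself is as I read the claims, not a verified reconstruction.

```python
def fibonacci_stream_sizes(M: int) -> list[int]:
    """Segment counts F_1..F_M of the M anti-latency streams, where
    stream m carries the next F_m leading segments, repeated."""
    sizes, a, b = [], 1, 1
    for _ in range(M):
        sizes.append(a)
        a, b = b, a + b
    return sizes

sizes = fibonacci_stream_sizes(6)
print(sizes)        # [1, 1, 2, 3, 5, 8]
print(sum(sizes))   # 20 leading segments covered by only 6 streams
```

Six streams cover twenty leading segments, whereas the uniform scheme of Claims 4-8 would need one stream per stagger interval.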
29. The system of Claim 26, wherein each of the N interactive data streams contains the whole set of said data having K segments.
30. The system of Claim 26, wherein each of the N interactive data streams contains the remaining portion of said data only.
31. The system of Claim 26, wherein F M >= .
32. The system of Claim 26, wherein m starts from 1.
33. The system of Claim 26, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
34. The system of Claim 2, wherein:
- each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval = ;
- in the M anti-latency data streams,
I. the leading portion of said data contains [1 to] J leading data segments [labeled]; and
II. the leading data segments are distributed in the M anti-latency data streams such that a j-th leading segment is repeated by an anti-latency time interval <= jT within the anti-latency data streams.
35. The system of Claim 34, wherein:
- the client is connected to all of the M anti-latency data streams when the client raises a request for said data; and
- the leading portion of said data in the M anti-latency data streams is buffered in the client.
36. The system of Claim 35, wherein:
- the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client.
37. The system of Claim 34, wherein each of the N interactive data streams contains the whole set of said data having K segments.
38. The system of Claim 34, wherein each of the N interactive data streams contains the remaining portion of said data only.
39. The system of Claim 34, wherein M
40. The system of Claim 34 wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows:
wherein those segments in blank contain any data.
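The condition in Claim 34 (segment j recurs within the streams at an interval of at most j·T) is exactly what lets a client that buffers all streams play continuously: segment j is not needed until j·T after playback starts, and by then it has appeared at least once. Since the stream table in Claim 40 did not survive extraction, the schedule below is a hypothetical example of my own, used only to check that condition mechanically.

```python
def satisfies_claim34(streams: list[list[int]], J: int) -> bool:
    """streams: list of streams, each a cyclic list of segment numbers,
    one slot per segment time T. Returns True if, for every j in 1..J,
    some stream carries segment j in every window of j consecutive slots
    (i.e., segment j recurs at an interval <= j*T)."""
    for j in range(1, J + 1):
        found = False
        for s in streams:
            n = len(s)
            if all(any(s[(start + d) % n] == j for d in range(j))
                   for start in range(n)):
                found = True
                break
        if not found:
            return False
    return True

# Hypothetical two-stream layout for J = 3 leading segments:
streams = [
    [1, 1, 1, 1, 1, 1],   # segment 1 in every slot (interval T)
    [2, 3, 2, 3, 2, 3],   # segments 2 and 3 alternate (interval 2T <= 2T, 3T)
]
print(satisfies_claim34(streams, J=3))  # True
```

Dropping segment 3 from the second stream would break the j = 3 window condition and the check would return False.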
41. The system of Claim 2, wherein the M anti-latency data streams:
- contain the leading portion of said data; and
- further include two batches of data streams, being a 1st set of anti-latency data streams and a 2nd set of anti-latency data streams.
42. The system of Claim 41, wherein:
- the 1st anti-latency data streams have A 1st anti-latency data streams [from 1 to A], wherein
I. an α-th anti-latency data stream has F_α segments, and F_α is an α-th Fibonacci number; and
II. the F_α segments are repeated continuously within the α-th 1st anti-latency data stream;
- the 2nd anti-latency data streams have B 2nd anti-latency data streams, wherein each of the B 2nd anti-latency data streams contains substantially identical data repeated continuously within said 2nd anti-latency data stream, and wherein each successive 2nd anti-latency data stream is staggered by a coarse jump frame period;
such that the client can perform a coarse jump function when the client is connected to the B 2nd anti-latency data streams.
43. The system of Claim 42, wherein:
- the client is connected to at least the α-th and (α+1)-th 1st anti-latency data streams when the client raises a request for said data;
- the data in at least the α-th and (α+1)-th 1st anti-latency data streams is buffered in the client; and
- the client is subsequently connected to successive 1st anti-latency data streams,
until all data in the A 1st anti-latency data streams is received by the client.
44. The system of Claim 43, wherein:
- the client is connected to any one of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; and
- the client is connected to any one of the N interactive data streams after all data in the connected 2nd anti-latency data stream is received by the client.
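Claims 41-44 add a "coarse jump" function: the B second-set streams are staggered by a coarse jump frame period, so a client can jump forward or backward in multiples of that period simply by switching streams. A minimal sketch of the stream-selection arithmetic, assuming the streams are staggered by E segments of duration T each (names are mine, not the patent's):

```python
def coarse_jump_target(current_stream: int, jump_segments: int,
                       E: int, B: int) -> int:
    """Index of the 2nd anti-latency stream to switch to for a jump of
    jump_segments segments, rounded to the coarse-jump granularity of
    E segments. B streams wrap around cyclically."""
    steps = round(jump_segments / E)   # jumps only come in multiples of E
    return (current_stream + steps) % B

# Jump 30 segments forward with E = 10 and B = 8 streams: move 3 streams ahead.
print(coarse_jump_target(current_stream=2, jump_segments=30, E=10, B=8))  # 5
```

Switching streams rather than re-requesting data is what keeps the jump function free of extra server load.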
45. The system of Claim 42, wherein each of the N interactive data streams contains the whole set of said data having K segments.
46. The system of Claim 42, wherein each of the N interactive data streams contains the remaining portion of said data only.
47. The system of Claim 42, wherein said coarse jump frame period includes E data segments, and F_A >= 2E.
48. The system of Claim 42, wherein α starts from 1.
49. The system of Claim 42, wherein α starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
50. The system of Claim 41, wherein:
- the 1st anti-latency data streams have A 1st anti-latency data streams [from 1 to A], wherein
I. an α-th anti-latency data stream has F_α segments, wherein F_α is an α-th Fibonacci number; and
II. the F_α segments are repeated continuously within the α-th 1st anti-latency data stream;
- the 2nd anti-latency data streams have B 2nd anti-latency data streams, including
I. a leading data stream containing at least one leading segment of the leading portion of said data, being repeated continuously within the leading data stream; and
II. a plurality of finishing data streams, each of the finishing data streams:
• containing the rest of the leading portion of said data; and
• being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse jump frame period,
such that the client can perform a coarse jump interactive function when the client is connected to the B 2nd anti-latency data streams.
51. The system of Claim 50, wherein:
- the client is connected to at least the α-th and (α+1)-th 1st anti-latency data streams when the client raises a request for said data;
- the data in at least the α-th and (α+1)-th 1st anti-latency data streams is buffered in the client; and
- the client is subsequently connected to successive 1st anti-latency data streams,
until all data in the A 1st anti-latency data streams is received by the client.
52. The system of Claim 51, wherein:
- the client is connected to the leading data stream after all data in the 1st anti-latency data streams is received by the client;
- the client is subsequently connected to any one of the finishing data streams; and
- the client is connected to any one of the N interactive data streams after all data in the B 2nd anti-latency data streams is received by the client.
53. The system of Claim 50, wherein each of the N interactive data streams contains the whole set of said data having K segments.
54. The system of Claim 50, wherein each of the N interactive data streams contains the remaining portion of said data only.
55. The system of Claim 50, wherein said coarse jump frame period includes E data segments, and F_A >= 2E.
56. The system of Claim 50, wherein α starts from 1.
57. The system of Claim 50, wherein α starts from 4 and the repeating 1st, 2nd, and 3rd data streams of the A 1st anti-latency data streams have the following configuration:
58. The system of Claim 41, wherein:
- the 1st anti-latency data streams have A 1st anti-latency data streams, wherein
I. the A 1st anti-latency data streams contain [1 to] C 1st data segments; and
II. the 1st data segments are distributed in the A 1st anti-latency data streams such that a c-th leading segment is repeated by an anti-latency time interval <= cT within the A 1st anti-latency data streams;
- the 2nd anti-latency data streams have B 2nd anti-latency data streams, wherein each of the B 2nd anti-latency data streams contains substantially identical data repeated continuously within said 2nd anti-latency data stream, and wherein each successive 2nd anti-latency data stream is staggered by a coarse jump frame period,
such that the client can perform a coarse jump interactive function when the client is connected to the B 2nd anti-latency data streams.
59. The system of Claim 58, wherein:
- the client is connected to all of the A 1st anti-latency data streams when the client raises a request for said data; and
- data in the A 1st anti-latency data streams is buffered in the client until all data in the A 1st anti-latency data streams is received by the client.
60. The system of Claim 59, wherein:
- the client is connected to any one of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; and
- the client is connected to any one of the N interactive data streams after all data in the connected 2nd anti-latency data stream is received by the client.
61. The system of Claim 58, wherein each of the N interactive data streams contains the whole set of said data having K segments.
62. The system of Claim 58, wherein each of the N interactive data streams contains the remaining portion of said data only.
63. The system of Claim 58, wherein said coarse jump frame period includes E data segments, and A
64. The system of Claim 58, wherein six of the A 1st anti-latency data streams are arranged as follows:
wherein those segments in blank contain any data.
65. The system of Claim 41, wherein:
- the 1st anti-latency data streams have A 1st anti-latency data streams, wherein
I. the A 1st anti-latency data streams contain C 1st data segments; and
II. the data segments are distributed in the A 1st anti-latency data streams such that a c-th leading segment is repeated by an anti-latency time interval <= cT within the A 1st anti-latency data streams;
- the 2nd anti-latency data streams have B 2nd anti-latency data streams, including
I. a leading data stream containing at least one leading segment of the leading portion of said data, being repeated continuously within the leading data stream; and
II. a plurality of finishing data streams, each of the finishing data streams:
• containing the rest of the leading portion of said data; and
• being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse jump frame period,
such that the client can perform a coarse jump interactive function when the client is connected to the B 2nd anti-latency data streams.
66. The system of Claim 65, wherein:
- the client is connected to all of the A 1st anti-latency data streams when the client raises a request for said data; and
- data in the A 1st anti-latency data streams is buffered in the client until all data in the A 1st anti-latency data streams is received by the client.
67. The system of Claim 66, wherein:
- the client is connected to the leading data stream of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client;
- the client is subsequently connected to any one of the finishing data streams; and
- the client is connected to any one of the N interactive data streams after all data in the B 2nd anti-latency data stream connected in step F is received by the client.
68. The system of Claim 65, wherein each of the N interactive data streams contains the whole set of said data having K segments.
69. The system of Claim 65, wherein each of the N interactive data streams contains the remaining portion of said data only.
70. The system of Claim 65, wherein said coarse jump frame period includes E data segments, and
71. The system of Claim 67, wherein six of the A 1st anti-latency data streams are arranged as follows:
wherein those segments left blank may contain any data.
72. The system of any one of Claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or 65, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
73. The system of any one of Claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or 65, wherein at least a portion of data in the leading portion is pre-fetched in the client.
74. A system for transmitting data over a network to at least one client including a signal generator for fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediate preceding segment to facilitate merging of the K data segments when received by the client.
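As an illustration of the overlapping head/tail idea in Claim 74, the sketch below fragments a byte string so that each segment's head repeats the tail of its predecessor, then merges the segments back. The segment size, overlap length, and function names are illustrative assumptions, not parameters taken from the patent.

```python
# Hypothetical sketch of Claim 74: each segment's head repeats the tail of
# the previous segment so the client can verify and merge adjacent segments.

def fragment(data: bytes, seg_size: int, overlap: int) -> list[bytes]:
    """Split `data` into payloads of `seg_size` bytes; every segment after
    the first is prefixed with the last `overlap` bytes of the preceding
    segment's payload (the duplicated "head")."""
    payloads = [data[i:i + seg_size] for i in range(0, len(data), seg_size)]
    segments = [payloads[0]]
    for prev, cur in zip(payloads, payloads[1:]):
        segments.append(prev[-overlap:] + cur)  # head = tail of predecessor
    return segments

def merge(segments: list[bytes], overlap: int) -> bytes:
    """Reassemble the original data, checking and dropping each duplicated head."""
    out = segments[0]
    for seg in segments[1:]:
        assert seg[:overlap] == out[-overlap:], "overlap mismatch"
        out += seg[overlap:]
    return out

data = bytes(range(256)) * 4          # 1024 bytes of sample data
segs = fragment(data, seg_size=100, overlap=8)
assert merge(segs, overlap=8) == data
```

The duplicated head gives the receiver a cheap consistency check at each segment boundary before the segments are spliced together.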
75. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including:
- at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of said data for receipt by the client;
- a buffer in the client for pre-fetching the leading portion in the client as pre-fetched data; and
- at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
76. The system of Claim 75, wherein the pre-fetched data is refreshed during a refresh time period.
77. The system of Claim 76, wherein the refresh time period is an off-peak period.
78. The system of Claim 76, wherein the pre-fetched data is refreshed once per day.
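The prefetch-and-merge scheme of Claims 75-78 can be sketched as follows: the client pre-fetches the leading portion of every title ahead of time (for example during an off-peak refresh window), so a later request only needs the remaining portion from an interactive stream. The class, method names, and in-memory "catalog" are illustrative assumptions.

```python
# Hypothetical sketch of Claims 75-78: pre-fetch leading portions during a
# refresh period, then merge with the on-demand remaining portion.

class Client:
    def __init__(self):
        self.prefetched: dict[str, bytes] = {}

    def refresh(self, catalog: dict[str, bytes], lead: int) -> None:
        """Pre-fetch the first `lead` bytes of every title (refresh period)."""
        for title, data in catalog.items():
            self.prefetched[title] = data[:lead]

    def play(self, title: str, interactive_stream: bytes) -> bytes:
        """Merge the pre-fetched leading portion with the remaining portion
        delivered by the interactive data stream."""
        return self.prefetched[title] + interactive_stream

catalog = {"movie": b"LEADING-PORTION|remaining-portion"}
client = Client()
client.refresh(catalog, lead=16)      # e.g. during the off-peak window
remaining = catalog["movie"][16:]     # only this is streamed on demand
assert client.play("movie", remaining) == catalog["movie"]
```

Because the leading portion is already local, the perceived startup latency collapses to the time needed to join the interactive stream.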
79. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, the anti-latency data streams include:
- a leading data stream containing at least one leading segment of a leading portion of said data, being repeated continuously within the leading data stream; and
- a plurality of finishing data streams, each of the finishing data streams:
• containing at least the rest of the leading portion of said data; and
• being repeated continuously within said finishing data stream,
and wherein each successive finishing data stream is staggered by an anti-latency time interval.
80. The system of Claim 79, wherein:
- the client is connected to the leading data stream when the client raises a request for said data; and
- the client is subsequently connected to any one of the finishing data streams.
81. The system of Claim 79, wherein said data is fragmented into K segments each requiring a time T to transmit over the network, and the anti-latency time interval >= T.
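The latency consequence of the staggering in Claims 79-81 can be checked numerically: if each finishing stream restarts on a fixed period and successive streams are offset by the anti-latency interval, then a client arriving at an arbitrary time waits at most one interval for some stream's start point. The period, interval, and stream count below are illustrative assumptions.

```python
# Hypothetical worst-case startup wait with staggered repeating streams
# (Claims 79-81). Each stream starts at offset + n*period for n = 0, 1, ...

def wait_for_next_start(t: float, period: float, offsets: list[float]) -> float:
    """Time from a request at `t` until the next start of any staggered stream."""
    return min((off - t) % period for off in offsets)

T = 2.0                               # time to transmit one segment
period = 10.0                         # each finishing stream repeats every 10 s
offsets = [i * T for i in range(5)]   # 5 streams staggered by T cover the period

# Sample request times across one full period: no request waits longer than T.
worst = max(wait_for_next_start(t / 10, period, offsets) for t in range(100))
assert worst <= T
```

The trade-off is bandwidth: halving the stagger interval halves the worst-case wait but doubles the number of concurrent finishing streams.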
82. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, wherein the anti-latency data streams include:
- M anti-latency data streams numbered 1 to M, wherein the m-th anti-latency data stream has F_m segments, F_m being the m-th Fibonacci number; and wherein said F_m segments are repeated continuously within the m-th anti-latency data stream.
83. The system of Claim 82, wherein:
- the client is connected to at least the m-th and (m+1)-th anti-latency data streams when the client raises a request for said data;
- the data in at least the m-th and (m+1)-th anti-latency data streams is buffered in the client; and
- the client is subsequently connected to successive anti-latency data streams until all data is received by the client.
84. The system of Claim 82, wherein m starts from 1.
85. The system of Claim 82, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
86. A system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network, including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, wherein the anti-latency data streams include:
- M anti-latency data streams containing K anti-latency data segments numbered 1 to K, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that the k-th leading segment is repeated at an anti-latency time interval <= kT within the anti-latency data streams.
87. The system of Claim 86, wherein:
- the client is connected to all of the M anti-latency data streams; and
- said data in the M anti-latency data streams is buffered in the client when the client raises a request for said data.
88. The system of Claim 86, wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows:
wherein those segments left blank may contain any data.
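The bound in Claims 86-88 is what guarantees stall-free in-order playback: if segment k reappears at least once every k slots (slot length T), a client arriving at any instant obtains segment k within k*T, i.e. before it is needed. The slot schedule below, where segment k occupies every k-th slot of its own stream, is an illustrative assumption rather than the patent's exact layout.

```python
# Hypothetical check of the "segment k repeats within <= kT" property
# (Claims 86-88), using one stream per segment and unit-length slots.

K = 8  # number of segments (illustrative)

def worst_wait_slots(k: int, horizon: int = 1000) -> int:
    """Worst-case number of slots until segment k next appears, when
    segment k is broadcast in every slot t with t % k == 0."""
    appearances = [t for t in range(horizon) if t % k == 0]
    worst = 0
    for arrive in range(horizon - k):
        nxt = next(t for t in appearances if t >= arrive)
        worst = max(worst, nxt - arrive)
    return worst

for k in range(1, K + 1):
    # Waiting at most k slots = k*T means segment k arrives in time for
    # sequential playback that starts as soon as segment 1 is received.
    assert worst_wait_slots(k) <= k
```

Later segments tolerate sparser repetition, which is why such schedules need far less aggregate bandwidth than repeating the whole title on every stream.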
89. A receiver for receiving data being transmitted over a network to at least one client according to Claim 2, including:
- a processor for raising a request for said data; and
- at least one connector for connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams.
90. The receiver of Claim 89, wherein:
- the connector is connected to the N interactive data streams after all data in the M anti-latency data streams is received by the receiver.
91. The receiver of Claim 89, wherein data in the leading portion is received sequentially.
92. The receiver of Claim 89, wherein the receiver connects to at least two of the anti-latency data streams simultaneously.
93. The receiver of Claim 92 further including:
- a buffer for buffering data in the two anti-latency data streams connected to the client, such that the data is received by the client sequentially.
94. The receiver of Claim 93, wherein the buffer includes random access memory and computer hard disk.
95. The receiver of Claim 93, wherein the buffer consists of random access memory.
96. The receiver of Claim 89, wherein the receiver connects to all of the anti-latency data streams simultaneously.
97. The receiver of Claim 96 further including:
- a buffer for buffering data in the anti-latency data streams connected to the client; and wherein the processor rearranges the buffered data according to a proper sequence.
98. The receiver of Claim 97, wherein the buffer includes random access memory and computer hard disk.
99. The receiver of Claim 97, wherein the buffer consists of random access memory.
100. The receiver of Claim 89, wherein at least a portion of data in the M anti-latency data streams is pre-fetched in the client as pre-fetched data.
101. The receiver of Claim 100, wherein the pre-fetched data is refreshed during a refresh time period.
102. The receiver of Claim 101, wherein the refresh time period is 01:00-06:00.
103. The receiver of Claim 101, wherein the refresh time period is 10:00-15:00.
104. A receiver for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including:
- a buffer for pre-fetching the leading portion in the client as pre-fetched data; and
- a processor for merging the pre-fetched data with the remaining portion.
105. The receiver of Claim 104, wherein the pre-fetched data is refreshed during a refresh time period.
106. The receiver of Claim 105, wherein the refresh time period is an off-peak period.
107. The receiver of Claim 105, wherein pre-fetched data is refreshed once per day.
108. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including:
- at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of said data for receipt by the client; and
- at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream,
wherein the leading portion of said data:
• can be generated at regular anti-latency stream intervals; and
• is generated at the next earliest anti-latency stream interval after at least one client raises a request for said data.
109. The system of Claim 108, wherein:
- said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network;
- the anti-latency data streams include M anti-latency data streams, wherein each of the M anti-latency data streams:
• contains substantially identical data;
• can be generated at regular anti-latency time intervals; and
• is generated at the next earliest anti-latency stream interval after the client raises a request for said data;
- the interactive data streams include N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and each successive interactive data stream is staggered by an interactive time interval.
110. The system of Claim 109, wherein:
- each of the M anti-latency data streams has J segments; and
- the anti-latency time interval >= T.
111. The system of Claim 110, wherein the interactive time interval >= JT.
112. The system of Claim 111, wherein
113. The system of Claim 110, wherein
114. The system of Claim 113, wherein
115. The system of Claim 109, wherein each of the N interactive data streams contains the whole set of said data having K segments.
116. The system of Claim 109, wherein each of the N interactive data streams contains the remaining portion of said data only.
117. The system of Claim 109, wherein:
- the client is connected to the M anti-latency data streams generated for the client when the client raises the request for said data;
- the client is connected to any one of the N interactive data streams; and
- the M anti-latency data streams generated for the client are terminated after the client is connected to one of the N interactive data streams.
118. The system of Claim 108, wherein:
- said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network;
- the anti-latency data streams include M anti-latency data streams including:
I. a leading data stream that:
• contains at least one leading segment of the leading portion of said data;
• can be generated at regular anti-latency time intervals; and
• is generated at the next earliest anti-latency stream interval after the client raises a request for said data;
II. a plurality of finishing data streams, wherein each of the finishing data streams:
• contains the rest of the leading portion of said data;
• corresponds to one of the leading segments; and
• is generated when the corresponding leading segment is generated;
- the interactive data streams include N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and each successive interactive data stream is staggered by an interactive time interval.
119. The system of Claim 118, wherein:
- each of the finishing data streams has J segments; and
- the anti-latency time interval <= T.
120. The system of Claim 119, wherein the interactive time interval <= JT.
121. The system of Claim 119, wherein
122. The system of Claim 120, wherein
123. The system of Claim 120, wherein
124. The system of Claim 118, wherein each of the N interactive data streams contains the whole set of said data having K segments.
125. The system of Claim 118, wherein each of the N interactive data streams contains the remaining portion of said data only.
126. The system of Claim 118, wherein:
- the client is connected to the leading data segment generated for the client when the client raises the request for said data;
- the client is subsequently connected to the corresponding finishing data stream;
- the client is connected to any one of the N interactive data streams; and
- the leading data segment and the corresponding finishing data stream generated for the client are terminated after the client is connected to one of the N interactive data streams.
127. The system of Claim 108, wherein:
- said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network;
- the interactive data streams include N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval; and
- the anti-latency data streams include M anti-latency data streams, such that:
• an m-th anti-latency data stream has F_m segments, wherein F_m is an m-th Fibonacci number;
• the F_m segments can be generated at regular anti-latency stream intervals;
• the first F_m segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and
• subsequent F_(m+1) segments are generated before all data in the preceding F_m segment is received by the client.
128. The system of Claim 127, wherein:
- the client is connected to at least the m-th and (m+1)-th anti-latency data streams when the client raises a request for said data;
- the data in at least the m-th and (m+1)-th anti-latency data streams is buffered in the client; and
- the client is subsequently connected to successive anti-latency data streams before all data in the leading portion is received by the client.
129. The system of Claim 127, wherein:
- the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client; and
- the M anti-latency data streams are terminated after the client is connected to one of the N interactive data streams.
130. The system of Claim 127, wherein each of the N interactive data streams contains the whole set of said data having K segments.
131. The system of Claim 127, wherein each of the N interactive data streams contains the remaining portion of said data only.
132. The system of Claim 127, wherein
133. The system of Claim 127, wherein m starts from 1.
134. The system of Claim 127, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
135. An anti-latency signal generator for generating a plurality of anti-latency data streams to transmit data over a network to at least one client, wherein the anti-latency data streams include:
- a leading data stream that:
• contains at least one leading segment of the leading portion of said data;
• can be generated at regular anti-latency time intervals; and
• is generated at the next earliest anti-latency stream interval after the client raises a request for said data;
- a plurality of finishing data streams, each of the finishing data streams:
• contains the rest of the leading portion of said data;
• corresponds to one of the leading segments; and
• is generated when the corresponding leading segment is generated.
136. The anti-latency signal generator of Claim 135, wherein:
- the client is connected to the leading data stream when the client raises a request for said data; and
- the client is subsequently connected to the corresponding finishing data stream.
137. The anti-latency signal generator of Claim 135, wherein said data is fragmented into K segments each requiring a time T to transmit over the network, and the anti-latency time interval <= T.
138. An anti-latency signal generator for generating M anti-latency data streams to transmit data over a network to at least one client, wherein:
- an m-th anti-latency data stream has F_m segments, and F_m is an m-th Fibonacci number;
- the F_m segments can be generated at regular anti-latency stream intervals;
- the first F_m segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and
- subsequent F_(m+1) segments are generated before all data in the preceding F_m segment is received by the client.
139. The anti-latency signal generator of Claim 138, wherein:
- the client is connected to at least the m-th and (m+1)-th anti-latency data streams when the client raises a request for said data;
- the data in at least the m-th and (m+1)-th anti-latency data streams is buffered in the client; and
- the client is subsequently connected to successive anti-latency data streams until all data in the leading portion is received by the client.
140. The anti-latency signal generator of Claim 138, wherein m starts from 1.
141. The anti-latency signal generator of Claim 138, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/917,639 US7574728B2 (en) | 2001-07-31 | 2001-07-31 | System for delivering data over a network |
US09/917,639 | 2001-07-31 | ||
US09/954,041 US7200669B2 (en) | 2001-07-31 | 2001-09-18 | Method and system for delivering large amounts of data with interactivity in an on-demand system |
US09/954,041 | 2001-09-18 | ||
PCT/CN2002/000527 WO2003013124A2 (en) | 2001-07-31 | 2002-07-29 | System for delivering data over a network |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2451901A1 true CA2451901A1 (en) | 2003-02-13 |
CA2451901C CA2451901C (en) | 2010-02-16 |
Family
ID=27129728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2451901A Expired - Fee Related CA2451901C (en) | 2001-07-31 | 2002-07-29 | System for delivering data over a network |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP1433324A4 (en) |
JP (1) | JP2005505957A (en) |
KR (1) | KR100639428B1 (en) |
CN (1) | CN100477786C (en) |
AU (1) | AU2002322988C1 (en) |
CA (1) | CA2451901C (en) |
WO (1) | WO2003013124A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8533765B2 (en) | 2005-08-26 | 2013-09-10 | Thomson Licensing | On demand system and method using dynamic broadcast scheduling |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7200669B2 (en) * | 2001-07-31 | 2007-04-03 | Dinastech Ipr Limited | Method and system for delivering large amounts of data with interactivity in an on-demand system |
US7574728B2 (en) | 2001-07-31 | 2009-08-11 | Dinastech Ipr Limited | System for delivering data over a network |
US7174384B2 (en) | 2001-07-31 | 2007-02-06 | Dinastech Ipr Limited | Method for delivering large amounts of data with interactivity in an on-demand system |
CN1228982C (en) * | 2002-12-05 | 2005-11-23 | 国际商业机器公司 | Channel combination method of VOD system |
US6932435B2 (en) | 2003-11-07 | 2005-08-23 | Mckechnie Vehicle Components (Usa), Inc. | Adhesive patterns for vehicle wheel assemblies |
KR20070040403A (en) * | 2004-07-27 | 2007-04-16 | 샤프 가부시키가이샤 | Pseudo video-on-demand system, pseudo video-on-demand system control method, and program and recording medium used for the same |
CN101146211B (en) * | 2006-09-11 | 2010-06-02 | 思华科技(上海)有限公司 | Load balance system and method of VoD network |
DE602007011181D1 (en) * | 2006-10-19 | 2011-01-27 | Thomson Licensing | Method for optimizing the transmission of DVB-IP service information by partitioning into multiple multicast streams |
EP2819364A1 (en) * | 2013-06-25 | 2014-12-31 | British Telecommunications public limited company | Content distribution system and method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5724646A (en) * | 1995-06-15 | 1998-03-03 | International Business Machines Corporation | Fixed video-on-demand |
US5822530A (en) * | 1995-12-14 | 1998-10-13 | Time Warner Entertainment Co. L.P. | Method and apparatus for processing requests for video on demand versions of interactive applications |
US6233017B1 (en) * | 1996-09-16 | 2001-05-15 | Microsoft Corporation | Multimedia compression system with adaptive block sizes |
JP3825099B2 (en) * | 1996-09-26 | 2006-09-20 | 富士通株式会社 | Video data transfer method and video server device |
US6563515B1 (en) * | 1998-05-19 | 2003-05-13 | United Video Properties, Inc. | Program guide system with video window browsing |
EP1138154A1 (en) * | 1999-09-27 | 2001-10-04 | Koninklijke Philips Electronics N.V. | Scalable system for video-on-demand |
US7200669B2 (en) * | 2001-07-31 | 2007-04-03 | Dinastech Ipr Limited | Method and system for delivering large amounts of data with interactivity in an on-demand system |
2002
- 2002-07-29: EP application EP02754152A, published as EP1433324A4 (en), not active (Withdrawn)
- 2002-07-29: JP application JP2003518169A, published as JP2005505957A (en), active (Pending)
- 2002-07-29: KR application KR1020047001589A, published as KR100639428B1 (en), not active (IP Right Cessation)
- 2002-07-29: CA application CA2451901A, published as CA2451901C (en), not active (Expired - Fee Related)
- 2002-07-29: AU application AU2002322988A, published as AU2002322988C1 (en), not active (Ceased)
- 2002-07-29: CN application CNB028147650A, published as CN100477786C (en), not active (Expired - Fee Related)
- 2002-07-29: WO application PCT/CN2002/000527, published as WO2003013124A2 (en), active (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CA2451901C (en) | 2010-02-16 |
AU2002322988C1 (en) | 2008-05-22 |
EP1433324A2 (en) | 2004-06-30 |
WO2003013124A2 (en) | 2003-02-13 |
CN1535536A (en) | 2004-10-06 |
KR100639428B1 (en) | 2006-10-30 |
AU2002322988B2 (en) | 2007-11-15 |
JP2005505957A (en) | 2005-02-24 |
WO2003013124A3 (en) | 2003-05-15 |
KR20040041574A (en) | 2004-05-17 |
CN100477786C (en) | 2009-04-08 |
EP1433324A4 (en) | 2007-04-18 |
Similar Documents
Publication | Title |
---|---|
CN1217543C (en) | Apparatus and method for equivalent VOD system | |
US5561637A (en) | Pace control for multicasting in a video server environment | |
US8719889B2 (en) | Live time-shift system based on P2P technology and method thereof | |
US5815662A (en) | Predictive memory caching for media-on-demand systems | |
US5968120A (en) | Method and system for providing on-line interactivity over a server-client network | |
US6047309A (en) | Recording observed and reported response characteristics at server and/or client nodes in a replicated data environment, and selecting a server to provide data based on the observed and/or reported response characteristics | |
EP0753966A2 (en) | Disk striping method for use in video server environments | |
US20030074667A1 (en) | Method for delivering data over a network | |
JP2009506627A (en) | On-demand system and method using dynamic broadcast scheduling | |
CA2451897A1 (en) | Method for delivering data over a network | |
Gao et al. | Threshold-based multicast for continuous media delivery | |
CA2451901A1 (en) | System for delivering data over a network | |
CN1355904A (en) | System and methods for providing video-on-demand services for broadcasting systems | |
KR960025209A (en) | Method and apparatus for retrieving a requested data unit among data units divided into stripes stored on a plurality of disks | |
JP2003506765A (en) | Method and apparatus for distributing data using a distributed storage system | |
EP0746158A3 (en) | Scalable interactive multimedia server system | |
EP2493191B1 (en) | Method, device and system for realizing hierarchically requesting content in http streaming system | |
US20050015807A1 (en) | Network systems and methods to push video | |
KR100851397B1 (en) | Method for video data delivery using partial divide broadcasting | |
Kim et al. | Channel allocation problem in VoD system using both batching and adaptive piggybacking | |
KR20050085362A (en) | Channel tapping in a near-video-on-demand system | |
TW571594B (en) | Methods for providing video-on-demand services for broadcasting systems | |
US20030131126A1 (en) | System for delivering data over a network | |
EP1175776B2 (en) | Video on demand system | |
Chiueh et al. | The integration of real-time I/O and network support in the Stony Brook Video Server |
Legal Events
Code | Title | Description |
---|---|---|
EEER | Examination request | |
MKLA | Lapsed | |
MKLA | Lapsed | Effective date: 20120730 |