WO2012158161A1 - Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network - Google Patents

Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network Download PDF

Info

Publication number
WO2012158161A1
WO2012158161A1 PCT/US2011/036830
Authority
WO
WIPO (PCT)
Prior art keywords
peer
recited
video content
neighbors
pieces
Prior art date
Application number
PCT/US2011/036830
Other languages
English (en)
Inventor
Yin Zhang
Lili Qiu
Richard Yang YANG
Original Assignee
Splendorstream, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Splendorstream, Llc filed Critical Splendorstream, Llc
Priority to PCT/US2011/036830 priority Critical patent/WO2012158161A1/fr
Publication of WO2012158161A1 publication Critical patent/WO2012158161A1/fr

Links

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/104 - Peer-to-peer [P2P] networks
    • H04L67/1087 - Peer-to-peer [P2P] networks using cross-functional networking aspects
    • H04L67/1091 - Interfacing with client-server systems or between P2P systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04H - BROADCAST COMMUNICATION
    • H04H20/00 - Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/20 - Arrangements for broadcast or distribution of identical information via plural systems
    • H04H20/24 - Arrangements for distribution of identical information via broadcast system and non-broadcast system
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/104 - Peer-to-peer [P2P] networks
    • H04L67/1061 - Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
    • H04L67/1063 - Discovery through centralising entities
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 - Server components or server architectures
    • H04N21/218 - Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 - Live feed
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 - Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 - Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/632 - Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing using a connection between clients on a wide area network, e.g. setting up a peer-to-peer communication via Internet for retrieving video segments from the hard-disk of other client devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Definitions

  • the present invention relates to video streaming, and more particularly to efficiently distributing video content using a combination of a peer-to-peer network and a content distribution network.
  • Video traffic over the Internet may be broadly classified into three categories: (1) live video streaming; (2) video on demand; and (3) video conferencing.
  • In live video streaming, the video is broadcast live over the Internet and watched by participants at approximately the same time.
  • In video on demand, users can select and watch a video at a particular time and can even forward and rewind the video to an arbitrary offset.
  • In video conferencing, users located at two or more locations interact via simultaneous two-way video and audio transmissions.
  • a method for efficiently distributing video content comprises requesting from a tracker unit to join either an existing live streaming channel, a video on demand streaming channel or a video conference, where the tracker unit is configured to keep track of active peers in a peer-to-peer network.
  • the method further comprises receiving a list of active peers participating in the live streaming channel, the video on demand streaming channel or the video conference from the tracker unit.
  • the method comprises connecting, by a processor, to a subset of peers in the list provided by the tracker unit to become neighbors in the peer-to-peer network.
  • the method comprises receiving a missing piece of video content from one of the neighbors in the peer-to-peer network or from a content distribution network server based on where the missing piece of video content is to be stored in a video buffer.
  • Figure 1 illustrates a network system that combines the use of a peer-to-peer network with a content distribution network to efficiently distribute video content in accordance with an embodiment of the present invention
  • Figure 2 is a hardware configuration of a client device in the network system in accordance with an embodiment of the present invention.
  • Figure 3 is a flowchart of a method for joining an existing live streaming channel in accordance with an embodiment of the present invention
  • Figure 4 is a flowchart of a method for leaving an existing live streaming channel in accordance with an embodiment of the present invention
  • Figure 5 is a flowchart of a method for adding a new neighbor in a peer-to-peer network by issuing a new connection request in accordance with an embodiment of the present invention
  • Figure 6 is a flowchart of a method for handling the connection request discussed in Figure 5 in accordance with an embodiment of the present invention
  • Figure 7 is a flowchart of a method for removing a new neighbor in a peer-to-peer network in accordance with an embodiment of the present invention
  • Figure 8 illustrates a video buffer of the client device in accordance with an embodiment of the present invention
  • Figure 9 is a flowchart of a method for randomly selecting seed clients in accordance with an embodiment of the present invention.
  • Figure 10 is a flowchart of a method for injecting pieces from the content source when the peer has insufficient upload bandwidth in accordance with an embodiment of the present invention
  • Figure 11 is a flowchart of a method for estimating the bandwidth of a client in accordance with an embodiment of the present invention.
  • Figure 12 is a flowchart of a method for reducing the network delay using random tree pushing in accordance with an embodiment of the present invention.
  • the present invention comprises a method, system and computer program product for efficiently distributing video content.
  • a peer-to-peer network and a content distribution network are used in combination to distribute video content.
  • a content distribution network relies on servers distributed across the Internet to achieve high quality content delivery at a high cost.
  • a peer-to-peer network distributes content among peers without incurring server side cost but may experience poor performance.
  • both the peer-to-peer network and the content distribution network are leveraged to achieve high-quality content delivery at low cost: the peer-to-peer network serves as much content as possible, while the content distribution network bootstraps the content in the peer-to-peer network and serves as a fallback whenever the peer-to-peer network has insufficient bandwidth or insufficient quality, or when the missing piece of video content in the video buffer of the client device has an immediate deadline.
  • Video traffic over the Internet may be broadly classified into three categories: (1) live video streaming; (2) video on demand; and (3) video conferencing.
  • Each of these services places stringent demands on the content providers, Internet service providers and wireless network providers to service such needs.
  • the principles of the present invention provide a means for more efficiently distributing video content over the Internet, involving live video streaming, video on demand and video conferencing, using a combination of a peer-to-peer network and a content distribution network as discussed further below in connection with Figures 1-12.
  • Figure 1 illustrates a network system that combines the use of a peer-to-peer network with a content distribution network to efficiently distribute video content.
  • Figure 2 is a hardware configuration of a client device in the network system.
  • Figure 3 is a flowchart of a method for joining an existing live streaming channel.
  • Figure 4 is a flowchart of a method for leaving an existing live streaming channel.
  • Figure 5 is a flowchart of a method for adding a new neighbor in a peer-to-peer network by issuing a new connection request.
  • Figure 6 is a flowchart of a method for handling the connection request discussed in Figure 5.
  • Figure 7 is a flowchart of a method for removing a new neighbor in a peer-to-peer network.
  • Figure 8 illustrates a video buffer of the client device.
  • Figure 9 is a flowchart of a method for randomly selecting seed clients.
  • Figure 10 is a flowchart of a method for injecting pieces from the content source when the peer has insufficient upload bandwidth.
  • Figure 11 is a flowchart of a method for estimating the bandwidth of a client.
  • Figure 12 is a flowchart of a method for reducing the network delay using random tree pushing.
  • Figure 1 illustrates a network system 100 that combines the use of a peer-to-peer network 101 with a content distribution network that uses one or more content distribution network servers 102 in accordance with an embodiment of the present invention.
  • a peer-to-peer network 101 refers to distributing the tasks or workloads among peers (represented by clients 103A-103E in network 101) forming what is referred to as a network of nodes (where each node is represented by one of clients 103A-103E).
  • Clients 103A-103E may collectively or individually be referred to as clients 103 or client 103, respectively.
  • peers or clients 103 make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts.
  • peers or clients 103 make a portion of their resources available to upload video content to other peers or clients 103 in peer-to-peer network 101 (e.g., represented by the interconnections of clients 103 among themselves in peer-to-peer network 101) as discussed in greater detail further below.
  • a content distribution network refers to a system of computers containing copies of data placed at various nodes (represented by server 102) of a network 100.
  • content distribution network server 102 stores video which may be downloaded by clients 103 (e.g., represented by the connection between clients 103D, 103E and content distribution network server 102).
  • clients 103 may download video content from either content distribution network server 102 or from another client 103 via peer-to-peer network 101.
  • Initially, only content distribution network server 102 may have the video content desired by client 103.
  • Client 103 can then only download the video content from content distribution network server 102. Later, such content may be distributed among other clients 103 in peer-to-peer network 101 thereby allowing such content to be downloaded from a client 103 within peer-to-peer network 101 instead of from content distribution network server 102.
  • A more detailed description of the hardware configuration of client 103 is provided further below in connection with Figure 2.
  • Network 100 further includes a tracker 104, which is a computing unit configured to keep track of the active clients 103 in peer-to-peer network 101 (e.g., represented by the connection between clients 103A, 103B and tracker 104) and to inform a new client 103 of what other clients 103 it should connect to and download content from, as discussed in further detail below.
  • Client 103 may be any type of device (e.g., portable computing unit, personal digital assistant (PDA), smartphone, desktop computer system, workstation, Internet appliance and the like) configured with the capability of communicating with other clients 103, server 102 and tracker 104.
  • While network 100 of Figure 1 illustrates a single peer-to-peer network 101 comprising five peers or clients 103 as well as a single content distribution network server 102, network 100 may include any number of peer-to-peer networks 101 comprised of any number of clients 103 as well as any number of servers 102 for the content distribution network.
  • the interconnections between clients 103 among themselves as well as between content distribution network server 102 and tracker 104 are illustrative. The principles of the present invention are not to be limited in scope to the topology depicted in Figure 1.
  • Figure 2 illustrates a hardware configuration of a client 103 which is representative of a hardware environment for practicing the present invention.
  • Client 103 has a processor 201 coupled to various other components by system bus 202.
  • An operating system 203 runs on processor 201 and provides control and coordinates the functions of the various components of Figure 2.
  • An application 204 in accordance with the principles of the present invention runs in conjunction with operating system 203 and provides calls to operating system 203 where the calls implement the various functions or services to be performed by application 204.
  • Application 204 may include, for example, an application for efficiently distributing video content as discussed further below in connection with Figures 3-12.
  • ROM 205 is coupled to system bus 202 and includes a basic input/output system (“BIOS”) that controls certain basic functions of client 103.
  • Random access memory ("RAM") 206 and disk adapter 207 are also coupled to system bus 202.
  • software components including operating system 203 and application 204 may be loaded into RAM 206, which may be client's 103 main memory for execution.
  • Disk adapter 207 may be an integrated drive electronics ("IDE") adapter that communicates with a disk unit 208, e.g., disk drive. It is noted that the program for efficiently distributing video content, as discussed further below in association with Figures 3-12, may reside in disk unit 208 or in application 204.
  • Client 103 may further include a communications adapter 209 coupled to bus 202.
  • Communications adapter 209 may interconnect bus 202 with an outside network thereby enabling client 103 to communicate with other similar devices.
  • I/O devices may also be connected to client 103 via a user interface adapter 210 and a display adapter 211.
  • Keyboard 212, mouse 213 and speaker 214 may all be interconnected to bus 202 through user interface adapter 210. Data may be inputted to client 103 through any of these devices.
  • a display monitor 215 may be connected to system bus 202 by display adapter 211. In this manner, a user is capable of inputting to client 103 through keyboard 212 or mouse 213 and receiving output from client 103 via display 215 or speaker 214.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the C programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the function/acts specified in the flowchart and/or block diagram block or blocks.
  • video traffic over the Internet may be broadly classified into three categories: (1) live video streaming; (2) video on demand; and (3) video conferencing.
  • each client 103 may establish and manage its own peer-to-peer network topology using the principles of the present invention as discussed below in connection with Figures 3-7.
  • Figure 3 is a flowchart of a method 300 for joining an existing live streaming channel in accordance with an embodiment of the present invention.
  • In step 301, client 103 sends a request to tracker 104 to join an existing live streaming channel.
  • In step 302, client 103 receives a list of active peers in the live streaming channel.
  • tracker 104 may take into account the geographical location of client 103 in deciding which subset of peers/clients 103 to provide to the requesting client 103.
  • In step 303, client 103 connects to a random subset of peers provided by tracker 104 to become neighbors in its own peer-to-peer network 101. That is, after client 103 receives a list of N peers from tracker 104, client 103 connects to a random subset of K peers to become their neighbors in peer-to-peer network 101.
  • In one embodiment, K = N^Topology_Exponent, where Topology_Exponent is a configurable parameter that ranges from 0.5 to 1.
  • In step 304, client 103 determines whether the number of peers received from tracker 104 is less than a threshold number, Min_Peer_Number.
  • In one embodiment, Min_Peer_Number = Min_Node_Degree^(1/Topology_Exponent). If the number of peers returned by tracker 104 is less than Min_Peer_Number, then, in step 305, client 103 periodically requests more peers from tracker 104 to form part of client's 103 peer-to-peer network 101. Additionally, client 103 may discover more peers in the live streaming channel by exchanging peer information with its neighbors.
  • Otherwise, in step 306, client 103 does not request more peers from tracker 104 to form part of client's 103 peer-to-peer network 101.
  • method 300 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 300 may be executed in a different order than presented; the order presented in the discussion of Figure 3 is illustrative. Additionally, in some implementations, certain steps in method 300 may be executed in a substantially simultaneous manner or may be omitted.
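As a minimal sketch of the neighbor-count arithmetic in steps 303-304, assuming the relations K = N^Topology_Exponent and Min_Peer_Number = Min_Node_Degree^(1/Topology_Exponent) stated above (function names and parameter values are illustrative, not from the patent):

```python
import math
import random

# Illustrative values; the text only says Topology_Exponent ranges from
# 0.5 to 1 and leaves Min_Node_Degree configurable.
TOPOLOGY_EXPONENT = 0.75
MIN_NODE_DEGREE = 4

def pick_initial_neighbors(peers):
    """Step 303: connect to a random subset of K = N^Topology_Exponent
    of the N peers returned by the tracker."""
    n = len(peers)
    k = min(n, max(1, math.ceil(n ** TOPOLOGY_EXPONENT)))
    return random.sample(peers, k)

def needs_more_peers(peers):
    """Step 304: keep asking the tracker for more peers while the number
    of known peers is below Min_Peer_Number."""
    min_peer_number = MIN_NODE_DEGREE ** (1.0 / TOPOLOGY_EXPONENT)
    return len(peers) < min_peer_number
```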
  • Figure 4 is a flowchart of a method 400 for leaving an existing live streaming channel in accordance with an embodiment of the present invention.
  • In step 401, client 103 sends a leave notification message to tracker 104.
  • In step 402, client 103 disconnects from all its neighbors in its peer-to-peer network 101.
  • tracker 104 removes client 103 from its list of active peers whenever it receives a leave notification message from client 103 or when it fails to receive any keep-alive message from client 103 for Peer_Keep_Alive_Interval seconds, where Peer_Keep_Alive_Interval is a configurable parameter (e.g., 30 seconds).
  • client 103 periodically sends keep-alive messages to inform tracker 104 that it is alive and of the number of extra neighbors client 103 is willing to accept.
  • method 400 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 400 may be executed in a different order than presented; the order presented in the discussion of Figure 4 is illustrative. Additionally, in some implementations, certain steps in method 400 may be executed in a substantially simultaneous manner or may be omitted.
  • Figure 5 is a flowchart of a method 500 for adding a new neighbor in a peer-to-peer network 101 by issuing a new connection request in accordance with an embodiment of the present invention.
  • In step 501, client 103 determines whether its number of neighbors is below the threshold K (e.g., K = max(N^Topology_Exponent, Min_Node_Degree)), where N is the total number of peers that client 103 currently knows (i.e., the number of peers in client's 103 peer-to-peer network 101).
  • Max_Node_Degree is chosen to ensure that the control overhead (due to, e.g., the keep-alive messages) is not too burdensome. For example, Max_Node_Degree may be set to 15.
  • In step 502, client 103 periodically tries to increase its number of neighbors by connecting to more peers.
  • Otherwise, client 103 does not attempt to connect to more peers and rejects all subsequent connection requests from peers.
  • method 500 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 500 may be executed in a different order than presented; the order presented in the discussion of Figure 5 is illustrative. Additionally, in some implementations, certain steps in method 500 may be executed in a substantially simultaneous manner or may be omitted.
  • Figure 6 is a flowchart of a method 600 for handling the connection request discussed in method 500 in accordance with an embodiment of the present invention.
  • In step 601, client 103 receives a connection request, as discussed in connection with method 500.
  • In step 602, a determination is made as to whether the number of neighbors in client's 103 peer-to-peer network 101 is below a threshold.
  • Max_Node_Degree is chosen to ensure that the control overhead (due to, e.g., the keep-alive messages) is not too burdensome.
  • For example, Max_Node_Degree may be set to 15.
  • If so, in step 603, client 103 accepts the peer's connection request. Otherwise, in step 604, client 103 does not accept the peer's connection request.
  • method 600 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 600 may be executed in a different order than presented; the order presented in the discussion of Figure 6 is illustrative. Additionally, in some implementations, certain steps in method 600 may be executed in a substantially simultaneous manner or may be omitted.
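A minimal sketch of the degree-based logic of methods 500 and 600, assuming a simple in-memory neighbor set (the helper names are hypothetical):

```python
MAX_NODE_DEGREE = 15  # example value from the text

def should_request_more_neighbors(neighbors, known_peers, k_threshold):
    """Method 500: periodically try to add neighbors while the degree is
    below the threshold K; otherwise stop issuing connection requests."""
    return (len(neighbors) < min(k_threshold, MAX_NODE_DEGREE)
            and len(neighbors) < len(known_peers))

def handle_connection_request(neighbors, requester):
    """Method 600, steps 602-604: accept a peer's connection request only
    while the current node degree is below Max_Node_Degree."""
    if len(neighbors) < MAX_NODE_DEGREE:
        neighbors.add(requester)  # step 603: accept
        return True
    return False                  # step 604: reject
```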
  • Figure 7 is a flowchart of a method 700 for removing a new neighbor in a peer-to-peer network 101 in accordance with an embodiment of the present invention.
  • In step 701, client 103 determines if a peer is considered dead.
  • “Dead,” as used herein, refers to a client 103 acting as a peer that does not provide a keep-alive message over a duration of time.
  • client 103 and its neighbors in its peer-to-peer network 101 periodically exchange keep-alive messages to inform each other that they are alive (e.g., once per second).
  • If so, in step 702, client 103 removes the peer from its peer-to-peer network 101.
  • Otherwise, in step 703, client 103 determines if the performance of the peer is unsatisfactory. For example, the rate of lost video content between the peer and client 103 is deemed too high. In another example, the bandwidth of the peer is too low. In a further example, the response time of the peer is too slow.
  • If so, then in step 702, client 103 removes the peer from its peer-to-peer network 101.
  • Otherwise, client 103 continues to determine if a peer is considered dead in step 701.
  • method 700 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 700 may be executed in a different order than presented; the order presented in the discussion of Figure 7 is illustrative. Additionally, in some implementations, certain steps in method 700 may be executed in a substantially simultaneous manner or may be omitted.
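A sketch of the neighbor-removal checks of method 700; the keep-alive timeout and the performance thresholds are assumptions, since the text only names the failure conditions:

```python
import time

KEEP_ALIVE_TIMEOUT_S = 30.0   # assumed timeout for declaring a peer dead
MAX_LOSS_RATE = 0.10          # illustrative performance thresholds
MIN_BANDWIDTH_BPS = 100_000
MAX_RESPONSE_TIME_S = 2.0

def should_remove_neighbor(peer):
    """Steps 701-703: remove a neighbor that is dead (no keep-alive within
    the timeout) or whose performance is unsatisfactory (loss rate too high,
    bandwidth too low, or response time too slow)."""
    if time.time() - peer.last_keep_alive > KEEP_ALIVE_TIMEOUT_S:
        return True  # considered dead
    return (peer.loss_rate > MAX_LOSS_RATE
            or peer.bandwidth_bps < MIN_BANDWIDTH_BPS
            or peer.response_time_s > MAX_RESPONSE_TIME_S)
```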
  • While the previous discussion of Figures 3-7 was directed to live video streaming, the principles of the present invention discussed in connection with Figures 3-7 may be applied to the other categories of video traffic, namely, video on demand and video conferencing.
  • video traffic over the Internet may be broadly classified into three categories: (1) live video streaming; (2) video on demand; and (3) video conferencing.
  • For all such cases of video traffic, the principles of the present invention divide the video content into what are referred to herein as "pieces."
  • Each piece may last for a designated period of time, which is represented by the parameter, Piece Duration.
  • each piece contains a timestamp that specifies the offset of the piece within a video stream.
  • each client 103 divides its video buffer into the following four parts as shown in Figure 8.
  • Figure 8 illustrates a video buffer 800 in accordance with an embodiment of the present invention.
  • video buffer 800 comprises a back buffer 801 storing what is referred to herein as the "back buffer pieces.”
  • back buffer 801 stores recently played pieces.
  • Back buffer 801 may provide pieces for any peer whose play point is less than the current client's 103 play point.
  • the size of back buffer 801, represented by the parameter Back_Buffer_Size, is typically small (e.g., a few seconds).
  • In some embodiments, Back_Buffer_Size can be 5-10 minutes or even longer (if there is enough memory available).
  • For example, Back_Buffer_Size may be set to 10 minutes to make it possible to support high-definition videos, which have much higher data rates and thus impose much higher memory requirements.
  • Video buffer 800 further includes a source protection window 802.
  • Source protection window 802 contains pieces whose deadlines (that is, scheduled play times) are within Source Protection Window Size (parameter representing the size of source protection window 802) pieces from the current play point 803. To assure the quality of video streaming, any missing piece in source protection window 802 will be fetched directly from the content source, such as content distribution network server 102.
  • Source Protection Window Size is set to be very small (typically a few seconds) to minimize the amount of content directly served by the original source, such as content distribution network server 102.
  • Video buffer 800 additionally includes a window of time, referred to herein as the urgent window 804.
  • Urgent window 804 contains pieces whose deadlines (that is, scheduled play times) are within Urgent_Window_Size (parameter representing the size of urgent window 804) pieces after the end of source protection window 802. Missing pieces in urgent window 804 are fetched from neighbors in an earliest-deadline-first fashion as discussed in further detail below.
  • the Urgent Window Size typically lasts for only a few seconds.
  • video buffer 800 includes a window of time, referred to herein as the front buffer 805.
  • Front buffer 805 contains pieces whose deadlines (that is, scheduled play times) are within Front Buffer Size (parameter representing the size of front buffer 805) pieces after the end of urgent window 804. Missing pieces in front buffer 805 are fetched both from the content source, such as content distribution network server 102 (using the direct content injection algorithm described below), and from neighboring clients 103 (using the piece scheduling algorithm described below).
  • For live streaming, Front_Buffer_Size lasts for only a few seconds; for video conferencing, Front_Buffer_Size lasts for no more than a second; for video on demand, Front_Buffer_Size depends on the largest play point difference between client 103 and all its neighbors. A discussion as to how to choose Front_Buffer_Size for video on demand is provided further below.
  • Video buffer 800 is maintained as a sliding window. That is, as play point 803 moves forward, the entire buffer shifts forward accordingly. All missing pieces inside source protection window 802 will be fetched directly from the content source; missing pieces in urgent window 804 will only be fetched from peers; missing pieces in front buffer 805 will be fetched from both the content source and from the peers. The details for determining which piece to next fetch are provided below in connection with the "piece scheduling algorithm.” Furthermore, the details for determining from which peer to request a missing piece is discussed further below in connection with the "peer selection algorithm.”
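The sliding-window fetch rules can be summarized in a few lines. This sketch maps a missing piece's offset from play point 803 to its allowed sources; the window sizes in pieces are illustrative:

```python
# Window sizes in pieces; illustrative values, not from the patent.
SOURCE_PROTECTION_WINDOW_SIZE = 4
URGENT_WINDOW_SIZE = 8
FRONT_BUFFER_SIZE = 32

def allowed_sources(piece_index, play_point):
    """Classify a missing piece by the buffer window it falls into and
    return the set of sources it may be fetched from."""
    offset = piece_index - play_point
    if offset < 0:
        return set()          # behind the play point: back buffer, nothing to fetch
    if offset < SOURCE_PROTECTION_WINDOW_SIZE:
        return {"cdn"}        # source protection window 802: content source only
    if offset < SOURCE_PROTECTION_WINDOW_SIZE + URGENT_WINDOW_SIZE:
        return {"peers"}      # urgent window 804: peers only
    if offset < (SOURCE_PROTECTION_WINDOW_SIZE + URGENT_WINDOW_SIZE
                 + FRONT_BUFFER_SIZE):
        return {"peers", "cdn"}  # front buffer 805: both
    return set()              # beyond the buffer window
```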
  • If the missing piece is in front buffer 805, the missing piece is fetched from neighboring clients 103 in a rarest-latest-first fashion. Specifically, a client 103 computes Count(p), the number of clients 103 within client's 103 1-hop neighborhood that already have piece p. The missing pieces are then sorted in ascending order of Count(p) (thus, "rarest first" is used as the primary order), and when multiple pieces have the same Count(p), they are sorted in descending order of their timestamps (thus, "latest first" is used to break ties).
  • If the missing piece is in urgent window 804, the missing piece is fetched from neighboring clients 103 in an earliest-deadline-first fashion. Specifically, the missing pieces with the earliest scheduled play times are fetched first.
  • With probability Urgent_Window_Probability, missing pieces in urgent window 804 are fetched before missing pieces in front buffer 805; with probability 1 - Urgent_Window_Probability, missing pieces in front buffer 805 are fetched before missing pieces in urgent window 804.
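A sketch of this piece scheduling order: urgent pieces earliest-deadline-first, front-buffer pieces rarest-latest-first, with the two groups interleaved by Urgent_Window_Probability (the callable arguments stand in for state the client already tracks):

```python
import random

URGENT_WINDOW_PROBABILITY = 0.8  # illustrative value

def schedule_missing_pieces(urgent, front, deadline, count, timestamp):
    """Return missing pieces in fetch order. `deadline`, `count` and
    `timestamp` are functions of a piece: scheduled play time, Count(p)
    within the 1-hop neighborhood, and offset within the stream."""
    urgent_order = sorted(urgent, key=deadline)  # earliest deadline first
    # rarest first; descending timestamp ("latest first") breaks ties
    front_order = sorted(front, key=lambda p: (count(p), -timestamp(p)))
    if random.random() < URGENT_WINDOW_PROBABILITY:
        return urgent_order + front_order
    return front_order + urgent_order
```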
  • Peers with higher bandwidth and lower latency are preferable.
  • Let P[k1], P[k2], ..., P[ku] be the set of peers that own a piece p.
  • Client 103 selects a random peer, with probability proportional to each peer's upload bandwidth, from which to request the piece p.
  • client 103 does not fetch from a neighbor when the neighbor's predicted response time is too high compared with the deadline (i.e., scheduled play time) for a given piece.
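A sketch of this peer selection rule: pick among owners of piece p with probability proportional to upload bandwidth, skipping neighbors whose predicted response time would miss the piece's deadline (names are illustrative):

```python
import random

def select_peer_for_piece(owners, upload_bw, predicted_response_time,
                          time_to_deadline):
    """Drop owners predicted to respond too late, then pick one at random
    weighted by estimated upload bandwidth."""
    candidates = [q for q in owners
                  if predicted_response_time(q) < time_to_deadline]
    if not candidates:
        return None  # caller falls back to the content source
    weights = [upload_bw(q) for q in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```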
  • missing pieces in source protection window 802 will be fetched directly from the content source.
  • missing pieces in front buffer 805 may be fetched from the content source using the direct content injection algorithm as discussed below.
  • a piece needs to be first injected from the content source into a subset of clients 103, which are called “seed clients” for the piece herein, before it can be further disseminated among all the clients 103 in a peer-to-peer fashion.
  • the content source needs to inject pieces into more clients 103 and help increase the total upload bandwidth and thus improve the dissemination speed.
  • client 103 should directly fetch the piece from the content source in order to assure high video quality.
  • the amount of video content serviced directly by the content source such as content distribution network server 102, is minimized while assuring high video quality for all clients 103.
  • Pieces inside source protection window 802 have immediate deadlines (i.e., a scheduled play time). To assure high video quality, any missing piece inside source protection window 802 is fetched directly from the content source in an earliest-deadline-first fashion. That is, missing pieces with the earliest scheduled play times are fetched first.
  • The fraction of seed clients is controlled by the parameter Seed_Client_Fraction (e.g., 5%).
  • Figure 9 is a flowchart of a method 900 for randomly selecting seed clients in accordance with an embodiment of the present invention.
  • In step 903, client 103 selects the seed clients based on the computed randomized weight.
  • the choice of random weights ensures that the probability for W[i] > W[j] is equal to bw[i]/(bw[i]+bw[j]) for any i ≠ j.
  • the probability for W[k] to be among the (1+n)*Seed_Client_Fraction largest weights is proportional to bw[k].
  • method 900 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 900 may be executed in a different order than presented; the order presented in the discussion of Figure 9 is illustrative. Additionally, in some implementations, certain steps in method 900 may be executed in a substantially simultaneous manner or may be omitted.
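The stated property P(W[i] > W[j]) = bw[i]/(bw[i]+bw[j]) is satisfied by the standard weighted-sampling key W[i] = U[i]^(1/bw[i]) with U[i] uniform in (0,1); using that key here is an assumption, since the excerpt does not give the weight formula. Seeding the draw per (piece, client) keeps the selection consistent across clients without coordination:

```python
import hashlib

SEED_CLIENT_FRACTION = 0.05  # e.g., 5%

def randomized_weight(piece_id, client_id, bw):
    """W = U^(1/bw), with U drawn deterministically from (piece, client) so
    every client computes the same weights; P(W[i] > W[j]) = bw[i]/(bw[i]+bw[j])."""
    digest = hashlib.sha256(f"{piece_id}:{client_id}".encode()).digest()
    u = (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 1)  # in (0, 1)
    return u ** (1.0 / bw)

def is_seed_client(piece_id, my_id, neighborhood_bw):
    """Method 900 sketch: take the (1+n)*Seed_Client_Fraction largest weights
    as seed clients for this piece; check whether this client is among them."""
    weights = {c: randomized_weight(piece_id, c, bw)
               for c, bw in neighborhood_bw.items()}
    k = max(1, round(len(weights) * SEED_CLIENT_FRACTION))
    top = sorted(weights, key=weights.get, reverse=True)[:k]
    return my_id in top
```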
  • When a peer does not have sufficient upload bandwidth, client 103 performs the following method to inject the missing pieces from the content source, such as content distribution network server 102.
  • Figure 10 is a flowchart of a method 1000 for injecting pieces from the content source when the peer has insufficient upload bandwidth in accordance with an embodiment of the present invention.
  • In step 1001, client 103 exchanges piece availability via a bitmap with other neighbors/peers in peer-to-peer network 101.
  • neighbors periodically exchange bitmaps that summarize piece availability in their video buffers 800 once every Bitmap_Exchange_Interval seconds (where Bitmap_Exchange_Interval is a configurable parameter).
  • a bitmap refers to a binary vector where a "one" indicates a piece is available and a "zero" indicates a piece is missing.
  • the bitmap also contains the current play point information (i.e., play point 803).
  • bitmaps may be piggybacked to any piece request message or data message exchanged among peers.
  • the complete bitmap may only be exchanged when the Bitmap_Exchange_Timer expires periodically. During the interval after the Bitmap_Exchange_Timer expires and before it expires again, all control and data messages only specify changes to the most recent complete bitmap.
  • In step 1002, client 103 computes the per-piece bandwidth deficit.
  • Let P[1], P[2], ..., P[n] be client's 103 direct neighbors.
  • Let P[0] = C, where C refers to client 103.
  • Let the upload bandwidth of P[k] be bw[k].
  • Let BW = bw[0] + bw[1] + ... + bw[n] be the total upload bandwidth within the 1-hop neighborhood of C.
  • Let Count(p) be the number of clients in set {P[k] | k = 0, 1, ..., n} that already have piece p.
  • Let Data_Rate be the data rate of the video stream.
  • In step 1003, client 103 computes the cumulative bandwidth deficit.
  • client 103 first sorts all the pieces in a rarest-latest-first fashion. Specifically, client 103 sorts pieces in ascending order of Count(p) (thus, "rarest first" is used as the primary order), and when multiple pieces have the same Count(p), sorts such pieces in descending order of their timestamps (thus, "latest first" is used to break ties). Let the sorted pieces be p1, p2, ..., pm.
  • Sorting in rarest-latest-first order ensures that p1 has the highest per-piece deficit and pm has the lowest per-piece deficit. For each piece pj, client 103 then computes the cumulative bandwidth deficit: cum_deficit = cum_deficit + Deficit(pj).
  • In step 1004, client 103 determines if the cumulative bandwidth deficit is positive. Whenever the cumulative bandwidth deficit is positive, client 103, in step 1005, computes the inject count, Inject_Count(pj), i.e., the number of copies of pj that need to be injected from the content source into client's 103 1-hop neighborhood.
  • Otherwise, in step 1006, client 103 does not need to inject pieces from the content source, such as content distribution network server 102.
  • client 103, in step 1007, selects the peers within client's 103 1-hop neighborhood that need to directly inject pj from the content source, such as content distribution network server 102. It is noted that Inject_Count(pj) only specifies the total number of additional clients within client's 103 1-hop neighborhood that need to directly inject pj from the content source. It does not specify which client 103 needs to inject piece pj. In order for client 103 to determine whether it is one of the clients that needs to inject piece pj, client 103 applies the same distributed, randomized seed client selection algorithm described above.
  • method 1000 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 1000 may be executed in a different order than presented; the order presented in the discussion of Figure 10 is illustrative. Additionally, in some implementations, certain steps in method 1000 may be executed in a substantially simultaneous manner or may be omitted.
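A sketch of the cumulative-deficit loop of steps 1003-1005 (algorithm AG1). The per-piece Deficit(p) formula from step 1002 is not reproduced in the excerpt, so it is taken here as an input function, and converting a positive cumulative deficit into a copy count via the stream data rate is an assumption:

```python
import math

def compute_inject_counts(sorted_pieces, deficit, data_rate):
    """`sorted_pieces` must already be in rarest-latest-first order.
    Accumulate cum_deficit = cum_deficit + Deficit(pj); whenever it is
    positive, derive Inject_Count(pj), otherwise no injection is needed."""
    inject_count = {}
    cum_deficit = 0.0
    for pj in sorted_pieces:
        cum_deficit += deficit(pj)
        if cum_deficit > 0:
            # assumed conversion: one extra copy per data-rate's worth of deficit
            inject_count[pj] = math.ceil(cum_deficit / data_rate)
        else:
            inject_count[pj] = 0  # step 1006: nothing to inject for pj
    return inject_count
```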
  • Upload traffic sending rate may have to be throttled, because otherwise there can be significant queue buildup, which can cause excessive network delay and even packet losses.
  • a standard token bucket may be used to limit the delay and burstiness of the upload data traffic.
  • the control traffic has a higher priority and is not subject to the rate limiting. Not throttling control traffic is reasonable because control traffic rate is low and will not cause congestion in general. Moreover, most control traffic requires low delay in order to be effective and cannot be queued after data traffic. By rate limiting the upload traffic, creating long queues in a large hidden buffer (e.g., upstream of a digital subscriber line or a cable modem link) may be avoided.
  • the token bucket has the following configurable parameters:
  • Token_Generation_Rate = Upload_BW, which limits the average upload traffic rate; and
  • Token_Bucket_Capacity = Upload_BW * Token_Bucket_Max_Burst_Delay, which limits the maximum burst size (and thus the queueing delay at the true bottleneck).
  • Token_Bucket_Max_Burst_Delay is typically set to a small value (e.g., 200 milliseconds) to avoid sending a large burst of data packets into the network, which may overflow router buffers.
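A minimal token bucket with exactly the two parameters named above; data traffic is gated through it while control traffic bypasses it:

```python
import time

class TokenBucket:
    """Token_Generation_Rate = Upload_BW;
    Token_Bucket_Capacity = Upload_BW * Token_Bucket_Max_Burst_Delay."""

    def __init__(self, upload_bw_bytes_per_s, max_burst_delay_s=0.2):
        self.rate = upload_bw_bytes_per_s
        self.capacity = upload_bw_bytes_per_s * max_burst_delay_s
        self.tokens = self.capacity
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if a data message of nbytes may be sent now.
        Control messages are sent immediately, without consulting the bucket."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```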
  • An Additive Increase Multiplicative Decrease (AIMD) scheme is used to adjust the Request_Quota for each peer.
  • Request_Quota refers to a configurable parameter that specifies a limit on the number of pieces that may be downloaded over a period of time.
  • Request_Quota(P) is upper bounded by the bandwidth-delay product BW(P) * RTT, where BW(P) is the estimated upload bandwidth of peer P, and RTT is the estimated round-trip time between the current client 103 and neighbor P.
  • the second part of the AIMD scheme involves the multiplicative decrease.
  • Request_Quota(P) is reduced to Request_Quota(P) * AIMD_Beta.
  • AIMD_Beta is a configurable parameter between 0 and 1 that controls the speed of multiplicative decrease.
  • For example, AIMD_Beta = 1/2.
  • Request_Quota(P) is lower bounded by 1.
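A sketch of the per-peer AIMD quota adjustment. The text gives the multiplicative decrease (AIMD_Beta), the BW(P) * RTT upper bound and the lower bound of 1; the additive step of one piece per successful round is an assumption:

```python
AIMD_BETA = 0.5  # e.g., 1/2, as in the text

def increase_quota(quota, bw_p, rtt):
    """Additive increase, capped at the bandwidth-delay product BW(P) * RTT."""
    return min(quota + 1, bw_p * rtt)

def decrease_quota(quota):
    """Multiplicative decrease on loss or timeout, never below 1."""
    return max(1, quota * AIMD_BETA)
```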
  • each client 103 limits the number of concurrent downloads from the content source by the parameter Source_Concurrent_Download_Limit.
  • For example, Source_Concurrent_Download_Limit may be set to 6, because most modern browsers (e.g., Firefox®, Chrome®, Internet Explorer®, Safari®) limit the number of concurrent HTTP connections to 6 or higher.
  • Figure 11 is a flowchart of a method 1100 for estimating the bandwidth of a client 103 in accordance with an embodiment of the present invention.
  • In step 1101, client 103 sets msg.send_time to the current time of day based on client's 103 local clock whenever client 103 sends a message msg (either data or control) to a neighbor P[i].
  • In step 1102, P[i] sets msg.recv_time to the current time of day according to P[i]'s local clock when P[i] receives the message of step 1101.
  • In step 1103, P[i] computes the one-way delay for msg as:
  • OWD(msg) = msg.recv_time - msg.send_time
  • client's 103 local clock and P[i]'s local clock need not be synchronized.
  • the absolute value of OWD(msg) may not be very meaningful.
  • the value of OWD(msg) may be negative.
  • It is assumed that client's 103 and P[i]'s clocks will not drift apart too quickly. That is, the offset between client's 103 local time and P[i]'s local time stays roughly constant.
  • In one embodiment, P[i] computes MinOWD(C, P[i]) = min{OWD(msg) | msg is a control message from client 103 to P[i]}, which is the minimum one-way delay for control messages sent from client 103 to P[i].
  • Examples of a control message include: a keep-alive message, a bitmap exchange message, a piece request message, an explicit loss notification message, etc.
  • In step 1107, P[i] sends bw(C, P[i]) as an attribute in its control messages to client 103.
  • This is done for all neighbors {P[i] | i = 1, 2, ..., n}.
  • MinOWD(C, P[i]) may be unreliable if there are not enough OWD samples. In particular, if MinOWD(C, P[i]) is overestimated, then the upload BW can be overestimated.
  • To guard against this, a minimum number of OWD samples, Min_OWD_Samples (e.g., 30), may be required before MinOWD(C, P[i]) is used.
  • method 1100 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 1100 may be executed in a different order than presented; the order presented in the discussion of Figure 11 is illustrative. Additionally, in some implementations, certain steps in method 1100 may be executed in a substantially simultaneous manner or may be omitted.
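A sketch of the delay-based estimation in method 1100 under the stated assumptions (unsynchronized but slowly drifting clocks, MinOWD taken over at least Min_OWD_Samples control-message samples). The final bw(C, P[i]) estimator, from the extra delay data messages incur over the MinOWD baseline, is an assumption, since steps 1104-1106 are only partially reproduced in this excerpt:

```python
MIN_OWD_SAMPLES = 30  # e.g., 30, as in the text

class NeighborBandwidthEstimator:
    """Run by neighbor P[i] to estimate the upload bandwidth of client C."""

    def __init__(self):
        self.control_owds = []

    def on_control_message(self, send_time, recv_time):
        # OWD(msg) = msg.recv_time - msg.send_time; may be negative because
        # the two clocks are unsynchronized, only the minimum matters
        self.control_owds.append(recv_time - send_time)

    def estimate_bw(self, data_bytes, data_owd):
        if len(self.control_owds) < MIN_OWD_SAMPLES:
            return None  # MinOWD would be unreliable with too few samples
        min_owd = min(self.control_owds)
        transfer_time = data_owd - min_owd  # delay attributable to transmission
        if transfer_time <= 0:
            return None
        return data_bytes / transfer_time   # bw(C, P[i]) in bytes per second
```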
  • video traffic over the Internet may be broadly classified into three categories: (1) live video streaming; (2) video on demand; and (3) video conferencing. While the above description related to live video streaming, the principles of the present invention discussed above may be applied to video on demand and video conferencing as discussed below.
  • the size of the front buffer needs to be configured differently.
  • For video on demand, the Front_Buffer_Size (the size of front buffer 805) needs to be large enough so that a client 103 that starts viewing the video later can fetch pieces that are not yet viewed by an earlier client 103. This can significantly improve the fraction of peer-delivered content, especially when the later client 103 has higher upload bandwidth and other resources.
  • Front_Buffer_Size(C) = max{ Min_FBS, min{ Max_FBS, max{ Playpoint(P[k]) - Playpoint(C) | P[k] is a direct neighbor of C } } }
  • Front_Buffer_Size(C) thus has a lower bound of Min_FBS and an upper bound of Max_FBS.
  • the actual size of front buffer 805 is also determined by the largest difference between the neighbors' play points 803 and client's 103 own play point 803. If all the neighbors' play points 803 are behind the play point 803 of client 103, then client 103 only needs to buffer Min_FBS. Otherwise, client 103 needs to buffer possibly more data for its neighbors to download.
  • Min_FBS can be set to a small value (e.g., 10 seconds), and Max_FBS can be set to a larger value (e.g., 5-10 minutes).
  • back buffer 801 stores pieces that have been recently played. So long as memory is available, one can make the size of back buffer 801 as large as possible. For example, when the application of the present invention runs inside a browser, it is often deemed safe when the entire video buffer consumes less than 50-70 MB.
  • The Back_Buffer_Size (the size of back buffer 801) should be made large enough to cover the maximum play point difference between neighbors. For example, if tracker 104 ensures the maximum play point difference is below 5-10 minutes, then back buffer 801 only needs to span 5-10 minutes.
  • client 103 has the option to keep only a subset of pieces inside back buffer 801. For example, for each piece p inside back buffer 801, client 103 can generate a random number uniform(p, C), which is uniformly distributed between 0 and 1 and uses the pair (p, C) as the random seed, where C represents client 103. Client 103 then only keeps a piece p when uniform(p, C) is below a configurable threshold Back_Buffer_Density (a value between 0 and 1). In this way, the expected number of pieces occupied by back buffer 801 is only Back_Buffer_Size * Back_Buffer_Density. By reducing Back_Buffer_Density, back buffer 801 can span a wider time range without increasing the memory consumption. This technique is particularly useful for supporting high-definition videos, where each piece may be very large. The technique is also useful for less popular videos, where play points 803 between neighbors may differ by much more than 5-10 minutes.
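A sketch of the density-based retention rule above. The patent's uniform(p, C) is any pseudo-random number seeded by the (piece, client) pair; a hash-based construction is assumed here so that neighbors can recompute which pieces C retains:

```python
import hashlib

BACK_BUFFER_DENSITY = 0.25  # configurable threshold between 0 and 1

def keep_in_back_buffer(piece_id, client_id):
    """Keep piece p iff uniform(p, C) < Back_Buffer_Density, with uniform(p, C)
    derived deterministically from the (piece, client) pair."""
    digest = hashlib.sha256(f"{piece_id}:{client_id}".encode()).digest()
    uniform = int.from_bytes(digest[:8], "big") / 2**64  # in [0, 1)
    return uniform < BACK_BUFFER_DENSITY
```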
  • Back_Buffer_Size(C) has a lower bound of Min_BBS and an upper bound of Max_BBS.
  • the actual back buffer size is also determined by the largest difference between the neighbors' play points 803 and client's 103 own play point 803. If all the neighbors' play points 803 are before (i.e., greater than) the play point 803 of client 103, then client 103 only needs to buffer Min_BBS. Otherwise, client 103 needs to buffer possibly more data for its neighbors to download.
  • Min_BBS can be set to a small value (e.g., 10 seconds), and Max_BBS can be set to a larger value (e.g., 5-10 minutes).
  • video on demand allows client 103 to perform a forward or backward seek operation. If, after client 103 performs a forward or backward seek operation, the new play point 803 is still within Max_FBS of the neighbors' play points 803, then there is no need to change client's 103 neighborhood. The only thing required is for client 103 to readjust Front_Buffer_Size(C) based on the new play point 803. Client 103 also needs to inform its neighbors of the new play point 803 so that the neighbors can adjust their Front_Buffer_Size accordingly. Finally, client 103 needs to inform tracker 104 of its new play point 803.
  • a large change in client's play point 803 requires client 103 to (i) disconnect its existing neighbors, (ii) contact tracker 104 to obtain a new list of peers whose current play points 803 are close to client's 103 new play point 803, and (iii) connect to new neighbors.
  • video conferencing In comparison with live streaming, video conferencing has three key differences: (i) instead of having content source server(s), multiple participating clients 103 of the conference will generate video and audio data that need to be disseminated to a subset of participants. Hence, the communication is many-to-many (as opposed to one-to-many in the case of live streaming); (ii) video conferencing imposes much more stringent performance constraints on audio and video streams; and (iii) the number of participants in a video conference is typically much smaller than the number of clients 103 in a live streaming channel.
  • Video conferencing can be considered a special case of live streaming, where each participant publishes their audio/video streams to conference server(s) 102, which in turn disseminate the audio/video streams to the other participants 103 who are interested in listening to or watching the streams. Therefore, the mechanism developed for live streaming can be directly applied to support video conferencing. In this scheme, clients 103 need to actively pull (i.e., request) pieces from either peers or content source 102. This is referred to herein as the "pull-based approach."
  • In addition to applying the above mechanism to support video conferencing, an alternative scheme based on random tree pushing is developed to further reduce network delay.
  • the goal of the random tree based distribution scheme is to develop a shallow tree that has enough bandwidth to distribute the content to all the participants. The tree should be shallow since the network delay increases with the depth of the tree.
  • One way is to optimize tree construction based on network topology and traffic. However, this requires up-to-date global information about the network topology and traffic and frequent adaptation to the changes in the topology and traffic. In order to achieve high efficiency without requiring global information or coordination, the following method (random tree based pushing) may be used as discussed below in connection with Figure 12.
  • Figure 12 is a flowchart of a method 1200 for reducing network delay using random tree pushing in accordance with an embodiment of the present invention.
  • In step 1201, the source client 103, who generates the audio/video stream, randomly picks a set of nodes (other clients 103) as the next-hop forwarders for a given piece of content. Nodes are selected as next hops with probability proportional to their upload bandwidth, since nodes with higher bandwidth should be preferred as forwarders.
  • In step 1202, the source client 103 keeps adding next hops until the total upload bandwidth of all next hops is no less than the bandwidth required to deliver to all the remaining receivers. More formally, let C be the source client and let P[1], P[2], ..., P[n] be the set of receivers. Let bw[k] be the upload bandwidth of P[k]. Let p be a new piece to be disseminated.
  • In step 1203, the source client 103 further partitions the receivers in the current video session and assigns each receiver to one of the next hops, which will be responsible for forwarding the video stream to the assigned receiver either directly or through a multi-hop path.
  • the number of receivers assigned to the next hop is proportional to the next hop's bandwidth. For example, suppose there are 9 receivers and 2 next-hop forwarders: node A has 2 Mbps and node B has 1 Mbps. Node A is responsible for delivering to 6 receivers and node B is responsible for delivering to 3 receivers.
  • Let P[k1], ..., P[km] be the set of m next-hop forwarders determined in step 1201.
  • Let P[k(m+1)], ..., P[kn] be the set of (n-m) receivers (i.e., non-forwarders).
  • the source client 103 then sends a data message to each forwarder P[kj], where the data message contains piece p as well as the set of receivers Sj. If the source client 103 does not have enough bandwidth to forward piece p to all the forwarders in a timely fashion, then client 103 has the option of forwarding a copy of piece p to the conference server(s) and letting the conference server(s) forward piece p to some next-hop forwarders.
  • In step 1204, after a next-hop forwarder P[kj] receives piece p and set Sj, P[kj] can directly forward piece p to all the receivers in set Sj.
  • It is also possible for P[kj] to pick its own next-hop forwarders using the same probabilistic approach as in step 1201, and to assign the receivers to the forwarders as described in step 1203. This process is repeated recursively until the video reaches all the receivers.
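A sketch of steps 1201-1203: forwarders are drawn with probability proportional to upload bandwidth until their combined bandwidth covers the remaining receivers, then receivers are partitioned among forwarders in proportion to bandwidth:

```python
import random

def pick_forwarders(receivers, bw, data_rate):
    """Steps 1201-1202: keep adding bandwidth-weighted random next hops until
    their total upload bandwidth is no less than data_rate * remaining receivers."""
    remaining = list(receivers)
    forwarders = []
    while remaining and sum(bw[f] for f in forwarders) < data_rate * len(remaining):
        pick = random.choices(remaining, weights=[bw[r] for r in remaining], k=1)[0]
        remaining.remove(pick)
        forwarders.append(pick)
    return forwarders, remaining

def assign_receivers(forwarders, receivers, bw):
    """Step 1203: split the receiver list among forwarders proportionally to
    each forwarder's upload bandwidth (the last forwarder takes the remainder)."""
    total_bw = sum(bw[f] for f in forwarders)
    assignment, start = {}, 0
    for i, f in enumerate(forwarders):
        if i == len(forwarders) - 1:
            share = len(receivers) - start
        else:
            share = round(len(receivers) * bw[f] / total_bw)
        assignment[f] = receivers[start:start + share]
        start += share
    return assignment
```

With the 9-receiver example above (node A at 2 Mbps, node B at 1 Mbps), assign_receivers gives A six receivers and B three, matching the text.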
  • next-hop forwarders are selected from the current receivers interested in receiving video from the source client 103.
  • Method 1200 can be easily extended to include other active users (who are not interested in watching client's 103 video stream) as candidate next-hop forwarders.
  • the push-based scheme and the pull-based scheme are not mutually exclusive. They can be easily combined into a hybrid scheme. For example, pieces are primarily distributed using the push-based approach. Meanwhile, client 103 can request (i.e., pull) missing pieces from either its neighbors or the conference server (e.g., content distribution network server 102).
  • method 1200 may include other and/or additional steps that, for clarity, are not depicted. Further, in some implementations, method 1200 may be executed in a different order than presented; the order presented in the discussion of Figure 12 is illustrative. Additionally, in some implementations, certain steps in method 1200 may be executed in a substantially simultaneous manner or may be omitted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method, system and computer program product for efficiently distributing video content are disclosed. A peer-to-peer network and a content distribution network are used in combination to distribute video content. A content distribution network relies on servers distributed across the Internet to achieve high-quality content delivery at a high cost. A peer-to-peer network distributes content among peers without incurring server-side cost, but may suffer from poor performance. The peer-to-peer network and the content distribution network are leveraged in a manner that achieves high-quality, low-cost content delivery by allowing the peer-to-peer network to serve as much content as possible, while using the content distribution network to bootstrap the content in the peer-to-peer network and as a fallback whenever the peer-to-peer network has insufficient bandwidth or insufficient quality, or when the missing piece of video content in the client device's video buffer has an immediate deadline.
PCT/US2011/036830 2011-05-17 2011-05-17 Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network WO2012158161A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2011/036830 WO2012158161A1 (fr) 2011-05-17 2011-05-17 Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2011/036830 WO2012158161A1 (fr) 2011-05-17 2011-05-17 Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network

Publications (1)

Publication Number Publication Date
WO2012158161A1 (fr) 2012-11-22

Family

ID=47177228

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/036830 WO2012158161A1 (fr) 2011-05-17 2011-05-17 Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network

Country Status (1)

Country Link
WO (1) WO2012158161A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7139313B2 (en) * 1997-03-14 2006-11-21 Microsoft Corporation Digital video signal encoder and encoding method
US6961539B2 (en) * 2001-08-09 2005-11-01 Hughes Electronics Corporation Low latency handling of transmission control protocol messages in a broadband satellite communications system
US7792982B2 (en) * 2003-01-07 2010-09-07 Microsoft Corporation System and method for distributing streaming content through cooperative networking
US20070011262A1 (en) * 2005-06-21 2007-01-11 Makoto Kitani Data transmission control on network
US20090177792A1 (en) * 2006-06-27 2009-07-09 Yang Guo Performance Aware Peer-to-Peer Content-on-Demand
US20090077254A1 (en) * 2007-09-13 2009-03-19 Thomas Darcie System and method for streamed-media distribution using a multicast, peer-to- peer network

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015055100A1 (fr) * 2013-10-18 2015-04-23 Tencent Technology (Shenzhen) Company Limited Peer-to-peer upload scheduling
US10455013B2 (en) 2013-10-18 2019-10-22 Tencent Technology (Shenzhen) Company Limited Peer-to-peer upload scheduling
EP2887688A1 (fr) * 2013-12-23 2015-06-24 Thomson Licensing Distribution of audiovisual content for display devices
EP2887687A1 (fr) * 2013-12-23 2015-06-24 Thomson Licensing Distribution of audiovisual content for display devices
WO2017174021A1 (fr) * 2016-04-07 2017-10-12 深圳市中兴微电子技术有限公司 Port traffic management method and device, and computer storage medium

Similar Documents

Publication Publication Date Title
US8850497B2 (en) Efficiently distributing video content using a combination of a peer-to-peer network and a content distribution network
CN102355448B (zh) Cloud streaming media data transmission method and system
US7526564B2 (en) High quality streaming multimedia
CN108307198B (zh) Streaming service node scheduling method and apparatus, and scheduling node
WO2015140695A1 (fr) Bandwidth management in a content distribution network
KR101231208B1 (ko) Method for providing a peering offer list, method for forming a P2P network, P2P application device, and terminal and network apparatus for forming a P2P network
Liang et al. Incentivized peer-assisted streaming for on-demand services
KR20160003024A (ko) Data communication system and method
WO2024021777A1 (fr) Data transmission method, related apparatus, device, and storage medium
WO2012158161A1 (fr) Efficient distribution of video content using a combination of a peer-to-peer network and a content distribution network
US8407280B2 (en) Asynchronous multi-source streaming
Pal et al. A survey on adaptive multimedia streaming
Muscat et al. A Hybrid CDN-P2P Architecture for Live Video Streaming
Famaey et al. Towards intelligent scheduling of multimedia content in future access networks
Ha et al. Topology and architecture design for peer to peer video live streaming system on mobile broadcasting social media
Hossain et al. Distributed dynamic MCU for video conferencing in peer-to-peer network
US10356482B2 (en) Content distribution system and method
Azgin et al. A semi-distributed fast channel change framework for IPTV networks
Jiang et al. Nsync: Network synchronization for peer-to-peer streaming overlay construction
Chan et al. An application-level multicast framework for large scale VOD services
Seung et al. Randomized routing in multi-party internet video conferencing
Byun et al. A tracker-based P2P system for live multimedia streaming services
Krishnamohan TIDE: A scalable continuous-media caching network
Hammami et al. Comprehensive study of buffering mechanisms in hybrid live P2P streaming protocol HLPSP
Ouyang et al. On providing bounded delay service to subscribers in P2P live streaming systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11865802

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11865802

Country of ref document: EP

Kind code of ref document: A1