US20170041238A1 - Data flow control method - Google Patents

Data flow control method

Info

Publication number
US20170041238A1
Authority
US
United States
Prior art keywords
data
streaming
receiving node
node
media data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/301,602
Inventor
Manh Hung Peter Do
Shuxun Cao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbital Multi Media Holdings Corp
Original Assignee
Orbital Multi Media Holdings Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbital Multi Media Holdings Corp filed Critical Orbital Multi Media Holdings Corp
Assigned to ORBITAL MULTI MEDIA HOLDINGS CORPORATION. Assignment of assignors interest (see document for details). Assignors: CAO, Shuxun; DO, MANH HUNG PETER
Publication of US20170041238A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0002Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission rate
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0014Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the source coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0015Systems modifying transmission characteristics according to link quality, e.g. power backoff characterised by the adaptation strategy
    • H04L1/0019Systems modifying transmission characteristics according to link quality, e.g. power backoff characterised by the adaptation strategy in which mode-switching is based on a statistical approach
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876Network utilisation, e.g. volume of load or congestion level
    • H04L43/0882Utilisation of link capacity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/24Multipath
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • H04L47/125Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2416Real-time traffic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/25Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/29Flow control; Congestion control using a combination of thresholds
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/613Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for the control of the source by the destination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80Responding to QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2401Monitoring of the client buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2402Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26258Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44004Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/637Control signals issued by the client directed to the server or network components
    • H04N21/6373Control signals issued by the client directed to the server or network components for rate control, e.g. request to the server to modify its transmission rate
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6581Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8543Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/26Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/30Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L65/4069
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the present application relates to data transmission protocols for the transfer of media data from a streaming server to one or more clients. More particularly, the present invention provides an enhanced data flow control method that can be used in conjunction with an existing protocol such as TCP/IP.
  • the data flow control method according to the present invention takes into consideration network conditions as well as receiving node or client device conditions, such as the data buffer of the client player, to improve the speed and quality of media data transmission for Internet protocol television (IPTV) applications.
  • IPTV Internet protocol television
  • Video traffic currently accounts for over 60% of the world's bandwidth usage over communication networks such as the Internet and similar wireless communication networks (LANs, WLANs, etc.). How such data is injected into a network has a strong influence on the overall data flow through the network. Uncontrolled data injection into a network can lead to congestion impacts such as slow overall traffic flow, packet delay, packet loss, out-of-order packets, packet re-transmission, flooding/crashing of network devices (routers, switches, etc.), and flooding of uncontrollable traffic. These types of events cause network traffic to slow down and sometimes to come to a complete stop if the switching and routing network equipment in use is unable to cope with the flow demand. Additionally, unmanaged data injection has a negative impact on applications that rely on real-time communication such as VoIP (Voice over IP), live broadcasts of media events, real-time video conferences and other time-sensitive applications.
  • VoIP Voice over IP
  • the Transmission Control Protocol is one of the core protocols of the Internet protocol suite (IP), i.e. the set of network protocols used for the Internet.
  • IP Internet protocol suite
  • TCP provides reliable, ordered, error-checked delivery of a stream of octets between programs running on computers connected to a local area network (LAN), intranet or the Internet. It resides at the transport layer.
  • IPTV Internet Protocol television
  • IPTV Internet Protocol television
  • TCP is the most commonly used protocol on the Internet. This is because TCP offers error correction: when the TCP protocol is used there is a “guaranteed delivery”.
  • TCP flow control determines when data needs to be re-sent, and stops the flow of data until previous packets are successfully transferred. When a packet of data is sent, a collision may occur; when this happens, a receiving client system or end-point can re-request the packet from the transmitting server until the whole packet is complete and identical to the original packet that was transmitted.
  • TCP is an advanced transport protocol with a 100% success rate on data delivery and built-in flow control and error correction, which runs effectively over unmanaged networks. The use of TCP is currently required for all Open Network IPTV deployments where one or more network segments are not managed by the IPTV service operator.
  • Standard TCP involves large overheads in data transmission due to its default data frame structure.
  • the header refers to the first part of a data cell or packet, containing information such as source and destination addresses and instructions on how the telecommunications network is to handle the data.
  • the header is part of the overhead in a data transmission protocol.
  • the header is usually 40 bytes of each packet (20-byte TCP and 20-byte IP headers).
  • TCP and IP headers can be larger than 20 bytes if “options” are enabled in the data transmitted.
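  • As a small arithmetic illustration of the overhead described above (not part of the original disclosure), the sketch below computes how much of each packet the standard 40-byte TCP/IP header consumes for a few payload sizes.

```python
# Illustrative only: share of each packet consumed by the standard 40-byte
# TCP/IP header (20-byte TCP + 20-byte IP, no options).
HEADER_BYTES = 40

for payload in (1, 100, 512, 1460):
    total = HEADER_BYTES + payload
    print(f"payload={payload:4d} B  packet={total:4d} B  "
          f"header overhead={HEADER_BYTES / total:.1%}")
```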
  • ICMP Internet Control Message Protocol
  • TCP does not offer the ability to cut off the transmission flow to relieve network congestion. Further, TCP is incapable of managing bandwidth sending rates to an IPTV client player without creating unnecessary data waste.
  • UDP User Datagram Protocol
  • IP Internet Protocol
  • UDP uses a simple transmission model with minimum protocol mechanisms. It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user's program.
  • UDP provides checksums for data integrity and port numbers for addressing different functions at the source and destination of the datagram. However, in UDP there is no guarantee of delivery, ordering, or duplicate protection.
  • UDP is suitable for purposes where error checking and correction is either not necessary or is performed at the application prior to transmission, avoiding the overhead of such processing at the network interface level.
  • Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which is not a viable option in a real-time system. If error correction facilities are needed at the network interface level, an application residing on a host or a system for transmitting such data will need to make use of the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP), which are designed for this purpose.
  • TCP Transmission Control Protocol
  • SCTP Stream Control Transmission Protocol
  • UDP has some unique advantages over TCP but also has drawbacks. For instance, UDP is required when the transmission requirements combine unicast and multicast methods. The use of multicast allows occupation of the available bandwidth at fixed data rates, without facing user growth capacity issues. However, UDP cannot be used to send important data such as webpages, database information, etc., and its present use is mostly limited to streaming audio and video. UDP can offer speed and is faster for data transmissions when compared to TCP because there is no form of flow control or error correction in UDP. Therefore data sent over the Internet using UDP is affected by collisions, and errors will be present. Therefore UDP is only recommended for streaming media over a managed network, i.e. a network in which quality of service (QoS) can be guaranteed.
  • QoS quality of service
  • TCP Transmission Control Protocol
  • the Nagle algorithm proposes improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network.
  • Congestion Control in IP/TCP Internetworks (RFC 896)
  • a “small packet problem” is described where an application repeatedly emits data in small chunks, frequently only 1 byte in size. Since TCP packets have a 40-byte header (20 bytes for TCP, 20 bytes for IPv4), this results in a 41-byte packet for only 1 byte of useful information, which is a huge overhead. This situation often occurs in Telnet sessions, where most key presses generate a single byte of data that is transmitted immediately. Over slow network links, many such packets can be in transit at the same time, potentially leading to congestion collapse.
  • Nagle's algorithm works by combining a number of small outgoing messages and sending them all at once. Specifically, the sender system or application should keep buffering its output until it has a full packet's worth of output, so that the output can be sent all at once. This existing technique making use of the Nagle algorithm is explained below.
  • TCP collects these small packets and sends them out at once as one whole packet only after the acknowledgement for the previously sent packet is received. Therefore, as more acknowledgements arrive, more data packets are sent.
  • the round trip time (RTT) value for a TCP connection normally ranges from 100 ms to 300 ms. This delay allows TCP enough time to collect small packets before the next acknowledgement arrives.
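  • The small-packet coalescing behaviour described above can be sketched as follows. This is a simplified illustration of Nagle-style buffering, not the kernel implementation; the class and attribute names are illustrative.

```python
# Simplified, illustrative model of Nagle-style coalescing of small writes.
# Real TCP stacks implement this in the kernel; this sketch only mirrors the
# decision rule: send at once if a full segment is ready or nothing is in
# flight, otherwise keep buffering until the outstanding data is acknowledged.
MSS = 1460  # typical maximum segment size in bytes

class NagleSender:
    def __init__(self):
        self.pending = bytearray()   # application data not yet handed to the network
        self.unacked = False         # True while a sent segment still awaits its ACK

    def write(self, data: bytes):
        self.pending += data
        self._try_send()

    def on_ack(self):
        self.unacked = False
        self._try_send()

    def _try_send(self):
        while self.pending and (len(self.pending) >= MSS or not self.unacked):
            segment = bytes(self.pending[:MSS])
            del self.pending[:MSS]
            self.unacked = True
            print(f"sent segment of {len(segment)} bytes")
```

Latency-sensitive applications commonly disable this coalescing with the TCP_NODELAY socket option, which is presumably the kind of switch an adaptive enable/disable scheme such as that of FIG. 14 would toggle.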
  • the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising identifying a condition of the communication network between said sending and receiving nodes, identifying a condition of the receiving node, and adjusting the media data flow through said communication network based on the identified condition of the communication network and the identified condition of the receiving node.
  • the sending node is configured for encoding and streaming said media data to the receiving node based on a request for such data from the receiving node, and the receiving node is capable of decoding and playback of said media data.
  • the step of identifying the condition of the network comprises detecting the level of network traffic and determining whether the network between the sending node and the receiving node is in a normal state or in a congested state, based on the detected level of network traffic;
  • the present invention responsive to a request for media data from the receiving node, the present invention comprises
  • the method comprises adjusting the rate of data streaming to a rate that is equal to a draining rate of the buffer during playback.
  • the method comprises:
  • the method comprises:
  • the method comprises reordering of media data packets arriving at the receiving node out of sequence by making use of the identifier of the sequence in the header part of each media data frame.
  • the method further comprises:
  • a streaming application at the sending node is capable of adaptively encoding the media data to be streamed from the sending node according to a bit rate suitable for the identified buffer conditions of the buffer of the receiving node.
  • the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
  • the method comprises the steps of:
  • identifying the conditions of the receiving node including the screen size, resolution and capability of the display screen connected to said node;
  • if the network condition is identified as being congested, then continuing said streaming at the current streaming rate by only streaming I-frames of the media data and not streaming B and P frames of said media data to the receiving node, until the network condition changes to normal, to ensure that the media data is continuously streamed for playback at the receiving node.
  • the method comprises:
  • the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
  • the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
  • the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
  • the sending node is an IPTV streaming server and the receiving node is a client device including a multimedia player.
  • the present invention provides a system for implementing the method as claimed in any one of the preceding claims comprising a sending node and a receiving node capable of communication via a communication network, the sending node having a streaming module capable of streaming multimedia data stored in a memory means of the sending node, and the receiving node capable of requesting a multimedia data to be streamed from the sending node for playback on a multimedia player incorporated in the receiving node.
  • FIG. 1 and FIG. 2 show the frame structures for TCP and UDP, respectively.
  • FIG. 3 shows a flow chart depicting an exponential speed up mode for the data flow control method according to a first embodiment.
  • FIG. 4 shows a flow chart depicting an exponential back off mode for data flow control method according to the first embodiment.
  • FIG. 5 shows a flow chart depicting a linear trickle off mode for the data flow control method according to the first embodiment.
  • FIG. 6 shows a method of bitrate selection for a data sharing mode of the data flow control method according to a second embodiment.
  • FIG. 7 shows a method of adaptive bitrate selection for high quality video data playback for the data flow control method according to a third embodiment.
  • FIG. 8 shows a method of bitrate selection based on resolution for the data flow control method according to the third embodiment.
  • FIG. 9 shows a flow chart depicting a method for a selective frame drop for the data flow control method according to the third embodiment.
  • FIGS. 10a and 10b show charts depicting the viewing experience with and without the selective frame drop of FIG. 9, respectively.
  • FIG. 11 shows a flow chart depicting a method for allocation of bandwidth for high motion video frames for the data flow control method according to the third embodiment.
  • FIG. 12 shows a flow chart depicting a buffer repair mode for the data flow control method according to the third embodiment.
  • FIG. 13 shows a flow chart depicting the interaction between modes of the first, second and third embodiments.
  • FIG. 14 shows a flow chart depicting a method for adaptively enabling or disabling the Nagle algorithm according to the present invention.
  • FIGS. 15a and 15b show a table and graph depicting the performance test results with and without the use of the method of FIG. 14, respectively.
  • FIG. 1 illustrates a TCP frame structure
  • FIG. 2 illustrates a UDP frame structure.
  • the payload field in the shown frames contains the actual data.
  • TCP has a more complex frame structure than UDP. This is largely due to TCP being a reliable connection-oriented protocol, as explained in the background section.
  • the additional fields shown in FIG. 1 are those needed to ensure the “guaranteed delivery” offered by TCP. Therefore TCP is a much slower data transmission protocol when compared to UDP, and with much larger overheads. This is especially so if TCP is combined with the use of the Nagle algorithm described in the background section.
  • the present invention provides a new data transmission protocol or data flow control method for use in the Internet protocol suite.
  • the present invention provides a plurality of flow mechanisms or modes for media data packet transmission, preferably video data transmission over a communication network that overcomes the drawbacks of TCP and UDP and provides speed, flow control and error correction mechanisms, with minimal network traffic overheads.
  • the present invention provides a data flow control method that handles data flow management on the application layer of the OSI model.
  • while the present invention is concerned with media data and specifically video data for IPTV services, a skilled person would easily understand that the present invention can be used for managing the flow of any type of data and information that can be transported over a communication network such as the Internet.
  • the data flow control method is based on monitoring one or more sending node or server side conditions (for instance, an IPTV provider's server for sending the data) as well as one or more receiving node or client side conditions (client device such as a player or a set-top box for receiving the data).
  • the present invention facilitates communication for information and data exchange between the sending server & receiving client for communicating local network conditions at each end. Based on the conditions detected from both the client device and the server device, the method of data flow control according to the first embodiment is able to calculate and predict the network environment.
  • the flow control method according to the present invention is capable of applying one or more data transmission modes or techniques (these modes are explained in detail below) to ensure that high quality video data can be streamed over unmanaged and/or fluctuating networks.
  • the data flow control method of the present invention is capable of consuming unused bandwidth (left over or wasted bandwidth) in the network for more efficient data transmissions by data sharing, local caching and data recycling.
  • the data flow control of the present invention provides high video quality delivery and maintains smoothness of video playback on any network.
  • the data flow control method or protocol incorporates a combination of RTSP (Real-time Streaming Protocol) encapsulated over HTTP (Hypertext Transfer Protocol).
  • RTSP Real-time Streaming Protocol
  • HTTP Hypertext Transfer Protocol
  • the data flow control method according to the present invention is handled in the application layer.
  • the method is capable of implementing one or more modules which reside on either the server side or the client side terminals, or both.
  • the client and server nodes, equipped with the modules for implementing such flow controls constantly work together in collaboration to predict the network flow, adjust data flow, enhance video quality, navigate through various network routes to maintain a good IPTV user experience that conventional data transmission protocols such as TCP and UDP cannot offer.
  • the data flow control method is configured to re-evaluate the video bitrates on the buffer and replace lower/poor quality segments with higher video quality. Such repair takes place safely and effectively only when network condition permits.
  • the flow control method is configured for recycling data by caching popular data on local storage devices to prevent repeated streaming from the server, and is also configured to share locally cached data with peers.
  • the application layer data flow control method according to the present invention comprises data flow control methods and video quality control methods.
  • data flow control methods or modes that are applied based on network conditions and buffer conditions are:
  • data flow control methods to achieve data sharing, improve overall network and streaming efficiency and reduce network resource usage are:
  • P2P Hybrid point-to-point
  • video quality flow control methods are:
  • Video frame selective drop or frame bypass (maintaining video continuity by ignoring non-I frames until the network condition improves)
  • the present invention proposes a plurality of data flow control mechanisms that can work in conjunction with TCP over the public network and navigate around congested network segments.
  • the following mechanisms or data flow control modes are different from the techniques applied by traditional TCP or UDP because they are based on a collaboration of network conditions when the data is streamed from a sending node as well as the conditions of the player or client buffer. Previous and existing systems do not have this collaboration and rely on reporting of anomalies in the network.
  • network conditions and buffer conditions can be obtained from the server (the sending node—this need not be the only or original source of the data and may also be an intermediate node storing the data file) or the client or end user receiving node/player, or by both nodes making use of information exchanges between them.
  • the exponential speed up data flow control mechanism of the first embodiment is shown in FIG. 3 .
  • the preferred steps of this mechanism are explained below:
  • steps 3a-3e set out the main features of the exponential speed-up data flow control mode.
  • the following steps explain mechanisms employed based on additional abnormal buffer conditions and network conditions and sets out the procedure for achieving efficient data flow following exponential speed up mode by interacting with other dataflow control mechanisms of the first embodiment.
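  • The speed-up steps themselves (3a-3e) are shown only in FIG. 3, so the following sketch is a speculative illustration rather than the claimed procedure: it assumes the send rate is doubled each cycle, capped by an estimated available bandwidth, until the reported client buffer reaches a safe fill level. All names, thresholds and the toy buffer model are assumptions.

```python
# Speculative sketch of an exponential speed-up ramp (the actual steps 3a-3e
# appear only in FIG. 3 of the application).  Assumed behaviour: the send rate
# doubles each cycle, capped by estimated bandwidth, and the ramp stops once
# the client buffer reaches an assumed safe fill level of 80%.
def exponential_speed_up(rt_bitrate_bps, est_bandwidth_bps, buffer_fill):
    """Return the per-cycle send rates used while ramping up (illustrative)."""
    rates = []
    rate = rt_bitrate_bps                               # start at the real-time playback rate
    while buffer_fill < 0.80 and rate < est_bandwidth_bps:
        rates.append(rate)
        buffer_fill += (rate / rt_bitrate_bps) * 0.05   # toy model of buffer growth per cycle
        rate = min(rate * 2, est_bandwidth_bps)         # exponential increase, capped
    return rates
```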
  • This back-off mechanism of the flow control method of the first embodiment can be triggered upon detection of congestion or conditions of the network or player buffer that match pre-set back-off criteria.
  • the best solution to ease off congestion for IPTV packet data transmissions is to back-off or navigate using one or more different paths to avoid contributing to the existing network congestion and traffic.
  • the exponential back off mode or mechanism of the data flow control method according to the present invention (also referred to as a friendly back-off mode) is triggered when congestion is detected. This mode will suspend all other flow control modes and reduce data sending rates to near zero or to 0.05 × rt_bitrate to yield bandwidth to other applications.
  • the exponential back off data flow control mechanism of the first embodiment is shown in FIG. 4 .
  • the preferred steps of this mechanism are explained below:
  • the server or a system having a streaming application or module will try to push forward media data according to the ‘Exponential Speed Up’ mechanism set out in item 5.1.1.
  • the client device or player will periodically report to the streaming application its cached/buffered media data size and ‘real-time play back’ duration, i.e. the amount of playing time left in the buffer.
  • the update is sent every 2 to 5 seconds, depending on the RTP/RTCP (Real-time Control Protocol) calculation used.
  • the server or streaming application can record this information for later use.
  • the streaming server will stop sending data to the TCP layer for a time span which, for instance, equals ⅓ of the time (‘Real-Time Play Back’) reported from the player in step 4b.
  • After the delay time in step 4c expires, the streaming application or module will try to push data at the last push speed recorded by the Exponential Speed Up mechanism (5.1.1 above) to compensate for the earlier delay/loss in step 4c.
  • If step 4d fails due to network throughput or congestion, or if the player is not receiving all packets within a normal timeline, the streaming application will stop sending and recompute a new calculation based on the new ‘Real-Time Play Back’ reported from the player. The process will continue to yield or free up network bandwidth until the ‘Real-Time Play Back’ is less than 15 seconds (or a defined critical level) or until the network condition becomes normal, i.e. there is no congestion and the transmission occurs within a predicted or expected time and at an expected QoS level.
  • ‘Dynamic Multiple Link’ mechanism set out in 5.1.4 can be applied to compensate earlier delay/loss in a quick and efficient manner.
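  • A minimal sketch of the friendly back-off loop described in steps 4b-4e above, assuming the figures quoted in the text (a pause of one third of the reported playback time and a 15-second critical level). The client report and streamer interfaces are illustrative, not part of the disclosure.

```python
import time

CRITICAL_PLAYBACK_SEC = 15   # assumed critical level below which back-off is abandoned

def exponential_back_off(client, streamer):
    """Illustrative back-off loop; `client.report()` is assumed to return the
    remaining 'Real-Time Play Back' seconds buffered at the player, and
    `streamer` is assumed to expose pause/push controls plus the last push
    speed recorded by the speed-up mechanism."""
    while True:
        playback_sec = client.report()                        # periodic report (step 4b)
        if playback_sec < CRITICAL_PLAYBACK_SEC or streamer.network_is_normal():
            return                                            # hand control back to normal flow
        streamer.pause()                                      # yield bandwidth (step 4c)
        time.sleep(playback_sec / 3.0)                        # wait 1/3 of reported playback time
        if not streamer.push(rate=streamer.last_push_speed):  # compensate for the delay (step 4d)
            continue                                          # failed: recompute on next report (4e)
```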
  • the linear or smooth trickle mode of the data flow control method according to the first embodiment is triggered or applied when the cache buffer on the player side or client terminal is at a safe level (80% or more of the buffer).
  • the IPTV streaming module or application at the server node will enter the linear trickle mode at the safe level.
  • the streaming application will send media data at 1 × rt_bitrate, which is equal to the draining speed of the cache buffer on the player side while the data in the buffer is being consumed. This ensures that the buffer can be maintained at the safe level, i.e. 80%, to ensure smooth video data playback.
  • A flowchart depicting the linear trickle mode is shown in FIG. 5.
  • the data flow is initially shown to be in the exponential speed up mode in 5a.
  • In step 5b it is determined whether the data cache level is more than 95%; if so, the exponential back-off mode is initiated in step 5c (see 5.1.2). If the cache level is determined to be 80% at step 5d, then at 5f the linear trickle mode is initiated. This determination at step 5d can also be made after checking the network conditions in step 5e, as shown in FIG. 5.
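  • The mode selection of FIG. 5 can be summarised by the sketch below, using the cache thresholds quoted in the text (95% and 80%); the function name and the congestion flag are illustrative.

```python
# Illustrative selection between the flow-control modes of the first
# embodiment, based on the player cache level described for FIG. 5.
def select_flow_mode(cache_level, network_congested):
    """cache_level is the player buffer fill as a fraction between 0.0 and 1.0."""
    if cache_level > 0.95:
        return "exponential_back_off"     # step 5c: buffer nearly full, yield bandwidth
    if cache_level >= 0.80 and not network_congested:
        return "linear_trickle"           # step 5f: send at 1 x rt_bitrate (buffer drain rate)
    return "exponential_speed_up"         # otherwise refill the buffer as conditions allow
```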
  • Data may not be continuous after reassembly at the player side.
  • DML dynamic multiple links
  • This DML mechanism is based on information exchange and cooperative work between the client side and the server such that, when data is required urgently, a module (this may be a dedicated DML module or one integrated with other devices) is capable of computing the total number of connections needed and of requesting the server side to accept new connections.
  • the server side determines how many connections it will use depending on other factors and network conditions.
  • the streaming application in the server will try to send data across all the available TCP links in an average manner, i.e. evenly.
  • the server will try the next available link.
  • This unselective sending policy will increase the whole throughput between server and client. It can also be used as an emergency buffer rescue measure when bandwidth resources must be competed for with other cross traffic to maintain smooth playback.
  • This dynamic sending policy will increase the whole throughput between server and client.
  • the media packets arriving at the client side via multiple links are usually shuffled and arrive out of order. Therefore, re-ordering is required at the client, which can be preferably based on the sequence number located in the RTP header.
  • the client is preferably equipped with a module or application to reorder the out-of-sequence RTP packets arriving from different links and to give priority to the out-of-order packets, ensuring that the buffer is cleanly arranged for continuous playback of the received data.
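  • A minimal sketch of the client-side reordering described here, keyed on the RTP sequence number; the buffer class and its interface are illustrative (sequence-number wraparound is ignored for brevity).

```python
import heapq

class RtpReorderBuffer:
    """Illustrative reordering of media packets that arrive out of order over
    multiple links, ordered by the sequence number in the RTP header."""

    def __init__(self, first_seq=0):
        self.expected = first_seq
        self.heap = []                       # min-heap of (sequence number, payload)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Return payloads in sequence order as soon as they become contiguous."""
        ready = []
        while self.heap and self.heap[0][0] == self.expected:
            _, payload = heapq.heappop(self.heap)
            ready.append(payload)
            self.expected += 1
        return ready
```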
  • the use of multiple paths or links must be applied with management and governance by the IPTV service provider and regulatory services so that this is a fair and friendly strategy for today's networks, especially with many media service providers and other types of data transmissions competing for bandwidth on the same channel.
  • the fair use of the DML mechanism of the first embodiment can yield many benefits, for instance:
  • TCP link i.e. a master link
  • normal conditions i.e. with normal traffic conditions.
  • the DML mechanism is configured to send traffic over other TCP links (slave links) that were established at the beginning of the session. Therefore, the plurality of links is established before the data transmission takes place, based on network and client conditions, and these links are used in a dynamic manner as traffic along a network channel changes.
  • the player/client side will be unable to determine if the reason is A or B, and even if determined, will be unable to react to such condition. Therefore, by making use of DML mechanism, the client can collaborate with the server in an attempt to use DML to achieve better throughput. If the data flow condition does not show improvement, then it may be assumed that condition B has occurred and the data flow method may choose to switch to a different mechanism or mode for dealing with the abnormal condition. For instance, an adaptive streaming strategy as set out in item 5.1.5 may be applied by the flow control.
  • the DML mechanism of the first embodiment allows full utilisation of bandwidth resources, and also treats other network traffic fairly.
  • the DML mechanism is dynamically adjusted under predetermined conditions. For instance, in the case of condition “A” above, a possible reaction of the data flow control method of the present invention is to use more links, i.e. the DML mechanism, to obtain extra TCP resources, rapidly fill the IPTV player buffer and exit the crowd in the network, as congestion can be eased by not joining existing traffic.
  • This mechanism is based on cooperation between both client and server.
  • the client is responsible for establishing new links, following which the server could send media data across some or all of the available TCP links.
  • the above discussion relates to links between one server and one client.
  • the following describes a further aspect of the dynamic multiple link mechanism for use with more than one server capable of streaming the required video data file.
  • the player (client) will set up another connection to an alternative suggested streaming server based on information on the plurality of servers available for use.
  • Such information may be available in an index file or data structure and comprises information based on the geo-location, availability and available capacity of each server.
  • the player will try to identify additional streaming servers and will proceed to request concurrent data transfer from all the identified streaming servers holding the same data.
  • the player will continue to monitor the data effectiveness from all active sources and if one specific source is not performing as required, then it will stop the connection from this server and request the other concurrent streaming servers to alter the data flow pattern.
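  • As a rough sketch of the monitoring described above (an assumption, not the claimed procedure), each concurrent source could be scored by its measured throughput and dropped when it falls below a minimum share of the real-time bitrate; the field names and the 10% threshold are illustrative.

```python
# Illustrative pruning of under-performing concurrent streaming sources.
# Each source entry is assumed to carry a measured throughput in bits/s.
def prune_slow_sources(sources, rt_bitrate_bps, min_share=0.10):
    """Split sources into those kept and those dropped; the dropped share can
    then be redistributed among the remaining servers."""
    keep, dropped = [], []
    for src in sources:
        if src["throughput_bps"] >= min_share * rt_bitrate_bps:
            keep.append(src)
        else:
            dropped.append(src)
    return keep, dropped
```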
  • a hybrid Server to Client and Client to Client data transmission method may be applied by the flow control method of the present invention. This is explained in more detail in the second embodiment relating to data sharing techniques. Data givers (Server or other client devices) that have faster response times and a better network path will be chosen as the route for data distribution.
  • the multiple concurrent routing mechanisms making use of the DML mechanism give the flow control method of the present invention the capability to navigate via multiple traffic routes and to avoid congested segments dynamically, based on the network and client buffer conditions detected.
  • Adaptive bitrate streaming is a technique used in streaming multimedia over computer networks. It works by detecting a user's bandwidth and CPU capacity in real time and adjusting the quality of a video stream accordingly. It requires the use of an encoder which can encode a single source video at multiple bit rates. The player client switches between streaming the different encodings depending on available resources. As a result, very little buffering, a fast start time and a good experience for both high-end and low-end connections can be obtained for IPTV applications.
  • Adaptive streaming is used nowadays in HLS or DASH video streaming services. These standards use progressive downloads and switch streams in real time based on the network flow.
  • existing adaptive streaming techniques do not consider client player capability, buffer conditions or the video quality for playback at the client.
  • the data flow control method of the present invention proposes an adaptive streaming mechanism for switching the video stream based on the network conditions while at the same time also considering the highest video delivery. This is achieved by collaboration between the server side and the client side to receive information relating to client-side (player) conditions such as the buffer level, the playback time remaining and the current sending speed. This enables a better “stream switch” decision. Therefore, by taking into consideration network conditions as well as buffer conditions, adaptive streaming according to the first embodiment of the present invention can deliver a high video quality output.
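  • The sketch below illustrates a stream-switch decision that combines measured network throughput with the client-reported buffer state, as described above; the thresholds, the 10-second guard and the 20% headroom factor are assumptions.

```python
# Illustrative stream-switch decision combining network and player conditions.
def choose_bitrate(ladder_bps, current_bps, throughput_bps, playback_remaining_sec):
    """ladder_bps: available encoding bitrates sorted ascending; current_bps must
    be one of them.  Returns the bitrate to stream next."""
    idx = ladder_bps.index(current_bps)
    if playback_remaining_sec < 10 or throughput_bps < current_bps:
        return ladder_bps[max(idx - 1, 0)]        # protect the buffer: step down
    up = min(idx + 1, len(ladder_bps) - 1)
    if throughput_bps > 1.2 * ladder_bps[up]:
        return ladder_bps[up]                     # clear headroom: step up
    return current_bps                            # otherwise hold the current quality
```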
  • In a second embodiment, the data flow control method is concerned with modes and mechanisms for data sharing, local data caching and reuse of such data to reduce network overheads. These are explained in detail below:
  • the data flow control mechanism of the present invention in the second embodiment overcomes this by automatically caching the last popular viewed content at the client/local device based on a pre-set storage space in the client device.
  • the contents that are cached in or removed from this storage can be selected based on their popularity score.
  • popularly viewed contents reside on the device and can be re-viewed even if the device is not connected to the internet. This prevents unnecessary retransmission to conserve overall energy.
  • Data recycling mechanisms of the flow control method of the second embodiment are available for video on demand (VOD) or replay TV.
  • a data caching policy module is implemented at the client end, along with, for instance, a 20 MB RAM buffer and a 2 GB local storage reserve (HDD).
  • the data recycling mechanism includes rules and policies to specify that data delivered to the client device will be indexed, organized and recorded for later use.
  • FIG. 6 shows a flowchart depicting a data flow control mechanism with data recycling, such that a local cache is consulted before data is pulled from the server.
  • the data flow control method of the present invention first checks for the content at both local RAM and HDD storage. If there is a copy locally, this is played immediately. In some instances, only part of the popular content may be locally cached.
  • the data recycling mechanism at the time of playback is also configured to request the streaming server to start streaming any missing portion of the video file. The continuing portion of the data will be requested at the same bitrate level, and after the first few Groups of Pictures (GOPs), streaming is resumed for the rest of the session. This method allows bandwidth to be utilised efficiently only when necessary and can achieve instant playback, which also improves the user experience.
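  • A sketch of the local-cache lookup and partial resume described above (consult the RAM cache, then the HDD reserve, then request only the missing portion from the server at the same bitrate); the cache and server interfaces are illustrative.

```python
# Illustrative data-recycling playback: play from a local copy when one exists
# and ask the streaming server only for the missing portion, at the same bitrate.
def start_playback(content_id, ram_cache, disk_cache, server):
    cached = ram_cache.get(content_id) or disk_cache.get(content_id)
    if cached is None:
        return server.stream(content_id)            # nothing local: stream normally
    player = cached.play()                          # instant playback from the local copy
    if not cached.is_complete():
        server.stream(content_id,                   # resume the missing portion only,
                      start_offset=cached.end_offset,
                      bitrate=cached.bitrate)       # at the bitrate of the cached part
    return player
```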
  • Point to point communication between servers or client devices commonly referred to as P2P is not a new concept and it has been widely used in many applications over the Internet.
  • P2P policies are unsuitable for high quality video streaming for IPTV application.
  • the data flow control methods of the present invention propose a “Hybrid P2P” streaming mechanism which works as a combination of Client to Server and Client to Client P2P. Which of these P2P methods is used is determined by whether the network is safe for sharing data with other peers without impacting smooth video playback.
  • the hybrid P2P mechanism of the second embodiment initially involves requesting data from the streaming server as normal. Once the player buffer is at a safe level, the hybrid P2P mechanism considers getting the data from the closest neighbouring client device (peer) rather than requesting the data from the server. In order to co-exist with the adaptive streaming strategy and provide the best quality (see 5.1.5), a high video bitrate is exchanged in the P2P system. This hybrid P2P streaming mode is typically triggered when the buffer is healthy, and the data flow control method exits the hybrid P2P mode when the buffer is less than 15%.
  • Hybrid P2P mechanism of data flow control is a data sharing concept involving a combination of a server sending data to clients, a client sharing data to other clients and a client sharing data to many clients.
  • This can be viewed as a hierarchical tree structure, with server A being the original source of the data, which it provides to client A; client A then provides this data to client nodes B, C, D and so on.
  • the source for a leaf node X can either be a streaming server or another leaf node that has the same data and is capable of providing this data to leaf node X.
  • Utilizing hybrid P2P under a normal network condition will eventually peak streaming at the highest video bitrate.
  • the data flow control mechanism of the second embodiment switches the player to hybrid P2P.
  • Client devices that participate in P2P will need to be configured such that they can act as a “data giver” or a “data consumer” or both, and this information can be stored in the backend systems and accessed when a data file that is also available on a data giver's device is requested by another client.
  • when a client device initially streams a movie to the device, the movie information and the data blocks are recorded in a central database for use in future distribution guides.
  • the information in the databases will guide this client to those peers that have the content and are permitted as “data givers”. If the new client's request finds no match, the player will automatically exit hybrid P2P and resume data flow based on the other data sharing modes of the present invention as set out in the above embodiments.
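  • A sketch of the hybrid P2P source selection described in this embodiment, using the buffer figures quoted in the text (a safe level to enter P2P, assumed here as 80%, and the 15% exit level); the peer directory interface is illustrative.

```python
# Illustrative hybrid P2P source selection.
SAFE_BUFFER = 0.80       # assumed "safe" buffer level for entering hybrid P2P
EXIT_P2P_BUFFER = 0.15   # exit level quoted in the text

def choose_source(content_id, buffer_level, peer_directory, server):
    if buffer_level < EXIT_P2P_BUFFER:
        return server                                # buffer critical: stream from the server
    if buffer_level >= SAFE_BUFFER:
        givers = peer_directory.lookup(content_id)   # peers registered as "data givers"
        if givers:
            return min(givers, key=lambda p: p.rtt_ms)   # closest neighbouring peer
    return server                                    # no suitable peer: fall back to the server
```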
  • one client device can share cached data with one or multiple clients within network and vice versa.
  • the server side can save up to 90% of bandwidth and I/O resources. The more client devices are registered as data givers in a P2P network, the less load is required on the server side. This allows IPTV service operators to reduce server hosting costs significantly.
  • the third embodiment of the data flow control method of the present invention includes modes and mechanisms for Video Quality Control to ensure that the highest quality of video data is provided to IPTV end users. These mechanisms are explained below:
  • the adaptive streaming in the third embodiment is based on a video quality greedy policy.
  • Adaptive streaming is a technique used in streaming multimedia over computer networks and functions by detecting a user's bandwidth and CPU capacity in real time, and adjusting the quality of a video stream accordingly.
  • This mechanism requires the use of an encoder which can encode a single source video at multiple bit rates.
  • the player client is capable of switching between streaming the different encodings depending on available resources. This results in very little buffering, fast start time and a good experience for both high-end and low-end connections.
  • A preferred mechanism for implementing adaptive data streaming for high quality video data is shown in FIG. 7 and is also explained below:
  • NoLimitUpThreshSec means that the server can switch to upper bit rate level quality without any limitation.
  • QualityGreedyDurSec means the length of time to maintain the Quality Greedy Switch Policy. This policy will either keep or switch to upper bit rate level quality.
  • QualityGreedyThreshSec defines when to start Quality Greedy Switch Policy.
  • the adaptive switch process is described as follows in relation to FIG. 7 :
  • the adaptive streaming mechanism maintains the current video quality. If the sending operation is not blocked, it switches to the upper bit rate level quality.
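  • The decision logic itself appears only in FIG. 7, so the sketch below is a speculative reading of how the three parameters defined above might be combined; the default values and the ordering of the checks are assumptions.

```python
# Speculative sketch of the quality-greedy switch policy built around the
# parameters defined above; the actual logic is shown only in FIG. 7.
def quality_greedy_switch(playback_remaining_sec, send_blocked,
                          current_level, max_level,
                          no_limit_up_thresh_sec=30,     # NoLimitUpThreshSec (assumed value)
                          quality_greedy_thresh_sec=15): # QualityGreedyThreshSec (assumed value)
    if playback_remaining_sec >= no_limit_up_thresh_sec:
        return min(current_level + 1, max_level)   # enough buffer: switch up without limitation
    if playback_remaining_sec >= quality_greedy_thresh_sec:
        # Quality Greedy window (its length would be QualityGreedyDurSec): keep the
        # current quality, or step up if the sending operation is not blocked.
        return current_level if send_blocked else min(current_level + 1, max_level)
    return max(current_level - 1, 0)               # low buffer: step down to protect playback
```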
  • a start bitrate selection mechanism is proposed as part of the data flow control mechanism of the present invention.
  • the local storage or cache is checked. If there is a copy locally, the request is not streaming server and instead the player plays the local data immediately (similar to data recycling of 5.2.1).
  • the bitrate selection mechanism according to the third embodiment of the present invention proposes a method streaming at the same bit rate with the local file. The player side requests the appropriate files from the server and starts playing. The initial bitrate does not need to be the lowest video bitrate. There are many factors which determine which bitrate should be played. In the present embodiment, this depends on the resolution of the playback device screen.
  • the lowest video bitrate is not always suitable and could give very poor video quality. There is therefore a balance to be struck between fast start and video quality in the bitrate selection mechanism of the data flow control method of the invention.
  • A preferred method for implementing the start bitrate selection mechanism according to the third embodiment can be seen in FIG. 8.
  • the data flow control method is configured to check the device type ID and determine which video file is to be sent, instead of always starting to send the lowest video bitrate.
  • the start bitrate selection mechanism can provide noticeable results. For example, when movies are played on a big screen TV, a lower video bitrate often shows many flaws. Therefore, when the player type is identified as a TV, the data flow control means can start sending at the 2nd or 3rd highest bitrate level at the outset rather than the lowest bitrate.
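  • As an illustration of the device-aware start bitrate selection of FIG. 8, the sketch below picks a starting level from a bitrate ladder based on a device type ID. The ladder values and device type strings are hypothetical; only the rule that a TV starts at the 2nd or 3rd highest level comes from the description above.

```python
# Illustrative bitrate ladder; values are assumptions, not taken from the description.
BITRATE_LADDER_KBPS = [400, 800, 1500, 3000, 6000]   # lowest ... highest

def start_bitrate(device_type: str) -> int:
    """Select the initial streaming bitrate from the device type ID."""
    if device_type == "tv":
        return BITRATE_LADDER_KBPS[-2]   # big screen: start near the top of the ladder
    if device_type == "tablet":
        return BITRATE_LADDER_KBPS[2]    # mid-sized screen: start mid-ladder
    return BITRATE_LADDER_KBPS[1]        # phones/unknown devices: start low but not lowest
```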
  • the internet connection could sometimes fall below the lowest video bitrate, and all of the video streaming mechanisms and modes that are/were applied may not be able to cope with the congestion. This event is rare but can cause the buffer to be emptied and video playback to be interrupted. Sometimes a few kbps of data makes the difference between smooth playback and video buffering. When this condition occurs, a choice of either accepting the video buffering effect or providing other options for maintaining smooth playback is to be made by the data flow control means.
  • the selective frame drop mode of the present invention is depicted in FIG. 9 .
  • This mechanism is a dynamic procedure to “not” stream non-I frames (i.e. B/P frames) within a video GOP. This will create a video jumping effect but at the same time it allows continuous streaming when only 20% of the required bandwidth is available. This is likely an acceptable effect during a bandwidth shortage period. Audio is not degraded or interrupted, which in most cases will be acceptable to users.
  • smooth playback can be ensured by setting two dropping levels: A. dropping 50% of B/P frames and B. dropping 100% of B/P frames in one GOP (Group of Pictures). This will temporarily reduce the required bandwidth by 30%-80% and utilise this saving to transmit the remaining video frames and the audio to the player. During this time window, the video may present some skipping effect and is likely to remain this way until the network recovers from the severe temporary congestion.
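  • A minimal sketch of the two dropping levels is given below. The GOP is assumed to be a list of (frame_type, payload) tuples; level "A" keeps roughly half of the B/P frames and level "B" drops them all, while I frames are always transmitted so the GOP remains decodable.

```python
def drop_bp_frames(gop_frames, level):
    """Selective frame drop: level 'A' drops ~50% of B/P frames, level 'B' drops 100%."""
    kept, bp_index = [], 0
    for ftype, payload in gop_frames:
        if ftype == "I":
            kept.append((ftype, payload))        # I frames are never dropped
            continue
        if level == "A" and bp_index % 2 == 0:   # level 'A': keep every other B/P frame
            kept.append((ftype, payload))
        bp_index += 1                            # level 'B': all B/P frames are dropped
    return kept
```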
  • FIG. 10a is an indication of the IPTV end user viewing experience when the selective frame drop mechanism is applied and FIG. 10b is an indication of the viewing experience without this mechanism. As depicted, a buffering effect is inevitable in FIG. 10b.
  • Video encoding can have different modes and filters to enhance video quality.
  • the data flow control method according to the present invention provides a mechanism or policy for dealing with high motion picture frames to enhance the viewing experience, provide the highest viewing quality and efficiently manage network resources.
  • VBR (Variable Bit Rate) encoding is a mode that can yield good video quality output. This encoding generates a large GOP (Group of Pictures) for fast motion scenes and a smaller GOP for scenes with less motion.
  • Each time the flow control method processes big GOPs it consumes more network resources, which creates network spiking or jittering.
  • the “fast motion picture” scenes may trick the data flow protocol being used into falsely switching from the current video quality to the next lower quality level.
  • the big GOP may falsely alert the adaptive streaming mechanism of the data flow control (set out in 5.3.1) to switch to a lower bitrate.
  • This false trigger significantly impacts the viewing experience.
  • the present invention in a third embodiment proposes a data flow control mechanism implementing a policy described below and referred to as a ‘high-motion pictures first’ or high-motion picture priority policy to obtain a better viewing experience under limited network conditions.
  • the high-motion picture first mechanism is set out in FIG. 11 .
  • the first minute or so is utilised by the data flow control method to gauge and detect network bandwidth. If it is determined that the bandwidth is adequate to sustain the highest video bitrates, incremental bitrate switching is stopped and the flow control jumps directly to the highest level.
  • Selective GOP is also an important factor in enhancing the video viewing experience of the data flow control mechanism of the present invention. Each GOP is inspected and its size is considered. If the GOP size is much bigger than the video bitrate, this translates into a high motion event.
  • the GOP size is calculated by the data flow control mechanism before sending the first packet of the moving picture in one GOP. If the average bit rate in this GOP is 30% less than the average bit rate of the current movie clip, this GOP is flagged as a “Low Motion Picture GOP”. If the average bit rate in one GOP is 30% more than the average bit rate of the current movie clip, this GOP is flagged as a “Fast Motion Picture GOP”.
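  • This flagging rule can be expressed as a small helper using the 30% thresholds from the description above; the function name and return labels are illustrative.

```python
def classify_gop(gop_avg_bitrate: float, clip_avg_bitrate: float) -> str:
    """Flag a GOP by comparing its average bit rate with the clip's average bit rate."""
    if gop_avg_bitrate <= 0.7 * clip_avg_bitrate:
        return "LOW_MOTION"    # "Low Motion Picture GOP"
    if gop_avg_bitrate >= 1.3 * clip_avg_bitrate:
        return "FAST_MOTION"   # "Fast Motion Picture GOP"
    return "NORMAL"
```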
  • the data flow control mechanism continues to monitor the network throughput by checking whether the highest video bitrate is being streamed or not. If the streaming is not at the highest bitrate, the network condition is deemed to be poor and the data flow control method will not be able to send data at the higher bitrate at all times. This condition would impact playback video quality significantly, as a slight change of bandwidth would falsely tell the player to request a lower bitrate.
  • the condition of the local buffer as well as the type of GOP that is being sent is identified. If the buffer is at the safe threshold and if the sending GOP is flagged as “Low Motion Picture GOP”, then the data flow control method switches to a lower bitrate to yield additional bandwidth for a “Fast Motion Picture GOP”.
  • the data flow control mechanism clocks down to a lower bitrate for those static or low motion picture GOPs to preserve bandwidth for the higher GOPs at a higher bitrate. As a result, a constant video quality as well as a smooth streaming effect is maintained.
  • the high motion picture first policy allows the data flow control method to continue sending higher bitrates during a congestion time window to always allocate more bandwidth for those high motion picture GOPs.
  • Dynamic adaptive streaming (see 5.3.1)
  • Selective frame drop (see 5.3.1)
  • the data flow method of the present invention proposes a buffer repair or enhancement mechanism to monitor the network condition and buffer filling rates and to then predict how much time and speed is available to allow the flow control method to replace a lower quality GOP in the buffer with a higher quality GOP.
  • This mechanism of buffer repair improves video playback quality. As streaming takes place, the network fluctuates and so does video quality.
  • the buffer is segmented into multiple segments which consist of various video bitrates that form a continuous playback timeline. Some segments are of low bitrates, which have a negative impact on the viewing experience.
  • the buffer repair mechanism of the data flow control is applied when the buffer reaches a safe level, i.e. 80% full. During this mode, the flow control method is configured to check for previously streamed segments in the buffer that have low video bitrates and are still queuing for playback. The buffer repair mechanism is then configured to request that these segments be replaced with higher video bitrate versions before their turn for playback. This ensures that the first part of the buffer always has the highest video bitrate and plays back with the highest video quality.
  • the buffer repair or enhancement mechanism is shown in FIG. 12 and is explained in detail below (a sketch follows the explanation):
  • the flow control mechanism ensures that the player maintains a one GOP queue which stores all the GOP data that will be sent to the video decoder.
  • the player monitors the GOP queue periodically. If the time span of this queue is less than 10 seconds, then no action is taken. Otherwise, the player will check whether there is any GOP in the queue having only part of its B/P frames (a Partial GOP). If there is a Partial GOP that will not be sent to the decoder within 10 seconds, then the mechanism is configured to check the current server sending speed. If the server sending speed is less than or equal to 1.0×rt_bitrate, then no action is taken. Otherwise, if the sending speed is more than 1.0×rt_bitrate, the flow control mechanism requests the server to resend that GOP with all frames at the same quality.
  • the lowest quality GOP is identified in this queue and compared with the current receiving GOP quality. If the lowest quality GOP is higher than the current data, no action is taken. Otherwise, if the server's current sending speed is more than 1.0×rt_bitrate, the player will request the server to resend this GOP at one level higher quality.
  • After receiving this GOP at one level higher quality resent by the server, the data flow mechanism uses it to replace the old GOP in the GOP queue.
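  • The queue check above can be sketched as follows. The GOP record, helper names and return format are assumptions; the 10 second span, the partial-GOP resend and the 1.0×rt_bitrate speed test follow the description.

```python
from dataclasses import dataclass

@dataclass
class Gop:
    duration_sec: float
    quality_level: int
    partial: bool       # True if only part of the B/P frames are present

def repair_requests(gop_queue, current_quality, sending_speed, rt_bitrate):
    """Return (gop, action) pairs the player should ask the server to resend."""
    if sum(g.duration_sec for g in gop_queue) < 10:
        return []                                    # queue span under 10 s: no action
    if sending_speed <= 1.0 * rt_bitrate:
        return []                                    # no spare sending capacity to repair
    requests = [(g, "resend_all_frames") for g in gop_queue if g.partial]
    lowest = min(gop_queue, key=lambda g: g.quality_level)
    if lowest.quality_level < current_quality:
        requests.append((lowest, "resend_one_level_higher"))
    return requests
```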
  • the following mechanisms provide flow control techniques that can be applied to existing TCP data transmissions to provide an enhanced data flow control method according to a fourth embodiment of the present invention.
  • the Nagle Algorithm explained in the Background section 2 introduces a default 200+ ms time delay that has a negative impact on IPTV services, especially when users initiate interactive services such as channel changing, content queries, accounting etc. Therefore the present invention proposes a method to dynamically enable/disable the Nagle algorithm based on the type of action and request to ensure the best effect can be achieved.
  • the flow control method of the fourth embodiment disables the Nagle algorithm when a command exchange between the user device and the server is detected. This eliminates at least 200 ms delay on the TCP transport layer. By doing this, it is possible to reduce the number of packets that are going to be injected into the network and also improve user interactive experience.
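  • On a standard sockets API, this dynamic switch corresponds to toggling the TCP_NODELAY option on the streaming connection. The sketch below is illustrative; the helper name and the call sites are assumptions.

```python
import socket

def set_nagle(sock: socket.socket, enabled: bool) -> None:
    # TCP_NODELAY = 1 disables Nagle, removing the small-packet coalescing delay.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0 if enabled else 1)

# e.g. before an interactive command exchange (channel change, query, accounting):
#   set_nagle(conn, enabled=False)
# and when returning to bulk media streaming:
#   set_nagle(conn, enabled=True)
```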
  • A preferred procedure for applying the dynamic Nagle algorithm as explained above is shown in FIG. 14. Details of tests conducted when the Nagle algorithm was in an enabled, disabled and an adaptive state are shown in FIG. 15a, with the different packet sending rates for each of the above mentioned states shown in FIG. 15b. These tests were carried out in a LAN environment.
  • the following Linux controls can be applied to existing TCP to provide an enhanced data flow control according to the present invention (a configuration sketch follows these parameters).
  • This parameter allows TCP to use big window size on receiver and sender. This will increase overall throughput.
  • This parameter allows TCP to use the time stamp option in its header. This will help TCP to estimate the RTT (round trip time) value.
  • This parameter allows the TCP receiver to send selective acknowledgements to report multiple packet losses instead of only one packet per acknowledgement. This will help the sender to retransmit the lost packets more quickly.
  • This parameter is only valid for kernel 2.6.13 or later versions. It allows the user to change the congestion control algorithm to get better performance for special applications.
  • the cwnd can be up to 4 MB. This will increase TCP throughput on high speed networks.
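  • A possible way to apply such controls is via sysctl. The description above omits the parameter names, so the mapping to these standard Linux keys (window scaling, timestamps, SACK, pluggable congestion control, larger send buffers to allow a cwnd up to 4 MB) is an assumption, as are the chosen values.

```python
import subprocess

# Assumed mapping of the controls described above to standard Linux sysctl keys.
TCP_TUNING = {
    "net.ipv4.tcp_window_scaling": "1",          # large windows on sender and receiver
    "net.ipv4.tcp_timestamps": "1",              # timestamp option helps RTT estimation
    "net.ipv4.tcp_sack": "1",                    # selective acknowledgements
    "net.ipv4.tcp_congestion_control": "cubic",  # kernel 2.6.13+: pluggable algorithm
    "net.ipv4.tcp_wmem": "4096 65536 4194304",   # min/default/max send buffer, max 4 MB
}

def apply_tcp_tuning():
    for key, value in TCP_TUNING.items():
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)
```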
  • The interaction of the above described modes, policies, methods and mechanisms that make up the proposed data flow control protocol or method of the first, second and third embodiments of the present invention is shown in FIG. 13.

Abstract

The present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising identifying a condition of the communication network between said sending and receiving nodes, identifying a condition of the receiving node, and adjusting the media data flow through said communication network based on the identified condition of the communication network and the identified condition of the receiving node.

Description

    1. FIELD OF THE INVENTION
  • The present application relates to data transmission protocols for the transfer of media data from a streaming server to one or more clients. More particularly, the present invention provides an enhanced data flow control method that can be used in conjunction with an existing protocol such as TCP/IP. The data flow control method according to the present invention takes into consideration network conditions as well as receiving node or client device conditions, such as the data buffer of the client player, to improve the speed and quality of media data transmission for Internet protocol television (IPTV) applications.
  • 2. BACKGROUND
  • Video traffic currently accounts for over 60% of the world's bandwidth usage over communication networks such as the Internet or any similar communication network such as LANs, WLANs etc. How such data is injected into a network has a strong influence on the overall data flow through the network. Uncontrolled data injection into a network can lead to congestion impacts such as slow overall traffic flow, packet delay, packet loss, packet out of order, packet re-transmission, flooding/crashing of network devices (routers, switches etc.), and flooding of uncontrollable traffic. These types of events cause network traffic to slow down and sometimes to come to a complete stop if the switching & routing network equipment in use is unable to cope with the flow demand. Additionally, unmanaged data injection will have a negative impact on applications that rely on real-time communication such as VoIP (Voice over IP), live broadcasts of media events, real-time video conferences and other time-sensitive applications.
  • The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite (IP), i.e. the set of network protocols used for the Internet. TCP provides reliable, ordered, error-checked delivery of a stream of octets between programs running on computers connected to a local area network (LAN), intranet or the Internet. It resides at the transport layer. Internet Protocol television (IPTV) is a system through which television services are delivered using the Internet protocol suite over a packet-switched network such as a LAN or the Internet, instead of being delivered through traditional terrestrial, satellite signal, and cable television formats. TCP is the most commonly used protocol on the Internet. The reason for this is that TCP offers error correction. When the TCP protocol is used there is a “guaranteed delivery”. This is due to a method called “flow control” in TCP. Flow control determines when data needs to be re-sent, and stops the flow of data until previous packets are successfully transferred. This works because if a packet of data is sent, a collision may occur. When this happens, a receiving client system or end-point can re-request the packet from a server transmitting data until the whole packet is complete and is identical to the original packet that was transmitted. Thus, TCP is an advanced transport protocol with 100% success rate on data delivery, built in flow control and error corrections, which runs effectively over unmanaged networks. The use of TCP is currently required for all Open Network IPTV deployments where one or more network segments are not managed by the IPTV service operator.
  • However, adopting TCP in an IPTV Streaming application has many drawbacks and can cause network traffic issues due to the structure of this protocol. Standard TCP involves large overheads in data transmission due to its default data frame structure. The header refers to the first part of a data cell or packet, containing information such as source and destination addresses and instructions on how the telecommunications network is to handle the data. The header is part of the overhead in a data transmission protocol. For typical TCP/IP transmissions, i.e. most Internet traffic, the header is usually 40 bytes of each packet (20-byte TCP and 20-byte IP headers). TCP and IP headers can be larger than 20 bytes if “options” are enabled in the data transmitted. Internet Control Message Protocol (ICMP), i.e. the protocol used for sending test and control messages, has headers that are 28 bytes. This overhead due to the headers can impact IPTV user experience, especially when the network conditions are abnormal, i.e. congested due to heavy traffic flow. TCP does not offer the ability to cut off the transmission flow to improve network congestion. Further, TCP is incapable of managing bandwidth sending rates to an IPTV client player without creating unnecessary data waste.
  • The User Datagram Protocol (UDP) is also one of the core members of the Internet protocol suite. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without prior communications to set up special transmission channels or data paths. UDP uses a simple transmission model with minimum protocol mechanisms. It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user's program. UDP provides checksums for data integrity and port numbers for addressing different functions at the source and destination of the datagram. However, in UDP there is no guarantee of delivery, ordering, or duplicate protection.
  • UDP is suitable for purposes where error checking and correction is either not necessary or is performed at the application prior to transmission, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which is not a viable option in a real-time system. If error correction facilities are needed at the network interface level, an application residing on a host or a system for transmitting such data will need to make use of the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP), which are designed for this purpose.
  • UDP has some unique advantages over TCP but also has drawbacks. For instance, UDP is required when the transmission requirements combine methods of unicast and multicast. The use of Multicast allows occupation of the available bandwidth at fixed data rates, without facing user growth capacity issues. However, UDP cannot be used to send important data such as webpages, database information, etc., and its present use is mostly limited to streaming audio and video. UDP can offer speed and is faster for data transmissions when compared to TCP because there is no form of flow control or error correction in UDP. Therefore the data sent over the Internet using UDP is affected by collisions, and errors will be present. Therefore UDP is only recommended for streaming media over a managed network, i.e. a network where the quality of service (QoS) is managed by the service provider, or when data loss is not a concerning factor for the transmission. Due to its simplicity and light weight design, UDP is an ideal transport protocol when transmitting data over QoS managed networks where packet collision is unlikely to occur. UDP offers fast data injection, lower packet overhead and faster response times compared to TCP. For IPTV, the above benefits can improve the TV-like experience, especially fast channel switching, immediate movie play-back etc. and also reduce stress on the servers and network devices. However, with UDP, traffic collisions and packet loss are inevitable as this protocol does not have any built in flow control mechanism.
  • The Nagle algorithm, named after John Nagle, proposes improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. In “Congestion Control in IP/TCP Internetworks” (RFC 896), a “small packet problem” is described where an application repeatedly emits data in small chunks, frequently only 1 byte in size. Since TCP packets have a 40 byte header (20 bytes for TCP, 20 bytes for IPv4), this results in a 41 byte packet for only 1 byte of useful information, which is a huge overhead. This situation often occurs in Telnet sessions, where most key presses generate a single byte of data that is transmitted immediately. Over slow network links, many such packets can be in transit at the same time, potentially leading to congestion collapse. Nagle's algorithm works by combining a number of small outgoing messages, and sending them all at once. Specifically, the sender system or application should keep buffering its output until it has a full packet's worth of output, so that output can be sent all at once. This existing technique making use of the Nagle algorithm is explained below.
  • A. For any TCP connection, there is at most one small packet that is not acknowledged by the receiver application or device. Unless this is acknowledged, the sender does not transmit any other small packet (having very few data bytes of useful information).
  • B. TCP collects these small packets and sends them out at once as one whole packet only after such acknowledgement is received. Therefore, as more acknowledgements arrive, more data packets are sent. On either WAN/MAN/LAN, the round trip time (RTT) value for a TCP connection normally ranges from 100 ms to 300 ms. This delay allows TCP to have enough time to collect small packets before the next acknowledgement arrives.
  • Though the use of TCP with the Nagle algorithm benefits some types of data communications and transfers using TCP, this benefit does not extend to IPTV data and services where a multiple of a 300 ms delay could be crucial in a determination of good or bad user experience. Such delays are unacceptable for many applications.
  • Therefore, there exists a need for a new method or protocol for data packet transmission over a communication network that overcomes the drawbacks of TCP and UDP and provides speed, flow control and error correction mechanisms, with minimal network traffic overheads.
  • 3. SUMMARY OF THE INVENTION
  • In one aspect, the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising identifying a condition of the communication network between said sending and receiving nodes, identifying a condition of the receiving node, and adjusting the media data flow through said communication network based on the identified condition of the communication network and the identified condition of the receiving node.
  • In a further aspect the sending node is configured for encoding and streaming said media data to the receiving node based on a request for such data from the receiving node, and the receiving node is capable of decoding and playback of said media data.
  • In a further aspect the step of identifying the condition of the network comprises detecting the level of network traffic and determining whether the network between the sending node and the receiving node is in a normal state or in a congested state, based on the detected level of network traffic; and
      • wherein the step of identifying the condition of the receiving node comprises determining whether the data buffer at the receiving node is at a safe level, unsafe level or critical level, the buffer being 80% or more full in the safe level, 20%-80% full in the unsafe level and 0%-20% full in the critical level; wherein said network and receiving node conditions are periodically monitored and communicated between the sending node and the receiving nodes at defined intervals.
  • In a further aspect, responsive to a request for media data from the receiving node, the present invention comprises
      • streaming the requested media data at an initial data streaming rate;
      • identifying a maximum data streaming rate supported by the receiver node;
      • identifying the condition of the network;
      • identifying the condition of the receiving node
      • if the network condition is identified as being normal and the condition of the buffer is at critical or unsafe level, then continuously increasing the rate of data streaming until said maximum rate is reached, or until the buffer reaches the safe level or until the network condition becomes congested.
  • In a further aspect, if during the step of continuously increasing the rate of data streaming, the buffer at the receiving node reaches the safe level, then the method comprises adjusting the rate of data streaming to a rate that is equal to a draining rate of the buffer during playback.
  • In a further aspect, if during the step of continuously increasing the rate of data streaming, the network condition changes to “congested” and remains as congested for a first defined time period, the method comprises:
      • identifying the remaining playback time for the data left in the data buffer at the receiving node;
      • reducing the rate of data streaming to near zero or a calculated low rate of streaming, or completely suspending streaming of data from the server, until either the network condition becomes normal or if the identified remaining playback time reduces to 15 seconds or less.
  • In a further aspect, if the remaining playback time reduces to 15 seconds or less, the method comprises:
      • requesting the sending node to accept additional network communication links between the sending node and the receiving node;
      • determining the total number of additional links required to sustain real-time playback at the receiving node;
      • establishing the additional links by the sending node;
      • streaming the media data from the sending node across all established links evenly, such that if the condition of the network is identified as congested on a first link of the plurality of links, the media data is sent via the next available communication link.
  • In a further aspect the method comprises reordering of media data packets arriving at the receiving node out of sequence by making use of the identifier of the sequence in the header part of each media data frame.
  • In a further aspect the method further comprises:
      • identifying one or more additional sending nodes that are capable of streaming the requested media data to the receiving node;
      • establishing additional communication links to the receiving node by each of the sending node such that each sending node is capable of sending the media data event across the additional links.
  • In a further aspect a streaming application at the sending node is capable of adaptively encoding the media data to be streamed from the sending node according to a bit rate suitable for the identified buffer conditions of the buffer of the receiving node.
  • In another aspect, the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
      • responsive to a request for media data requested by the receiving node, determining if a copy of said media data is stored locally at the receiving node or stored on a memory device that is accessible to said receiving node; wherein if a local copy of the entire data file or parts of the requested data is stored locally, accessing this copy and requesting streaming of only the missing parts from the sending node.
  • In a further aspect, if a local copy of the requested media data is not available at the receiving node, the method comprises the steps of:
  • identifying the conditions of the receiving node, including the screen size, resolution and capability of the display screen connected to said node;
  • selecting a bitrate for streaming the media data according to the identified screen size, video resolution and capability supported by the display screen;
  • streaming the requested media data from the sending node using the selected bitrate or a higher bitrate at the outset of said streaming instead of commencing said streaming at the lowest available bitrate.
  • In a further aspect if the network condition is identified as being congested, then continuing said streaming at the current streaming rate by only streaming I-frames of the media data packet and not streaming B and P frames of said media data to the receiving node, until the network condition changes to normal, to ensure that the media data is continuously streamed for playback at the receiving node.
  • In a further aspect when the buffer is at a safe level, the method comprises:
      • identifying previously streamed segments stored in the data buffer having low video bitrates or partial GOPs in the buffer queue;
      • identifying the remaining playback time for the data left in the data buffer at the receiving node;
      • if the remaining playback time is more than 10 seconds, then identifying the current rate of streaming of media data from the sending node;
      • if the rate of current streaming is more than an average rate supported by the receiving node, then the method further comprises requesting the sending node to resend the existing frames with low video bit rates or partial GOPs with higher video bitrates.
  • In another aspect, the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
      • responsive to a request for media data requested by the receiving node, identifying a plurality of intermediate data giver nodes, each storing a local copy of the requested media data;
      • if a data giver node that is identified as a neighbour of the receiving node is one of the identified intermediate nodes, then obtaining the copy of the media data from this neighbour data giver node, said neighbour node being a peer node of said receiving node;
      • if no data giver node that is a neighbour of the receiving node is identified, then streaming the requested media data from the sending node.
  • In another aspect, the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
      • responsive to a request for media data requested by the receiving node, streaming the media data from the sending node at the currently available bitrate for a defined time period to detect the network bandwidth;
      • if said bandwidth is capable of supporting a higher video bitrate when compared to the current rate, the method is configured to switch to said higher video bitrate for continuous streaming.
  • In another aspect, the present invention provides a data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
      • responsive to a request for media data requested by the receiving node, streaming the media data from the sending node at the currently available bitrate for a defined time period to detect the network bandwidth;
      • identifying a plurality of group of pictures (GOP) for high motion video data that is to be streamed and inspecting the size and average bit rate for each GOP and the network conditions prior to said streaming;
      • if the average bit rate of a GOP is 30% less than the average bit rate of the currently streamed media, then the method comprises identifying said GOP as a low motion picture GOP and switching the current streaming bitrate to a lower bitrate for streaming said GOP;
      • if the average bitrate of a GOP is 30% more than the average bit rate, then the method comprises identifying said GOP as a high motion picture GOP, and switching the current streaming bitrate to the highest available bitrate for streaming the GOP at the highest bit rate.
  • In a further aspect the sending node is an IPTV streaming server and the receiving node is a client device including a multimedia player.
  • In another aspect, the present invention provides a system for implementing the method as claimed in any one of the preceding claims comprising a sending node and a receiving node capable of communication via a communication network, the sending node having a streaming module capable of streaming multimedia data stored in a memory means of the sending node, and the receiving node capable of requesting a multimedia data to be streamed from the sending node for playback on a multimedia player incorporated in the receiving node.
  • 4. BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 and FIG. 2 show the frame structures for TCP and UDP, respectively.
  • FIG. 3 shows a flow chart depicting an exponential speed up mode for the data flow control method according to a first embodiment.
  • FIG. 4 shows a flow chart depicting an exponential back off mode for data flow control method according to the first embodiment.
  • FIG. 5 shows a flow chart depicting a linear trickle off mode for the data flow control method according to the first embodiment.
  • FIG. 6 shows a method of bitrate selection for a data sharing mode of the data flow control method according to a second embodiment.
  • FIG. 7 shows a method of adaptive bitrate selection for high quality video data playback for the data flow control method according to a third embodiment.
  • FIG. 8 shows a method of bitrate selection based on resolution for the data flow control method according to the third embodiment.
  • FIG. 9 shows a flow chart depicting a method for a selective frame drop for the data flow control method according to the third embodiment.
  • FIGS. 10a and 10b show charts depicting viewing experience with and without the selective frame drop of FIG. 9, respectively.
  • FIG. 11 shows a flow chart depicting a method for allocation of bandwidth for high motion video frames for the data flow control method according to the third embodiment.
  • FIG. 12 shows a flow chart depicting a buffer repair mode for the data flow control method according to the third embodiment.
  • FIG. 13 shows a flow chart depicting the interaction between modes of the first, second and third embodiments.
  • FIG. 14 shows a flow chart depicting a method for adaptively enabling or disabling the Nagle algorithm according to the present invention.
  • FIGS. 15a and 15b show a table and graph depicting the performance test results with and without the use of the method of FIG. 14, respectively.
  • 5. DETAILED DESCRIPTION OF THE EMBODIMENTS
  • As data moves along a network, various attributes are added to the data file to create a frame. This process is called encapsulation. There are different methods of encapsulation depending on which protocol and topology is being used. As a result, the frame structures of data frames differ. FIG. 1 illustrates a TCP frame structure and FIG. 2 illustrates a UDP frame structure. The payload field in the shown frames contains the actual data. TCP has a more complex frame structure than UDP. This is largely due to TCP being a reliable connection-oriented protocol, as explained in the background section. The additional fields shown in FIG. 1 (when compared to the UDP frame shown in FIG. 2) are those needed to ensure the “guaranteed delivery” offered by TCP. Therefore TCP is a much slower data transmission protocol when compared to UDP, and with much larger overheads. This is especially so if TCP is combined with the use of the Nagle algorithm described in the background section.
  • The present invention provides a new data transmission protocol or data flow control method for use in the Internet protocol suite. Particularly, the present invention provides a plurality of flow mechanisms or modes for media data packet transmission, preferably video data transmission over a communication network that overcomes the drawbacks of TCP and UDP and provides speed, flow control and error correction mechanisms, with minimal network traffic overheads.
  • In one aspect, the present invention provides a data flow control method that handles data flow management on the application layer of the OSI model. Though the present invention is concerned with media data and specifically video data for IPTV services, a skilled person would easily understand that the present invention can be used for managing the flow of any type of data and information that can be transported over communication network such as the Internet.
  • The data flow control method according to a first embodiment of the present invention is based on monitoring one or more sending node or server side conditions (for instance, an IPTV provider's server for sending the data) as well as one or more receiving node or client side conditions (client device such as a player or a set-top box for receiving the data). The present invention facilitates communication for information and data exchange between the sending server & receiving client for communicating local network conditions at each end. Based on the conditions detected from both the client device and the server device, the method of data flow control according to the first embodiment is able to calculate and predict the network environment.
  • Upon a network condition being detected or notified to either the server or the client, the flow control method according to the present invention is capable of applying one or more data transmission modes or techniques (these modes are explained in detail below) to ensure that high quality video data can be streamed over unmanaged and/or fluctuated networks.
  • In another aspect, the data flow control method of the present invention is capable of consuming unused bandwidth (left over or wasted bandwidth) in the network for more efficient data transmissions by data sharing, local caching and data recycling.
  • In a further aspect, the data flow control of the present invention provides high video quality delivery and maintains smoothness of video playback on any network.
  • The data flow control method or protocol incorporates a combination of RTSP (Real-time Streaming Protocol) encapsulated over HTTP (Hypertext Transfer Protocol). The data flow control method according to the present invention is handled in the application layer. The method is capable of implementing one or more modules which reside on either the server side or the client side terminals, or both. The client and server nodes, equipped with the modules for implementing such flow controls constantly work together in collaboration to predict the network flow, adjust data flow, enhance video quality, navigate through various network routes to maintain a good IPTV user experience that conventional data transmission protocols such as TCP and UDP cannot offer.
  • Balancing between fast responses, smooth data flow and quality of data are some of the objectives of the present application. A summary of some of the advantages of the data flow control method according to the present invention is provided below. It is not essential that all of these advantages are achieved for a single transmission, as for a particular transmission one effect may be more important than other advantageous effects.
  • A. Send media data to the end users as fast as possible to ensure the buffer at the user device, i.e. the client system stays full and maintains smooth playback.
  • B. Detect congestion ahead of TCP and back-off (reduce or stop sending packets) immediately (not gradually like TCP), which will help to ease off congestion rapidly.
  • C. Efficient use of bandwidth by using a dynamic multiple link strategy (multiple network paths & routes) prior to establishing the session.
  • D. Detect and differentiate between physical network congestion vs. normal network congestion and apply a suitable control mode to avoid self-competing.
  • E. Support adaptive bitrate streaming based on the network conditions.
  • F. Provide a high video experience by utilizing the network channel fully and giving priority to high bit rate video GOP (Group of pictures)
  • G. Provide buffer repair, such that when the client buffer is at a healthy stage, the data flow control method is configured to re-evaluate the video bitrates on the buffer and replace lower/poor quality segments with higher video quality. Such repair takes place safely and effectively only when network condition permits.
  • H. The flow control method is configured for recycling data by caching popular data on local storage devices to prevent repeated streaming from the server, and is also configured to share locally cached data with peers.
  • I. Switch to P2P style of communication when condition permits. This is mainly used in VOD and Replay-TV scenarios
  • J. Co-exist with other kinds of service data flow, such as VoIP peacefully, i.e. with no packet collisions.
  • The application layer data flow control method according to the present invention comprises data flow control methods and video quality control methods.
  • According to a first embodiment of the present invention, data flow control methods or modes that are applied based on network conditions and buffer conditions are:
  • A. Exponential Speedup (sending data on an increasing rate manner)
  • B. Exponential Back Off (reduce data sending rate to near zero)
  • C. Linear or smooth Trickle Mode (sending data rate equal to the video playing rate)
  • D. Dynamic Multiple links (sending data in multiple TCP connections)
  • E. Adaptive Streaming considering network and player buffer conditions.
  • According to a second embodiment of the present invention, data flow control methods to achieve data sharing to improve overall network and streaming efficiency and reducing network resources are:
  • A. Data Recycling (Preserve and reuse cached data whenever possible)
  • B. Hybrid point-to-point (P2P) Streaming (Receive and share data with other clients on a controlled manner when the condition allows)
  • According to a third embodiment of the present invention, video quality flow control methods are:
  • A. Adaptive Bitrate Streaming based on a quality greedy proviso (ensure high quality video data are sent with highest priority).
  • B. Smart start video bit rate selection (dynamically select video best bitrates based on device resolution to improve user experience)
  • C. Video frame selective drop or frame bypass (maintaining video continuity by ignoring Non-I frame until network condition improved)
  • D. Motion Picture First (Allocation of bandwidth for High Motion video frames)
  • E. Low video quality buffer repair/replacement of poor video with higher bitrates (If the right condition occurs, go back to the buffer replace previous un-played low quality video (GOP) with higher video quality)
  • 5.1 First Embodiment
  • Data flow Control mechanisms based on network conditions as well as client or receiving node's buffer conditions.
  • Though TCP is adequate and reliable for video streaming, a good IPTV experience is one that is comparable with traditional Digital Cable TV, Satellite TV and Terrestrial TV. The expectations are good video quality, fast channel changing, immediate video acquisition and continuous streaming. In order to achieve this level, the present invention proposes a plurality of data flow control mechanisms that can work in conjunction with TCP over the public network and navigate around congested network segments. The following mechanisms or data flow control modes are different from the techniques applied by traditional TCP or UDP because they are based on a collaboration of network conditions when the data is streamed from a sending node as well as the conditions of the player or client buffer. Previous and existing systems do not have this collaboration and are reliant on the reporting of anomalies in the network. In the present invention, network conditions and buffer conditions can be obtained from the server (the sending node; this need not be the only or original source of the data and may also be an intermediate node storing the data file) or the client or end user receiving node/player, or by both nodes making use of information exchanges between them.
  • 5.1.1 Exponential Speed Up (Push Forward as Fast as Possible):
  • The exponential speed up data flow control mechanism of the first embodiment is shown in FIG. 3. The preferred steps of this mechanism are explained below (an illustrative sketch follows these steps):
  • 3a. Set an initial sending rate, init_rate, to 1.6× the ‘real-time playback’ bit rate of the VOD (video on demand) file, denoted rt_bitrate.
  • 3b. Set a maximum push rate, max_rate, to 0.625× (1/1.6) of the downlink bandwidth reported from the player.
  • 3c. If max_rate is less than init_rate, then the flow control method sets the max_rate to init_rate.
  • 3d. Try to push forward media data at init_rate. If network is normal (no congestion), then try to push at speed 1.6× current speed, i.e. 1.6*init_rate which equals to 1.6*1.6*rt_bitrate.
  • 3e. Continue to increase push speed in an exponential manner until cache buffer on player side is 80%, the max_rate is reached or network becomes congested.
  • The above steps 3a-3e set out the main features of the exponential speed-up data flow control mode. The following steps explain mechanisms employed based on additional abnormal buffer conditions and network conditions and set out the procedure for achieving efficient data flow following the exponential speed up mode by interacting with other data flow control mechanisms of the first embodiment.
  • 3f. If cache level is more than 95%, then turn into exponential back off process immediately (this is explained in 5.1.2 below).
  • 3g. Else if cache buffer is more than 80%, turn into Linear Trickle Mode (see 5.1.3 below) to push forward media data at 1×rt_bitrate.
  • 3h. If max_rate is reached, keep that rate until the cache buffer is at 80% or the network becomes congested.
  • 3i. When the network becomes congested, if the cache buffer reaches a critical level, the data flow control method then proceeds to the Dynamic Multilink Process (see 5.1.4). Otherwise, the push (streaming) speed can be decreased to 0.625× (1/1.6×) of the current speed. If the network congestion prevails, the push speed may be decreased again to 0.625× (1/1.6×) of the current speed but no less than 1× rt_bitrate. When there is buffering on the player side, i.e. the client device, the multi-path/concurrent multiple routes process (see item 5.1.5) can be initiated.
  • 3j. When network recovers from congestion, the data flow control mechanism attempts to send at the last push speed and then proceeds to repeat step 3e-3j above.
  • 3k. After the flow control method returns from exponential back off (see 5.1.2), steps 3d-3j above can be repeated.
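  • A hedged sketch of steps 3a-3e (and 3h) follows; the buffer and congestion probes are placeholders for the periodic reporting between player and server described elsewhere in this document, and the function shape is an assumption.

```python
def exponential_speed_up(rt_bitrate, downlink_bw, buffer_fill, congested):
    """Yield successive push rates until the buffer, the rate cap or congestion stops the ramp."""
    init_rate = 1.6 * rt_bitrate                 # step 3a
    max_rate = 0.625 * downlink_bw               # step 3b: (1/1.6) x downlink bandwidth
    max_rate = max(max_rate, init_rate)          # step 3c
    rate = init_rate
    while buffer_fill() < 0.80 and not congested():
        yield rate                               # step 3d: push at the current rate
        rate = min(rate * 1.6, max_rate)         # steps 3e/3h: exponential ramp, then hold
    # Exit handling (steps 3f-3j): back off above 95% cache, trickle at 1 x rt_bitrate
    # at 80%, or reduce the rate / use dynamic multiple links when congestion is detected.
```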
  • 5.1.2 Exponential Back Off
  • This back off mechanism of the flow control method of the first embodiment can be triggered upon detection of congestion or conditions of the network or player buffer that match pre-set back-off criteria. The best solution to ease off congestion for IPTV packet data transmissions is to back off or navigate using one or more different paths to avoid contributing to the existing network congestion and traffic. The exponential back off mode or mechanism of the data flow control method according to the present invention (also referred to as a friendly back-off mode) is triggered when congestion is detected. This mode will suspend all other flow control modes and reduce data sending rates to near zero or to “0.05×rt_bitrate” to yield bandwidth to other applications.
  • The exponential back off data flow control mechanism of the first embodiment is shown in FIG. 4. The preferred steps of this mechanism are explained below (a timing sketch follows these steps):
  • 4a. During the playback session, the server or a system having a streaming application or module will try to push forward media data according to the ‘Exponential Speed Up’ mechanism set out in item 5.1.1.
  • 4b. During the playback session, the client device or player will periodically report to the streaming application its cached/buffered media data size and ‘real-time play back’ duration, i.e. the amount of playing time left in the buffer. The update is sent every 2 seconds to 5 seconds, depending on the RTP/RTCP (Real time control protocol) calculation used. The server or streaming application can record this information for later use.
  • 4c. If continuous network congestion is detected in the network path by the exponential speed up mechanism, the streaming server will stop sending data to the TCP layer for a time span, which for instance equals to ⅓ of the time (Real-Time Play Back) reported from player in step 4b.
  • The following steps are provided to show the working of this mechanism in combination with the other mechanism and data flow control modes of the present application.
  • 4d. After the delayed time in step 4c expires, the streaming application or module will try to push data at the last push speed recorded by the Exponential Speed Up mechanism (5.1.1 above) to compensate for the earlier delay loss in step 4c.
  • 4e. If step 4d fails due to network throughput, congestion or if the player is not receiving all packets within a normal timeline, the streaming application will stop sending and recompile a new calculation based on the new ‘Real-Time Play Back’ reported from the player. The process will continue to yield or free up network bandwidth until the ‘Real-Time Play Back’ is less than 15 seconds (or a defined critical level) or if the network condition becomes normal, i.e. there are no congestions and the transmission occurs within a predicted or expected time and at an expected QoS level.
  • 4f. If the cache data left behind in the player is less than 15 seconds (critical level) because of our back off procedure, ‘Dynamic Multiple Link’ mechanism set out in 5.1.4 can be applied to compensate earlier delay/loss in a quick and efficient manner.
  • 4g. If network is resumed to normal conditions, the exponential back off mode can be exited and the exponential speed up mode 5.1.1 can be resumed.
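  • The timing of the back off can be sketched as below; the function names are illustrative, while the 1/3 pause and the 15 second critical level are taken from steps 4c-4f.

```python
def back_off_pause_sec(realtime_playback_left_sec: float) -> float:
    # Step 4c: stop handing data to the TCP layer for 1/3 of the playback time
    # remaining in the player buffer, as reported in step 4b.
    return realtime_playback_left_sec / 3.0

def should_exit_back_off(realtime_playback_left_sec: float, network_normal: bool) -> bool:
    # Steps 4e-4g: keep yielding bandwidth until the buffer nears the 15 s critical
    # level (then switch to Dynamic Multiple Links) or the network recovers.
    return realtime_playback_left_sec < 15 or network_normal
```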
  • 5.1.3 Linear Trickle Mode
  • The linear or smooth trickle mode of the data flow control method according to the first embodiment is triggered or applied when the cache buffer on player side or client terminal is at the safe level (80% or more of the buffer). The IPTV streaming module or application at the server node will enter the linear trickle mode at the safe level. In this mechanism, the streaming application will send media data at 1×rt_bitrate speed, which is equal to the draining speed of the cache buffer on player side when the data from the buffer is being used. This ensures that the buffer may be maintained at a safe level, i.e. 80%, to ensure smooth video data playback.
  • A flowchart depicting the linear trickle mode is seen in FIG. 5. In this figure, the data flow is initially shown to be in the exponential speed up mode in 5a. At step 5b, it is determined if the data cache level is more than 95% and if so, the exponential back-off mode is initiated in step 5c (see 5.1.2). If the cache level is determined to be 80% at step 5d, then at 5f the linear trickle mode is initiated. This determination at Step 5d can also be made after checking the network conditions in step 5e, as shown in FIG. 5.
  • 5.1.4 Dynamic Multiple Link Mechanism
  • Traditional and existing data transmissions are established as a unicast session between one client and one server over one TCP link. When this path between the client and server is blocked or congested, conventional technology will start to buffer data or to give up completely. Furthermore, even if the data is acquired from multiple sources via multiple TCP links, the following issues are encountered:
  • A. Packet re-ordering where one piece of data arrives out of sequence and must be discarded. This effect multiplies with the total number of TCP links in use and the problem gets worse.
  • B. Packets are delayed and are ordered in the wrong sequence between multiple sources
  • C. Data may be not continuous after reassembly at the player side.
  • D. Preventing duplication of data transmission and reassembling packets acquired via multiple sources.
  • The use of dynamic multiple links (DML) as a connection management mechanism within the data flow control method of the first embodiment is for establishing and maintaining a plurality of connections between the server side system and a client player or system based on network conditions as well as conditions in the client environment. The DML mechanism dynamically establishes multiple TCP connections between the server and the client to achieve higher network throughput. At the same time, the mechanism also utilises these multiple network paths to help ease off congestion and allow the player to continuously maintain the video session without interruptions. This is useful when data is urgently needed to prevent a video buffering effect for IPTV. This DML mechanism is based on information exchange and cooperation between the client side and server such that when data is required urgently, a module (this may be a dedicated DML module or integrated with other devices) is capable of computing the total connections needed and requesting the server side to accept new connections. The server side determines how many connections it will use depending on other factors and network conditions.
  • During a streaming session, the streaming application in the server will try to send data across all the available TCP links in an average manner, i.e. evenly. When one data segment cannot be sent out on one link because of network congestion, the server will try the next available link. This unselective sending policy will increase the whole throughput between server and client. It can also be used as an emergency buffer rescue measure when there is a need to compete for bandwidth resources with other cross traffic to maintain smooth playback.
  • The media packets arriving at the client side via multiple links are usually shuffled and arrive out of order. Therefore, re-ordering is required at the client, which can preferably be based on the sequence number located in the RTP header. The client is preferably equipped with a module or application to deal with reordering the out-of-sequence RTP packets arriving from different links and to give priority to the out-of-order packets to ensure the buffer is cleanly arranged for continuous playback of the received data.
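  • A minimal reordering buffer keyed on the RTP sequence number might look like the sketch below; sequence-number wrap-around and packet loss handling are omitted for brevity, and the class and method names are assumptions.

```python
import heapq

class RtpReorderBuffer:
    """Collects packets arriving out of order over multiple links and releases them in sequence."""

    def __init__(self, first_seq: int):
        self.next_seq = first_seq
        self._heap = []                          # min-heap of (rtp_seq, payload)

    def push(self, rtp_seq: int, payload: bytes) -> None:
        heapq.heappush(self._heap, (rtp_seq, payload))

    def pop_ready(self):
        """Return payloads that are now contiguous and ready for the playback buffer."""
        ready = []
        while self._heap and self._heap[0][0] == self.next_seq:
            _, payload = heapq.heappop(self._heap)
            ready.append(payload)
            self.next_seq += 1
        return ready
```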
  • The use of multiple paths or links must be applied with management and governance by the IPTV service provider and regulatory services so that this is a fair and friendly strategy for today's networks, especially with many media service providers and other types of data transmissions competing for bandwidth on the same channel. The fair use of the DML mechanism of the first embodiment can yield many benefits, for instance:
  • 1. Establish multiple links with the client/receiver device to gauge for possible alternate network paths to a single server streaming video data. One main TCP link, i.e. a master link, is used under normal conditions, i.e. with normal traffic conditions. If the network throughput reduces during a streaming session, then the DML mechanism is configured to send traffic over other TCP links (slave links) that were established at the beginning of the session. Therefore, the plurality of links is established before the data transmission takes place, based on network and client conditions, and these links are used in a dynamic manner as traffic along a network channel changes.
  • 2. If the data flow conditions continue to deteriorate using multiple data links, this is indicative of either A, network congestion caused by other traffic, or B, network congestion caused by the physical network path. The player/client side will be unable to determine if the reason is A or B, and even if determined, will be unable to react to such a condition. Therefore, by making use of the DML mechanism, the client can collaborate with the server in an attempt to use DML to achieve better throughput. If the data flow condition does not show improvement, then it may be assumed that condition B has occurred and the data flow method may choose to switch to a different mechanism or mode for dealing with the abnormal condition. For instance, an adaptive streaming strategy as set out in item 5.1.5 may be applied by the flow control. The DML mechanism of the first embodiment allows full utilisation of bandwidth resources, and also treats other network traffic fairly.
  • 3. The use or non-use of the DML mechanism is dynamically adjusted under predetermined conditions. For instance, in the case of condition A above, a possible reaction of the data flow control method of the present invention is to use more links, i.e. the DML mechanism, to obtain extra TCP resources, rapidly fill the IPTV player buffer and then exit the crowded part of the network, since congestion can be eased by not joining existing traffic.
  • This mechanism is based on cooperation between client and server. In a preferred model, based on network and client conditions, the client is responsible for establishing new links, following which the server can send media data across some or all of the available TCP links.
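  • As a sketch of how the client module might compute the number of additional links to request, assuming it knows the playback bitrate, the currently achieved throughput and an estimate of per-link throughput; the headroom factor and the link cap are assumptions.

```python
import math

def links_needed(playback_bitrate_bps: float,
                 current_throughput_bps: float,
                 est_per_link_bps: float,
                 headroom: float = 1.2,
                 max_extra_links: int = 8) -> int:
    """Estimate how many additional TCP links the client should request.

    The deficit between the rate needed for smooth playback (plus headroom to
    refill the buffer) and the throughput currently achieved is divided by
    the expected throughput of one extra link.
    """
    target_bps = playback_bitrate_bps * headroom
    deficit_bps = target_bps - current_throughput_bps
    if deficit_bps <= 0 or est_per_link_bps <= 0:
        return 0                           # the existing links are sufficient
    extra = math.ceil(deficit_bps / est_per_link_bps)
    return min(extra, max_extra_links)
```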
  • The above discussion relates to links between one server and one client. The following describes a further aspect of the dynamic multiple link mechanism for use with more than one server capable of streaming the required video data file.
  • When one or more users request a video data stream (from client players or devices such as a set top box connected to a display device, i.e. a television screen), these requests are routed to the healthiest streaming servers, i.e. the plurality of servers that are best suited to delivering the requested file. Server health is assessed based on conditions such as server load and ease of access to the data. Once the video starts streaming from the streaming application of the servers, the DML mechanism is then used to establish all the possible route paths between the clients and a specific number of streaming servers. The data flow control method according to the present invention applies multiple concurrent routing mechanisms based on the DML mechanism described above when certain conditions are satisfied. Examples of these conditions are given below:
  • 1. When a player suffers buffering even after having applied the DML strategy and is still not able to obtain more data, the problem is assumed to be a route congestion issue, and the data flow control method of the present invention will check for other sources of the data that could transmit to the player more efficiently before changing flow control mode.
  • a. The player (client) will set up another connection to an alternative suggested streaming server based on information about the plurality of servers available for use. Such information may be held in an index file or data structure and comprises the geo-location, availability and available capacity of each server.
  • b. If the player can get smooth playback from this alternative server, then no further servers will need to be identified.
  • c. Otherwise, the player will try to identify additional streaming servers and will proceed to request concurrent data transfer from all the identified streaming servers holding the same data.
  • d. The player will continue to monitor the data effectiveness of all active sources and, if a specific source is not performing as required, it will close the connection to that server and request the other concurrent streaming servers to alter the data flow pattern.
  • e. When the network is not effective on any of the paths to the player, the dynamic link mechanism is used across all routes to ensure that the buffer reaches a level that is suitable for the smooth trickle mode explained in 5.1.3 above.
  • 2. When the data reaches a 40% buffer level and is of high video quality, a hybrid Server-to-Client and Client-to-Client data transmission method may be applied by the flow control method of the present invention. This is explained in more detail in the second embodiment relating to data sharing techniques. Data givers (the server or other client devices) that have faster response times and a better network path will be chosen as the route for data distribution.
  • The multiple concurrent routing mechanisms making use of the DML mechanism give the flow control method of the present invention the capability to navigate via multiple traffic routes and to avoid congested segments dynamically, based on the network and client buffer conditions detected. A simplified client-side source selection routine is sketched below.
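  • The sketch assumes the server index provides geo-location, availability and free capacity per server, and that playback_is_smooth stands in for a trial streaming period against a host; the names and ranking rule are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ServerInfo:
    host: str
    geo_distance_km: float
    available: bool
    free_capacity: float            # fraction of capacity still unused, 0.0-1.0

def rank_servers(index: List[ServerInfo]) -> List[ServerInfo]:
    """Order candidate servers by availability, spare capacity and proximity."""
    usable = [s for s in index if s.available]
    return sorted(usable, key=lambda s: (-s.free_capacity, s.geo_distance_km))

def choose_sources(index: List[ServerInfo],
                   playback_is_smooth: Callable[[str], bool]) -> List[str]:
    """Escalating source selection: one alternative server first, then all."""
    ranked = rank_servers(index)
    if not ranked:
        return []
    if playback_is_smooth(ranked[0].host):
        return [ranked[0].host]            # one alternative server is enough
    return [s.host for s in ranked]        # request concurrent transfer from all
```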
  • 5.1.5 Adaptive Streaming
  • Adaptive bitrate streaming is a technique used in streaming multimedia over computer networks. It works by detecting a user's bandwidth and CPU capacity in real time and adjusting the quality of the video stream accordingly. It requires the use of an encoder which can encode a single source video at multiple bit rates. The player client switches between the different encodings depending on available resources. As a result, very little buffering, a fast start time and a good experience for both high-end and low-end connections can be obtained for IPTV applications. Adaptive streaming is used today in HLS and DASH video streaming services, where progressive downloads and stream switches are decided in real time based on the network flow. However, existing adaptive streaming techniques do not consider client player capability, buffer conditions or the video quality for playback at the client.
  • The data flow control method of the present invention proposes an adaptive streaming mechanism for switching the video stream based on the network conditions while at the same time aiming to deliver the highest video quality. This is achieved by collaboration between the server side and the client side, whereby the server receives information relating to client side (player) conditions such as the buffer level, the remaining playback time and the current sending speed. This enables a better "stream switch" decision. Therefore, by taking into consideration network conditions as well as buffer conditions, adaptive streaming according to the first embodiment of the present invention can deliver a high video quality output.
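  • A minimal sketch of the client-side report that supports such a stream switch decision is given below; the field names and the JSON encoding are assumptions, not a defined wire format.

```python
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class ClientReport:
    """Periodic player-side report used in the server's stream-switch decision."""
    buffer_level_pct: float         # how full the playback buffer is
    remaining_playback_s: float     # seconds of media left in the buffer
    recv_speed_bps: float           # rate at which data is currently arriving
    timestamp: float

def build_report(buffer_level_pct: float,
                 remaining_playback_s: float,
                 recv_speed_bps: float) -> bytes:
    """Serialise the report for the control channel back to the server."""
    report = ClientReport(buffer_level_pct, remaining_playback_s,
                          recv_speed_bps, time.time())
    return json.dumps(asdict(report)).encode("utf-8")
```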
  • 5.2. Second Embodiment
  • Data flow control method according to the second embodiment of the present application is concerned with modes and mechanisms for data sharing, local data caching and reuse of such data to reduce network overheads. These are explained in detail below:
  • 5.2.1 Data Recycling Mode
  • Users sometimes request the same media data many times from one or more servers. For instance, users request their favourite song or a favourite video, which they watch many times, from the streaming servers. This behaviour causes unnecessary bandwidth usage and affects the overall internet ecosystem. Some experts estimate that this type of unnecessary repeated consumption accounts for 30% of the bandwidth consumed every day. The same situation arises when one family or household purchases a newly released movie but cannot make time to watch it together. This leads to multiple viewings and streams of the same movie and consumes unnecessary bandwidth and other network resources.
  • The data flow control mechanism of the present invention in the second embodiment overcomes this by automatically caching the most recently popular viewed content at the client/local device, within a pre-set storage space on the client device. The decision on which contents are cached in or removed from this storage can be based on their popularity score. By doing this, popularly viewed content resides on the device and can be re-viewed even if the device is not connected to the internet. This prevents unnecessary retransmission and conserves overall energy.
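  • The following is a sketch of one possible popularity-score caching policy under a fixed storage budget; the scoring (a simple view counter) and the eviction rule are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CachedItem:
    size_bytes: int
    popularity: float = 0.0          # e.g. a view counter for this content

@dataclass
class PopularityCache:
    """Keeps the most popular content within a fixed local storage budget."""
    capacity_bytes: int
    items: Dict[str, CachedItem] = field(default_factory=dict)

    def used_bytes(self) -> int:
        return sum(item.size_bytes for item in self.items.values())

    def record_view(self, content_id: str) -> None:
        if content_id in self.items:
            self.items[content_id].popularity += 1.0

    def add(self, content_id: str, item: CachedItem) -> None:
        """Insert new content, evicting less popular items when space is short."""
        while self.items and self.used_bytes() + item.size_bytes > self.capacity_bytes:
            least = min(self.items, key=lambda k: self.items[k].popularity)
            if self.items[least].popularity >= item.popularity:
                return               # the new item is not popular enough to cache
            del self.items[least]
        if item.size_bytes <= self.capacity_bytes:
            self.items[content_id] = item
```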
  • The data recycling mechanism of the flow control method of the second embodiment is available for video on demand (VOD) or replay TV. In order to achieve this mode, a data caching policy module is implemented at the client end, along with, for instance, a RAM buffer of 20 MB and a local storage reserve of 2 GB (HDD). The data recycling mechanism includes rules and policies specifying that data delivered to the client device will be indexed, organised and recorded for later use.
  • FIG. 6 shows a flowchart depicting a data flow control mechanism with data recycling, in which a local cache is consulted before data is pulled from the server. Before a playback session is started, the data flow control method of the present invention first checks for the content in both local RAM and HDD storage. If a copy is available locally, it is played immediately. In some instances, only part of the popular content may be locally cached. In this case, the data recycling mechanism is also configured, at the time of playback, to request the streaming server to start streaming any missing portion of the video file. The continuing portion of the data will be requested at the same bitrate level, and after the first few Groups of Pictures (GOPs), streaming is resumed for the rest of the session. This method allows bandwidth to be used only when necessary and can achieve instant playback, which also improves the user experience.
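  • A simplified sketch of the FIG. 6 flow is given below, assuming lookup_cache covers both the RAM and HDD caches and request_stream(content_id, byte_offset, bitrate) asks the server to continue from the first missing byte; these helper names are illustrative.

```python
from typing import Callable, Optional, Tuple

def play(data: bytes) -> None:
    """Placeholder for handing data to the decoder/renderer."""

def start_playback(content_id: str,
                   total_bytes: int,
                   bitrate: int,
                   lookup_cache: Callable[[str], Optional[Tuple[bytes, int]]],
                   request_stream: Callable[[str, int, int], None]) -> None:
    """Data-recycling playback start: local cache first, server for the rest.

    lookup_cache returns (cached_bytes, cached_length) on a RAM or HDD hit,
    or None on a miss; request_stream asks the server to continue streaming
    from the first missing byte at the same bitrate as the cached portion.
    """
    hit = lookup_cache(content_id)
    if hit is None:
        request_stream(content_id, 0, bitrate)      # nothing local: stream it all
        return
    cached_bytes, cached_length = hit
    play(cached_bytes)                              # instant playback from the cache
    if cached_length < total_bytes:
        request_stream(content_id, cached_length, bitrate)  # only the missing part
```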
  • 5.2.2 Hybrid P2P Streaming
  • Peer-to-peer communication between servers or client devices, commonly referred to as P2P, is not a new concept and has been widely used in many applications over the Internet. However, plain P2P policies are unsuitable for high quality video streaming in IPTV applications. The data flow control methods of the present invention propose a "Hybrid P2P" streaming mechanism which works as a combination of Client-to-Server and Client-to-Client P2P transfers. Which of these methods is used is determined by whether the network is safe enough to share data with other peers without impacting smooth video playback.
  • The hybrid P2P mechanism of the second embodiment initially involves requesting data from the streaming server as normal. Once the player buffer is at a safe level, the hybrid P2P mechanism considers getting the data from the closest neighbouring client device (peer) rather than requesting data from the server. In order to co-exist with the adaptive streaming strategy and provide the best quality (see 5.1.5), a high video bitrate is exchanged in the P2P system. This hybrid P2P streaming mode is typically triggered when the buffer is healthy, and the data flow control method exits the hybrid P2P mode when the buffer is less than 15% full.
  • When it comes to video quality, today's IPTV users expect a lot more than just Standard Definition (SD) (480 pixels). Most of the content produced today is in High Definition (HD) (720 or 1080 pixels) quality, which demands a new set of streaming requirements. These requirements include higher server specifications, multiple processing cores, a larger bandwidth backbone and extensive I/O performance for network devices. Existing service providers using SD, as well as other ISPs, are required to invest more in network upgrades. Server hosting and delivery costs also increase as high quality video demands more data usage. The hybrid P2P mechanism of the data flow control method of the present invention helps to cope with these issues.
  • The hybrid P2P mechanism of data flow control is a data sharing concept involving a combination of a server sending data to clients, a client sharing data with another client and a client sharing data with many clients. This can be viewed as a hierarchical tree structure, with server A being the original source of data, which provides it to client A; client A then provides this data to client nodes B, C, D and so on. Thus, the source for a leaf node X can be either a streaming server or another leaf node that has the same data and is capable of providing this data to leaf node X.
  • Utilising hybrid P2P under normal network conditions will eventually allow streaming to peak at the highest video bitrate. When network resources and speed are good for a predefined time, and the condition remains good, the data flow control mechanism of the second embodiment switches the player to hybrid P2P. Client devices that participate in P2P need to be configured such that they can act as a "data giver", a "data consumer" or both, and this information can be stored in the backend systems and accessed when a data file that is also available on a data giver's device is requested by another client. When a client device initially streams a movie, the movie information and the data blocks are recorded in a central database for use as a future distribution guide. If another client participates in the hybrid P2P mode and requests particular content, the information in the database will guide this client to those peers that have the content and are permitted as data givers. If the new client's request finds no match, the player will automatically exit hybrid P2P and resume data flow based on the other data sharing modes set out in the above embodiments of the present invention. A simplified source-selection check is sketched below.
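  • The sketch assumes the central database returns a list of registered data givers holding the content; the 80% entry threshold follows the safe buffer level used elsewhere in this description and is an assumption here, while the 15% exit threshold is as stated above.

```python
from typing import List, Optional

P2P_ENTER_BUFFER_PCT = 80.0    # assumed "healthy"/safe buffer level for entering P2P
P2P_EXIT_BUFFER_PCT = 15.0     # exit threshold stated above

def pick_source(buffer_level_pct: float,
                data_givers: List[str],
                in_p2p_mode: bool) -> Optional[str]:
    """Return the peer to fetch from, or None to stay on (or return to) the server.

    data_givers is the list of peers holding the requested content, as returned
    by the central database of registered data giver devices.
    """
    if in_p2p_mode and buffer_level_pct < P2P_EXIT_BUFFER_PCT:
        return None                 # buffer too low: leave hybrid P2P mode
    if buffer_level_pct < P2P_ENTER_BUFFER_PCT:
        return None                 # not yet safe: keep streaming from the server
    if not data_givers:
        return None                 # no matching peer: exit hybrid P2P mode
    return data_givers[0]           # e.g. the closest registered data giver
```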
  • In a hybrid P2P network mechanism, one client device can share cached data with one or more clients within the network, and vice versa. With the use of hybrid P2P, the server side can save up to 90% of bandwidth and I/O resources. The more client devices are registered as data givers in a P2P network, the smaller the load required on the server side. This allows IPTV service operators to reduce server hosting costs significantly.
  • 5.3 Third Embodiment
  • The third embodiment of the data flow control method of the present invention includes modes and mechanisms for Video Quality Control to ensure that the highest quality of video data is provided to IPTV end users. These mechanisms are explained below:
  • 5.3.1 Adaptive Bitrate Streaming for High Quality Video
  • This is derived from the adaptive bitrate streaming set out in item 5.1.5 in relation to the first embodiment. The adaptive streaming in the third embodiment is based on a video quality greedy policy. Adaptive streaming is a technique used in streaming multimedia over computer networks and functions by detecting a user's bandwidth and CPU capacity in real time and adjusting the quality of the video stream accordingly. This mechanism requires the use of an encoder which can encode a single source video at multiple bit rates. The player client is capable of switching between the different encodings depending on available resources. This results in very little buffering, a fast start time and a good experience for both high-end and low-end connections.
  • A preferred mechanism for implementing adaptive data streaming for high quality video data is shown in FIG. 7 and is also explained below:
  • The following constant values are defined based on how many video streams are available. Assuming there are streamNr video streams, the following control parameters are provided:
  • QualityGreedyThreshSec = streamNr*2
  • QualityGreedyDurSec = streamNr*2.5
  • NoLimitUpThreshSec = streamNr*8
  • NoLimitUpThreshSec defines the cache time above which the server can switch to a higher bit rate quality level without any limitation.
  • QualityGreedyDurSec defines the length of time for which the Quality Greedy Switch Policy is maintained. This policy will either keep the current quality or switch to a higher bit rate quality level.
  • QualityGreedyThreshSec defines when to start the Quality Greedy Switch Policy.
  • With these definitions, the adaptive switch process is described as follows in relation to FIG. 7:
  • 7a. During playback, if the cache time on the player side is less than 12 seconds, then the server will switch to the lowest bit rate.
  • 7b. Otherwise, if the cache time is less than QualityGreedyThreshSec, additional conditions are checked. If the current sending operation is blocked due to network issues and the server sending speed is more than 1.6×rt_bitrate, the server will keep its current bit rate level and let the 'Exponential Speed Up' mode in 5.1.1 control the speed adjustment. If the server sending speed is not more than 1.6×rt_bitrate, the server will switch to a lower bit rate quality level.
  • 7c. When the cache time is less than QualityGreedyThreshSec and the sending operation is not blocked, then if the server sending speed is more than 1.0×rt_bitrate the server will switch to a higher bit rate quality level. If the server sending speed is not more than 1.0×rt_bitrate, the current video quality is maintained.
  • 7d. If, when the cache time is more than QualityGreedyThreshSec and less than NoLimitUpThreshSec, the server sending operation is blocked due to a network issue, the adaptive streaming mechanism maintains the current video quality. If the sending operation is not blocked, it switches to a higher bit rate quality level.
  • 7e. When the cache time is more than NoLimitUpThreshSec, the server will keep switching to higher bit rate quality levels until the highest level is reached.
  • 7f. Whenever the highest bit rate quality level has not been reached, a maximum sending speed of 1.6×rt_bitrate is set. This limits the time spent on low quality video once the network becomes good/normal again. A sketch of this switch logic follows.
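  • The sketch below covers steps 7a-7e; step 7f (the 1.6×rt_bitrate sending speed cap below the highest level) would be applied by the sender and is not shown. The return labels are illustrative.

```python
def bitrate_decision(cache_time_s: float,
                     sending_blocked: bool,
                     sending_speed_bps: float,
                     rt_bitrate_bps: float,
                     stream_nr: int) -> str:
    """Quality-greedy adaptive switch following steps 7a-7e above.

    Returns one of "lowest", "down", "keep" or "up".
    """
    quality_greedy_thresh_s = stream_nr * 2.0
    no_limit_up_thresh_s = stream_nr * 8.0

    # 7a: very little cached data left: drop to the lowest bit rate immediately.
    if cache_time_s < 12.0:
        return "lowest"

    # 7b / 7c: inside the quality-greedy window.
    if cache_time_s < quality_greedy_thresh_s:
        if sending_blocked:
            if sending_speed_bps > 1.6 * rt_bitrate_bps:
                return "keep"      # let the 'Exponential Speed Up' mode adjust speed
            return "down"
        if sending_speed_bps > 1.0 * rt_bitrate_bps:
            return "up"
        return "keep"

    # 7d: comfortable cache, but not yet in the unconstrained region.
    if cache_time_s < no_limit_up_thresh_s:
        return "keep" if sending_blocked else "up"

    # 7e: plenty of cache: keep stepping up towards the highest level.
    return "up"
```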
  • 5.3.2 Start Bitrate Selection for High Quality Video Data
  • A start bitrate selection mechanism is proposed as part of the data flow control mechanism of the present invention. Before playback, the local storage or cache is checked. If there is a copy locally, no request is sent to the streaming server and the player instead plays the local data immediately (similar to the data recycling of 5.2.1). When all the local data has been consumed, the bitrate selection mechanism according to the third embodiment of the present invention continues streaming at the same bit rate as the local file. The player side requests the appropriate files from the server and starts playing. The initial bitrate does not need to be the lowest video bitrate. There are many factors which determine which bitrate should be played; in the present embodiment, this depends on the resolution of the playback device screen. If the device is a big screen TV, the lowest video bitrate is not suitable and could give very poor video quality. There is therefore a balance to be struck between a fast start and the video quality in the bitrate selection mechanism of the data flow control method of the invention.
  • A preferred method for implementing the start bitrate selection mechanism according to the third embodiment can be seen in FIG. 8. Nowadays, there are many different types of devices and screen sizes capable of displaying streaming video. Each device supports a different video resolution, and sending an incorrect resolution can cause un-viewable video or crash the associated hardware. There is therefore a requirement for resolution identification before requesting video from the server. In the present invention, this can be achieved by implementing a player type ID in the player software. When the player requests a video file, the data flow control method is configured to check the device type ID and determine which video file is to be sent, instead of always starting with the lowest video bitrate. The start bitrate selection mechanism can provide noticeable results. For example, when movies are played on a big screen TV, a lower video bitrate often shows lots of flaws. Therefore, when the player type is identified as a TV, the data flow control means can start sending bitrates at the 2nd or 3rd highest level at the outset rather than at the lowest bitrate.
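  • A minimal sketch of the player-type-based selection follows; the player type strings and the level chosen per type are assumptions used only for illustration.

```python
from typing import List

def start_bitrate(player_type: str, bitrates_desc: List[int]) -> int:
    """Pick the initial bitrate from the player type ID reported by the client.

    bitrates_desc lists the available encodings from highest to lowest.  Large
    screens start near the top (2nd highest level here) instead of at the
    lowest bitrate; small screens start low for a fast start.
    """
    if not bitrates_desc:
        raise ValueError("no encodings available")
    if player_type in ("tv", "set_top_box"):
        return bitrates_desc[min(1, len(bitrates_desc) - 1)]   # 2nd highest level
    if player_type == "tablet":
        return bitrates_desc[len(bitrates_desc) // 2]          # a middle level
    return bitrates_desc[-1]                                    # lowest level
```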
  • 5.3.3 Selective Frame Drop Mode
  • During a live IPTV streaming session, the internet connection can sometimes fall below the lowest video bitrate, and all of the video streaming mechanisms and modes that have been applied may not be able to cope with the congestion. This event is rare but can cause the buffer to be emptied and video playback to be interrupted. Sometimes a few kbps of data makes the difference between smooth playback and video buffering. When this condition occurs, the data flow control means must choose between accepting the video buffering effect and providing other options for maintaining smooth playback.
  • The selective frame drop mode of the present invention is depicted in FIG. 9. This mechanism is a dynamic procedure for not streaming the non-I frames (i.e. the B/P frames) within a video GOP. This creates a video jumping effect, but at the same time it allows continuous streaming when only 20% of the required bandwidth is available, which is probably an acceptable effect during a bandwidth shortage period. Audio is not degraded or interrupted, which in most cases will be acceptable to users.
  • For example, smooth playback can be ensured by setting two dropping levels: A. dropping 50% of the B/P frames and B. dropping 100% of the B/P frames in one GOP (Group of Pictures). This temporarily reduces the required bandwidth by 30%-80%, and the saving is used to transmit the remaining video frames and the audio to the player. During this time window, the video may show some skipping effect and is likely to remain this way until the network recovers from the severe temporary congestion. A minimal drop-level selection is sketched below.
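  • The sketch selects one of the two dropping levels from the ratio of the currently available bandwidth to the lowest encoded bitrate; the 1.0 and 0.5 thresholds are assumptions.

```python
def frame_drop_level(available_bps: float, lowest_bitrate_bps: float) -> str:
    """Choose how many B/P frames to withhold from the next GOP.

    Returns "none", "drop_half" (level A, 50% of B/P frames) or "drop_all"
    (level B, 100% of B/P frames, i.e. I-frames and audio only).
    """
    if lowest_bitrate_bps <= 0:
        return "drop_all"
    ratio = available_bps / lowest_bitrate_bps
    if ratio >= 1.0:
        return "none"           # the network can carry the full lowest bitrate
    if ratio >= 0.5:
        return "drop_half"      # level A: drop half of the B/P frames in the GOP
    return "drop_all"           # level B: keep only I-frames and the audio
```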
  • This frame drop method acts as a final attempt, in case all of the above described modes fail, and can ensure that continuous streaming is not interrupted. It ensures that the flow control mechanism can still provide continuous video data of acceptable quality and can, for a short period, stream 200 kbps video files over a 100 kbps internet pipe to maintain smooth playback. FIG. 10a indicates the IPTV end user viewing experience when the selective frame drop mechanism is applied and FIG. 10b indicates the viewing experience without this mechanism. As depicted, a buffering effect is inevitable in FIG. 10b.
  • 5.3.4 High Motion Picture First Policy
  • Video encoding can have different modes and filters to enhance video quality. The data flow control method according to the present invention provides a mechanism or policy for dealing with high motion picture frames to enhance the viewing experience, provide the highest viewing quality and manage network resources efficiently. In the H.264 codec, VBR (Variable Bitrate) encoding is a mode that can yield good video quality output. This encoding generates a large GOP (Group of Pictures) for fast moving scenes and a smaller GOP for scenes with less motion. Each time the flow control method processes big GOPs, it consumes more network resources, which creates network spiking or jittering. If this factor is not taken into consideration, the "fast motion picture" scenes may trick the data flow protocol being used into falsely switching from the current video quality to the next lower quality level. This is because the big GOP may falsely alert the adaptive streaming mechanism of the data flow control (set out in 5.3.1) to switch to a lower bitrate. This false trigger significantly impacts the viewing experience. To avoid this false alert, the present invention in a third embodiment proposes a data flow control mechanism implementing the policy described below, referred to as a 'high-motion pictures first' or high-motion picture priority policy, to obtain a better viewing experience under limited network conditions.
  • The high-motion picture first mechanism is set out in FIG. 11. At the beginning of the video session, the first minute or so is used by the data flow control method to gauge and detect the network bandwidth. If it is determined that the bandwidth is adequate to sustain the highest video bitrates, incremental bitrate switching is stopped and the flow control jumps directly to the highest level. Selective GOP handling is also an important factor in enhancing the video viewing experience of the data flow control mechanism of the present invention. Each GOP is inspected and its size considered. If the GOP size is much bigger than the video bitrate suggests, this translates into a high motion event. Such big GOPs in a poor network environment can lead to buffering, and big GOPs at a low encoding bitrate expose pixelation, leading to poor video quality.
  • The GOP size is calculated by the data flow control mechanism before the first packet of the moving picture in a GOP is sent. If the average bit rate of the GOP is 30% less than the average bit rate of the current movie clip, the GOP is flagged as a "Low Motion Picture GOP". If the average bit rate of the GOP is 30% more than the average bit rate of the current movie clip, the GOP is flagged as a "Fast Motion Picture GOP". A direct sketch of this flagging rule is given below.
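  • The sketch applies the two 30% thresholds just described; the "normal" label for GOPs falling between the thresholds is an assumption.

```python
def classify_gop(gop_avg_bitrate_bps: float, clip_avg_bitrate_bps: float) -> str:
    """Flag a GOP relative to the average bit rate of the current movie clip."""
    if gop_avg_bitrate_bps < 0.7 * clip_avg_bitrate_bps:
        return "low_motion"     # 30% or more below the clip average
    if gop_avg_bitrate_bps > 1.3 * clip_avg_bitrate_bps:
        return "fast_motion"    # 30% or more above the clip average
    return "normal"
```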
  • The data flow control mechanism continues to monitor the network throughput by checking whether the highest video bitrate is being streamed or not. If streaming is not at the highest bitrate, the network condition is deemed to be poor and the data flow control method will not be able to send data at the higher bitrate all of the time. This condition would impact playback video quality significantly, as a slight change in bandwidth would falsely tell the player to request a lower bitrate. To overcome this problem, the condition of the local buffer as well as the type of GOP being sent is identified. If the buffer is at the safe threshold and the GOP being sent is flagged as a "Low Motion Picture GOP", the data flow control method switches to a lower bitrate to yield additional bandwidth for a "Fast Motion Picture GOP". Thus the data flow control mechanism clocks down to a lower bitrate for the static or low motion picture GOPs to preserve bandwidth for the high motion GOPs at a higher bitrate. As a result, constant video quality as well as a smooth streaming effect is maintained.
  • The high motion picture first policy allows the data flow control method to continue sending higher bitrates during a congestion time window, always allocating more bandwidth to the high motion picture GOPs.
  • 5.3.5 Buffer Enhancement and Repair
  • Dynamic adaptive streaming (5.3.1) and selective frame drop (5.3.3) function to maintain smooth streaming and combat internet bandwidth fluctuations and instability. However, these mechanisms can sometimes cause negative impacts such as visible video degradation or frame jumping in real time, in parallel with the network condition. These negative effects are generally considered unavoidable and current technologies do not address them. The data flow method of the present invention proposes a buffer repair or enhancement mechanism to monitor the network condition and buffer filling rates and then predict how much time and speed is available to allow the flow control method to replace a lower quality GOP in the buffer with a high quality GOP.
  • This buffer repair mechanism improves video playback quality. As streaming takes place, the network fluctuates and so does the video quality. In adaptive bitrate streaming, the buffer is divided into multiple segments of various video bitrates that form a continuous playback timeline. Some video segments are of low bitrates, which has a negative impact on the viewing experience. To address this problem, the buffer repair mechanism of the data flow control is applied when the buffer reaches a safe level, i.e. 80% full. During this mode, the flow control method is configured to check for previously streamed segments in the buffer that have low video bitrates and are still queuing for playback. The buffer repair mechanism then requests that these segments be replaced with higher video bitrate versions before their turn for playback. This ensures that the first part of the buffer always has the highest video bitrate and plays back with the highest video quality.
  • The buffer repair or enhancement mechanism is shown in FIG. 12 and is explained in detail below:
  • 12a. During the playback session, the flow control mechanism ensures that the player maintains a GOP queue which stores all the GOP data that will be sent to the video decoder.
  • 12b. The player monitors the GOP queue periodically. If the time span of this queue is less than 10 seconds, no action is taken. Otherwise, the player checks whether there is any GOP in the queue having only part of its B/P frames (a Partial GOP). If there is a Partial GOP that will not be sent to the decoder within 10 seconds, the mechanism checks the current server sending speed. If the server sending speed is less than or equal to 1.0×rt_bitrate, no action is taken. Otherwise, if the sending speed is more than 1.0×rt_bitrate, the flow control mechanism requests the server to resend that GOP with all frames at the same quality.
  • 12c. After receiving the GOP with all frames resent by the server, the player uses this GOP to replace the Partial GOP in the GOP queue.
  • 12d. If there is no Partial GOP in the queue, and if the queue time span is less than 15 seconds, no action is taken.
  • 12e. Otherwise, the lowest quality GOP in the queue is identified and compared with the quality of the GOP currently being received. If the lowest quality GOP is of higher quality than the current data, no action is taken. Otherwise, if the server's current sending speed is more than 1.0×rt_bitrate, the player requests the server to resend this GOP at one quality level higher.
  • 12f. After receiving this higher quality GOP resent by the server, the data flow mechanism uses it to replace the old GOP in the GOP queue. A sketch of these checks follows.
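  • The sketch combines the checks of steps 12b, 12d and 12e into a single routine that returns the GOP to re-request, if any; the Gop record fields and the reading of the 10-second condition as time remaining before decode are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Gop:
    gop_id: int
    quality_level: int          # higher value means a higher video bitrate
    partial: bool               # True if some B/P frames were dropped
    time_until_decode_s: float  # time left before this GOP reaches the decoder

def repair_request(queue: List[Gop],
                   queue_span_s: float,
                   current_quality: int,
                   sending_speed_bps: float,
                   rt_bitrate_bps: float) -> Optional[Gop]:
    """Return the GOP that should be re-requested at better quality, if any."""
    if queue_span_s < 10.0:                         # 12b: short queues are left alone
        return None
    spare_speed = sending_speed_bps > 1.0 * rt_bitrate_bps
    for gop in queue:                               # 12b: repair a Partial GOP first
        if gop.partial and gop.time_until_decode_s >= 10.0:
            return gop if spare_speed else None
    if queue_span_s < 15.0:                         # 12d: otherwise need a longer queue
        return None
    lowest = min(queue, key=lambda g: g.quality_level, default=None)
    if lowest is None or lowest.quality_level >= current_quality:
        return None                                 # 12e: nothing worse than current
    return lowest if spare_speed else None          # 12e: upgrade by one level
```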
  • 5.4 Fourth Embodiment
  • The following mechanisms provide flow control techniques that can be applied to existing TCP data transmissions to provide an enhanced data flow control method according to a fourth embodiment of the present invention.
  • 5.4.1 Dynamic Nagle Algorithm
  • The Nagle algorithm explained in Background section 2 has, by default, a negative impact (200+ ms time delay) on IPTV services, especially when users initiate interactive services such as channel changing, content queries, accounting etc. The present invention therefore proposes a method to dynamically enable/disable the Nagle algorithm based on the type of action and request, to ensure that the best effect is achieved. Ideally, the flow control method of the fourth embodiment disables the Nagle algorithm when a command exchange between the user device and the server is detected. This eliminates at least 200 ms of delay on the TCP transport layer. By toggling the algorithm dynamically, it is possible to keep the number of packets injected into the network low for bulk media data while also improving the user's interactive experience.
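  • A minimal sketch of toggling the Nagle algorithm per message is given below; setting TCP_NODELAY is the standard socket-level control for Nagle, while the set of interactive command types is an assumption.

```python
import socket

INTERACTIVE_COMMANDS = {"channel_change", "content_query", "accounting"}

def set_nagle(sock: socket.socket, enabled: bool) -> None:
    """Enable or disable the Nagle algorithm on a TCP socket.

    TCP_NODELAY=1 disables Nagle, removing the coalescing delay (roughly
    200 ms in the worst case) for small interactive command packets.
    """
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0 if enabled else 1)

def send_message(sock: socket.socket, msg_type: str, payload: bytes) -> None:
    """Disable Nagle for interactive commands, keep it enabled for bulk media data."""
    set_nagle(sock, enabled=msg_type not in INTERACTIVE_COMMANDS)
    sock.sendall(payload)
```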
  • A preferred procedure for applying the dynamic Nagle algorithm as explained above is shown in FIG. 13. Details of tests conducted with the Nagle algorithm in an enabled, disabled and adaptive state are shown in FIG. 14a, with the different packet sending rates for each of the above mentioned states shown in FIG. 14b. These tests were carried out in a LAN environment.
  • 5.4.2 Amended Linux Controls
  • The following Linux controls can be applied to existing TCP to provide an enhanced data flow control according to the present invention.
  • A. net.ipv4.tcp_window_scaling=1
  • This parameter allows TCP to use a large window size on the receiver and sender, which increases overall throughput.
  • B. net.ipv4.tcp_timestamps=1
  • This parameter allows TCP to use the timestamp option in its header, which helps TCP to estimate the RTT (round trip time) value.
  • C. net.ipv4.tcp_sack=1
  • This parameter allows the TCP receiver to send selective acknowledgements reporting multiple lost packets instead of only one packet per acknowledgement, which helps the sender retransmit lost packets more quickly.
  • D. net.ipv4.tcp_congestion_control=cubic
  • This parameter is only valid for kernel 2.6.13 or later versions. It allows the user to change the congestion control algorithm to obtain better performance for specific applications.
  • E. net.core.rmem_max/net.core.wmem_max
  • These parameters control the window size advertised by the sender and receiver. They affect TCP throughput by limiting the amount of data in flight in the network. The window size can be enlarged according to the network environment.
  • F. The new kernel
  • From Linux 2.6.17 onwards, the congestion window (cwnd) can be up to 4 MB, which increases TCP throughput on high speed networks.
  • 5.5 Interaction of the Data Flow Control Techniques of the Present Invention
  • The interaction of the above described modes, policies, methods and mechanisms that make up the proposed data flow control protocol or method of the first, second and third embodiments of the present invention is shown in FIG. 13.
  • While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the invention. Indeed, the novel devices, methods, and products described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit and scope of the invention. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope of the embodiments.

Claims (19)

1. A data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising identifying a condition of the communication network between said sending and receiving nodes, identifying a condition of the receiving node, and adjusting the media data flow through said communication network based on the identified condition of the communication network and the identified condition of the receiving node.
2. The method of claim 1 wherein the sending node is configured for encoding and streaming said media data to the receiving node based on a request for such data from the receiving node, and the receiving node is capable of decoding and playback of said media data.
3. The method of claim 2 wherein the step of identifying the condition of the network comprises detecting the level of network traffic and determining whether the network between the sending node and the receiving node is in a normal state or in a congested state, based on the detected level of network traffic; and
wherein the step of identifying the condition of the receiving node comprises determining whether the data buffer at the receiving node is at a safe level, unsafe level or critical level, the buffer being 80% or more full in the safe level, 20%-80% full in the unsafe level and 0%-20% full in the critical level; wherein said network and receiving node conditions are periodically monitored and communicated between the sending node and the receiving nodes at defined intervals.
4. The method of claim 3 further comprising:
responsive to a request for media data from the receiving node, streaming the requested media data at an initial data streaming rate;
identifying a maximum data streaming rate supported by the receiver node;
identifying the condition of the network;
identifying the condition of the receiving node;
if the network condition is identified as being normal and the condition of the buffer is at critical or unsafe level, then continuously increasing the rate of data streaming until said maximum rate is reached, or until the buffer reaches the safe level or until the network condition becomes congested.
5. The method as claimed in claim 4 wherein, if during the step of continuously increasing the rate of data streaming, the buffer at the receiving node reaches the safe level, then the method comprises adjusting the rate of data streaming to a rate that is equal to a draining rate of the buffer during playback.
6. The method of claim 4 further wherein, if during the step of continuously increasing the rate of data streaming, the network condition changes to congested and remains as congested for a first defined time period, the method comprises:
identifying the remaining playback time for the data left in the data buffer at the receiving node;
reducing the rate of data streaming to near zero or to a calculated low rate of streaming, or completely suspending streaming of data from the server, until either the network condition becomes normal or the identified remaining playback time reduces to 15 seconds or less.
7. The method as claimed in claim 6 wherein, if the remaining playback time reduces to 15 seconds or less, the method comprises:
requesting the sending node to accept additional network communication links between the sending node and the receiving node;
determining the total number of additional links required to sustain real-time playback at the receiving node;
establishing the additional links by the sending node;
streaming the media data from the sending node across all established links evenly, such that if the condition of the network is identified as congested on a first link of the plurality of links, the media data is sent via the next available communication link.
8. The method as claimed in claim 7 wherein the method further comprises reordering media data packets arriving at the receiving node out of sequence by making use of the sequence identifier in the header part of each media data frame.
9. The method as claimed in claim 7 wherein the method further comprises:
identifying one or more additional sending nodes that are capable of streaming the requested media data to the receiving node;
establishing additional communication links to the receiving node by each of the sending nodes such that each sending node is capable of sending the media data evenly across the additional links.
10. The method as claimed in claim 3 wherein a streaming application at the sending node is capable of adaptively encoding the media data to be streamed from the sending node according to a bit rate suitable for the identified conditions of the buffer of the receiving node.
11. A data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
responsive to a request for media data by the receiving node, determining if a copy of said media data is stored locally at the receiving node or stored on a memory device that is accessible to said receiving node; wherein if a local copy of the entire data file or of parts of the requested data is stored locally, accessing this copy and requesting streaming of only the missing parts from the sending node.
12. The method as claimed in claim 11 further comprising, if a local copy of the requested media data is not available at the receiving node, the method comprises the steps of:
identifying the conditions of the receiving node, including the screen size, resolution and capability of the display screen connected to said node;
selecting a bitrate for streaming the media data according to the identified screen size, video resolution and capability supported by the display screen;
streaming the requested media data from the sending node using the selected bitrate or a higher bitrate at the outset of said streaming instead of commencing said streaming at the lowest available bitrate.
13. The data flow control method as claimed in claim 3 wherein,
if the network condition is identified as being congested, then continuing said streaming at the current streaming rate by only streaming I-frames of the media data and not streaming B and P frames of said media data to the receiving node, until the
network condition changes to normal, to ensure that the media data is continuously streamed for playback at the receiving node.
14. The data flow control method as claimed in claim 3 wherein, when the buffer is at a safe level, the method comprises:
identifying previously streamed segments stored in the data buffer having low video bitrates or partial GOPs in the buffer queue;
identifying the remaining playback time for the data left in the data buffer at the receiving node;
if the remaining playback time is more than 10 seconds, then identifying the current rate of streaming of media data from the sending node;
if the rate of current streaming is more than an average rate supported by the receiving node, then the method further comprises requesting the sending node to resend the existing frames with low video bit rates or partial GOPs with higher video bitrates.
15. A data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
responsive to a request for media data requested by the receiving node, identifying a plurality of intermediate data giver nodes, each storing a local copy of the requested media data;
if a data giver node that is identified as a neighbor of the receiving node is one of the identified intermediate nodes, then obtaining the copy of the media data from this neighbor data giver node, said neighbor node being a peer node of said receiving node;
if no data giver that is a neighbor of the receiving node is identified, then streaming the requested media data from the sending node.
16. A data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
responsive to a request for media data requested by the receiving node, streaming the media data from the sending node at the currently available bitrate for a defined time period to detect the network bandwidth;
if said bandwidth is capable of supporting a higher video bitrate when compared to the current rate, switching to said higher video bitrate for continuous streaming.
17. A data flow control method for transmission of media data from a sending node to a receiving node, the receiving node capable of playing said media data, over a communication network, the method comprising:
responsive to a request for media data requested by the receiving node, streaming the media data from the sending node at the currently available bitrate for a defined time period to detect the network bandwidth;
identifying a plurality of group of pictures (GOP) for high motion video data that is to be streamed and inspecting the size and average bit rate for each GOP and the network conditions prior to said streaming;
if the average bit rate of a GOP is 30% less than the average bit rate of the currently streamed media, then the method comprises identifying said GOP as a low motion picture GOP and switching the current streaming bitrate to a lower bitrate for streaming said GOP;
if the average bitrate of a GOP is 30% more than the average bit rate, then the method comprises identifying said GOP as a high motion picture GOP, and switching the current streaming bitrate to the highest available bitrate for streaming the GOP at the highest bit rate.
18. The data flow control method according to claim 1 wherein the sending node is an IPTV streaming server and the receiving node is a client device including a multimedia player.
19. A system for implementing the method as claimed in claim 1 comprising a sending node and a receiving node capable of communication via a communication network, the sending node having a streaming module capable of streaming multimedia data stored in a memory means of the sending node, and the receiving node capable of requesting a multimedia data to be streamed from the sending node for playback on a multimedia player incorporated in the receiving node.
US15/301,602 2014-04-03 2015-04-01 Data flow control method Abandoned US20170041238A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB1406048.7A GB2524958A (en) 2014-04-03 2014-04-03 Data flow control method
GB1406048.7 2014-04-03
PCT/GB2015/051028 WO2015150814A1 (en) 2014-04-03 2015-04-01 Data flow control method

Publications (1)

Publication Number Publication Date
US20170041238A1 true US20170041238A1 (en) 2017-02-09

Family

ID=50776796

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/301,602 Abandoned US20170041238A1 (en) 2014-04-03 2015-04-01 Data flow control method
US15/301,589 Expired - Fee Related US10547883B2 (en) 2014-04-03 2015-04-01 Data flow control method and system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/301,589 Expired - Fee Related US10547883B2 (en) 2014-04-03 2015-04-01 Data flow control method and system

Country Status (7)

Country Link
US (2) US20170041238A1 (en)
EP (2) EP3138250A1 (en)
CN (2) CN106664255A (en)
CA (2) CA2981638A1 (en)
GB (2) GB2524958A (en)
PH (2) PH12016501946A1 (en)
WO (2) WO2015150812A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170251274A1 (en) * 2016-02-29 2017-08-31 Fuji Xerox Co., Ltd. Information processing apparatus and information processing method
US20180139162A1 (en) * 2015-10-16 2018-05-17 Satori Worldwide, Llc Systems and methods for transferring message data
US20190166057A1 (en) * 2017-11-30 2019-05-30 Comcast Cable Communications, Llc Assured Related Packet Transmission, Delivery and Processing
US20190182556A1 (en) * 2017-12-07 2019-06-13 At&T Intellectual Property I, L.P. Video optimization proxy system and method
US10700995B2 (en) * 2016-05-31 2020-06-30 Pango Inc. System and method for improving an aggregated throughput of simultaneous connections
CN111416830A (en) * 2020-03-27 2020-07-14 北京云端智度科技有限公司 Self-adaptive P2P streaming media data scheduling algorithm
CN111417031A (en) * 2020-04-28 2020-07-14 北京金山云网络技术有限公司 File transmission method and device and electronic equipment
US10728138B2 (en) 2018-12-21 2020-07-28 At&T Intellectual Property I, L.P. Analytics enabled radio access network (RAN)- aware content optimization using mobile edge computing
CN112188218A (en) * 2020-09-24 2021-01-05 陈旻 Energy-saving video transmission system based on distributed source codes
CN112313918A (en) * 2018-10-02 2021-02-02 谷歌有限责任公司 Live streaming connector
US20210105536A1 (en) * 2016-07-29 2021-04-08 Rockwell Collins, Inc. In-flight entertainment systems and methods
US10999204B2 (en) * 2017-05-19 2021-05-04 Huawei Technologies Co., Ltd. System, apparatus, and method for traffic profiling for mobile video streaming
US11019127B1 (en) * 2019-07-25 2021-05-25 Amazon Technologies, Inc. Adaptive media fragment backfilling
US11076188B1 (en) * 2019-12-09 2021-07-27 Twitch Interactive, Inc. Size comparison-based segment cancellation
US11108993B2 (en) 2016-12-19 2021-08-31 Telicomm City Connect, Ltd. Predictive network management for real-time video with varying video and network conditions
CN113453024A (en) * 2020-03-25 2021-09-28 华为技术有限公司 Method, device and system for monitoring service
US11153581B1 (en) 2020-05-19 2021-10-19 Twitch Interactive, Inc. Intra-segment video upswitching with dual decoding
US11196790B2 (en) 2018-11-28 2021-12-07 Netflix, Inc. Techniques for encoding a media title while constraining quality variations
CN113852866A (en) * 2021-09-16 2021-12-28 珠海格力电器股份有限公司 Media stream processing method, device and system
CN114598428A (en) * 2022-05-10 2022-06-07 北京中科大洋科技发展股份有限公司 Redundancy flow pushing method based on SRT protocol
CN114650446A (en) * 2022-05-24 2022-06-21 苏州华兴源创科技股份有限公司 Multi-channel video data self-adaptive transmission method and device and computer equipment
CN116193202A (en) * 2022-12-05 2023-05-30 百鸟数据科技(北京)有限责任公司 Multichannel video observation system for field observation
US11736552B1 (en) * 2022-09-21 2023-08-22 Microsoft Technology Licensing, Llc Sender based adaptive bit rate control

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106572367A (en) * 2015-10-08 2017-04-19 中移(杭州)信息技术有限公司 Multimedia file obtaining method, device and system
US10306284B2 (en) * 2015-12-07 2019-05-28 Net Insight Intellectual Property Ab ABR adjustment for live OTT
CN105611328B (en) * 2015-12-25 2019-01-01 深圳Tcl新技术有限公司 Video data based on HLS Streaming Media accelerates method for down loading and device
US20180069909A1 (en) * 2016-09-08 2018-03-08 Sonic Ip, Inc. Systems and Methods for Adaptive Buffering for Digital Video Streaming
FR3060925B1 (en) * 2016-12-21 2020-02-21 Orange SERVER AND CLIENT SUITABLE FOR IMPROVING THE ROUNDTURN TIME OF AN HTTP REQUEST
US10334055B2 (en) 2017-02-01 2019-06-25 International Business Machines Corporation Communication layer with dynamic multi-session management
CN108881931B (en) 2017-05-16 2021-09-07 腾讯科技(深圳)有限公司 Data buffering method and network equipment
CN107317655A (en) * 2017-06-06 2017-11-03 努比亚技术有限公司 Transfer control method, system and the readable storage medium storing program for executing of screen prjection
TWI826387B (en) * 2017-09-08 2023-12-21 美商開放電視股份有限公司 Bitrate and pipeline preservation for content presentation
US10728140B2 (en) 2017-12-18 2020-07-28 At&T Intellectual Property I, L.P. Deadlock-free traffic rerouting in software-deifned networking networks
CN109995664B (en) 2017-12-29 2022-04-05 华为技术有限公司 Method, equipment and system for transmitting data stream
CN109218762B (en) * 2018-09-06 2019-11-26 百度在线网络技术(北京)有限公司 Multimedia resource playback method, device, computer equipment and storage medium
CN111245773B (en) * 2018-11-29 2023-04-18 厦门雅迅网络股份有限公司 Automobile Ethernet flow monitoring method, terminal equipment and storage medium
US10826649B1 (en) * 2018-12-19 2020-11-03 Marvell Asia Pte, Ltd. WiFi receiver architecture
US11330317B2 (en) * 2018-12-28 2022-05-10 Dish Network L.L.C. Methods and systems for discovery of a processing offloader
US11956665B2 (en) * 2019-02-01 2024-04-09 Telefonaktiebolaget Lm Ericsson (Publ) Detecting congestion at an intermediate IAB node
KR20200100387A (en) * 2019-02-18 2020-08-26 삼성전자주식회사 Method for controlling bitrate in realtime and electronic device thereof
CN110061925B (en) * 2019-04-22 2022-06-07 深圳市瑞云科技有限公司 Cloud server-based image congestion avoiding and transmission accelerating method
CN110365551B (en) * 2019-07-04 2021-05-07 杭州吉讯汇通科技有限公司 Network information acquisition method, device, equipment and medium
CN110647071B (en) * 2019-09-05 2021-08-27 华为技术有限公司 Method, device and storage medium for controlling data transmission
CN112969089B (en) * 2019-12-03 2022-07-12 华为技术有限公司 HTTP request transmission method and equipment
DE102019218827B3 (en) * 2019-12-04 2021-04-29 Wago Verwaltungsgesellschaft Mbh METHOD, DEVICE AND SYSTEM FOR OPTIMIZING DATA TRANSFER BETWEEN CONTROL DEVICES AND CLOUD SYSTEMS
FR3106029A1 (en) * 2020-01-02 2021-07-09 Orange A method of managing a progressive and adaptive download of digital content by a multimedia stream player terminal connected to a communication network, a management device, a multimedia stream player terminal and corresponding computer program.
RU2723908C1 (en) * 2020-02-14 2020-06-18 Общество с ограниченной ответственностью «Кодмастер» Data flow control system
CN112469006B (en) * 2020-11-17 2022-07-12 浙江大华技术股份有限公司 Data transmission method based on CPE system, terminal and computer readable storage medium
US20230345075A1 (en) * 2022-04-25 2023-10-26 Avago Technologies International Sales Pte. Limited Rebuffering reduction in adaptive bit-rate video streaming

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100455497B1 (en) * 1995-07-21 2004-12-30 코닌클리케 필립스 일렉트로닉스 엔.브이. Compressed television signal, method and apparatus for transmitting compressed television signal, method and apparatus for receiving compressed television signal
US6543053B1 (en) * 1996-11-27 2003-04-01 University Of Hong Kong Interactive video-on-demand system
US7240359B1 (en) * 1999-10-13 2007-07-03 Starz Entertainment, Llc Programming distribution system
WO2001089160A1 (en) * 2000-05-18 2001-11-22 British Telecommunications Public Limited Company Communications network
GB0018119D0 (en) * 2000-07-24 2000-09-13 Nokia Networks Oy Flow control
US7274661B2 (en) * 2001-09-17 2007-09-25 Altera Corporation Flow control method for quality streaming of audio/video/media over packet networks
US7218610B2 (en) * 2001-09-27 2007-05-15 Eg Technology, Inc. Communication system and techniques for transmission from source to destination
US20030074456A1 (en) * 2001-10-12 2003-04-17 Peter Yeung System and a method relating to access control
US8200747B2 (en) * 2002-07-12 2012-06-12 Hewlett-Packard Development Company, L.P. Session handoff of segmented media data
US7047310B2 (en) * 2003-02-25 2006-05-16 Motorola, Inc. Flow control in a packet data communication system
US20150341812A1 (en) * 2003-08-29 2015-11-26 Ineoquest Technologies, Inc. Video quality monitoring
US7096741B2 (en) * 2004-07-14 2006-08-29 Jds Uniphase Corporation Method and system for reducing operational shock sensitivity of MEMS devices
US8934533B2 (en) 2004-11-12 2015-01-13 Pelco, Inc. Method and apparatus for controlling a video surveillance display
US8218439B2 (en) * 2004-11-24 2012-07-10 Sharp Laboratories Of America, Inc. Method and apparatus for adaptive buffering
US20060150225A1 (en) * 2005-01-05 2006-07-06 Microsoft Corporation Methods and systems for retaining and displaying pause buffer indicia across channel changes
CN101305612B (en) * 2005-08-12 2010-10-20 诺基亚西门子通信有限责任两合公司 A multi-source and resilient video on demand streaming system for a peer-to-peer subscriber community
US7512700B2 (en) * 2005-09-30 2009-03-31 International Business Machines Corporation Real-time mining and reduction of streamed data
US20070171830A1 (en) 2006-01-26 2007-07-26 Nokia Corporation Apparatus, method and computer program product providing radio network controller internal dynamic HSDPA flow control using one of fixed or calculated scaling factors
US8806045B2 (en) * 2006-09-01 2014-08-12 Microsoft Corporation Predictive popular content replication
CN100589440C (en) * 2006-10-18 2010-02-10 中国科学院自动化研究所 A network congestion control system and method for Internet
US7733808B2 (en) * 2006-11-10 2010-06-08 Microsoft Corporation Peer-to-peer aided live video sharing system
WO2009005747A1 (en) * 2007-06-28 2009-01-08 The Trustees Of Columbia University In The City Of New York Set-top box peer-assisted video-on-demand
US7991904B2 (en) * 2007-07-10 2011-08-02 Bytemobile, Inc. Adaptive bitrate management for streaming media over packet networks
US8346959B2 (en) * 2007-09-28 2013-01-01 Sharp Laboratories Of America, Inc. Client-controlled adaptive streaming
US7975282B2 (en) * 2007-11-01 2011-07-05 Sharp Laboratories Of America, Inc. Distributed cache algorithms and system for time-shifted, and live, peer-to-peer video streaming
EP2056247A1 (en) * 2007-11-02 2009-05-06 Alcatel Lucent Guaranteed quality multimedia service over managed peer-to-peer network or NGN
US8917598B2 (en) * 2007-12-21 2014-12-23 Qualcomm Incorporated Downlink flow control
US9047236B2 (en) * 2008-06-06 2015-06-02 Amazon Technologies, Inc. Client side stream switching
EP2300928B1 (en) * 2008-06-06 2017-03-29 Amazon Technologies, Inc. Client side stream switching
CN102106113B (en) 2008-07-28 2014-06-11 万特里克斯公司 Data streaming through time-varying transport media
US9380091B2 (en) * 2012-06-12 2016-06-28 Wi-Lan Labs, Inc. Systems and methods for using client-side video buffer occupancy for enhanced quality of experience in a communication network
US20130290492A1 (en) * 2009-06-12 2013-10-31 Cygnus Broadband, Inc. State management for video streaming quality of experience degradation control and recovery using a video quality metric
CN101588595B (en) * 2009-07-07 2012-01-25 董志 Method for dynamically regulating data transfer rate in wireless application service system
CN101662676B (en) * 2009-09-30 2011-09-28 四川长虹电器股份有限公司 Processing method for streaming media buffer
US8719879B2 (en) * 2010-06-11 2014-05-06 Kuautli Media Investment Zrt. Method and apparatus for content delivery
US8667166B2 (en) * 2010-11-02 2014-03-04 Net Power And Light, Inc. Method and system for resource-aware dynamic bandwidth control
US8687491B2 (en) * 2011-04-05 2014-04-01 Vss Monitoring, Inc. Systems, apparatus, and methods for managing an overflow of data packets received by a switch
US9344494B2 (en) * 2011-08-30 2016-05-17 Oracle International Corporation Failover data replication with colocation of session state data
WO2013087793A1 (en) 2011-12-14 2013-06-20 Tp Vision Holding B.V. Streaming video data having adaptable bit rate
US9450997B2 (en) 2012-02-27 2016-09-20 Qualcomm Incorporated Dash client and receiver with request cancellation capabilities
EP2665239B1 (en) * 2012-05-14 2016-08-31 Alcatel Lucent An adaptive streaming aware networks node, client and method with priority marking
US9629025B2 (en) * 2013-05-03 2017-04-18 Blackberry Limited Controlling data offload in response to feedback information
US9386308B2 (en) 2013-07-16 2016-07-05 Cisco Technology, Inc. Quality optimization with buffer and horizon constraints in adaptive streaming
US9124947B2 (en) * 2013-09-04 2015-09-01 Arris Enterprises, Inc. Averting ad skipping in adaptive bit rate systems
US9813470B2 (en) * 2014-04-07 2017-11-07 Ericsson Ab Unicast ABR streaming

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Amazon WO 2009/149100 A1 *
Millar US 2006/0104345 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180139162A1 (en) * 2015-10-16 2018-05-17 Satori Worldwide, Llc Systems and methods for transferring message data
US20170251274A1 (en) * 2016-02-29 2017-08-31 Fuji Xerox Co., Ltd. Information processing apparatus and information processing method
US10382832B2 (en) * 2016-02-29 2019-08-13 Fuji Xerox Co., Ltd. Information processing apparatus and information processing method
US10700995B2 (en) * 2016-05-31 2020-06-30 Pango Inc. System and method for improving an aggregated throughput of simultaneous connections
US20210105536A1 (en) * 2016-07-29 2021-04-08 Rockwell Collins, Inc. In-flight entertainment systems and methods
US11108993B2 (en) 2016-12-19 2021-08-31 Telicomm City Connect, Ltd. Predictive network management for real-time video with varying video and network conditions
US10999204B2 (en) * 2017-05-19 2021-05-04 Huawei Technologies Co., Ltd. System, apparatus, and method for traffic profiling for mobile video streaming
US11736406B2 (en) * 2017-11-30 2023-08-22 Comcast Cable Communications, Llc Assured related packet transmission, delivery and processing
US20190166057A1 (en) * 2017-11-30 2019-05-30 Comcast Cable Communications, Llc Assured Related Packet Transmission, Delivery and Processing
US10764650B2 (en) * 2017-12-07 2020-09-01 At&T Intellectual Property I, L.P. Video optimization proxy system and method
US11343586B2 (en) * 2017-12-07 2022-05-24 At&T Intellectual Property I, L.P. Video optimization proxy system and method
US20190182556A1 (en) * 2017-12-07 2019-06-13 At&T Intellectual Property I, L.P. Video optimization proxy system and method
US20220248105A1 (en) * 2017-12-07 2022-08-04 At&T Intellectual Property I, L.P. Video optimization proxy system and method
CN112313918A (en) * 2018-10-02 2021-02-02 Google LLC Live streaming connector
US11196790B2 (en) 2018-11-28 2021-12-07 Netflix, Inc. Techniques for encoding a media title while constraining quality variations
US11196791B2 (en) * 2018-11-28 2021-12-07 Netflix, Inc. Techniques for encoding a media title while constraining quality variations
US11677797B2 (en) 2018-11-28 2023-06-13 Netflix, Inc. Techniques for encoding a media title while constraining quality variations
US11296977B2 (en) 2018-12-21 2022-04-05 At&T Intellectual Property I, L.P. Analytics enabled radio access network (RAN)-aware content optimization using mobile edge computing
US10728138B2 (en) 2018-12-21 2020-07-28 At&T Intellectual Property I, L.P. Analytics enabled radio access network (RAN)-aware content optimization using mobile edge computing
US11019127B1 (en) * 2019-07-25 2021-05-25 Amazon Technologies, Inc. Adaptive media fragment backfilling
US11076188B1 (en) * 2019-12-09 2021-07-27 Twitch Interactive, Inc. Size comparison-based segment cancellation
CN113453024A (en) * 2020-03-25 2021-09-28 Huawei Technologies Co., Ltd. Method, device and system for monitoring service
CN111416830A (en) * 2020-03-27 2020-07-14 Beijing Yunduan Zhidu Technology Co., Ltd. Self-adaptive P2P streaming media data scheduling algorithm
CN111417031A (en) * 2020-04-28 2020-07-14 Beijing Kingsoft Cloud Network Technology Co., Ltd. File transmission method and device and electronic equipment
US11153581B1 (en) 2020-05-19 2021-10-19 Twitch Interactive, Inc. Intra-segment video upswitching with dual decoding
CN112188218A (en) * 2020-09-24 2021-01-05 Chen Min Energy-saving video transmission system based on distributed source codes
CN113852866A (en) * 2021-09-16 2021-12-28 Gree Electric Appliances, Inc. of Zhuhai Media stream processing method, device and system
CN114598428A (en) * 2022-05-10 2022-06-07 Beijing Zhongke Dayang Technology Development Co., Ltd. Redundancy flow pushing method based on SRT protocol
CN114650446A (en) * 2022-05-24 2022-06-21 Suzhou HYC Technology Co., Ltd. Multi-channel video data self-adaptive transmission method and device and computer equipment
US11736552B1 (en) * 2022-09-21 2023-08-22 Microsoft Technology Licensing, Llc Sender based adaptive bit rate control
CN116193202A (en) * 2022-12-05 2023-05-30 Bainiao Data Technology (Beijing) Co., Ltd. Multichannel video observation system for field observation

Also Published As

Publication number Publication date
CA2981638A1 (en) 2015-10-08
EP3138250A1 (en) 2017-03-08
WO2015150814A1 (en) 2015-10-08
GB201406048D0 (en) 2014-05-21
PH12016501948A1 (en) 2017-07-24
CN106664255A (en) 2017-05-10
US10547883B2 (en) 2020-01-28
GB201418455D0 (en) 2014-12-03
WO2015150812A1 (en) 2015-10-08
GB2524958A (en) 2015-10-14
GB2524855A (en) 2015-10-07
GB2524855B (en) 2017-03-29
US20170188056A1 (en) 2017-06-29
EP3149904A1 (en) 2017-04-05
PH12016501946A1 (en) 2017-02-06
CN106537856A (en) 2017-03-22
CN106537856B (en) 2020-03-27
CA2981646A1 (en) 2015-10-08

Similar Documents

Publication Publication Date Title
US20170041238A1 (en) Data flow control method
US11563788B2 (en) Multipath data streaming over multiple networks
EP2088731B1 (en) Network communication data processing method, network communication system and client
Thomas et al. Enhancing MPEG DASH performance via server and network assistance
US20100228862A1 (en) Multi-tiered scalable media streaming systems and methods
KR20120008526A (en) Fast channel change handling of late multicast join
Afzal et al. A holistic survey of wireless multipath video streaming
CA2897772A1 (en) Multipath data streaming over multiple wireless networks
US20200120152A1 (en) Edge node control
Afzal et al. A holistic survey of multipath wireless video streaming
Bouten et al. A multicast-enabled delivery framework for QoE assurance of over-the-top services in multimedia access networks
Clayman et al. The future of media streaming systems: transferring video over new IP
CN106792216B (en) Streaming media reading method in distributed file system and server
Zhang et al. An online learning based path selection for multipath real‐time video transmission in overlay network
GB2539335A (en) Data flow control method and system
Hodroj et al. Enhancing dynamic adaptive streaming over http for multi-homed users using a multi-armed bandit algorithm
EP4002793B1 (en) Method and controller for audio and/or video content delivery
Ahsan Video Streaming Transport: Measurements and Advances
Yang Deliver multimedia streams with flexible qos via a multicast dag
Chen et al. QoS of mobile real-time streaming adapted to bandwidth
Chakareski et al. Adaptive p2p video streaming via packet labeling
Palawan Scalable video transportation using look ahead scheduling
Arsan An integrated software architecture for bandwidth adaptive video streaming
Ramaboli Concurrent multipath transmission to improve performance for multi-homed devices in heterogeneous networks
Bortoleto et al. Large-scale media delivery using a semi-reliable multicast protocol

Legal Events

Date Code Title Description
AS Assignment

Owner name: ORBITAL MULTI MEDIA HOLDINGS CORPORATION, VIRGIN I

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DO, MANH HUNG PETER;CAO, SHUXUN;SIGNING DATES FROM 20161225 TO 20161226;REEL/FRAME:041189/0521

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION