US20130339519A1 - Systems and Methods for Performing Localized Server-Side Monitoring in a Content Delivery Network
- Publication number
- US20130339519A1 (application US 13/527,397)
- Authority
- US
- United States
- Legal status
- Granted
Classifications
- H04L43/0888 — Monitoring or testing based on specific metrics (e.g., QoS); network utilisation; throughput
- H04L41/083 — Configuration setting characterised by the purposes of a change of settings, e.g., optimising configuration for increasing network speed
- H04L41/0896 — Bandwidth or capacity management, i.e., automatically increasing or decreasing capacities
Description
- the present invention relates to monitoring network performance and, more specifically, to performing localized server-side performance monitoring in a content delivery network.
- The data networks collectively forming the Internet have become the primary means for communication and commerce, as well as for accessing news, music, videos, applications, games, and other content. At times, however, access to such content is delayed as a result of overloaded links, downed links, limited bandwidth, congestion, or other resource shortfalls in the intervening links between the source providing the content and the destination requesting and receiving it.
- a CDN accelerates the delivery of content by reducing the distance that content travels in order to reach a destination.
- the CDN strategically locates surrogate origin servers, also referred to as caching servers or edge servers, at various points-of-presence (PoPs) that are geographically proximate to large numbers of content consumers.
- the CDN then utilizes a traffic management system to route requests for content hosted by the CDN to the edge server that can optimally deliver the requested content to the content consumer.
- optimal delivery of content refers to the most efficient available means with which content can be delivered from a server to an end user machine over a data network.
- Optimal delivery of content can be quantified in terms of latency, jitter, packet loss, distance, and overall end user experience.
- Determination of the optimal edge server may be based on geographic proximity to the content consumer as well as other factors such as load, capacity, and responsiveness of the edge servers.
- the optimal edge server delivers the requested content to the content consumer in a manner that is more efficient than when origin servers of the content provider deliver the requested content.
- a CDN may locate edge servers in Los Angeles, Dallas, and New York. These edge servers may cache content that is published by a particular content provider with an origin server in Miami. When a content consumer in San Francisco submits a request for the published content, the CDN will deliver the content from the Los Angeles edge server on behalf of the content provider as opposed to the much greater distance that would be required when delivering the content from the origin server in Miami. In this manner, the CDN reduces the latency, jitter, and amount of buffering that is experienced by the content consumer.
- the edge server can further improve on the end user experience by adaptively adjusting the content that is being delivered to the end user. This may include reducing the bitrate of a media stream (e.g., video) being delivered to an end user when the path to the end user is congested or the performance of the path is otherwise degraded. In so doing, a lower quality stream is delivered to the end user.
- the lower quality stream ensures that the end user enjoys an uninterrupted experience (by avoiding dropped frames, repeated buffering, etc.).
- the bitrate can be increased in order to deliver a higher quality stream to the end user when the path from the edge server to end user becomes less congested. Similar adaptive techniques are applicable to other forms of content besides media content (e.g., music and video).
- the edge server can further improve the end user experience by adaptively scaling images.
- the edge server can improve the end user experience by passing a lower resolution copy or more compressed version of a requested image to the end user, thereby enabling the end user to receive the image quicker than if a higher resolution copy or less compressed version of the requested image were to be passed.
- the edge server can improve the end user experience using server-side bandwidth throttling, whereby the server throttles or slows the rate at which it sends content beyond ordinary flow control mechanisms in the protocol stack or in the data network.
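- As a rough illustration of such server-side throttling (not a mechanism recited by the patent), the sketch below paces an application-layer send loop so the effective rate stays at or below a target; the `send` callable, chunk size, and target rate are hypothetical placeholders.

```python
import time

def throttled_send(send, payload, max_bytes_per_sec, chunk_size=16 * 1024):
    """Send payload in chunks, sleeping between chunks so the effective
    application-layer rate stays at or below max_bytes_per_sec."""
    sent = 0
    start = time.monotonic()
    for offset in range(0, len(payload), chunk_size):
        chunk = payload[offset:offset + chunk_size]
        send(chunk)
        sent += len(chunk)
        expected = sent / max_bytes_per_sec      # how long this much data should have taken
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
```

- For example, `throttled_send(connection.sendall, body, 256_000)` would cap a response at roughly 2 Mbps regardless of how quickly the protocol stack could otherwise drain the data.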
- To make these adjustments, a CDN edge server needs to be aware of the performance of the underlying data network that links it to the various end users.
- CDNs either utilize existing network performance monitoring tools or have developed their own systems and methods in order to monitor network performance.
- One such existing monitoring tool, the Keynote system, involves the deployment of various agents across the Internet.
- the agents emulate end users and periodically request (e.g., every ten minutes) and download content from one or more of the CDN edge servers.
- the Keynote system agents then measure various metrics related to the delivery of that content.
- such systems do not provide accurate performance measurements because the agents do not request and download content from the same network locations as the actual end users.
- the performance measurements obtained from the Keynote system do not accurately reflect the network performance that end users experience. More specifically, the Keynote system is unable to measure performance along all network links connecting the end users to the CDN edge servers. Also, such a system does not provide real-time measurements.
- the network measurements can be up to 9 minutes and 59 seconds stale when measurements are taken every 10 minutes.
- the system injects additional traffic into the network.
- This additional traffic is manifested in the form of the requests that are issued by the agents to the edge servers and the responses that the edge servers issue in turn to the system agents.
- This additional traffic adds to the traffic that is actually requested by and delivered to various end users.
- the result is increased network congestion and increased load on the edge servers which now have to respond to the monitoring agents in addition to the requests that are submitted by various end users.
- specialized packets are injected in the network for the sole purpose of performance monitoring.
- One method to improve upon the accuracy of the Keynote system is to take measurements directly from the end users that request content from the CDN. This also involves injecting additional traffic into the network. For example, pinging an end user by sending one or more Internet Control Message Protocol (ICMP) packets to determine a round-trip time to the end user.
- Such techniques, while accurate in the resulting measurements, add overhead at the server performing the measurements as well as additional traffic load on the data network. While a single ICMP packet is insignificant in consuming server resources and slowing down a network, thousands of such packets continually being sent out from multiple monitoring points (e.g., edge servers) can result in a measurable amount of performance degradation.
- these measurements suffer from staleness as they are often conducted on a periodic basis.
- Such measurements can be taken in real time, for example by pinging the end user before responding to the end user's request for content. However, this introduces unnecessary delay in actually responding to the end user.
- CDNs and network performance monitoring tools have resorted to using so called “client-side” techniques. These techniques usually involve end users performing measurements for the benefit of the CDN or monitoring tool.
- the CDN may inject a script or set of instructions in the content that is delivered to the end users.
- the script or set of instructions cause the end users to measure the performance relating to the receipt of content from the CDN whether that content is the content requested by the end users or some token object.
- the script or set of instructions then cause the end users to report those measurements to the CDN or the monitoring tool.
- Such techniques may be performed covertly without the end users' knowledge, thereby surfacing issues related to privacy and trust.
- When end users are made aware of such techniques, most disapprove of or disallow their execution, as they do not want unnecessary software running on their devices, especially when such software is executed for the benefit of some third party.
- The infrastructure of a distributed platform, such as a CDN, provides the ideal deployment of servers to achieve these and other objects.
- the CDN includes various PoPs having one or more edge servers.
- the edge servers of each PoP are proximally located to end users of one or more specific geographic regions. Content requests originating from a geographic region are typically resolved to the PoP that is proximate to that geographic region, thus enabling the edge servers of that PoP to deliver the requested content to the end users originating the content requests.
- some embodiments enhance each PoP of the CDN with at least one monitoring agent and a database.
- the monitoring agent measures the performance that is associated with delivering content from one or more edge servers of the PoP to various end users that are routed to that PoP.
- The monitoring agent measures outgoing traffic flows at the application layer (i.e., Layer 7) so as to measure the effective rate at which the content is sent while obfuscating the underlying lower-layer flow control mechanisms.
- the measurements are real-time and accurately reflect performance experienced by the end user by virtue of the measurements being taken as content is transferred from the edge server to the end user. Measurement accuracy is further realized based on the geographic proximity of the PoP to the end user.
- This proximity eliminates many of the links or hops that act as variables affecting network performance along the network path connecting the edge server to the end user. This proximity also allows measurements that were taken for a first end user in a geographic region to be overwritten by measurements that are taken for a second end user in the geographic region without loss of accuracy. This is a result of the localization of the PoP to one or more proximal geographic regions which causes content to traverse substantially all if not all of the same network links or hops in order to reach the end users of a particular geographic region.
- the monitoring agent stores the derived measurements to the database.
- the measurements stored to the database are then made accessible to each edge server operating within the same PoP as the database.
- the edge servers use the measurements to then optimize outgoing traffic flows.
- optimization of a traffic flow involves pre-optimization and/or re-optimization of the traffic flow.
- An edge server performs pre-optimization of a traffic flow to a first end user based on measurements taken for a second end user that is within the same geographic region as the first end user when no prior measurements have been taken for the first end user or when the prior measurements taken for the first end user have exceeded a specified time-to-live. Since the edge servers of a PoP are rarely, if ever, all idle at the same time, there will typically be at least one real-time measurement to one end user of a specific geographic region that can be used to pre-optimize traffic flows for other end users within that specific geographic region.
- the monitoring agent monitors the outgoing traffic flow to the first end user and the edge server re-optimizes the traffic flow based on measurements the monitoring agent derives for the first end user.
- Optimization (e.g., pre-optimization and re-optimization) of a traffic flow involves selecting an encoding, bitrate, compression, file size, or other variant of content. Optimization may also involve server-side bandwidth throttling.
- optimization is performed by comparing the real-time measurements against one or more established thresholds.
- When a real-time measurement surpasses a first threshold, the quality of the content being delivered may be lowered in order to accommodate worsened network conditions.
- When a real-time measurement surpasses a second threshold, the quality of the content being delivered may be improved in order to take advantage of better network conditions.
- the thresholds can be based on an expected set of results or against previously logged performance measurements.
- the expected set of results may include expected transfer rates that are determined based on the network provider the end user uses to access the CDN.
- For example, the expected transfer rate may be the rate that an end user is likely to receive under ordinary load when connected to a cellular data network tower, a wireless ISP, or an oversubscribed broadband network.
- The expected transfer rates may also be determined, in part or in whole, based on the geographic location of the end user.
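- A minimal sketch of how such expected-rate baselines might be tabulated and turned into thresholds is shown below; the rate values, category names, and factors are invented for illustration and would in practice come from the operator's own measurements.

```python
# Hypothetical baseline transfer rates, in bytes per second, keyed by the
# type of access network through which the end user reaches the CDN.
EXPECTED_RATE = {
    "cellular": 50_000,
    "wireless_isp": 150_000,
    "broadband": 500_000,
}

def thresholds_for(network_type, degrade_factor=0.5, improve_factor=1.5):
    """Derive a 'lower quality' and a 'raise quality' threshold from the
    expected transfer rate for the end user's access network."""
    baseline = EXPECTED_RATE[network_type]
    return degrade_factor * baseline, improve_factor * baseline
```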
- FIG. 1 presents an exemplary CDN infrastructure that includes a distributed set of edge servers, traffic management servers, and an administrative server.
- FIG. 2 illustrates the enhancements to a PoP of a distributed platform that enable localized and real-time server-side performance monitoring in accordance with some embodiments.
- FIG. 3 presents a process performed by the monitoring agent to monitor network performance from an edge server to a particular end user in accordance with some embodiments.
- FIG. 4 presents a process for using the derived server-side measurements (i.e., scores) of the monitoring agent to perform re-optimization by optimizing content as it is being delivered from a server to an end user in real-time in accordance with some embodiments.
- FIG. 5 presents a process for pre-optimizing content based on the server-side monitoring process described with reference to FIG. 3 in accordance with some embodiments, whereby content is optimized prior to the first packet of such content being sent.
- FIG. 6 presents a message exchange diagram to summarize traffic flow optimization using the localized and real-time server-side monitoring systems and methods in accordance with some embodiments.
- FIG. 7 conceptually illustrates the localized and real-time server-side performance monitoring system operating in the context of a wireless data network.
- FIG. 8 illustrates a computer system or server with which some embodiments are implemented.
- the embodiments set forth herein provide localized and real-time server-side network performance monitoring systems and methods.
- Various advantages of these systems and methods are achieved by leveraging the distributed architecture of a content delivery network (CDN), namely the distributed allocation of edge servers of the CDN.
- FIG. 1 presents an exemplary CDN infrastructure that includes a distributed set of edge servers 110 , traffic management servers 120 , and an administrative server 130 .
- the figure also illustrates the interactions that CDN customers including content providers have with the CDN and the interactions that content consumers or end users have with the CDN.
- Each edge server of the set of edge servers 110 may represent a single physical machine or a cluster of machines that serves content on behalf of different content providers to end users.
- the cluster of machines may include a server farm for a geographically proximate set of physically separate machines or a set of virtual machines that execute over partitioned sets of resources of one or more physically separate machines.
- the set of edge servers 110 are distributed across different edge regions of the Internet to facilitate the “last mile” delivery of content.
- Each cluster of servers at a particular region may represent a point-of-presence (PoP) of the CDN, wherein an end user is typically routed to the closest PoP in order to download content from the CDN. In this manner, content traverses fewer hops before arriving at the end user, thereby resulting in less latency and an improved overall end user experience.
- the traffic management servers 120 route end users, and more specifically, end user issued requests for content to the one or more edge servers 110 .
- Different CDN implementations utilize different traffic management schemes to achieve such routing to the optimal edge server.
- the traffic management scheme performs Anycast routing to identify a server from the set of servers 110 that can optimally serve requested content to a particular end user requesting the content.
- the traffic management servers 120 can include different combinations of Domain Name System (DNS) servers, load balancers, and routers performing Anycast or Border Gateway Protocol (BGP) routing.
- the administrative server 130 may include a central server of the CDN or a distributed set of interoperating servers that perform the configuration control and reporting functionality of the CDN.
- Content providers register with the administrative server 130 in order to access services and functionality of the CDN. Once registered, content providers can interface with the administrative server 130 to specify a configuration, upload content, and view performance reports.
- the administrative server 130 also aggregates statistics data from each server of the set of edge servers 110 and processes the statistics to produce usage and performance reports.
- the distributed architecture of the CDN is an ideal platform from which to perform localized and real-time server-side performance monitoring.
- the allocation of PoPs to different geographic regions provides an ideal partitioning of CDN resources that can be adapted to monitor end users in a decentralized fashion.
- Each PoP of the CDN is deployed to an edge of the network.
- a network edge is the primary point of exchange for requests and content that is passed between end users at one or more geographic regions and the larger external data network or Internet.
- the traffic management functionality of the CDN ordinarily ensures that the end users at an edge of a network or geographic region are served by edge servers of a specific PoP.
- the traffic management functionality can utilize Anycast routing or Domain Name System (DNS) resolution to ensure that end users are served by edge servers of the PoP that is geographically closest to them.
- CDN architecture provides a logical partitioning of the entire set of end users into smaller subsets of related end users, whereby a subset of end users is related primarily by geographic region. This partitioning is also manifested in the allocation of IP addresses to each subset of end users.
- each end user from the subset of end users operating from within a particular geographic region is assigned an IP address that is within a particular subnet.
- these end users are routed to a particular PoP from a network that is assigned a specific Autonomous System (AS) number.
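- The grouping of end users by subnet or AS number could be expressed with a small helper like the following sketch, where the helper name and the /24 prefix length are assumptions rather than values taken from the patent.

```python
import ipaddress

def region_key(client_ip, prefix_len=24, as_number=None):
    """Collapse an end-user IP address into a key shared by end users that
    are expected to sit behind the same last-mile links: the containing
    subnet, optionally combined with the originating AS number."""
    subnet = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return (str(subnet), as_number)

# Two end users in the same /24 resolve to the same region key.
assert region_key("203.0.113.17") == region_key("203.0.113.200")
```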
- the systems and methods advocated herein leverage the distributed allocation of PoPs in order to decentralize the task of monitoring all end users interfacing with the CDN and to localize the monitoring on a per PoP or per geographic regional basis.
- each PoP is locally responsible for obtaining and updating performance measurements for those end users that are serviced by that PoP. This greatly reduces the number of end users that any given PoP monitors. This also eliminates the taking of redundant measurements, whereby two or more endpoints from within the CDN are used to monitor a single end user endpoint.
- the monitoring is performed over the actual pathways that connect the end users to the CDN, thereby accurately measuring the performance that the end users experience.
- the systems and methods utilize server-side monitoring techniques that derive network performance measurements based on existing traffic flows from an edge server to a particular end user. These server-side techniques do not involve the injection of any additional traffic beyond that which is requested and delivered to the end users. Such server-side techniques are also able to derive performance measurements without requiring active interaction with the end user. Furthermore, real-time monitoring is achieved as a result of monitoring the outbound traffic flows as they are sent.
- some embodiments incorporate at least one monitoring agent and at least one database to each PoP of the distributed platform.
- the same monitoring agent therefore performs server-side performance monitoring for each server of the PoP.
- at least one edge server at each PoP of the CDN is enhanced with a monitoring agent. In this configuration, the monitoring agent performs server-side performance monitoring of the content that is sent by the enhanced server to any end user.
- the monitoring agent monitors traffic flows from an edge server at the applications layer (i.e., Layer 7) of the Open Systems Interconnect (OSI) model.
- Monitoring at the applications layer obfuscates the lower layer flow controls while still allowing the monitoring agent to obtain an effective server-side transfer rate for the content exiting the edge server.
- the performance measurements obtained for a specific end user are quantified into a single metric, such as a numeric score.
- the measurements or scores are then used to optimize traffic flows that are disseminated to the end users that are serviced by the PoP from which the measurements are taken.
- Traffic flow optimization involves adjusting the content that is delivered on the basis of the current network conditions as reflected in the real-time performance measurements.
- optimization includes selection of an encoding, bitrate, compression, file size, or other variant of the content.
- the bandwidth required to transfer the content from the server to an end user can be adjusted to accommodate measured changes in the performance of the data network over which the content is passed.
- Optimization may also involve server-side bandwidth throttling. Optimization ensures that end users receive a seamless experience irrespective of the real-time performance of the data network.
- Some embodiments support pre-optimization and re-optimization of a traffic flow (i.e., delivery of content).
- Pre-optimization involves optimizing content prior to the first packet of the content being sent. This ensures a seamless and optimized end user experience from the start which is in contrast to many existing adaptive streaming techniques that start with a high quality setting for the content and then scale back the quality setting for the content based on subsequently measured network performance parameters. Conversely, some adaptive streaming techniques start with a low quality setting for the content and then gradually scale the quality up until it meets the available bandwidth. In any case, current adaptive streaming techniques do not involve pre-optimization.
- Pre-optimization is based on a prior measurement of network performance.
- the prior measurement may have been taken for the same end user that is to receive the content or for another end user that is within the same geographic region as the end user that is to receive the content.
- Pre-optimization may be conducted using a measurement that is taken for a different end user, because that end user will be in the same geographic region as the one receiving the content based on the above described partitioning of end users through the distributed allocation of PoPs.
- The network path from the PoP, or more specifically from an edge server in the PoP, to any end user within the same geographic region will be substantially the same, if not exactly the same.
- the network path will consist of nearly all or all of the same links or hops that must be traversed in order to deliver the content from the edge server to any end user within the same geographic region. Accordingly, a measurement taken for a first end user in the geographic region will accurately reflect the performance that a second end user in the same geographic region will experience when receiving content from the same PoP of the CDN. The prior measurement is compared to one or more specified performance thresholds. This comparison determines how to optimize the content before sending the content to the requesting end user.
- the edge server can optimize the content by selecting a variant of the content that requires less bandwidth to deliver, wherein the selected variant can include higher compression, lower bitrate encoding, and lower resolution as some examples. Once an optimized variant of the content is selected, the transmission of the content to the requesting end user can begin.
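- A sketch of this pre-optimization decision appears below; the variant names, their bandwidth needs, and the 80% headroom factor are illustrative assumptions, not values recited by the patent.

```python
# Hypothetical catalog of variants for one piece of content, ordered from
# lowest to highest quality, with the bandwidth each needs in bytes/sec.
VARIANTS = [
    ("audio_16kbps", 2_000),
    ("audio_32kbps", 4_000),
    ("audio_128kbps", 16_000),
]

def pre_optimize(prior_score_bytes_per_sec, variants=VARIANTS, headroom=0.8):
    """Pick the richest variant whose bandwidth need fits within a fraction
    of the previously measured effective rate; fall back to the lowest."""
    budget = prior_score_bytes_per_sec * headroom
    chosen = variants[0][0]
    for name, needed in variants:
        if needed <= budget:
            chosen = name
    return chosen

# A prior measurement of 5,000 bytes/sec selects the 32 Kbps variant.
assert pre_optimize(5_000) == "audio_32kbps"
```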
- Re-optimization involves real-time optimization of content or optimizing content as it is sent.
- the monitoring agent begins monitoring a traffic flow once an edge server begins transmitting content to an end user.
- the monitoring agent takes real-time measurements of the outgoing traffic flow.
- the edge server obtains the real-time measurements and processes them in order to determine how to optimize the outgoing traffic flow as it is being sent. Specifically, the obtained measurements are compared against one or more specified performance thresholds.
- the content is then optimized as necessary by continuing to send the same variant of the content or by selecting different variants (e.g., compression, encoding, resolution, etc.) of the content to send.
- the re-optimization techniques set forth herein differ from those implemented by existing adaptive streaming techniques in that the re-optimization of some embodiments does not involve any end user feedback or the introduction of any specialized packets. Rather, re-optimization is wholly performed based on the rate at which an edge server sends packets.
- Lossy networks are those networks that experience high latency and high amounts of packet loss.
- Data networks operated by wireless service providers such as 3G and 4G data networks of Verizon, AT&T, and Sprint are examples of some such lossy networks.
- the localized and real-time server-side performance monitoring systems and methods are described with reference to components of a CDN.
- these systems and methods are similarly applicable to any server that hosts and delivers content to a set of end users irrespective of whether the server operates as part of a CDN. Therefore, the systems and methods described herein are not limited solely to implementation in a CDN, though the distributed platform of the CDN is discussed as a platform to maximize the benefits of the systems and methods.
- the localized and real-time server-side monitoring systems and methods can be embedded within such a distributed platform with minimal modification to the existing infrastructure.
- the systems and methods can be implemented by incorporating at least one monitoring agent and at least one database to each PoP of the distributed platform and by minimally modifying operation of one or more edge servers of each particular PoP to optimize their outgoing traffic flows based on network performance measurements that are derived by the monitoring agent embedded in that particular PoP.
- FIG. 2 illustrates the enhancements to a PoP of a distributed platform that enable localized and real-time server-side performance monitoring in accordance with some embodiments.
- FIG. 2 illustrates a PoP having three edge servers 210 , 215 , and 220 that host content on behalf of various content providers and that deliver the hosted content to various end users that are located in one or more regions that are geographically proximate to the PoP.
- Also illustrated within the PoP are the monitoring agent 230 and the network performance database 240.
- the monitoring agent 230 is provided access to each of the edge servers 210 , 215 , and 220 .
- this access allows the monitoring agent 230 to perform server-side monitoring of the outgoing traffic flows from each of the edge servers 210 , 215 , and 220 .
- Other embodiments may use a single monitoring agent to perform server-side monitoring of outgoing traffic flows from a single edge server in the PoP.
- the results of the server-side monitoring are stored to the database 240 .
- the edge servers 210 , 215 , and 220 then retrieve the monitoring results from the network performance database 240 in order to optimize the outgoing traffic flows.
- the monitoring agent is a software module that is encoded as a set of computer executable instructions.
- the set of computer executable instructions are stored to a non-transitory computer-readable medium of an edge server or a separate virtual or physical machine that is collocated in a PoP with one or more edge servers.
- Although the monitoring agent 230 is illustrated in FIG. 2 as a machine separate from each of the edge servers 210, 215, and 220, the monitoring agent can be integrated as part of the core caching functions of each of the edge servers 210, 215, and 220 so as to yield an enhanced edge server that is operable to perform both caching functionality and the localized and real-time server-side monitoring in accordance with some embodiments.
- Various hardware for the machine on which the monitoring agent executes is described in the section entitled “Server System”.
- the monitoring agent is provided access to the protocol stack of the edge server. This access allows the monitoring agent to monitor packets that are received by and sent from the edge server. By monitoring these packets, the monitoring agent is able to derive server-side measurements that detail network performance from the edge server to the end user.
- The monitoring agent is configured to monitor application layer packets passing through the protocol stacks. This is referred to as Layer 7 monitoring, in reference to the seventh, or application, layer of the OSI model.
- FIG. 3 presents a process 300 performed by the monitoring agent to monitor network performance from an edge server to a particular end user in accordance with some embodiments.
- the process 300 begins when the monitoring agent detects (at 310 ) a request for content from the particular end user.
- a request may be encoded as an application layer HyperText Transfer Protocol (HTTP) GET request packet, though the monitoring agent can be configured to detect other requests for content whether at the application layer or other layers in the protocol stack.
- the process extracts (at 320 ) an identifier identifying the end user that submits the request for content.
- the identifier is ordinarily included within the header of the request packet.
- One common identifier is the IP address of the end user as encoded within the source IP address header field of an HTTP GET request packet.
- the process may extract additional identifiers that further identify the requesting end user or the region from which the request originates. Such additional identifiers include the “user agent” or autonomous system (AS) number.
- The process then monitors (at 330) the outgoing packets from the server. More specifically, the process monitors the effective rate at which the packets are sent. As earlier noted, this includes monitoring the effective rate at which application layer packets, such as HTTP packets, are sent from the edge server. Monitoring the effective rate of application layer packets provides an accurate measure of the network performance to the end user while abstracting away the underlying network flow control mechanisms in the protocol stack that regulate that effective rate. For instance, the Transmission Control Protocol (TCP) is a reliable transport protocol that can be used to transfer application layer packets from a source to a destination.
- TCP sends out a first set of packets and awaits acknowledgement of one or more of those packets before sending out any additional packets.
- the underlying TCP controls the effective rate at which application layer packets are sent from the edge server to the end user.
- the effective rate of outgoing packets sent from the edge server to an end user is based on one or more different performance metrics. These performance metrics can include latency, throughput, and packet loss as some examples that collectively can determine the effective rate of transfer. It should be noted that by monitoring the effective rate of the application layer packets, the monitoring agent is able to perform a non-intrusive form of server-side monitoring that obtains real-time performance measurements without injection of any specialized monitoring packets.
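- The following sketch suggests one way the effective Layer 7 send rate could be observed by timing the application's own send calls; the `RateMeter` class and `monitored_send` wrapper are illustrative names, not an implementation described by the patent.

```python
import time

class RateMeter:
    """Track the effective rate at which application-layer bytes leave the
    server, measured around the send call itself rather than by inspecting
    the lower-layer flow control underneath it."""

    def __init__(self):
        self.bytes_sent = 0
        self.started = time.monotonic()

    def record(self, nbytes):
        self.bytes_sent += nbytes

    def effective_rate(self):
        """Bytes per second since monitoring of this flow began."""
        elapsed = time.monotonic() - self.started
        return self.bytes_sent / elapsed if elapsed > 0 else 0.0

def monitored_send(send, chunk, meter):
    # A blocking send returns only after the protocol stack accepts the
    # bytes, so timing around it reflects the pace the network sustains.
    send(chunk)
    meter.record(len(chunk))
```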
- The process then quantifies (at 340) the monitored performance by computing a single score from the various measurements obtained as a result of the monitoring. This may include computing a single score to represent the effective rate of outgoing packets from the edge server to a specific end user over a five second duration. This may also include computing a single score based on throughput, bandwidth, and latency measurements that collectively comprise the effective rate of the outgoing packets.
- The single score is used to reduce the amount of storage that is required to store the performance measurements at the network performance database without losing accuracy of the measurements. In addition to the reduction in storage requirements, the single score reduces the overhead associated with reading and writing the network performance data to the network performance database. Such efficiency is needed in order to support real-time updating of scores when actively monitoring the several thousand end users that may be serviced by a single PoP.
- the process logs (at 350 ) the quantified score in association with the extracted identifier and a timestamp.
- the score, identifier, and timestamp are logged to the network performance database.
- the identifier serves to associate the monitored results or quantified score to a particular end user and more generally, to a geographic region in which the end user associated with the identifier is located and other end users having similar identifiers are located (e.g., IP addresses within the same subnet).
- the timestamp is a freshness indicator that is used to preserve the real-time freshness of the monitored results and used to ensure that outgoing traffic flows are not optimized based on stale performance data. Though process 300 is shown to terminate after step 350 , it is often the case that at least steps 330 - 350 of the process are continually repeated until the outgoing traffic flow being monitored is complete.
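- A minimal stand-in for the logging step (and for the network performance database it writes to) might look like the sketch below; the class name, the single-row-per-identifier layout, and the 10-second time-to-live are assumptions made for illustration.

```python
import time

class ScoreStore:
    """Keep one quantified score per identifier, logged with a timestamp,
    and hand scores back only while they remain fresher than the TTL."""

    def __init__(self, ttl_seconds=10.0):
        self.ttl = ttl_seconds
        self._rows = {}  # identifier -> (score, timestamp)

    def log(self, identifier, score):
        # Overwrite any prior score for this identifier (or region key),
        # keeping storage and read/write overhead to one row per key.
        self._rows[identifier] = (score, time.time())

    def fresh_score(self, identifier):
        row = self._rows.get(identifier)
        if row is None:
            return None
        score, logged_at = row
        return score if (time.time() - logged_at) <= self.ttl else None
```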
- the scores logged to the network performance database are utilized by the edge servers within the same PoP as the monitoring agent to optimize outgoing traffic flows. This promotes the sharing of derived scores between edge servers such that when a network performance score is computed for content that is sent from a first edge server of a PoP to a first end user, that score can be used to optimize content that is sent from a second edge server of the same PoP to the first end user. Also, that same score can be used to optimize content that is sent from the second edge server of the same PoP to a second end user that is in the same geographic region as the first end user with the network path from the second edge server to the second end user being the same or consisting of substantially the same links or hops as the network path from the second edge server to the first end user.
- outgoing traffic flows are pre-optimized and re-optimized, wherein pre-optimization involves optimizing content prior to the first packet of that content being sent from the edge server to an end user, and wherein re-optimization involves optimizing content as it is being sent from the edge server to an end user.
- FIGS. 4 and 5 below describe the modified operation of the CDN edge servers to leverage the logged scores in order to optimize outgoing traffic flows in accordance with some embodiments.
- FIG. 4 presents a process 400 for using the derived server-side measurements (i.e., scores) of the monitoring agent to perform re-optimization by optimizing content as it is being delivered from a server to an end user in real-time in accordance with some embodiments.
- Process 400 can be performed by the same machine performing process 300 when the monitoring agent is integrated as part of the edge server performing content delivery.
- process 400 can be performed by an edge server that is collocated in the same PoP as the machine running the monitoring agent and performing process 300 .
- Process 400 is performed after the edge server has sent at least the first packet for content requested by an end user.
- the process performs (at 410 ) a lookup to the network performance database using the identifier of the requesting end user. This lookup may be performed by the edge server at specified intervals when it is sending content to one or more end users.
- the identifier is typically the IP address assigned to the end user device that submits the content request.
- the edge server will have extracted this identifier from the initial content request of the end user.
- a real-time measurement in the form of a quantified score will exist in the database because the monitoring agent will begin monitoring the server-side performance once the server begins transmitting content to the requesting end user. Accordingly, the process receives (at 420 ) a score quantifying real-time network performance from the edge server to the requesting end user.
- the process checks (at 430 ) the time-to-live parameter for the received score to ensure that the score received during the current pass through process 400 is not stale or one that was previously used. This check can be performed by simply determining if a specified amount of time has passed since the score was logged to the network performance database or by comparing the time-to-live parameter for the current score to one received during a previous pass through process 400 . This latter point is better illustrated with an exemplary reference to a second pass through the process 400 . During the second pass through the process 400 , the process compares the time-to-live parameter for the score received during the second pass with a time-to-live parameter for a score that was received during a first pass.
- In some embodiments, the database itself enforces the freshness of each score (i.e., performance measurement): the database runs a routine to delete, remove, or overwrite any stored scores that exceed their time-to-live parameters, such that all scores stored to the database are ensured to be real-time relevant.
- In such embodiments, the edge server need not perform the real-time relevancy check.
- When the received score is stale, the process skips the optimization and determines (at 460) if the server is continuing to send content to the end user. If not, the process ends. Otherwise, the process reverts to step 410 to perform another lookup to the network performance database for an updated real-time performance score.
- When the received score is not stale, the process compares (at 440) the received score to at least one defined threshold and dynamically optimizes (at 450) the transmission of the content in real-time based on the comparison.
- a first baseline threshold may be defined to determine when the resources needed to deliver the content exceed those that are currently available. When this first baseline threshold is met, the process optimizes the transmission of the content by reducing the resources that are needed to deliver the content to the end user, thereby decreasing the likelihood of packet loss, buffering, and other performance degradations that would hinder the end user experience.
- a second baseline threshold may be defined to determine when there are sufficient unused resources in the network. When this second baseline threshold is met, the process optimizes the transmission of the content by increasing the quality of the content being passed to the end user, thereby providing a richer end user experience. Additional thresholds may be set and compared against to provide a gradual optimization of the content.
- the baseline thresholds are set by the edge server operator or the CDN operator based on expected network performance. For example, an initial set of performance measurements are taken when the network is known to not be congested and these measurements are then set as the baseline values for the thresholds.
- the baseline thresholds are determined from historic performance measurements that the monitoring agent takes based on previous content delivered to one or more end users of a geographic region. For example, a particular end user requests and receives content from a specific PoP of the CDN and the content is delivered with an average latency of 10 ms at 100 kilobits per second. The baseline threshold can then be derived from these averages.
- Common optimization techniques that can be used by the edge server include adaptively increasing or decreasing the bitrate for content being sent to an end user based on different encodings of the same content, increasing or decreasing resolution of the content, increasing or decreasing the amount by which the content being sent is compressed, increasing or decreasing the rate used to send the content, adding or removing objects from the content being sent, or other adjustments to the quality of the content.
- Each such technique alters the amount of bandwidth that is required to send content, thereby enabling content to be delivered faster when there is less bandwidth available and enabling content to be delivered with better quality when there is more bandwidth available.
- each edge server stores different variants of the same content, wherein each variant may include a different bitrate encode, compression level, resolution, or other variant. Also, the edge server may choose to send different versions of the same website (e.g., a full version of a website as compared to a mobile version of the website).
- An ongoing session may include, for example, a media stream that includes streaming or recorded video and/or audio or server-side execution of an application or game, as well as hosting and serving a series of websites or website content that are sequentially or iteratively accessed.
- the process determines (at 460 ) whether the server is still sending content to the end user. If not, the process ends. Otherwise, the process reverts to step 410 .
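- One pass of this re-optimization decision could be sketched as follows; the ordered variant list, the headroom factor, and the function name are illustrative assumptions, and the edge server would invoke something like this at each lookup interval of step 410 while the transfer continues.

```python
def reoptimize_once(score, variants, current, headroom=1.25):
    """One pass of the re-optimization check.

    score    -- latest fresh effective-rate measurement in bytes/sec, or None
    variants -- ordered list of (name, needed_bytes_per_sec), low to high quality
    current  -- index of the variant currently being sent
    Returns the (possibly updated) index of the variant to keep sending.
    """
    if score is None:
        return current                      # stale or missing score: keep sending as-is
    if score < variants[current][1] and current > 0:
        return current - 1                  # measured rate cannot sustain this variant
    if (current + 1 < len(variants)
            and score >= headroom * variants[current + 1][1]):
        return current + 1                  # enough headroom for the next variant up
    return current
```

- With the hypothetical catalog from the pre-optimization sketch above, a measured 40 Kbps (5,000 bytes/sec) steps a 16 Kbps audio stream up to the 32 Kbps variant, while a later drop to 28 Kbps (3,500 bytes/sec) steps it back down, mirroring the audio example discussed later with reference to FIG. 6.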
- the systems and methods perform server-side monitoring to adjust content delivery in real-time, whereby the server-side monitoring is based only on the traffic that is sent from the server to the end user without the need for specialized monitoring packets and without the need for specialized monitoring of the end user response to the outgoing content.
- FIG. 5 presents a process 500 for pre-optimizing content based on the server-side monitoring process described with reference to FIG. 3 in accordance with some embodiments, whereby content is optimized prior to the first packet of such content being sent.
- Process 500 can be performed by the same machine performing process 300 when the monitoring agent is integrated as part of the edge server performing content delivery.
- process 500 can be performed by an edge server that is collocated in the same PoP as the machine running the monitoring agent.
- Process 500 is performed by an edge server whenever the edge server receives a request to initiate the delivery of content to an end user and prior to dissemination of the first packet of the requested content. Accordingly, the process begins by receiving (at 510 ) a request for content. The following steps of process 500 can be performed in parallel with the server processing the request in order to identify where the requested content is stored (e.g., in cache, on disk, at a remote origin server, etc.).
- the process parses (at 520 ) the request to extract an identifier that identifies the end user submitting the request.
- the identifier is the IP address assigned to the end user device submitting the request.
- the identifier additionally or alternatively includes an AS number, user agent, etc.
- the process performs (at 530 ) a lookup to the network performance database.
- the lookup identifies any measurements or scores that are derived to measure the network performance to the end user identified by the extracted identifier.
- the lookup also identifies any measurements/scores that are derived for other end users that are related to the requesting end user.
- the relation between end users is determined from the IP addressing that is assigned to the end user devices. For instance, blocks of IP addresses are normally assigned to devices that are geographically proximate to one another. Such IP address blocks are assigned by Internet Service Providers (ISPs) to end users operating within the same or proximate network access service areas.
- the relation between end users is determined based on AS number. End users that are routed from the same autonomous system normally gain access through the same network access service area.
- the process obtains (at 540 ) one or more scores quantifying network performance measurements from the network performance database.
- the process filters (at 550 ) the scores based on freshness as determined from the timestamp associated with each score and the specified time-to-live parameters. This ensures that the pre-optimization of the traffic flows is based on real-time data whether such data is derived for the end user that is to receive the requested content or for other related end users that are within the same geographic region as the end user that is to receive the requested content.
- the filtered scores are then used to optimize (at 560 ) the delivery of the requested content prior to the first packet of the requested content being sent. Optimization is based on comparing the filtered scores to one or more specified thresholds. The relative comparison of the filtered scores to the specified thresholds determines if the network is congested or otherwise underperforming such that the bandwidth requirements for the content to be delivered should be reduced or if the network has available bandwidth that can support higher quality variants of the requested content. As earlier noted, different content delivery optimizations can be made based on the type of the requested content. For media content, the process can select one of several encodings of the media content based on the filtered scores. The server can then send the selected optimized encoding without having to obtain a measurement directly from the specific end user before beginning the transmission.
- For image content, the process can select one of several resolutions or levels of compression for the image based on the filtered scores. For other content, the process can select whether to send a full copy of the content, a compressed version of the content, or an incomplete set of the content with extraneous objects omitted to conserve bandwidth based on the filtered scores. As a result of this pre-optimization, end users are less likely to experience buffering when starting playback of media content and are less likely to experience changes in quality at the start of playing media content.
- the process begins monitoring (at 570 ) the real-time performance of the network, derives (at 580 ) updated real-time scores to quantify the network performance to the actual end user that receives the content, and re-optimizes (at 590 ) the outgoing traffic flow based on the updated real-time scores as per the process 400 described above with reference to FIG. 4 .
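- The lookup-and-filter portion of this pre-optimization flow (roughly steps 530 through 550) might be sketched as below; the tuple layout of the database rows, the helper names, and the choice to optimize against the weakest fresh measurement are assumptions made for illustration.

```python
import time

def fresh_scores_for(rows, identifier, region, ttl_seconds=10.0, now=None):
    """Return fresh candidate scores for a requesting end user: its own
    logged scores plus those of related end users sharing the same region
    key, discarding anything older than the time-to-live.

    rows -- iterable of (identifier, region_key, score, timestamp) tuples,
            standing in for the network performance database.
    """
    now = time.time() if now is None else now
    return [score for ident, reg, score, logged_at in rows
            if (ident == identifier or reg == region)
            and (now - logged_at) <= ttl_seconds]

def pick_baseline(scores):
    # Conservative choice: pre-optimize against the weakest fresh measurement.
    return min(scores) if scores else None
```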
- In this manner, server-side monitoring is used to optimize outgoing traffic flows from the beginning to the end of each traffic flow.
- Such server-side monitoring is non-intrusive in that the monitoring is performed without the introduction of specialized monitoring packets, by basing the monitoring solely on the content that is requested and sent from the server to an end user or to other end users related to the requesting end user.
- FIG. 6 presents a message exchange diagram to summarize traffic flow optimization using the localized and real-time server-side monitoring systems and methods in accordance with some embodiments.
- the figure illustrates a PoP 610 of a distributed platform that is tasked with delivering content to end users 633 and 636 that are located in region 630 .
- the PoP 610 includes a monitoring agent 620 , a first edge server 623 , and a second edge server 626 .
- the diagram commences with the first edge server 623 sending (at 640 ) content to the first end user 633 .
- the monitoring agent 620 performs server-side monitoring of the first edge server 623 by continually monitoring (at 643 ) the outgoing application layer packets that are sent from the first edge server 623 to the first end user 633 .
- the monitoring agent 620 computes (at 646 ) one or more scores to quantify the network performance based on the monitoring of the outgoing packets.
- the scores provide a collective quantification for the performance of the network links connecting the first edge server 623 to the first end user 633 and more generally, for the performance of the network links connecting the PoP 610 to the geographic region 630 .
- content that is sent to any end user in the geographic region 630 will have to traverse the same network links as the content being sent from the first edge server 623 to the first end user 633 such that the computed network performance scores have application not only to the first end user 633 , but any end user operating in that region 630 .
- The first edge server 623 obtains (at 650) the computed scores from the monitoring agent 620 or an associated database. Filtering of these scores is not necessary when the scores were computed in real-time, which may be indicated using a flag or other metadata, or when the database automatically removes or overwrites scores that have exceeded a specified time-to-live. If filtering is to be performed, a comparison of a score's timestamp to a time-to-live parameter will reveal whether the score is usable or is stale and should be discarded.
- the first edge server 623 uses the real-time relevant scores to optimize the sending of the content to the end user 633 .
- the first edge server 623 compares the scores against one or more baseline thresholds to determine if the network is congested such that the quality of the content being sent has to be reduced in order to preserve bandwidth or if there is available bandwidth that can be used to improve the quality of the content being sent.
- the first edge server 623 selects (at 653 ) a different variant for the content being sent and the first edge server 623 resumes (at 656 ) sending the remainder of the content based on the newly selected variant.
- For example, the first edge server 623 may have previously selected a 16 Kbps encoding of an audio stream, but the derived performance measurements reveal an effective transfer rate of 40 Kbps to the first end user 633.
- The first edge server 623 can then select a different variant of the audio stream that is encoded at 32 Kbps and resume sending that higher quality stream to the first end user 633 while staying within the limits of the network and improving the end user experience. Further re-optimizations may be made during the continued transfer of the content.
- the second edge server 626 receives a request for content from the second end user 636 .
- the second edge server 626 extracts (at 663 ) one or more identifiers from the request. These identifiers identify the geographic region in which the second end user 636 is located. More specifically, the identifier may be an IP address that can be mapped to a particular ISP and ultimately, to the specific region 630 serviced by that ISP. Similarly, the identifier may be an Autonomous System (AS) number that identifies the specific region 630 in which the second end user 636 is located. Further still, the identifier may be mapped to a subnet that identifies a geographic region.
- the second edge server 626 queries (at 666 ) the monitoring agent 620 or an associated database based on the one or more extracted identifiers to obtain any performance scores quantifying network performance to the identified service region 630 .
- the monitoring agent 620 has recently computed scores quantifying the network performance from the first edge server 623 to the first end user 633 . Since the first end user 633 and the second end user 636 are located in the same geographic region, the network links from the PoP 610 to each of the end users 633 and 636 will be substantially the same, therefore enabling the scores that were derived for the content sent to the first end user 633 to be used for pre-optimizing the content that is to be sent to the second end user 636 .
- the second edge server 626 obtains (at 670 ) the scores and filters (at 673 ) the scores for real-time relevancy. This includes discarding any scores that have an associated timestamp that is exceeds a specified time-to-live parameter. The second edge server 626 then selects (at 676 ) a variant of the requested content based on the filtered scores and begins sending (at 680 ) the selected variant to the second end user 636 . In this manner, the content requested by the second end user 636 is pre-optimized based on scores quantifying network performance of content delivered to different end users in the same geographic region as the second end user 636 .
- the monitoring agent 620 monitors (at 683 ) the outgoing application layer packets and computes a score (at 686 ) to quantify the network performance from the second edge server 626 to the second end user 636 .
- the newly computed score is obtained (at 690 ) by the second edge server 626 and used to optimize (at 693 ) the content being sent to the second end user by selecting another variant of the content when necessary.
- the second edge server 626 may begin sending the audio stream at 32 Kbps based on the earlier measurements derived for content delivery to the first end user 633 .
- new measurements taken while delivering the requested content to the second end user 636 may reveal that the network condition has degraded to an effective transfer rate of 28 Kbps, in which case the second edge server 626 re-optimizes the audio stream and selects the lower quality 16 Kbps variant.
- FIG. 7 conceptually illustrates the localized and real-time server-side performance monitoring system operating in the context of a wireless data network.
- the wireless data network includes wireless nodes 710 and 715 that produce wireless service regions 720 and 725 . These wireless nodes 710 and 715 include one or more cellular towers and connecting base stations. For a Universal Mobile Telecommunications System (UMTS) data network, the wireless nodes 710 and 715 include one or more Node-Bs and one or more Radio Network Controllers (RNCs) connecting the service regions 720 and 725 to a core network of the wireless service provider. End user subscribers located within the service regions 720 and 725 can use their wireless devices to send and receive content from an external data network, such as the Internet.
- the core network 730 may include one or more Serving GPRS Support Nodes (SGSNs) and one or more Gateway GPRS Support Nodes (GGSNs). Though only two wireless nodes 710 and 715 are shown, the core network 730 can connect several additional wireless nodes to the external data network and can thus experience large traffic loads.
- the PoP 740 is ideally positioned to optimize the traffic flows passing between the core network 730 and the external data network.
- the PoP 740 includes a set of edge servers 750 and a monitoring agent 760 .
- the monitoring agent 760 monitors the outgoing traffic flows from the PoP 740 to any end user in the service regions 720 and 725 and the monitoring agent 760 computes scores quantifying network performance to these service regions 720 and 725 .
- the set of edge servers 750 can then optimize the content that they send to any end user in these service regions 720 and 725 irrespective of whether the scores were computed for that end user or a different end user, because the network links connecting the PoP 740 to the service regions 720 and 725 will remain the same. In this manner, the PoP 740 sends continually optimized content that is adjusted based on real-time network conditions of the service regions 720 and 725 . Moreover, such monitoring and optimization of the wireless service regions 720 and 725 occurs without introducing any monitoring packets or other packets beyond those satisfying content requests of the end users.
- the set of edge servers 750 can select a variant of requested content that minimizes the bandwidth required to send that content to the service region 720 .
- identifying end users in the service region 720 can be predicated based on IP addresses, subnets, or AS numbers.
- the PoP itself is limited to servicing end users within one or more neighboring service regions such that if one service region is congested, then it is likely that the neighboring service regions are also subject to the same congestion.
- the PoP 740 can adjust its traffic flows to the service regions 720 and 725 in real-time by decreasing the bandwidth required for those traffic flows to ensure that all end users receive an uninterrupted experience. Conversely, when either service region 720 or 725 experiences low loads, the PoP 740 can adjust its traffic flows to the service regions 720 and 725 in real-time by increasing the bandwidth required for those traffic flows to provide the end users with a richer experience.
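One way to realize such real-time bandwidth adjustment is server-side throttling of the send rate, as in the following sketch; the socket-like send interface, chunk size, and rates are assumptions for the example rather than part of the embodiments.

```python
import time

class ThrottledSender:
    """Pace outgoing writes so they do not exceed a target rate.

    The target rate (in bytes per second) can be raised when the service
    region is lightly loaded and lowered when the region is congested,
    mirroring the real-time adjustments described above.
    """

    def __init__(self, send_func, rate_bytes_per_sec):
        self.send = send_func
        self.rate = rate_bytes_per_sec

    def set_rate(self, rate_bytes_per_sec):
        # Called when a new regional performance score warrants a change.
        self.rate = rate_bytes_per_sec

    def send_paced(self, payload, chunk_size=4096):
        for offset in range(0, len(payload), chunk_size):
            chunk = payload[offset:offset + chunk_size]
            self.send(chunk)
            # Sleep long enough that the average rate stays at the target.
            time.sleep(len(chunk) / float(self.rate))

# Example with a stand-in send function; a real edge server would write
# to the client connection instead.
sent = []
sender = ThrottledSender(sent.append, rate_bytes_per_sec=64_000)
sender.send_paced(b"x" * 16_000)   # paced at roughly 64 KB/s
sender.set_rate(32_000)            # region became congested; slow down
sender.send_paced(b"x" * 8_000)
print(len(sent), "chunks sent")
```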
- Many of the processes and components described above are implemented as software processes that are specified as a set of instructions recorded on a non-transitory computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more computational element(s) (such as processors or other computational elements like ASICs and FPGAs), they cause the computational element(s) to perform the actions indicated in the instructions.
- Server, computer, and computing machine are meant in their broadest sense and may include any electronic device with a processor that executes instructions stored on computer-readable media or that are obtained remotely over a network connection. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc.
- Where a server is identified as a component of the embodied invention, it is understood that the server may be a single physical machine, a cluster of multiple physical machines performing related functions, virtualized servers co-resident on a single physical machine, or various combinations of the above.
- FIG. 8 illustrates a computer system or server with which some embodiments are implemented.
- a computer system includes various types of computer-readable media and interfaces for various other types of computer-readable media that implement the server-side monitoring systems and methods (i.e., monitoring agent, edge server, edge server enhanced with a monitoring agent, etc.) described above.
- Computer system 800 includes a bus 805 , a processor 810 , a system memory 815 , a read-only memory 820 , a permanent storage device 825 , input devices 830 , and output devices 835 .
- the bus 805 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 800 .
- the bus 805 communicatively connects the processor 810 with the read-only memory 820 , the system memory 815 , and the permanent storage device 825 . From these various memory units, the processor 810 retrieves instructions to execute and data to process in order to execute the processes of the invention.
- the processor 810 is a processing device such as a central processing unit, integrated circuit, graphical processing unit, etc.
- the read-only-memory (ROM) 820 stores static data and instructions that are needed by the processor 810 and other modules of the computer system.
- the permanent storage device 825 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 800 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 825 .
- the system memory 815 is a read-and-write memory device. However, unlike the storage device 825 , the system memory is a volatile read-and-write memory, such as random access memory (RAM).
- the system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes are stored in the system memory 815 , the permanent storage device 825 , and/or the read-only memory 820 .
- the bus 805 also connects to the input and output devices 830 and 835 .
- the input devices enable the user to communicate information and select commands to the computer system.
- the input devices 830 include, but are not limited to, alphanumeric keypads (including physical keyboards and touchscreen keyboards) and pointing devices (also called “cursor control devices”).
- the input devices 830 also include, but are not limited to, audio input devices (e.g., microphones, MIDI musical instruments, etc.).
- the output devices 835 display images generated by the computer system.
- the output devices include, but are not limited to, printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD).
- bus 805 also couples computer 800 to a network 865 through a network adapter (not shown).
- the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet.
- the computer system 800 may include one or more of a variety of different computer-readable media.
- Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, ZIP® disks, read-only and recordable blu-ray discs, any other optical or magnetic media, and floppy disks.
Abstract
Description
- The present invention relates to monitoring network performance and, more specifically, to performing localized server-side performance monitoring in a content delivery network.
- The data networks collectively forming the Internet are becoming or already are the primary means for communication, commerce, as well as accessing news, music, videos, applications, games, and other content. However at times, access to such content is delayed as a result of over-loaded links, downed links, limited bandwidth, congestion, or other lack of resources in the intervening links between a source providing the content and a destination requesting and receiving the content.
- Contributing to this slowdown are increasing numbers of users having numerous network enabled devices (e.g., desktops, laptops, tablets, smartphones, etc.), each of which are provided ever faster interfaces with which to consume content. Also contributing to the slowdown are increasing amounts of new and/or feature-rich content that requires greater bandwidth for delivery. In other words, there is both an increase in the demand for content as well as an increase in supply of consumable content.
- To counteract this slowdown, network operators have deployed data networks having greater bandwidth as well as more powerful and/or efficient networking resources. This is nowhere more evident than in the rapid evolution of cellular data networks. Within a relatively short time frame, these data networks have evolved from 2G (e.g., General Packet Radio Service (GPRS) and Enhanced Data Rates for GSM Evolution (EDGE)), to 3G (e.g., High Speed Packet Access (HSPA)), to the current 4G (e.g., Long Term Evolution (LTE)) data networks. Still, there is a need to deliver content more efficiently: the supply of and demand for content outpaces network evolution, and the exorbitant cost of continual network evolution has slowed that evolution relative to the growth in the supply of and demand for content. To that end, content delivery networks (CDNs) have been deployed throughout the Internet infrastructure.
- A CDN accelerates the delivery of content by reducing the distance that content travels in order to reach a destination. The CDN strategically locates surrogate origin servers, also referred to as caching servers or edge servers, at various points-of-presence (PoPs) that are geographically proximate to large numbers of content consumers. The CDN then utilizes a traffic management system to route requests for content hosted by the CDN to the edge server that can optimally deliver the requested content to the content consumer. As used hereafter optimal delivery of content refers to the most efficient available means with which content can be delivered from a server to an end user machine over a data network. Optimal delivery of content can be quantified in terms of latency, jitter, packet loss, distance, and overall end user experience.
- Determination of the optimal edge server may be based on geographic proximity to the content consumer as well as other factors such as load, capacity, and responsiveness of the edge servers. The optimal edge server delivers the requested content to the content consumer in a manner that is more efficient than when origin servers of the content provider deliver the requested content. For example, a CDN may locate edge servers in Los Angeles, Dallas, and New York. These edge servers may cache content that is published by a particular content provider with an origin server in Miami. When a content consumer in San Francisco submits a request for the published content, the CDN will deliver the content from the Los Angeles edge server on behalf of the content provider as opposed to the much greater distance that would be required when delivering the content from the origin server in Miami. In this manner, the CDN reduces the latency, jitter, and amount of buffering that is experienced by the content consumer.
- The edge server can further improve on the end user experience by adaptively adjusting the content that is being delivered to the end user. This may include reducing the bitrate of a media stream (e.g., video) being delivered to an end user when the path to the end user is congested or the performance of the path is otherwise degraded. In so doing, a lower quality stream is delivered to the end user. The lower quality stream ensures that the end user enjoys an uninterrupted experience (by avoiding dropped frames, repeated buffering, etc.). The bitrate can be increased in order to deliver a higher quality stream to the end user when the path from the edge server to end user becomes less congested. Similar adaptive techniques are applicable to other forms of content besides media content (e.g., music and video). For instance, the edge server can further improve the end user experience by adaptively scaling images. Here again, when the path to the end user is congested or otherwise limited, the edge server can improve the end user experience by passing a lower resolution copy or more compressed version of a requested image to the end user, thereby enabling the end user to receive the image quicker than if a higher resolution copy or less compressed version of the requested image were to be passed. Further still, the edge server can improve the end user experience using server-side bandwidth throttling, whereby the server throttles or slows the rate at which it sends content beyond ordinary flow control mechanisms in the protocol stack or in the data network.
- To facilitate any form of adaptive content delivery or server-side bandwidth throttling, the CDN edge server needs to be aware of the performance of the underlying data network that links the edge server to the various end users. CDNs either utilize existing network performance monitoring tools or have developed their own systems and methods in order to monitor network performance.
- One such network performance monitoring tool is the Keynote system. The Keynote system involves deployment of various agents across the Internet. The agents emulate end users and periodically request (e.g., every ten minutes) and download content from one or more of the CDN edge servers. The Keynote system agents then measure various metrics related to the delivery of that content. However, such systems do not provide accurate performance measurements because the agents do not request and download content from the same network locations as the actual end users. As a result, the performance measurements obtained from the Keynote system do not accurately reflect the network performance that end users experience. More specifically, the Keynote system is unable to measure performance along all network links connecting the end users to the CDN edge servers. Also, such a system does not provide real-time measurements. For instance, the network measurements can be up to 9 minutes and 59 seconds stale when measurements are taken every 10 minutes. Lastly, the system injects additional traffic into the network. This additional traffic is manifested in the form of the requests that are issued by the agents to the edge servers and the responses that the edge servers issue in turn to the system agents. This additional traffic adds to the traffic that is actually requested by and delivered to various end users. The result is increased network congestion and increased load on the edge servers which now have to respond to the monitoring agents in addition to the requests that are submitted by various end users. In other words, specialized packets are injected in the network for the sole purpose of performance monitoring.
- One method to improve upon the accuracy of the Keynote system is to take measurements directly from the end users that request content from the CDN. This also involves injecting additional traffic into the network. For example, an end user may be pinged by sending one or more Internet Control Message Protocol (ICMP) packets to determine a round-trip time to the end user. Such techniques, while accurate in the resulting measurements, add overhead at the server performing the measurements as well as additional traffic load on the data network. While a single ICMP packet is insignificant in consuming server resources and slowing down a network, thousands of such packets continually being sent out from multiple monitoring points (e.g., edge servers) can result in a measurable amount of performance degradation. Moreover, these measurements suffer from staleness as they are often conducted on a periodic basis. Such measurements can be taken in real-time, for example by pinging the end user before responding to a request for content, but doing so introduces unnecessary delay in actually responding to the end user.
- Still, some CDNs and network performance monitoring tools have resorted to using so-called “client-side” techniques. These techniques usually involve end users performing measurements for the benefit of the CDN or monitoring tool. The CDN may inject a script or set of instructions in the content that is delivered to the end users. The script or set of instructions causes the end users to measure the performance relating to the receipt of content from the CDN, whether that content is the content requested by the end users or some token object. The script or set of instructions then causes the end users to report those measurements to the CDN or the monitoring tool. Such techniques may be performed covertly without the end users' knowledge, thereby surfacing issues related to privacy and trust. When end users are made aware of such techniques, most disapprove or disallow execution on their devices, as they do not want any unnecessary software running on their devices, especially when such software is executed for the benefit of some third party.
- Accordingly, there is a need for improved systems and methods with which to monitor network performance. There is a need to conduct such monitoring based on existing traffic flows without the introduction of additional traffic, wherein such additional traffic is for the purpose of facilitating network performance monitoring. There is a need to perform such monitoring in real-time without sacrificing accuracy in measuring performance to the end user. Moreover, such monitoring should be based on “server-side” techniques that allow such monitoring to occur without active involvement of the end user. There is also a need to leverage the results from such monitoring in order to further optimize content delivery as provided by a content delivery network.
- It is an object of the embodiments described herein to provide systems and methods for performing localized and real-time server-side network performance monitoring. It is further an object for these systems and methods to leverage the distributed architecture of a content delivery network (CDN) so as to perform distributed monitoring with each Point-of-Presence (PoP) of the CDN responsible for monitoring performance to a localized set of end users. It is further an object for these systems and methods to leverage existing traffic flows from a server to a particular end user in order to perform real-time server-side network performance monitoring without the injection of specialized monitoring packets and without active involvement of the end user in deriving the performance measurements. It is further an object to utilize the performance measurements to optimize delivery of existing and future traffic flows to the end user.
- The infrastructure of a distributed platform, such as a CDN, provides the ideal deployment of servers to achieve these and other objects. The CDN includes various PoPs having one or more edge servers. The edge servers of each PoP are proximally located to end users of one or more specific geographic regions. Content requests originating from a geographic region are typically resolved to the PoP that is proximate to that geographic region, thus enabling the edge servers of that PoP to deliver the requested content to the end users originating the content requests.
- To leverage the distributed architecture of the CDN, some embodiments enhance each PoP of the CDN with at least one monitoring agent and a database. The monitoring agent measures the performance that is associated with delivering content from one or more edge servers of the PoP to various end users that are routed to that PoP. In some embodiments, the monitoring agent measures outgoing traffic flows at the applications layer (i.e., Layer 7) so as to measure the effective rate at which the content is sent while obfuscating the underlying lower layer flow control mechanisms. The measurements are real-time and accurately reflect performance experienced by the end user by virtue of the measurements being taken as content is transferred from the edge server to the end user. Measurement accuracy is further realized based on the geographic proximity of the PoP to the end user. This proximity eliminates many of the links or hops that act as variables affecting network performance along the network path connecting the edge server to the end user. This proximity also allows measurements that were taken for a first end user in a geographic region to be overwritten by measurements that are taken for a second end user in the geographic region without loss of accuracy. This is a result of the localization of the PoP to one or more proximal geographic regions which causes content to traverse substantially all if not all of the same network links or hops in order to reach the end users of a particular geographic region.
- The monitoring agent stores the derived measurements to the database. The measurements stored to the database are then made accessible to each edge server operating within the same PoP as the database. The edge servers use the measurements to then optimize outgoing traffic flows.
- In some embodiments, optimization of a traffic flow involves pre-optimization and/or re-optimization of the traffic flow. An edge server performs pre-optimization of a traffic flow to a first end user based on measurements taken for a second end user that is within the same geographic region as the first end user when no prior measurements have been taken for the first end user or when the prior measurements taken for the first end user have exceeded a specified time-to-live. Since the edge servers of a PoP are rarely all idle at the same time, there will be at least one real-time measurement to one end user of a specific geographic region that can be used to pre-optimize traffic flows for other end users within that specific geographic region. Once the pre-optimized traffic flow to the first end user begins, the monitoring agent monitors the outgoing traffic flow to the first end user and the edge server re-optimizes the traffic flow based on measurements the monitoring agent derives for the first end user.
- In some embodiments, optimization (e.g., pre-optimization and re-optimization) of a traffic flow involves selecting an encoding, bitrate, compression, file size, or other variant of content. Optimization may also involve server-side bandwidth throttling.
- In some embodiments, optimization is performed by comparing the real-time measurements against one or more established thresholds. When a real-time measurement surpasses a first threshold, the quality of the content being delivered may be lowered in order to accommodate for worsened network conditions. Alternatively, when a real-time measurement surpasses a second threshold, the quality of the content being delivered may be improved in order to accommodate for better network conditions. The thresholds can be based on an expected set of results or on previously logged performance measurements. The expected set of results may include expected transfer rates that are determined based on the network provider the end user uses to access the CDN. For instance, when the enhanced edge server is deployed as part of a cellular data network, the expected transfer rates may be those that an end user is likely to receive during ordinary loads when connected to a tower of the cellular data network, a wireless ISP, or an overloaded broadband network. The expected transfer rates may also be partly or wholly determined based on the geographic location of the end user.
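An illustrative sketch of the threshold comparison is given below; the threshold values are placeholders standing in for the expected or previously logged transfer rates discussed above.

```python
# Placeholder thresholds expressed as effective transfer rates in Kbps.
CONGESTION_THRESHOLD_KBPS = 200   # below this, reduce quality
HEADROOM_THRESHOLD_KBPS = 1000    # above this, increase quality

def quality_adjustment(measured_rate_kbps,
                       low=CONGESTION_THRESHOLD_KBPS,
                       high=HEADROOM_THRESHOLD_KBPS):
    """Map a real-time measurement to a quality decision.

    Crossing the first threshold indicates congestion and lowers quality;
    crossing the second indicates unused bandwidth and raises quality;
    otherwise the current variant is kept.
    """
    if measured_rate_kbps < low:
        return "decrease"
    if measured_rate_kbps > high:
        return "increase"
    return "keep"

print(quality_adjustment(150))   # decrease
print(quality_adjustment(500))   # keep
print(quality_adjustment(2500))  # increase
```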
- In order to achieve a better understanding of the nature of the present invention, preferred embodiments for the localized and real-time server-side network performance monitoring systems and methods will now be described, by way of example only, with reference to the accompanying drawings in which:
- FIG. 1 presents an exemplary CDN infrastructure that includes a distributed set of edge servers, traffic management servers, and an administrative server.
- FIG. 2 illustrates the enhancements to a PoP of a distributed platform that enable localized and real-time server-side performance monitoring in accordance with some embodiments.
- FIG. 3 presents a process performed by the monitoring agent to monitor network performance from an edge server to a particular end user in accordance with some embodiments.
- FIG. 4 presents a process for using the derived server-side measurements (i.e., scores) of the monitoring agent to perform re-optimization by optimizing content as it is being delivered from a server to an end user in real-time in accordance with some embodiments.
- FIG. 5 presents a process for pre-optimizing content based on the server-side monitoring process described with reference to FIG. 3 in accordance with some embodiments, whereby content is optimized prior to the first packet of such content being sent.
- FIG. 6 presents a message exchange diagram to summarize traffic flow optimization using the localized and real-time server-side monitoring systems and methods in accordance with some embodiments.
- FIG. 7 conceptually illustrates the localized and real-time server-side performance monitoring system operating in the context of a wireless data network.
- FIG. 8 illustrates a computer system or server with which some embodiments are implemented.
- In the following detailed description, numerous details, examples, and embodiments for the localized and real-time server-side network performance monitoring systems and methods are set forth and described. As one skilled in the art would understand in light of the present description, the systems and methods are not limited to the embodiments set forth, and the systems and methods may be practiced without some of the specific details and examples discussed. Also, reference is made to the accompanying figures, which illustrate specific embodiments in which the systems and methods can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments herein described.
- I. Overview
- The embodiments set forth herein provide localized and real-time server-side network performance monitoring systems and methods. Various advantages of these systems and methods are achieved by leveraging the distributed architecture of a content delivery network (CDN), namely the distributed allocation of edge servers of the CDN. Thus, to aid in the discussion that is to follow, an introduction to the distributed architecture of a typical CDN is now provided.
- FIG. 1 presents an exemplary CDN infrastructure that includes a distributed set of edge servers 110, traffic management servers 120, and an administrative server 130. The figure also illustrates the interactions that CDN customers including content providers have with the CDN and the interactions that content consumers or end users have with the CDN.
- Each edge server of the set of edge servers 110 may represent a single physical machine or a cluster of machines that serves content on behalf of different content providers to end users. The cluster of machines may include a server farm for a geographically proximate set of physically separate machines or a set of virtual machines that execute over partitioned sets of resources of one or more physically separate machines. The set of edge servers 110 are distributed across different edge regions of the Internet to facilitate the “last mile” delivery of content. Each cluster of servers at a particular region may represent a point-of-presence (PoP) of the CDN, wherein an end user is typically routed to the closest PoP in order to download content from the CDN. In this manner, content traverses fewer hops before arriving at the end user, thereby resulting in less latency and an improved overall end user experience.
- The traffic management servers 120 route end users, and more specifically, end user issued requests for content to the one or more edge servers 110. Different CDN implementations utilize different traffic management schemes to achieve such routing to the optimal edge server. As one example, the traffic management scheme performs Anycast routing to identify a server from the set of servers 110 that can optimally serve requested content to a particular end user requesting the content. It should be apparent that the traffic management servers 120 can include different combinations of Domain Name System (DNS) servers, load balancers, and routers performing Anycast or Border Gateway Protocol (BGP) routing.
- The administrative server 130 may include a central server of the CDN or a distributed set of interoperating servers that perform the configuration control and reporting functionality of the CDN. Content providers register with the administrative server 130 in order to access services and functionality of the CDN. Once registered, content providers can interface with the administrative server 130 to specify a configuration, upload content, and view performance reports. The administrative server 130 also aggregates statistics data from each server of the set of edge servers 110 and processes the statistics to produce usage and performance reports.
- The systems and methods advocated herein leverage the distributed allocation of PoPs in order to decentralize the task of monitoring all end users interfacing with the CDN and to localize the monitoring on a per PoP or per geographic regional basis. In so doing, each PoP is locally responsible for obtaining and updating performance measurements for those end users that are serviced by that PoP. This greatly reduces the number of end users that any given PoP monitors. This also eliminates the taking of redundant measurements, whereby two or more endpoints from within the CDN are used to monitor a single end user endpoint. Moreover, the monitoring is performed over the actual pathways that connect the end users to the CDN, thereby accurately measuring the performance that the end users experience.
- To further reduce the overhead on each PoP when deriving the localized network performance measurements, the systems and methods utilize server-side monitoring techniques that derive network performance measurements based on existing traffic flows from an edge server to a particular end user. These server-side techniques do not involve the injection of any additional traffic beyond that which is requested and delivered to the end users. Such server-side techniques are also able to derive performance measurements without requiring active interaction with the end user. Furthermore, real-time monitoring is achieved as a result of monitoring the outbound traffic flows as they are sent.
- To implement such localized and real-time network performance monitoring systems and methods, some embodiments, incorporate at least one monitoring agent and at least one database to each PoP of the distributed platform. The same monitoring agent therefore performs server-side performance monitoring for each server of the PoP. In some other embodiments, at least one edge server at each PoP of the CDN is enhanced with a monitoring agent. In this configuration, the monitoring agent performs server-side performance monitoring of the content that is sent by the enhanced server to any end user.
- In some embodiments, the monitoring agent monitors traffic flows from an edge server at the applications layer (i.e., Layer 7) of the Open Systems Interconnect (OSI) model. Monitoring at the applications layer obfuscates the lower layer flow controls while still allowing the monitoring agent to obtain an effective server-side transfer rate for the content exiting the edge server.
- In some embodiments, the performance measurements obtained for a specific end user are quantified into a single metric, such as a numeric score. The measurements or scores are then used to optimize traffic flows that are disseminated to the end users that are serviced by the PoP from which the measurements are taken.
- Traffic flow optimization involves adjusting the content that is delivered on the basis of the current network conditions as reflected in the real-time performance measurements. Depending on the type of content, optimization includes selection of an encoding, bitrate, compression, file size, or other variant of the content. In so doing, the bandwidth required to transfer the content from the server to an end user can be adjusted to accommodate measured changes in the performance of the data network over which the content is passed. Optimization may also involve server-side bandwidth throttling. Optimization ensures that end users receive a seamless experience irrespective of the real-time performance of the data network.
- Some embodiments support pre-optimization and re-optimization of a traffic flow (i.e., delivery of content). Pre-optimization involves optimizing content prior to the first packet of the content being sent. This ensures a seamless and optimized end user experience from the start which is in contrast to many existing adaptive streaming techniques that start with a high quality setting for the content and then scale back the quality setting for the content based on subsequently measured network performance parameters. Conversely, some adaptive streaming techniques start with a low quality setting for the content and then gradually scale the quality up until it meets the available bandwidth. In any case, current adaptive streaming techniques do not involve pre-optimization.
- Pre-optimization is based on a prior measurement of network performance. The prior measurement may have been taken for the same end user that is to receive the content or for another end user that is within the same geographic region as the end user that is to receive the content. Pre-optimization may be conducted using a measurement that is taken for a different end user, because that end user will be in the same geographic region as the one receiving the content based on the above described partitioning of end users through the distributed allocation of PoPs. As a result, the network path from the PoP or more specifically, an edge server in the PoP, to any end user within the same geographic region will be substantially the same, if not exactly the same. In other words, the network path will consist of nearly all or all of the same links or hops that must be traversed in order to deliver the content from the edge server to any end user within the same geographic region. Accordingly, a measurement taken for a first end user in the geographic region will accurately reflect the performance that a second end user in the same geographic region will experience when receiving content from the same PoP of the CDN. The prior measurement is compared to one or more specified performance thresholds. This comparison determines how to optimize the content before sending the content to the requesting end user. For example, when the comparison reveals that the network path is congested, the edge server can optimize the content by selecting a variant of the content that requires less bandwidth to deliver, wherein the selected variant can include higher compression, lower bitrate encoding, and lower resolution as some examples. Once an optimized variant of the content is selected, the transmission of the content to the requesting end user can begin.
- Re-optimization involves real-time optimization of content or optimizing content as it is sent. In some embodiments, the monitoring agent begins monitoring a traffic flow once an edge server begins transmitting content to an end user. The monitoring agent takes real-time measurements of the outgoing traffic flow. The edge server obtains the real-time measurements and processes them in order to determine how to optimize the outgoing traffic flow as it is being sent. Specifically, the obtained measurements are compared against one or more specified performance thresholds. The content is then optimized as necessary by continuing to send the same variant of the content or by selecting different variants (e.g., compression, encoding, resolution, etc.) of the content to send. As with pre-optimization, the re-optimization techniques set forth herein differ from those implemented by existing adaptive streaming techniques in that the re-optimization of some embodiments does not involve any end user feedback or the introduction of any specialized packets. Rather, re-optimization is wholly performed based on the rate at which an edge server sends packets.
- While the presented systems and methods are applicable to any data network, they are especially well-suited for optimizing traffic flows sent over lossy networks. Lossy networks are those networks that experience high latency and high amounts of packet loss. Data networks operated by wireless service providers, such as 3G and 4G data networks of Verizon, AT&T, and Sprint are examples of some such lossy networks.
- For exemplary purposes and for purposes of simplicity, the localized and real-time server-side performance monitoring systems and methods are described with reference to components of a CDN. However, these systems and methods are similarly applicable to any server that hosts and delivers content to a set of end users irrespective of whether the server operates as part of a CDN. Therefore, the systems and methods described herein are not limited solely to implementation in a CDN, though the distributed platform of the CDN is discussed as a platform to maximize the benefits of the systems and methods.
- II. Server-Side Monitoring
- By leveraging the deployed distributed infrastructure of a distributed platform (e.g., a CDN), the localized and real-time server-side monitoring systems and methods can be embedded within such a distributed platform with minimal modification to the existing infrastructure. Specifically, the systems and methods can be implemented by incorporating at least one monitoring agent and at least one database to each PoP of the distributed platform and by minimally modifying operation of one or more edge servers of each particular PoP to optimize their outgoing traffic flows based on network performance measurements that are derived by the monitoring agent embedded in that particular PoP.
- FIG. 2 illustrates the enhancements to a PoP of a distributed platform that enable localized and real-time server-side performance monitoring in accordance with some embodiments. FIG. 2 illustrates a PoP having three edge servers along with the monitoring agent 230 and the network performance database 240. The monitoring agent 230 is provided access to each of the edge servers. This access allows the monitoring agent 230 to perform server-side monitoring of the outgoing traffic flows from each of the edge servers, with the resulting measurements stored to the database 240. The edge servers access the network performance database 240 in order to optimize the outgoing traffic flows.
- In some embodiments, the monitoring agent is a software module that is encoded as a set of computer executable instructions. The set of computer executable instructions are stored to a non-transitory computer-readable medium of an edge server or a separate virtual or physical machine that is collocated in a PoP with one or more edge servers. Accordingly, even though the monitoring agent 230 is illustrated in FIG. 2 as a separate machine from each of the edge servers, the monitoring agent may alternatively be integrated to run as part of one or more of the edge servers.
-
FIG. 3 presents aprocess 300 performed by the monitoring agent to monitor network performance from an edge server to a particular end user in accordance with some embodiments. Theprocess 300 begins when the monitoring agent detects (at 310) a request for content from the particular end user. Such a request may be encoded as an application layer HyperText Transfer Protocol (HTTP) GET request packet, though the monitoring agent can be configured to detect other requests for content whether at the application layer or other layers in the protocol stack. - Next, the process extracts (at 320) an identifier identifying the end user that submits the request for content. The identifier is ordinarily included within the header of the request packet. One common identifier is the IP address of the end user as encoded within the source IP address header field of an HTTP GET request packet. Optionally, the process may extract additional identifiers that further identify the requesting end user or the region from which the request originates. Such additional identifiers include the “user agent” or autonomous system (AS) number.
- As the server begins to pass content back to the end user in response to the request, the process monitors (at 330) the outgoing packets from the server. More specifically, the process monitors the effective rate at which the packets are sent. As earlier noted, this includes monitoring the effective rate at which application layer packets, such as HTTP packets, are sent from the edge server. Monitoring the effective rate of application layer packets provides an accurate measure of the network performance to the end user while obfuscating from the underlying network flow control mechanisms in the protocol stack that regulate the effective rate for the application layer packets. For instance, the Transmission Control Protocol (TCP) is a reliable transport protocol that can be used to transfer application layer packets from a source to a destination. To ensure reliable transport, TCP sends out a first set of packets and awaits acknowledgement of one or more of those packets before sending out any additional packets. In this manner, the underlying TCP controls the effective rate at which application layer packets are sent from the edge server to the end user.
- In some embodiments, the effective rate of outgoing packets sent from the edge server to an end user is based on one or more different performance metrics. These performance metrics can include latency, throughput, and packet loss as some examples that collectively can determine the effective rate of transfer. It should be noted that by monitoring the effective rate of the application layer packets, the monitoring agent is able to perform a non-intrusive form of server-side monitoring that obtains real-time performance measurements without injection of any specialized monitoring packets.
- The process quantifies (at 340) the results of the monitoring performed at
step 330. In some embodiments, quantification involves computing a single score from various measurements obtained as a result of the monitoring. This may include computing a single score to represent the effective rate of outgoing packets from the edge server to a specific end user over a five second duration. This may also include computing a single score based on throughput, bandwidth, and latency measurements that collectively comprise the effective rate of the outgoing packets. The single score is used to reduce the amount of storage that is required to store the performance measurements at the network performance database without losing accuracy of the measurements. In addition to the reduction in the storage requirements, the single score reduces the overhead associated with reading and writing the network performance data to the network performance data. Such efficiency is needed in order to support real-time updating of scores when actively monitoring several thousand end users that may be serviced by a single PoP. - The process logs (at 350) the quantified score in association with the extracted identifier and a timestamp. In some embodiments, the score, identifier, and timestamp are logged to the network performance database. The identifier serves to associate the monitored results or quantified score to a particular end user and more generally, to a geographic region in which the end user associated with the identifier is located and other end users having similar identifiers are located (e.g., IP addresses within the same subnet). The timestamp is a freshness indicator that is used to preserve the real-time freshness of the monitored results and used to ensure that outgoing traffic flows are not optimized based on stale performance data. Though
process 300 is shown to terminate afterstep 350, it is often the case that at least steps 330-350 of the process are continually repeated until the outgoing traffic flow being monitored is complete. - III. Optimization
- The scores logged to the network performance database are utilized by the edge servers within the same PoP as the monitoring agent to optimize outgoing traffic flows. This promotes the sharing of derived scores between edge servers such that when a network performance score is computed for content that is sent from a first edge server of a PoP to a first end user, that score can be used to optimize content that is sent from a second edge server of the same PoP to the first end user. Also, that same score can be used to optimize content that is sent from the second edge server of the same PoP to a second end user that is in the same geographic region as the first end user with the network path from the second edge server to the second end user being the same or consisting of substantially the same links or hops as the network path from the second edge server to the first end user.
- In some embodiments, outgoing traffic flows are pre-optimized and re-optimized, wherein pre-optimization involves optimizing content prior to the first packet of that content being sent from the edge server to an end user, and wherein re-optimization involves optimizing content as it is being sent from the edge server to an end user.
FIGS. 4 and 5 below describe the modified operation of the CDN edge servers to leverage the logged scores in order to optimize outgoing traffic flows in accordance with some embodiments. -
FIG. 4 presents aprocess 400 for using the derived server-side measurements (i.e., scores) of the monitoring agent to perform re-optimization by optimizing content as it is being delivered from a server to an end user in real-time in accordance with some embodiments.Process 400 can be performed by the samemachine performing process 300 when the monitoring agent is integrated as part of the edge server performing content delivery. Alternatively,process 400 can be performed by an edge server that is collocated in the same PoP as the machine running the monitoring agent and performingprocess 300.Process 400 is performed after the edge server has sent at least the first packet for content requested by an end user. - As the edge server sends the content to the requesting end user, the process performs (at 410) a lookup to the network performance database using the identifier of the requesting end user. This lookup may be performed by the edge server at specified intervals when it is sending content to one or more end users. The identifier is typically the IP address assigned to the end user device that submits the content request. The edge server will have extracted this identifier from the initial content request of the end user. A real-time measurement in the form of a quantified score will exist in the database because the monitoring agent will begin monitoring the server-side performance once the server begins transmitting content to the requesting end user. Accordingly, the process receives (at 420) a score quantifying real-time network performance from the edge server to the requesting end user.
- The process checks (at 430) the time-to-live parameter for the received score to ensure that the score received during the current pass through
process 400 is not stale or one that was previously used. This check can be performed by simply determining if a specified amount of time has passed since the score was logged to the network performance database or by comparing the time-to-live parameter for the current score to one received during a previous pass throughprocess 400. This latter point is better illustrated with an exemplary reference to a second pass through theprocess 400. During the second pass through theprocess 400, the process compares the time-to-live parameter for the score received during the second pass with a time-to-live parameter for a score that was received during a first pass. If these time-to-live parameters specify the same value, then the score (i.e., performance measurement) has not been updated, is thus stale, and no further optimization should be made based on the stale score. If the parameters differ, it is an indication that theprocess 300 has logged an updated real-time score to the network performance database such that the score received during the current pass can be used to re-optimize the content being sent. In some embodiments, the database runs a routine to delete, remove, or overwrite any stored scores that exceed the time-to-live parameters such that all scores stored to the database are ensured to be real-time relevant. In some such embodiments, the edge server need not perform the real-time relevancy check. - Accordingly, if the received score is determined (at 430) to be stale, the process then determines (at 460) if the server is continuing to send content to the end user. If not, the process ends. Otherwise, the process reverts to step 410 to perform another lookup to the network performance database for an updated real-time performance score.
- If the received score is determined (at 430) to be an updated real-time score, the process compares (at 440) the received score to at least one defined threshold and dynamically optimizes (at 450) the transmission of the content in real-time based on the comparison. For instance, a first baseline threshold may be defined to determine when the resources needed to deliver the content exceed those that are currently available. When this first baseline threshold is met, the process optimizes the transmission of the content by reducing the resources that are needed to deliver the content to the end user, thereby decreasing the likelihood of packet loss, buffering, and other performance degradations that would hinder the end user experience. A second baseline threshold may be defined to determine when there are sufficient unused resources in the network. When this second baseline threshold is met, the process optimizes the transmission of the content by increasing the quality of the content being passed to the end user, thereby providing a richer end user experience. Additional thresholds may be set and compared against to provide a gradual optimization of the content.
- In some embodiments, the baseline thresholds are set by the edge server operator or the CDN operator based on expected network performance. For example, an initial set of performance measurements is taken when the network is known not to be congested, and these measurements are then set as the baseline values for the thresholds. In some embodiments, the baseline thresholds are determined from historic performance measurements that the monitoring agent takes based on previous content delivered to one or more end users of a geographic region. For example, a particular end user requests and receives content from a specific PoP of the CDN, and the content is delivered with an average latency of 10 ms at 100 kilobits per second. The baseline thresholds can then be derived from these averages.
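- Deriving baseline thresholds from historic measurements could be sketched as follows; the 20% margin around the historic average is an assumption for illustration, not a value given in the description:

```python
from statistics import mean


def derive_baselines(historic_kbps, margin=0.2):
    """Derive the two baseline thresholds from historic measurements taken by
    the monitoring agent for a geographic region."""
    baseline = mean(historic_kbps)
    congestion_threshold = baseline * (1 - margin)   # below this, treat the network as congested
    headroom_threshold = baseline * (1 + margin)     # above this, spare capacity is available
    return congestion_threshold, headroom_threshold


# Historic deliveries to the region averaged roughly 100 kilobits per second.
print(derive_baselines([95.0, 102.0, 98.0, 105.0]))   # (80.0, 120.0)
```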
- Common optimization techniques that can be used by the edge server include adaptively increasing or decreasing the bitrate for content being sent to an end user based on different encodings of the same content, increasing or decreasing resolution of the content, increasing or decreasing the amount by which the content being sent is compressed, increasing or decreasing the rate used to send the content, adding or removing objects from the content being sent, or other adjustments to the quality of the content. Each such technique alters the amount of bandwidth that is required to send content, thereby enabling content to be delivered faster when there is less bandwidth available and enabling content to be delivered with better quality when there is more bandwidth available. For example, when streaming media content to the end user, the process optimizes the transmission of the content by increasing or decreasing the quality of the media content in response to the received score by sending the media content using one of several different encodings with each encoding having a different bitrate. This is known as adaptive streaming. As another example, the resolution of images can be increased or decreased in response to the monitored network performance. Accordingly, each edge server stores different variants of the same content, wherein each variant may include a different bitrate encode, compression level, resolution, or other variant. Also, the edge server may choose to send different versions of the same website (e.g., a full version of a website as compared to a mobile version of the website).
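- The stored variants can be viewed as a simple ladder ordered by the bandwidth each one requires; the following sketch (with a hypothetical audio ladder) picks the highest-quality variant that fits the measured bandwidth:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Variant:
    label: str
    bitrate_kbps: float   # bandwidth required to deliver this encoding


def select_variant(variants: List[Variant], available_kbps: float) -> Variant:
    """Pick the highest-quality variant whose bitrate fits the measured
    bandwidth; fall back to the lowest-bitrate variant when nothing fits."""
    ladder = sorted(variants, key=lambda v: v.bitrate_kbps)
    fitting = [v for v in ladder if v.bitrate_kbps <= available_kbps]
    return fitting[-1] if fitting else ladder[0]


audio_ladder = [Variant("low", 16), Variant("medium", 32), Variant("high", 64)]
print(select_variant(audio_ladder, available_kbps=40.0).label)   # "medium"
```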
- Re-optimization is particularly applicable to ongoing sessions between the server and the end user. An ongoing session may include, for example, a media stream that includes streaming or recorded video and/or audio or server-side execution of an application or game, as well as hosting and serving a series of websites or website content that are sequentially or iteratively accessed.
- After optimizing the transmission of the content, the process determines (at 460) whether the server is still sending content to the end user. If not, the process ends. Otherwise, the process reverts to step 410. In this manner, the systems and methods perform server-side monitoring to adjust content delivery in real-time, whereby the server-side monitoring is based only on the traffic that is sent from the server to the end user without the need for specialized monitoring packets and without the need for specialized monitoring of the end user response to the outgoing content.
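- Putting the steps of process 400 together, the overall re-optimization loop might be sketched as below; this reuses the `is_stale` and `choose_adjustment` helpers from the earlier sketches, and the `store` and `edge_server` objects (with `lookup`, `is_sending`, and `apply_adjustment`) are hypothetical stand-ins:

```python
import time


def reoptimize_loop(store, edge_server, end_user_ip, interval_seconds=2.0):
    """Process 400 as a loop: look up the latest score (410-420), skip stale
    scores (430), otherwise compare against the baselines and adjust the
    in-flight delivery (440-450), and repeat while content is still being
    sent (460)."""
    previous = None
    while edge_server.is_sending(end_user_ip):                        # step 460
        score = store.lookup(end_user_ip)                             # steps 410-420
        if score is not None and not is_stale(score, previous):       # step 430
            adjustment = choose_adjustment(score.value,
                                           congestion_threshold_kbps=20.0,
                                           headroom_threshold_kbps=48.0)  # step 440
            edge_server.apply_adjustment(end_user_ip, adjustment)     # step 450
            previous = score
        time.sleep(interval_seconds)   # lookups happen at specified intervals
```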
-
FIG. 5 presents a process 500 for pre-optimizing content based on the server-side monitoring process described with reference to FIG. 3 in accordance with some embodiments, whereby content is optimized prior to the first packet of such content being sent. Process 500 can be performed by the same machine performing process 300 when the monitoring agent is integrated as part of the edge server performing content delivery. Alternatively, process 500 can be performed by an edge server that is collocated in the same PoP as the machine running the monitoring agent. -
Process 500 is performed by an edge server whenever the edge server receives a request to initiate the delivery of content to an end user and prior to dissemination of the first packet of the requested content. Accordingly, the process begins by receiving (at 510) a request for content. The following steps of process 500 can be performed in parallel with the server processing the request in order to identify where the requested content is stored (e.g., in cache, on disk, at a remote origin server, etc.). - The process parses (at 520) the request to extract an identifier that identifies the end user submitting the request. In some embodiments, the identifier is the IP address assigned to the end user device submitting the request. In some embodiments, the identifier additionally or alternatively includes an AS number, user agent, etc.
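- The identifier extraction at step 520 might be sketched as follows, assuming the client IP address and request headers are already available and that a hypothetical `ip_to_asn` mapping stands in for an IP-to-AS lookup:

```python
def extract_identifiers(client_ip, headers, ip_to_asn):
    """Step 520 sketch: collect the identifiers mentioned in the description
    (IP address, AS number, user agent) from an incoming content request."""
    return {
        "ip": client_ip,
        "asn": ip_to_asn.get(client_ip),          # None when the mapping is unknown
        "user_agent": headers.get("User-Agent"),
    }


ids = extract_identifiers(
    client_ip="198.51.100.7",
    headers={"User-Agent": "ExampleClient/1.0"},
    ip_to_asn={"198.51.100.7": 64500},
)
print(ids)   # {'ip': '198.51.100.7', 'asn': 64500, 'user_agent': 'ExampleClient/1.0'}
```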
- Using the one or more extracted identifiers, the process performs (at 530) a lookup to the network performance database. The lookup identifies any measurements or scores that are derived to measure the network performance to the end user identified by the extracted identifier. The lookup also identifies any measurements/scores that are derived for other end users that are related to the requesting end user. In some embodiments, the relation between end users is determined from the IP addressing that is assigned to the end user devices. For instance, blocks of IP addresses are normally assigned to devices that are geographically proximate to one another. Such IP address blocks are assigned by Internet Service Providers (ISPs) to end users operating within the same or proximate network access service areas. In some embodiments, the relation between end users is determined based on AS number. End users that are routed from the same autonomous system normally gain access through the same network access service area.
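- Relating end users by address block or autonomous system can be reduced to deriving grouping keys, as in the sketch below; the use of a /24 prefix to approximate an ISP-assigned block is an assumption for illustration:

```python
import ipaddress


def related_keys(end_user_ip, asn=None, prefix_len=24):
    """Derive grouping keys for 'related' end users: the address block
    approximates an ISP-assigned range, and the AS number groups end users
    reaching the PoP through the same autonomous system."""
    block = ipaddress.ip_network(f"{end_user_ip}/{prefix_len}", strict=False)
    keys = [f"prefix:{block}"]
    if asn is not None:
        keys.append(f"asn:{asn}")
    return keys


print(related_keys("198.51.100.7", asn=64500))
# ['prefix:198.51.100.0/24', 'asn:64500']
```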
- The process obtains (at 540) one or more scores quantifying network performance measurements from the network performance database. The process filters (at 550) the scores based on freshness as determined from the timestamp associated with each score and the specified time-to-live parameters. This ensures that the pre-optimization of the traffic flows is based on real-time data whether such data is derived for the end user that is to receive the requested content or for other related end users that are within the same geographic region as the end user that is to receive the requested content.
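- The freshness filtering at step 550 might be sketched as follows, assuming each score is represented with a value, a timestamp, and a time-to-live:

```python
import time


def filter_fresh(scores, now=None):
    """Step 550 sketch: keep only the scores whose age is within the
    time-to-live, so pre-optimization is based on real-time data."""
    now = time.time() if now is None else now
    return [s for s in scores if now - s["timestamp"] <= s["ttl"]]


current = time.time()
candidates = [
    {"value": 40.0, "timestamp": current - 2, "ttl": 5.0},    # fresh, kept
    {"value": 90.0, "timestamp": current - 30, "ttl": 5.0},   # stale, discarded
]
print(filter_fresh(candidates, current))
```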
- The filtered scores are then used to optimize (at 560) the delivery of the requested content prior to the first packet of the requested content being sent. Optimization is based on comparing the filtered scores to one or more specified thresholds. The relative comparison of the filtered scores to the specified thresholds determines if the network is congested or otherwise underperforming such that the bandwidth requirements for the content to be delivered should be reduced or if the network has available bandwidth that can support higher quality variants of the requested content. As earlier noted, different content delivery optimizations can be made based on the type of the requested content. For media content, the process can select one of several encodings of the media content based on the filtered scores. The server can then send the selected optimized encoding without having to obtain a measurement directly from the specific end user before beginning the transmission. For image content, the process can select one of several resolutions or levels of compression for the image based on the filtered scores. For other content, the process can select whether to send a full copy of the content, a compressed version of the content, or an incomplete set of the content with extraneous objects omitted to conserve bandwidth based on the filtered scores. As a result of this pre-optimization, end users are less likely to experience buffering when starting playback of media content and are less likely to experience changes in quality at the start of playing media content.
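- A sketch of the pre-optimization selection at step 560; the variant names, the use of the minimum score across related end users, and the congestion cutoff are all illustrative assumptions:

```python
def preoptimize(content_type, fresh_scores_kbps, congested_below=20.0):
    """Step 560 sketch: choose a delivery variant before the first packet is
    sent, using the filtered scores for the end user or related end users."""
    if not fresh_scores_kbps:
        return "default"                        # no real-time data available
    effective = min(fresh_scores_kbps)          # be conservative across related users
    congested = effective < congested_below

    if content_type == "media":
        return "low-bitrate-encoding" if congested else "high-bitrate-encoding"
    if content_type == "image":
        return "high-compression" if congested else "full-resolution"
    return "reduced-object-set" if congested else "full-copy"


print(preoptimize("media", [28.0, 35.0]))   # "high-bitrate-encoding"
```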
- Once the edge server begins dissemination of the pre-optimized content, the process begins monitoring (at 570) the real-time performance of the network, derives (at 580) updated real-time scores to quantify the network performance to the actual end user that receives the content, and re-optimizes (at 590) the outgoing traffic flow based on the updated real-time scores as per the
process 400 described above with reference to FIG. 4. - In this manner, systems and methods are provided to use server-side monitoring to optimize outgoing traffic flows from the beginning to the end of the traffic flow. Moreover, such server-side monitoring is non-intrusive in that the monitoring is performed without the introduction of specialized monitoring packets, by basing the monitoring solely on the content that is requested and sent from the server to an end user or other end users related to the requesting end user.
-
FIG. 6 presents a message exchange diagram to summarize traffic flow optimization using the localized and real-time server-side monitoring systems and methods in accordance with some embodiments. The figure illustrates a PoP 610 of a distributed platform that is tasked with delivering content to end users 633 and 636 that are located in region 630. The PoP 610 includes a monitoring agent 620, a first edge server 623, and a second edge server 626. - The diagram commences with the
first edge server 623 sending (at 640) content to the first end user 633. During this time, the monitoring agent 620 performs server-side monitoring of the first edge server 623 by continually monitoring (at 643) the outgoing application layer packets that are sent from the first edge server 623 to the first end user 633. The monitoring agent 620 computes (at 646) one or more scores to quantify the network performance based on the monitoring of the outgoing packets. - The scores provide a collective quantification for the performance of the network links connecting the
first edge server 623 to the first end user 633 and, more generally, for the performance of the network links connecting the PoP 610 to the geographic region 630. In other words, content that is sent to any end user in the geographic region 630 will have to traverse the same network links as the content being sent from the first edge server 623 to the first end user 633, such that the computed network performance scores apply not only to the first end user 633, but also to any end user operating in that region 630. - While still sending the content, the
first edge server 623 obtains (at 650) the computed scores from the monitoring agent 620 or an associated database. Filtering of these scores is not necessary when the scores are known to have been computed in real-time, which may be indicated using a flag or other metadata, or when the database automatically removes or overwrites scores that have exceeded a specified time-to-live. If filtering is to be performed, a comparison of the score's timestamp to a time-to-live parameter will reveal whether the score is useable or is stale and should be discarded. - The
first edge server 623 uses the real-time relevant scores to optimize the sending of the content to the end user 633. In accordance with processes described above, the first edge server 623 compares the scores against one or more baseline thresholds to determine if the network is congested such that the quality of the content being sent has to be reduced in order to preserve bandwidth, or if there is available bandwidth that can be used to improve the quality of the content being sent. Based on the analysis of the obtained scores relative to the baseline thresholds, the first edge server 623 selects (at 653) a different variant for the content being sent and resumes (at 656) sending the remainder of the content based on the newly selected variant. For instance, the first edge server 623 may have previously selected a 16 Kbps encoding of an audio stream, but the derived performance measurements reveal an effective transfer rate of 40 Kbps to the first end user 633. The first edge server 623 can then select a different variant of the audio stream that is encoded at 32 Kbps and resume sending that higher quality stream to the first end user 633 while staying within the limits of the network and while improving the end user experience. Further re-optimizations may be made during the continued transfer of the content. - At 660, the
second edge server 626 receives a request for content from the second end user 636. Before responding to the request, the second edge server 626 extracts (at 663) one or more identifiers from the request. These identifiers identify the geographic region in which the second end user 636 is located. More specifically, the identifier may be an IP address that can be mapped to a particular ISP and, ultimately, to the specific region 630 serviced by that ISP. Similarly, the identifier may be an Autonomous System (AS) number that identifies the specific region 630 in which the second end user 636 is located. Further still, the identifier may be mapped to a subnet that identifies a geographic region. - The
second edge server 626 queries (at 666) the monitoring agent 620 or an associated database based on the one or more extracted identifiers to obtain any performance scores quantifying network performance to the identified service region 630. In this example, the monitoring agent 620 has recently computed scores quantifying the network performance from the first edge server 623 to the first end user 633. Since the first end user 633 and the second end user 636 are located in the same geographic region, the network links from the PoP 610 to each of the end users 633 and 636 will be substantially the same, thereby enabling the scores that were derived for the content sent to the first end user 633 to be used for pre-optimizing the content that is to be sent to the second end user 636. - The
second edge server 626 obtains (at 670) the scores and filters (at 673) the scores for real-time relevancy. This includes discarding any scores with an associated timestamp indicating that the specified time-to-live parameter has been exceeded. The second edge server 626 then selects (at 676) a variant of the requested content based on the filtered scores and begins sending (at 680) the selected variant to the second end user 636. In this manner, the content requested by the second end user 636 is pre-optimized based on scores quantifying network performance of content delivered to different end users in the same geographic region as the second end user 636. As the second edge server 626 sends (at 680) the content to the second end user 636, the monitoring agent 620 monitors (at 683) the outgoing application layer packets and computes (at 686) a score to quantify the network performance from the second edge server 626 to the second end user 636. The newly computed score is obtained (at 690) by the second edge server 626 and used to optimize (at 693) the content being sent to the second end user by selecting another variant of the content when necessary. For instance, the second edge server 626 may begin sending the audio stream at 32 Kbps based on the earlier measurements derived for content delivery to the first end user 633. Then, new measurements taken while delivering the requested content to the second end user 636 reveal that the network condition has degraded to an effective transfer rate of 28 Kbps, such that the second edge server 626 can re-optimize the audio stream and select the lower quality 16 Kbps variant. - The above-described systems and methods are applicable to any data network, but are especially well-suited for optimizing traffic flows sent over lossy data networks. Lossy data networks are those data networks that experience high latency and high amounts of packet loss. Data networks operated by wireless service providers, such as the 3G and 4G data networks of Verizon, AT&T, and Sprint, are examples of such lossy data networks.
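- The 16/32 Kbps audio example above reduces to picking the highest variant that fits the measured effective transfer rate; a minimal sketch (the selection rule itself is an assumption):

```python
AUDIO_LADDER_KBPS = [16, 32]   # variants stored at the edge server


def pick_audio_variant(effective_kbps):
    """Pick the highest bitrate that fits the measured effective transfer rate."""
    fitting = [b for b in AUDIO_LADDER_KBPS if b <= effective_kbps]
    return max(fitting) if fitting else min(AUDIO_LADDER_KBPS)


print(pick_audio_variant(40.0))   # regional score of 40 Kbps -> start at 32 Kbps
print(pick_audio_variant(28.0))   # later measurement of 28 Kbps -> drop to 16 Kbps
```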
FIG. 7 conceptually illustrates the localized and real-time server-side performance monitoring system operating in the context of a wireless data network. - The wireless data network includes
wireless nodes that provide network connectivity to end users operating within corresponding wireless service regions. The wireless nodes pass the traffic of their service regions through a core network. The core network 730 may include one or more Serving GPRS Support Nodes (SGSNs) and one or more Gateway GPRS Support Nodes (GGSNs). Though only two wireless nodes are depicted, the core network 730 can connect several additional wireless nodes to the external data network and can thus experience large traffic loads. - By locating a
CDN PoP 740 adjacent to the core network 730 of the wireless service provider, the PoP 740 is ideally positioned to optimize the traffic flows passing between the core network 730 and the external data network. Specifically, the PoP 740 includes a set of edge servers 750 and a monitoring agent 760. The monitoring agent 760 monitors the outgoing traffic flows from the PoP 740 to any end user in the wireless service regions. Based on this monitoring, the monitoring agent 760 computes scores quantifying network performance to these service regions. The set of edge servers 750 can then optimize the content that they send to any end user in these service regions based on the real-time performance from the PoP 740 to the service regions. In this manner, the PoP 740 sends continually optimized content that is adjusted based on real-time network conditions of the wireless service regions. For example, if the monitoring agent 760 detects that service region 720 is congested, the set of edge servers 750 can select a variant of requested content that minimizes the bandwidth required to send that content to the service region 720. As noted above, identifying end users in the service region 720 can be predicated on IP addresses, subnets, or AS numbers. Moreover, the PoP itself is limited to servicing end users within one or more neighboring service regions such that if one service region is congested, then it is likely that the neighboring service regions are also subject to the same congestion. - Thus, when either
service region becomes congested or otherwise experiences a change in network conditions, the PoP 740 can adjust its traffic flows to the service regions accordingly. - IV. Server System
- Many of the above-described processes and components are implemented as software processes that are specified as a set of instructions recorded on a non-transitory computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more computational element(s) (such as processors or other computational elements like ASICs and FPGAs), they cause the computational element(s) to perform the actions indicated in the instructions. Server, computer, and computing machine are meant in their broadest sense and may include any electronic device with a processor that executes instructions stored on computer-readable media or that are obtained remotely over a network connection. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. Further, wherever a server is identified as a component of the embodied invention, it is understood that the server may be a single physical machine, or a cluster of multiple physical machines performing related functions, or virtualized servers co-resident on a single physical machine, or various combinations of the above.
-
FIG. 8 illustrates a computer system or server with which some embodiments are implemented. Such a computer system includes various types of computer-readable media and interfaces for various other types of computer-readable media that implement the server-side monitoring systems and methods (i.e., monitoring agent, edge server, edge server enhanced with a monitoring agent, etc.) described above. Computer system 800 includes a bus 805, a processor 810, a system memory 815, a read-only memory 820, a permanent storage device 825, input devices 830, and output devices 835. - The
bus 805 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 800. For instance, the bus 805 communicatively connects the processor 810 with the read-only memory 820, the system memory 815, and the permanent storage device 825. From these various memory units, the processor 810 retrieves instructions to execute and data to process in order to execute the processes of the invention. The processor 810 is a processing device such as a central processing unit, integrated circuit, graphical processing unit, etc. - The read-only memory (ROM) 820 stores static data and instructions that are needed by the
processor 810 and other modules of the computer system. The permanent storage device 825, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 800 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 825. - Other embodiments use a removable storage device (such as a flash drive) as the permanent storage device. Like the
permanent storage device 825, the system memory 815 is a read-and-write memory device. However, unlike the storage device 825, the system memory is a volatile read-and-write memory, such as random access memory (RAM). The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes are stored in the system memory 815, the permanent storage device 825, and/or the read-only memory 820. - The
bus 805 also connects to the input and output devices 830 and 835. The input devices 830 include, but are not limited to, alphanumeric keypads (including physical keyboards and touchscreen keyboards) and pointing devices (also called “cursor control devices”). The input devices 830 also include, but are not limited to, audio input devices (e.g., microphones, MIDI musical instruments, etc.). The output devices 835 display images generated by the computer system. The output devices include, but are not limited to, printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). - Finally, as shown in
FIG. 8, bus 805 also couples computer 800 to a network 865 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), an Intranet, or a network of networks, such as the Internet). - As mentioned above, the
computer system 800 may include one or more of a variety of different computer-readable media. Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, ZIP® disks, read-only and recordable blu-ray discs, any other optical or magnetic media, and floppy disks. - While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Claims (20)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/527,397 US8626910B1 (en) | 2012-06-19 | 2012-06-19 | Systems and methods for performing localized server-side monitoring in a content delivery network |
US14/147,117 US8959212B2 (en) | 2012-06-19 | 2014-01-03 | Systems and methods for performing localized server-side monitoring in a content delivery network |
US14/570,025 US9794152B2 (en) | 2012-06-19 | 2014-12-15 | Systems and methods for performing localized server-side monitoring in a content delivery network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/527,397 US8626910B1 (en) | 2012-06-19 | 2012-06-19 | Systems and methods for performing localized server-side monitoring in a content delivery network |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/147,117 Continuation US8959212B2 (en) | 2012-06-19 | 2014-01-03 | Systems and methods for performing localized server-side monitoring in a content delivery network |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130339519A1 true US20130339519A1 (en) | 2013-12-19 |
US8626910B1 US8626910B1 (en) | 2014-01-07 |
Family
ID=49756976
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/527,397 Active 2032-08-29 US8626910B1 (en) | 2012-06-19 | 2012-06-19 | Systems and methods for performing localized server-side monitoring in a content delivery network |
US14/147,117 Active US8959212B2 (en) | 2012-06-19 | 2014-01-03 | Systems and methods for performing localized server-side monitoring in a content delivery network |
US14/570,025 Active 2032-07-26 US9794152B2 (en) | 2012-06-19 | 2014-12-15 | Systems and methods for performing localized server-side monitoring in a content delivery network |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/147,117 Active US8959212B2 (en) | 2012-06-19 | 2014-01-03 | Systems and methods for performing localized server-side monitoring in a content delivery network |
US14/570,025 Active 2032-07-26 US9794152B2 (en) | 2012-06-19 | 2014-12-15 | Systems and methods for performing localized server-side monitoring in a content delivery network |
Country Status (1)
Country | Link |
---|---|
US (3) | US8626910B1 (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140143375A1 (en) * | 2012-04-27 | 2014-05-22 | F5 Networks, Inc. | Methods for optimizing service of content requests and devices thereof |
US20140244648A1 (en) * | 2013-02-27 | 2014-08-28 | Pavlov Media, Inc. | Geographical data storage assignment based on ontological relevancy |
US20140282422A1 (en) * | 2013-03-12 | 2014-09-18 | Netflix, Inc. | Using canary instances for software analysis |
US20140379901A1 (en) * | 2013-06-25 | 2014-12-25 | Netflix, Inc. | Progressive deployment and termination of canary instances for software analysis |
US20150172354A1 (en) * | 2013-12-17 | 2015-06-18 | Limelight Networks, Inc. | Content-delivery transfer for cooperative delivery systems |
US20150365454A1 (en) * | 2014-06-17 | 2015-12-17 | Qualcomm Incorporated | Media processing services on an access node |
US20160065532A1 (en) * | 2014-08-29 | 2016-03-03 | Google Inc. | Systems and methods for adaptive associative routing for mobile messaging |
WO2016055893A1 (en) * | 2014-10-09 | 2016-04-14 | Telefonaktiebolaget L M Ericsson (Publ) | Method, traffic monitor (tm), request router (rr) and system for monitoring a content delivery network (cdn) |
US20160182639A1 (en) * | 2014-12-17 | 2016-06-23 | University-Industry Cooperation Group Of Kyung-Hee University | Internet of things network system using fog computing network |
US20160344791A1 (en) * | 2015-05-20 | 2016-11-24 | Microsoft Technology Limited, Llc | Network node bandwidth management |
US20170201571A1 (en) * | 2015-09-10 | 2017-07-13 | Vimmi Communications Ltd. | Content delivery network |
US9786014B2 (en) | 2013-06-07 | 2017-10-10 | Google Inc. | Earnings alerts |
US20170366426A1 (en) * | 2016-06-15 | 2017-12-21 | Algoblu Holdings Limited | Dynamic switching between edge nodes in autonomous network system |
US20180039682A1 (en) * | 2016-08-02 | 2018-02-08 | Blackberry Limited | Electronic device and method of managing data transfer |
US10002058B1 (en) * | 2014-11-26 | 2018-06-19 | Intuit Inc. | Method and system for providing disaster recovery services using elastic virtual computing resources |
US20180227187A1 (en) * | 2017-02-03 | 2018-08-09 | Prysm, Inc. | Automatic Network Connection Sharing Among Multiple Streams |
US10050912B2 (en) | 2014-10-27 | 2018-08-14 | At&T Intellectual Property I, L.P. | Subscription-based media push service |
US20180295063A1 (en) * | 2017-04-10 | 2018-10-11 | Verizon Digital Media Services Inc. | Automated Steady State Traffic Management |
EP3281120A4 (en) * | 2015-04-06 | 2018-11-07 | Level 3 Communications, LLC | Server side content delivery network quality of service |
US20180367822A1 (en) * | 2017-06-18 | 2018-12-20 | Cisco Technology, Inc. | Abr streaming of panoramic video |
US10182098B2 (en) | 2017-01-31 | 2019-01-15 | Wipro Limited | Method and system for proactively selecting a content distribution network (CDN) for delivering content |
US10187319B1 (en) * | 2013-09-10 | 2019-01-22 | Instart Logic, Inc. | Automatic configuration generation for a proxy optimization server for optimizing the delivery of content of a web publisher |
US10187317B1 (en) | 2013-11-15 | 2019-01-22 | F5 Networks, Inc. | Methods for traffic rate control and devices thereof |
US10230566B1 (en) | 2012-02-17 | 2019-03-12 | F5 Networks, Inc. | Methods for dynamically constructing a service principal name and devices thereof |
WO2020050790A1 (en) * | 2018-09-05 | 2020-03-12 | Medianova Internet Hizmetleri Ve Ticaret Anonim Sirketi | Parametric parsing based routing system in content delivery networks |
CN111064997A (en) * | 2018-10-16 | 2020-04-24 | 深圳市云帆加速科技有限公司 | Resource pre-distribution method and device |
WO2020263198A1 (en) * | 2019-06-26 | 2020-12-30 | Medianova Internet Hizmetleri Ve Ticaret Anonim Sirketi | Performance enhanced cdn service |
US10938768B1 (en) * | 2015-10-28 | 2021-03-02 | Reputation.Com, Inc. | Local content publishing |
US10951688B2 (en) | 2013-02-27 | 2021-03-16 | Pavlov Media, Inc. | Delegated services platform system and method |
US11184433B2 (en) | 2020-03-31 | 2021-11-23 | Microsoft Technology Licensing, Llc | Container mobility based on border gateway protocol prefixes |
US11409834B1 (en) * | 2018-06-06 | 2022-08-09 | Meta Platforms, Inc. | Systems and methods for providing content |
US20220360512A1 (en) * | 2021-04-15 | 2022-11-10 | At&T Intellectual Property I, L.P. | System for addition and management of ad-hoc network attached compute (nac) resources |
US20230334017A1 (en) * | 2014-10-17 | 2023-10-19 | Zestfinance, Inc. | Api for implementing scoring functions |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9742858B2 (en) | 2011-12-23 | 2017-08-22 | Akamai Technologies Inc. | Assessment of content delivery services using performance measurements from within an end user client application |
US9537973B2 (en) * | 2012-11-01 | 2017-01-03 | Microsoft Technology Licensing, Llc | CDN load balancing in the cloud |
US9374276B2 (en) | 2012-11-01 | 2016-06-21 | Microsoft Technology Licensing, Llc | CDN traffic management in the cloud |
US9509797B1 (en) | 2012-12-21 | 2016-11-29 | Emc Corporation | Client communication over fibre channel using a block device access model |
US9473591B1 (en) | 2012-12-21 | 2016-10-18 | Emc Corporation | Reliable server transport over fibre channel using a block device access model |
US9514151B1 (en) | 2012-12-21 | 2016-12-06 | Emc Corporation | System and method for simultaneous shared access to data buffers by two threads, in a connection-oriented data proxy service |
US9237057B1 (en) | 2012-12-21 | 2016-01-12 | Emc Corporation | Reassignment of a virtual connection from a busiest virtual connection or locality domain to a least busy virtual connection or locality domain |
US9591099B1 (en) | 2012-12-21 | 2017-03-07 | EMC IP Holding Company LLC | Server connection establishment over fibre channel using a block device access model |
US9563423B1 (en) | 2012-12-21 | 2017-02-07 | EMC IP Holding Company LLC | System and method for simultaneous shared access to data buffers by two threads, in a connection-oriented data proxy service |
US9647905B1 (en) * | 2012-12-21 | 2017-05-09 | EMC IP Holding Company LLC | System and method for optimized management of statistics counters, supporting lock-free updates, and queries for any to-the-present time interval |
US9407601B1 (en) | 2012-12-21 | 2016-08-02 | Emc Corporation | Reliable client transport over fibre channel using a block device access model |
US9270786B1 (en) | 2012-12-21 | 2016-02-23 | Emc Corporation | System and method for proxying TCP connections over a SCSI-based transport |
US9232000B1 (en) | 2012-12-21 | 2016-01-05 | Emc Corporation | Method and system for balancing load across target endpoints on a server and initiator endpoints accessing the server |
US9473590B1 (en) | 2012-12-21 | 2016-10-18 | Emc Corporation | Client connection establishment over fibre channel using a block device access model |
US9473589B1 (en) | 2012-12-21 | 2016-10-18 | Emc Corporation | Server communication over fibre channel using a block device access model |
US9531765B1 (en) | 2012-12-21 | 2016-12-27 | Emc Corporation | System and method for maximizing system data cache efficiency in a connection-oriented data proxy service |
US9712427B1 (en) | 2012-12-21 | 2017-07-18 | EMC IP Holding Company LLC | Dynamic server-driven path management for a connection-oriented transport using the SCSI block device model |
JP6020146B2 (en) * | 2012-12-26 | 2016-11-02 | 富士通株式会社 | Information processing apparatus, information processing method, and information processing program |
US10284439B2 (en) * | 2013-12-02 | 2019-05-07 | Google Llc | Method for measuring end-to-end internet application performance |
US20150287099A1 (en) | 2014-04-07 | 2015-10-08 | Google Inc. | Method to compute the prominence score to phone numbers on web pages and automatically annotate/attach it to ads |
US11115529B2 (en) | 2014-04-07 | 2021-09-07 | Google Llc | System and method for providing and managing third party content with call functionality |
US10241982B2 (en) * | 2014-07-30 | 2019-03-26 | Hewlett Packard Enterprise Development Lp | Modifying web pages based upon importance ratings and bandwidth |
US9935864B2 (en) * | 2014-09-30 | 2018-04-03 | Splunk Inc. | Service analyzer interface |
US9146962B1 (en) | 2014-10-09 | 2015-09-29 | Splunk, Inc. | Identifying events using informational fields |
US9158811B1 (en) | 2014-10-09 | 2015-10-13 | Splunk, Inc. | Incident review interface |
US11087263B2 (en) | 2014-10-09 | 2021-08-10 | Splunk Inc. | System monitoring with key performance indicators from shared base search of machine data |
US10505825B1 (en) | 2014-10-09 | 2019-12-10 | Splunk Inc. | Automatic creation of related event groups for IT service monitoring |
US10536353B2 (en) | 2014-10-09 | 2020-01-14 | Splunk Inc. | Control interface for dynamic substitution of service monitoring dashboard source data |
US10193775B2 (en) | 2014-10-09 | 2019-01-29 | Splunk Inc. | Automatic event group action interface |
US9245057B1 (en) | 2014-10-09 | 2016-01-26 | Splunk Inc. | Presenting a graphical visualization along a time-based graph lane using key performance indicators derived from machine data |
US10305758B1 (en) | 2014-10-09 | 2019-05-28 | Splunk Inc. | Service monitoring interface reflecting by-service mode |
US11455590B2 (en) | 2014-10-09 | 2022-09-27 | Splunk Inc. | Service monitoring adaptation for maintenance downtime |
US9210056B1 (en) | 2014-10-09 | 2015-12-08 | Splunk Inc. | Service monitoring interface |
US9760240B2 (en) | 2014-10-09 | 2017-09-12 | Splunk Inc. | Graphical user interface for static and adaptive thresholds |
US10209956B2 (en) | 2014-10-09 | 2019-02-19 | Splunk Inc. | Automatic event group actions |
US11755559B1 (en) | 2014-10-09 | 2023-09-12 | Splunk Inc. | Automatic entity control in a machine data driven service monitoring system |
US11671312B2 (en) | 2014-10-09 | 2023-06-06 | Splunk Inc. | Service detail monitoring console |
US9146954B1 (en) | 2014-10-09 | 2015-09-29 | Splunk, Inc. | Creating entity definition from a search result set |
US11200130B2 (en) | 2015-09-18 | 2021-12-14 | Splunk Inc. | Automatic entity control in a machine data driven service monitoring system |
US10417108B2 (en) | 2015-09-18 | 2019-09-17 | Splunk Inc. | Portable control modules in a machine data driven service monitoring system |
US10417225B2 (en) | 2015-09-18 | 2019-09-17 | Splunk Inc. | Entity detail monitoring console |
US9491059B2 (en) | 2014-10-09 | 2016-11-08 | Splunk Inc. | Topology navigator for IT services |
US10951501B1 (en) * | 2014-11-14 | 2021-03-16 | Amazon Technologies, Inc. | Monitoring availability of content delivery networks |
US10198155B2 (en) | 2015-01-31 | 2019-02-05 | Splunk Inc. | Interface for automated service discovery in I.T. environments |
US9934020B2 (en) * | 2015-03-10 | 2018-04-03 | International Business Machines Corporation | Intelligent mobile application update |
US9952851B2 (en) * | 2015-03-10 | 2018-04-24 | International Business Machines Corporation | Intelligent mobile application update |
US10942960B2 (en) | 2016-09-26 | 2021-03-09 | Splunk Inc. | Automatic triage model execution in machine data driven monitoring automation apparatus with visualization |
US10942946B2 (en) | 2016-09-26 | 2021-03-09 | Splunk, Inc. | Automatic triage model execution in machine data driven monitoring automation apparatus |
US10469424B2 (en) * | 2016-10-07 | 2019-11-05 | Google Llc | Network based data traffic latency reduction |
US11093518B1 (en) | 2017-09-23 | 2021-08-17 | Splunk Inc. | Information technology networked entity monitoring with dynamic metric and threshold selection |
US11106442B1 (en) | 2017-09-23 | 2021-08-31 | Splunk Inc. | Information technology networked entity monitoring with metric selection prior to deployment |
US11159397B2 (en) | 2017-09-25 | 2021-10-26 | Splunk Inc. | Lower-tier application deployment for higher-tier system data monitoring |
TR201811297A2 (en) * | 2018-08-03 | 2018-08-27 | Medianova Internet Hizmetleri Ve Ticaret Anonim Sirketi | System used to improve the quality that CDN companies give users and optimize resource usage |
CN112013756B (en) * | 2020-08-27 | 2022-05-17 | 桂林电子科技大学 | Double-layer baseline slope deformation monitoring method |
US11356725B2 (en) * | 2020-10-16 | 2022-06-07 | Rovi Guides, Inc. | Systems and methods for dynamically adjusting quality levels for transmitting content based on context |
US11676072B1 (en) | 2021-01-29 | 2023-06-13 | Splunk Inc. | Interface for incorporating user feedback into training of clustering model |
US12107823B2 (en) * | 2022-08-02 | 2024-10-01 | Centurylink Intellectual Property | Systems and methods to provide dynamic capacity adjustments in different anycast regions |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5732219A (en) | 1995-03-17 | 1998-03-24 | Vermeer Technologies, Inc. | Computer system and computer-implemented process for remote editing of computer files |
US5848396A (en) | 1996-04-26 | 1998-12-08 | Freedom Of Information, Inc. | Method and apparatus for determining behavioral profile of a computer user |
US6076113A (en) | 1997-04-11 | 2000-06-13 | Hewlett-Packard Company | Method and system for evaluating user-perceived network performance |
US6006260A (en) | 1997-06-03 | 1999-12-21 | Keynote Systems, Inc. | Method and apparatus for evalutating service to a user over the internet |
US6078956A (en) * | 1997-09-08 | 2000-06-20 | International Business Machines Corporation | World wide web end user response time monitor |
US6108703A (en) | 1998-07-14 | 2000-08-22 | Massachusetts Institute Of Technology | Global hosting system |
JP3602972B2 (en) | 1998-07-28 | 2004-12-15 | 富士通株式会社 | Communication performance measuring device and its measuring method |
US6269401B1 (en) | 1998-08-28 | 2001-07-31 | 3Com Corporation | Integrated computer system and network performance monitoring |
US20010010059A1 (en) | 1998-10-28 | 2001-07-26 | Steven Wesley Burman | Method and apparatus for determining travel time for data sent between devices connected to a computer network |
US6446028B1 (en) * | 1998-11-25 | 2002-09-03 | Keynote Systems, Inc. | Method and apparatus for measuring the performance of a network based application program |
US6601098B1 (en) | 1999-06-07 | 2003-07-29 | International Business Machines Corporation | Technique for measuring round-trip latency to computing devices requiring no client-side proxy presence |
ATE420512T1 (en) | 1999-10-22 | 2009-01-15 | Nomadix Inc | SYSTEM AND METHOD FOR DYNAMIC SUBSCRIBER-BASED BANDWIDTH MANAGEMENT IN A COMMUNICATIONS NETWORK |
US6901051B1 (en) | 1999-11-15 | 2005-05-31 | Fujitsu Limited | Server-based network performance metrics generation system and method |
US6415368B1 (en) | 1999-12-22 | 2002-07-02 | Xerox Corporation | System and method for caching |
US6763380B1 (en) * | 2000-01-07 | 2004-07-13 | Netiq Corporation | Methods, systems and computer program products for tracking network device performance |
US6701363B1 (en) | 2000-02-29 | 2004-03-02 | International Business Machines Corporation | Method, computer program product, and system for deriving web transaction performance metrics |
US20020099816A1 (en) | 2000-04-20 | 2002-07-25 | Quarterman John S. | Internet performance system |
US7150011B2 (en) * | 2000-06-20 | 2006-12-12 | Interuniversitair Microelektronica Centrum (Imec) | Virtual hardware machine, methods, and devices |
US6909693B1 (en) | 2000-08-21 | 2005-06-21 | Nortel Networks Limited | Performance evaluation and traffic engineering in IP networks |
JP3606188B2 (en) * | 2000-10-18 | 2005-01-05 | 日本電気株式会社 | Communication packet priority class setting control method and system, apparatus used therefor, and recording medium |
US20020083188A1 (en) | 2000-11-02 | 2002-06-27 | Webtrends Corporation | Method for determining web page loading and viewing times |
US20020059458A1 (en) | 2000-11-10 | 2002-05-16 | Deshpande Sachin G. | Methods and systems for scalable streaming of images with server-side control |
US20020169868A1 (en) | 2001-04-20 | 2002-11-14 | Lopke Michael S. | Interactive remote monitoring of client page render times on a per user basis |
US6763321B2 (en) * | 2001-06-22 | 2004-07-13 | Sun Microsystems, Inc. | Method and apparatus to facilitate measurement of quality-of-service performance of a network server |
US20030046383A1 (en) * | 2001-09-05 | 2003-03-06 | Microsoft Corporation | Method and system for measuring network performance from a server |
US20030115421A1 (en) | 2001-12-13 | 2003-06-19 | Mchenry Stephen T. | Centralized bounded domain caching control system for network edge servers |
US20030221000A1 (en) * | 2002-05-16 | 2003-11-27 | Ludmila Cherkasova | System and method for measuring web service performance using captured network packets |
US7216165B2 (en) | 2003-02-04 | 2007-05-08 | Hewlett-Packard Development Company, L.P. | Steaming media quality assessment system |
US8639796B2 (en) | 2004-12-16 | 2014-01-28 | Hewlett-Packard Development Company, L.P. | Monitoring the performance of a streaming media server using server-side and client-side measurements |
US8750158B2 (en) | 2006-08-22 | 2014-06-10 | Centurylink Intellectual Property Llc | System and method for differentiated billing |
US7779146B2 (en) * | 2006-11-09 | 2010-08-17 | Sharp Laboratories Of America, Inc. | Methods and systems for HTTP streaming using server-side pacing |
US7640358B2 (en) * | 2006-11-09 | 2009-12-29 | Sharp Laboratories Of America, Inc. | Methods and systems for HTTP streaming using an intelligent HTTP client |
US8209728B2 (en) | 2007-08-31 | 2012-06-26 | At&T Intellectual Property I, L.P. | System and method of delivering video content |
US7908362B2 (en) | 2007-12-03 | 2011-03-15 | Velocix Ltd. | Method and apparatus for the delivery of digital data |
CN102171664B (en) | 2008-08-06 | 2014-12-03 | 莫维克网络公司 | Content caching in the radio access network (RAN) |
US8180896B2 (en) * | 2008-08-06 | 2012-05-15 | Edgecast Networks, Inc. | Global load balancing on a content delivery network |
WO2010045109A1 (en) | 2008-10-17 | 2010-04-22 | Azuki Systems, Inc. | Method and apparatus for efficient http data streaming |
US8219711B2 (en) | 2008-11-24 | 2012-07-10 | Juniper Networks, Inc. | Dynamic variable rate media delivery system |
ES2430056T3 (en) | 2009-08-18 | 2013-11-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Method, device and computer program to impose a policy through associated sessions taking into account the usage fee for an associated user |
EP2468067A4 (en) | 2009-08-19 | 2015-09-09 | Opanga Networks Inc | Optimizing media content delivery based on user equipment determined resource metrics |
CN102598628A (en) | 2010-03-15 | 2012-07-18 | 莫维克网络公司 | Adaptive Chunked And Content-aware Pacing Of Multi-media Delivery Over Http Transport And Network Controlled Bit Rate Selection |
US8339947B2 (en) | 2010-07-01 | 2012-12-25 | Verizon Patent And Licensing Inc. | Flow-based proactive connection admission control (CAC) in wireless networks |
US20120158461A1 (en) * | 2010-12-17 | 2012-06-21 | Verizon Patent And Licensing Inc. | Content management and advertisement management |
US20130076654A1 (en) * | 2011-09-27 | 2013-03-28 | Imerj LLC | Handset states and state diagrams: open, closed transitional and easel |
-
2012
- 2012-06-19 US US13/527,397 patent/US8626910B1/en active Active
-
2014
- 2014-01-03 US US14/147,117 patent/US8959212B2/en active Active
- 2014-12-15 US US14/570,025 patent/US9794152B2/en active Active
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10230566B1 (en) | 2012-02-17 | 2019-03-12 | F5 Networks, Inc. | Methods for dynamically constructing a service principal name and devices thereof |
US20140143375A1 (en) * | 2012-04-27 | 2014-05-22 | F5 Networks, Inc. | Methods for optimizing service of content requests and devices thereof |
US10097616B2 (en) * | 2012-04-27 | 2018-10-09 | F5 Networks, Inc. | Methods for optimizing service of content requests and devices thereof |
US20140244648A1 (en) * | 2013-02-27 | 2014-08-28 | Pavlov Media, Inc. | Geographical data storage assignment based on ontological relevancy |
US10951688B2 (en) | 2013-02-27 | 2021-03-16 | Pavlov Media, Inc. | Delegated services platform system and method |
US10601943B2 (en) | 2013-02-27 | 2020-03-24 | Pavlov Media, Inc. | Accelerated network delivery of channelized content |
US10581996B2 (en) | 2013-02-27 | 2020-03-03 | Pavlov Media, Inc. | Derivation of ontological relevancies among digital content |
US10264090B2 (en) * | 2013-02-27 | 2019-04-16 | Pavlov Media, Inc. | Geographical data storage assignment based on ontological relevancy |
US20140282422A1 (en) * | 2013-03-12 | 2014-09-18 | Netflix, Inc. | Using canary instances for software analysis |
US10318399B2 (en) * | 2013-03-12 | 2019-06-11 | Netflix, Inc. | Using canary instances for software analysis |
US9786014B2 (en) | 2013-06-07 | 2017-10-10 | Google Inc. | Earnings alerts |
US9712411B2 (en) * | 2013-06-25 | 2017-07-18 | Netflix, Inc. | Progressive deployment and termination of canary instances for software analysis |
US20140379901A1 (en) * | 2013-06-25 | 2014-12-25 | Netflix, Inc. | Progressive deployment and termination of canary instances for software analysis |
US9225621B2 (en) * | 2013-06-25 | 2015-12-29 | Netflix, Inc. | Progressive deployment and termination of canary instances for software analysis |
US10355950B2 (en) | 2013-06-25 | 2019-07-16 | Netflix, Inc. | Progressive deployment and termination of canary instances for software analysis |
US10187319B1 (en) * | 2013-09-10 | 2019-01-22 | Instart Logic, Inc. | Automatic configuration generation for a proxy optimization server for optimizing the delivery of content of a web publisher |
US10187317B1 (en) | 2013-11-15 | 2019-01-22 | F5 Networks, Inc. | Methods for traffic rate control and devices thereof |
US20150172354A1 (en) * | 2013-12-17 | 2015-06-18 | Limelight Networks, Inc. | Content-delivery transfer for cooperative delivery systems |
US20150365454A1 (en) * | 2014-06-17 | 2015-12-17 | Qualcomm Incorporated | Media processing services on an access node |
US10404809B2 (en) * | 2014-08-29 | 2019-09-03 | Google Llc | Systems and methods for adaptive associative routing for mobile messaging |
US20160065532A1 (en) * | 2014-08-29 | 2016-03-03 | Google Inc. | Systems and methods for adaptive associative routing for mobile messaging |
US10498626B2 (en) | 2014-10-09 | 2019-12-03 | Telefonaktiebolaget Lm Ericsson (Publ) | Method, traffic monitor (TM), request router (RR) and system for monitoring a content delivery network (CDN) |
CN106797330A (en) * | 2014-10-09 | 2017-05-31 | 瑞典爱立信有限公司 | Method, business monitor (TM), request router (RR) and system for Contents for Monitoring delivering network (CDN) |
WO2016055893A1 (en) * | 2014-10-09 | 2016-04-14 | Telefonaktiebolaget L M Ericsson (Publ) | Method, traffic monitor (tm), request router (rr) and system for monitoring a content delivery network (cdn) |
US20230334017A1 (en) * | 2014-10-17 | 2023-10-19 | Zestfinance, Inc. | Api for implementing scoring functions |
US12099470B2 (en) * | 2014-10-17 | 2024-09-24 | Zestfinance, Inc. | API for implementing scoring functions |
US10050912B2 (en) | 2014-10-27 | 2018-08-14 | At&T Intellectual Property I, L.P. | Subscription-based media push service |
US10462081B2 (en) | 2014-10-27 | 2019-10-29 | At&T Intellectual Property I, L.P. | Subscription-based media push service |
US10002058B1 (en) * | 2014-11-26 | 2018-06-19 | Intuit Inc. | Method and system for providing disaster recovery services using elastic virtual computing resources |
US20160182639A1 (en) * | 2014-12-17 | 2016-06-23 | University-Industry Cooperation Group Of Kyung-Hee University | Internet of things network system using fog computing network |
EP3281120A4 (en) * | 2015-04-06 | 2018-11-07 | Level 3 Communications, LLC | Server side content delivery network quality of service |
US10666522B2 (en) | 2015-04-06 | 2020-05-26 | Level 3 Communications, Llc | Server side content delivery network quality of service |
US10389599B2 (en) | 2015-04-06 | 2019-08-20 | Level 3 Communications, Llc | Server side content delivery network quality of service |
US20160344791A1 (en) * | 2015-05-20 | 2016-11-24 | Microsoft Technology Limited, Llc | Network node bandwidth management |
US10911526B2 (en) | 2015-09-10 | 2021-02-02 | Vimmi Communications Ltd. | Content delivery network |
US10432708B2 (en) * | 2015-09-10 | 2019-10-01 | Vimmi Communications Ltd. | Content delivery network |
US20170201571A1 (en) * | 2015-09-10 | 2017-07-13 | Vimmi Communications Ltd. | Content delivery network |
US11470148B2 (en) | 2015-09-10 | 2022-10-11 | Vimmi Communications Ltd. | Content delivery network |
US10938768B1 (en) * | 2015-10-28 | 2021-03-02 | Reputation.Com, Inc. | Local content publishing |
US11706182B2 (en) | 2015-10-28 | 2023-07-18 | Reputation.Com, Inc. | Local content publishing |
US11108662B2 (en) * | 2016-06-15 | 2021-08-31 | Algoblu Holdings Limited | Dynamic switching between edge nodes in autonomous network system |
US20190238434A1 (en) * | 2016-06-15 | 2019-08-01 | Algoblu Holdings Limited | Dynamic switching between edge nodes in autonomous network system |
US10333809B2 (en) * | 2016-06-15 | 2019-06-25 | Algoblu Holdings Limited | Dynamic switching between edge nodes in autonomous network system |
US20170366426A1 (en) * | 2016-06-15 | 2017-12-21 | Algoblu Holdings Limited | Dynamic switching between edge nodes in autonomous network system |
US20180039682A1 (en) * | 2016-08-02 | 2018-02-08 | Blackberry Limited | Electronic device and method of managing data transfer |
US10977273B2 (en) * | 2016-08-02 | 2021-04-13 | Blackberry Limited | Electronic device and method of managing data transfer |
US10182098B2 (en) | 2017-01-31 | 2019-01-15 | Wipro Limited | Method and system for proactively selecting a content distribution network (CDN) for delivering content |
US20180227187A1 (en) * | 2017-02-03 | 2018-08-09 | Prysm, Inc. | Automatic Network Connection Sharing Among Multiple Streams |
US10601768B2 (en) * | 2017-04-10 | 2020-03-24 | Verizon Digital Media Services Inc. | Automated steady state traffic management |
US20180295063A1 (en) * | 2017-04-10 | 2018-10-11 | Verizon Digital Media Services Inc. | Automated Steady State Traffic Management |
US20180367822A1 (en) * | 2017-06-18 | 2018-12-20 | Cisco Technology, Inc. | Abr streaming of panoramic video |
US11409834B1 (en) * | 2018-06-06 | 2022-08-09 | Meta Platforms, Inc. | Systems and methods for providing content |
WO2020050790A1 (en) * | 2018-09-05 | 2020-03-12 | Medianova Internet Hizmetleri Ve Ticaret Anonim Sirketi | Parametric parsing based routing system in content delivery networks |
CN111064997A (en) * | 2018-10-16 | 2020-04-24 | 深圳市云帆加速科技有限公司 | Resource pre-distribution method and device |
WO2020263198A1 (en) * | 2019-06-26 | 2020-12-30 | Medianova Internet Hizmetleri Ve Ticaret Anonim Sirketi | Performance enhanced cdn service |
US11184433B2 (en) | 2020-03-31 | 2021-11-23 | Microsoft Technology Licensing, Llc | Container mobility based on border gateway protocol prefixes |
US20220360512A1 (en) * | 2021-04-15 | 2022-11-10 | At&T Intellectual Property I, L.P. | System for addition and management of ad-hoc network attached compute (nac) resources |
Also Published As
Publication number | Publication date |
---|---|
US20140122711A1 (en) | 2014-05-01 |
US8626910B1 (en) | 2014-01-07 |
US9794152B2 (en) | 2017-10-17 |
US8959212B2 (en) | 2015-02-17 |
US20150100691A1 (en) | 2015-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9794152B2 (en) | Systems and methods for performing localized server-side monitoring in a content delivery network | |
US9391856B2 (en) | End-to-end monitoring and optimization of a content delivery network using anycast routing | |
US8738766B1 (en) | End-to-end monitoring and optimization of a content delivery network using anycast routing | |
US8868834B2 (en) | Efficient cache validation and content retrieval in a content delivery network | |
CN109565501B (en) | Method and apparatus for selecting a content distribution network entity | |
US10194351B2 (en) | Selective bandwidth modification for transparent capacity management in a carrier network | |
KR101578473B1 (en) | Real-time network monitoring and subscriber identification with an on-demand appliance | |
US9398347B2 (en) | Systems and methods for measuring quality of experience for media streaming | |
US9414248B2 (en) | System and methods for estimation and improvement of user, service and network QOE metrics | |
US8934374B2 (en) | Request modification for transparent capacity management in a carrier network | |
CA2982850C (en) | Server side content delivery network quality of service | |
US20150067185A1 (en) | Server-side systems and methods for reporting stream data | |
WO2017125017A1 (en) | Method for adjusting cache content, device, and system | |
US10791026B2 (en) | Systems and methods for adaptive over-the-top content quality of experience optimization | |
CN103945198A (en) | System and method for controlling streaming media route of video monitoring system | |
CN110771122A (en) | Method and network node for enabling a content delivery network to handle unexpected traffic surges | |
US10944808B2 (en) | Server-side reproduction of client-side quality-of-experience | |
RU2454711C1 (en) | Method of distributing load between content delivery network (cdn) servers | |
US8625529B2 (en) | System for and method of dynamic home agent allocation | |
CA2742038C (en) | Systems and methods for measuring quality of experience for media streaming | |
KR20130021729A (en) | System and method to deliver contents using dynamic context in the distributed network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: EDGECAST NETWORKS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LIENTZ, ANDREW;REEL/FRAME:028405/0021 Effective date: 20120619 |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: VERIZON DIGITAL MEDIA SERVICES INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:EDGECAST NETWORKS, INC;REEL/FRAME:038511/0045 Effective date: 20160318 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: EDGECAST INC., VIRGINIA Free format text: CHANGE OF NAME;ASSIGNOR:VERIZON DIGITAL MEDIA SERVICES INC.;REEL/FRAME:059367/0990 Effective date: 20211101 |
|
AS | Assignment |
Owner name: EDGIO, INC., ARIZONA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EDGECAST INC.;REEL/FRAME:061738/0972 Effective date: 20221021 |
|
AS | Assignment |
Owner name: LYNROCK LAKE MASTER FUND LP (LYNROCK LAKE PARTNERS LLC, ITS GENERAL PARTNER), NEW YORK Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:EDGIO, INC.;MOJO MERGER SUB, LLC;REEL/FRAME:065597/0212 Effective date: 20231114 Owner name: U.S. BANK TRUST COMPANY, NATIONAL ASSOCIATION, ARIZONA Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:EDGIO, INC.;MOJO MERGER SUB, LLC;REEL/FRAME:065597/0406 Effective date: 20231114 |
|
AS | Assignment |
Owner name: LYNROCK LAKE MASTER FUND LP (LYNROCK LAKE PARTNERS LLC, ITS GENERAL PARTNER), NEW YORK Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:EDGIO, INC.;MOJO MERGER SUB, LLC;REEL/FRAME:068763/0276 Effective date: 20240823 |