WO2016074323A1 - HTTP scheduling system and method for a content delivery network - Google Patents

HTTP scheduling system and method for a content delivery network

Info

Publication number
WO2016074323A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
server cluster
client
scheduling
cluster
Prior art date
Application number
PCT/CN2014/095485
Other languages
English (en)
French (fr)
Inventor
洪珂
莫小琪
阮兆银
Original Assignee
网宿科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 网宿科技股份有限公司
Priority to US 15/525,042 (granted as US10404790B2)
Priority to EP14905704.4A (granted as EP3211857B1)
Publication of WO2016074323A1

Classifications

    • H04L Transmission of digital information, e.g. telegraphic communication
    • H04L67/1014 Server selection for load balancing based on the content of a request
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L65/40 Support for services or applications
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/1021 Server selection for load balancing based on client or server locations
    • H04L67/1023 Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H04L67/566 Grouping or aggregating service requests, e.g. for unified processing
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • the present invention relates to a content distribution network, and more particularly to an HTTP scheduling system and method for a content distribution network.
  • CDN Content Delivery Network
  • the main task of the CDN is to pass content from the source station to the client as quickly as possible.
  • the basic idea of CDN is to avoid the bottlenecks and links on the Internet that may affect the speed and stability of data transmission, so that content transmission is faster and better.
  • the system adds a global scheduling layer to the existing network architecture and distributes origin content to the network edge closest to the user, so that users can obtain the required content nearby, relieving Internet congestion and improving the response speed of users visiting the website.
  • this solves the slow response experienced by users when the origin's egress bandwidth is small, user traffic is heavy, points of presence are unevenly distributed, carrier networks are complex, and user access bandwidth is small.
  • the load balancing technology of the CDN is mainly divided into a DNS (Domain Name System) scheduling and an HTTP (Hypertext Transfer Protocol) scheduling.
  • DNS scheduling is mainly used for load balancing of pictures and dynamic acceleration type content.
  • HTTP scheduling is based on the 30X response codes of the HTTP protocol and is mainly used for streaming media applications. Since streaming media accounts for a large share of CDN traffic, HTTP scheduling has become an important part of load balancing across CDN service servers and data centers.
  • HTTP scheduling is divided into a central mode and an edge mode.
  • in the central mode, DNS directs client requests to several groups of scheduling machines. These scheduling machines do not serve content. Instead, a decision machine first analyzes the bandwidth data of all CDN data centers and server clusters and reallocates cluster traffic in a timely manner, and the scheduling machines carry out the allocation to achieve load balance. Second, the scheduling machines perform a hash calculation on the requested content and select specific servers, reducing content redundancy between clusters and among servers within a cluster.
  • in the edge mode, DNS assigns client requests to edge server clusters; the scheduling machine in a server cluster judges the state of its cluster and data center and chooses either to serve the request locally or to redirect it to a backup server cluster, thereby achieving load balance.
  • the technical problem to be solved by the present invention is to provide a novel HTTP scheduling system and method.
  • the technical solution adopted by the present invention to solve the above technical problem is to provide an HTTP scheduling system for a content distribution network, including a central decision server, one or more central scheduling servers, and one or more edge scheduling servers.
  • the central decision server is configured to generate a central decision file based on the bandwidth and load information of the global server cluster.
  • the central scheduling server is connected to the central decision server, and the central scheduling server is configured to execute the central decision file.
  • the edge scheduling servers are deployed in their corresponding edge server clusters; each edge scheduling server is configured, upon receiving a client content request, if an alternate server cluster is present, to obtain the bandwidth and load of its edge server cluster and judge accordingly whether the request should be directed to the alternate server cluster.
  • the central decision server sets the weights of the clusters within each of multiple coverage domains to control the traffic of each cluster, where the coverage domains are pre-divided according to the site whose content the client accesses and the location of the client's physical address.
  • the central scheduling server is configured to perform a hash calculation and a weighted random calculation according to the client request and select a preferred server cluster from the service cluster list; if it decides to select an alternate server cluster, it further selects from the server cluster list an alternate server cluster with some remaining service capacity.
  • the one or more edge scheduling servers are configured, upon receiving a client content request, to judge whether the corresponding edge server cluster is overloaded; if it is, to check whether the request carries information about an alternate server cluster: if it does, a redirect address is returned and the client jumps to the alternate server cluster; if it does not, the edge scheduling server serves the request as a proxy between the client and the cache servers in the corresponding edge server cluster; if the corresponding edge server cluster is not overloaded, the edge scheduling server likewise acts as a proxy between the client and the cache servers.
  • the one or more edge scheduling servers are configured to fetch the node's bandwidth data at fixed intervals and compute from it the number of locally allowed connections for the next time period; within that period, the first requests up to that connection count are served by the corresponding edge server cluster, and subsequent requests are redirected to the alternate server cluster.
  • the invention also provides an HTTP scheduling method for a content distribution network, comprising the steps of: generating, at a central decision server, a central decision file according to the bandwidth and load information of the global server clusters; and executing the central decision file at one or more central scheduling servers connected to the central decision server.
  • upon receiving a client content request, a central scheduling server selects a preferred server cluster according to the client's geographic location and the requested content, decides whether to also select an alternate server cluster, and returns the corresponding redirect address to the client.
  • the step of generating a decision file at the central decision server includes: setting the weights of the clusters within each of multiple coverage domains to control the traffic of each cluster, where the coverage domains are pre-divided according to the site and address from which the client accesses the content.
  • the following steps are performed at the central scheduling server: obtaining a service cluster list according to the service group to which the client belongs; performing a hash calculation and a weighted random calculation according to the client request and selecting a preferred server cluster from the service cluster list; and, if an alternate server cluster is to be selected, further selecting from the server cluster list an alternate server cluster with some remaining service capacity.
  • upon receiving a client content request, the one or more edge scheduling servers perform the following steps: judging whether the corresponding edge server cluster is overloaded; if it is, checking whether the request carries information about an alternate server cluster: if it does, returning a redirect address so that the client jumps to the alternate server cluster; if it does not, serving the request with the edge scheduling server as a proxy between the client and the cache servers in the corresponding edge server cluster; and, if the edge server cluster is not overloaded, serving the request with the edge scheduling server as a proxy between the client and the cache servers.
  • in the one or more edge scheduling servers, the node's bandwidth data is fetched at fixed intervals and the number of locally allowed connections for the next time period is computed from it; within that period, the first requests up to that connection count are served by the corresponding edge server cluster, and subsequent requests are redirected to the alternate server cluster.
  • with the above arrangement, the present invention can receive a client request at the central scheduling server and return a preferred server cluster and an alternate server cluster, while the edge server cluster corrects the central scheduler's allocation according to the bandwidth and load of the local server cluster.
  • FIG. 1 illustrates a network implementation environment in accordance with an embodiment of the present invention.
  • FIG. 2 shows a block diagram of a scheduling system in accordance with an embodiment of the present invention.
  • FIG. 3 shows a workflow of a central scheduling module according to an embodiment of the present invention.
  • FIG. 4 shows an edge scheduling module workflow according to an embodiment of the present invention.
  • FIG. 5 shows a workflow of a client module according to an embodiment of the present invention.
  • HTTP scheduling is known to be divided into a central mode and an edge mode. These two modes have their own characteristics and thus have been applied to a certain extent in practice. But these two modes also have their own flaws.
  • the drawback of the central mode is that it must collect CDN platform data over a period of time before scheduling traffic; together with the time taken to decide, this can introduce delay and therefore error.
  • owing to the limitations of HTTP scheduling itself, flow control can only distribute requests proportionally and cannot guarantee that the traffic ratio matches the request ratio. That is, besides its error, the mode also regulates imprecisely.
  • the drawback of the edge mode is that scheduling between clusters is performed via DNS, so content cannot be controlled across clusters; data redundancy is high, wasting storage space and bandwidth. Moreover, the controller of this mode is deployed in the edge cluster, so when the cluster is overloaded there is no guarantee that traffic will be directed to the most suitable cluster.
  • Embodiments of the present invention will describe a novel HTTP scheduling method that provides performance superior to both central mode and edge mode.
  • FIG. 1 illustrates a network implementation environment in accordance with an embodiment of the present invention.
  • a content distribution network 100 typically includes a central decision server 101, a plurality of central dispatch servers 102, and a plurality of edge server clusters 110 that are connected together via the Internet.
  • An edge scheduling server 111 and a cache server 112 are disposed in each edge server cluster 110.
  • the central dispatch server 102 can be directly connected to the central decision server 101 or can be connected to the central decision server 101 via an additional network 103.
  • Many clients 120 can connect to the content distribution network 100 to obtain the requested content.
  • the central decision server 101 is configured to generate a central decision file based on bandwidth, load information, network conditions, and bandwidth cost information of the global server cluster.
  • Each central scheduling server 102 is configured to execute the central decision file: upon receiving a client content request, it selects a preferred server cluster according to the client's geographic location and the requested content and returns to the client a jump URL (Uniform Resource Locator) carrying the preferred server cluster.
  • If required, each central dispatch server 102 can simultaneously select an alternate server cluster and return to the client a jump URL carrying the alternate server cluster.
  • the URL is the address of a standard resource on the Internet.
  • the preferred server cluster is a server cluster that preferentially requests content from client 120.
  • An alternate server cluster is a cluster of servers for client 120 to request content when the preferred server cluster is overloaded or otherwise unable to respond to requests.
  • the edge scheduling server 111 is configured to acquire the bandwidth and load status of the local server cluster (ie, the server cluster where the edge scheduling server 111 is located) upon receiving the client content request, and accordingly determine whether the request should be directed to the standby server cluster.
  • the cache server 112 is responsible for providing the content that the client 120 needs.
  • FIG. 2 shows a block diagram of a scheduling system in accordance with an embodiment of the present invention.
  • a typical scheduling system includes a central decision server 101, a central dispatch server 102, and an edge dispatch server 111, which provide services to the client 120.
  • the client 120 requests and tracks content, saves files, and displays the content.
  • the central decision module 201 can be stored and executed in the central decision server 101.
  • when the central decision module 201 executes, the central decision server 101 is configured to generate a central decision file based on the bandwidth and load information of the global server clusters and transmit it to the central dispatch servers 102.
  • the central scheduling module 202 can be stored and executed in the central dispatch server 102.
  • when the central scheduling module 202 executes, the central dispatch server 102 is configured to execute the central decision file: upon receiving a client content request, it selects a preferred server cluster and an optional alternate server cluster according to the client's geographic location and the requested content, and returns to the client 120 a jump URL carrying the preferred server cluster and the optional alternate server cluster.
  • An edge scheduling module 203 can be stored and executed in the edge scheduling server 111. When the edge scheduling module 203 executes, the edge scheduling server 111 is configured to obtain the bandwidth and load conditions of the local server cluster, and determine whether the request should be directed to the standby server cluster, if there is a standby server cluster at this time.
  • a local load balancing module 204 can also be stored and executed in the edge scheduling server 111.
  • when the local load balancing module 204 executes, the edge scheduling server 111 is configured to act as a proxy for the cache server 112.
  • Each of the servers 101, 102, 111 may include a processor, memory, and storage.
  • the various modules, such as the central decision module 201, the central scheduling module 202, the edge scheduling module 203, and the local load balancing module 204, may be stored as software in storage and loaded into memory for execution by the processor.
  • in this system, after receiving a client request, the central dispatch server returns a preferred server cluster and an alternate server cluster.
  • the edge server cluster then corrects the central scheduler's allocation according to its local bandwidth and load, for example by letting the client's request jump to the alternate server cluster. Through the macro control of central scheduling and the adjustment of edge scheduling, the system can therefore achieve good cost/quality while the edge clusters do not become overloaded.
  • FIG. 3 illustrates a central decision making and scheduling server workflow in accordance with an embodiment of the present invention.
  • the central decision server 101 is responsible for collecting the bandwidth and load of all edge server clusters of the platform, and periodically generating a central decision file containing information such as the weight of the coverage domain.
  • the central decision server 101 sends the file to all of the central dispatch servers 102.
  • a coverage domain is a combination of a geographic area and a user range, divided according to the site whose content the user accesses and the location of the user's address (such as the IP address).
  • a geographical area is, for example, a province or a city.
  • the user range is, for example, a predetermined group of users.
  • a coverage domain can contain multiple edge server clusters, that is, multiple groups. The traffic of each cluster in the coverage domain is controlled by the weight of the cluster.
  • the specific process includes:
  • 1) Predict the bandwidth of each node: fetch the node's most recent bandwidth samples, discard the maximum and minimum values, fit the remaining data to bw = a*t + b by least squares, and substitute the time t' one minute ahead to obtain the predicted bandwidth value BW_pop.
  • 2) Traverse all coverage domains so that the following conditions are satisfied. Definitions: 1. a farm is an edge server cluster, the basic unit of the weight-based scheduling of the central decision server 101; 2. multiple farms belonging to a common area form a pop, and a pop is a network access point comprising a set of server clusters agreed with the carrier.
  • Change in cluster bandwidth: ΔBW_farm = ΔW_farm*BW_cover/ΣW_farm
  • The bandwidth of a network access point is the sum of the bandwidths of its clusters: BW_pop = ΣBW_farm
  • A cluster's new bandwidth value should be no greater than the cluster's service capacity, keeping cluster traffic within its rated upper limit: Capacity_farm >= BW_farm + ΔBW_farm
  • A node's new bandwidth value should be no greater than the cluster bandwidth upper limit: Capacity_pop >= BW_pop + ΣΔBW_farm
  • Under these constraints, the central decision server tries to find a solution that minimizes Σ|ΔBW_pop|, i.e. one that minimizes traffic variation among the clusters of the coverage domain; this keeps node bandwidth stable and reduces the cache server 112 hit-rate loss caused by bandwidth migrating between nodes.
  • Notation: BW_farm denotes the farm bandwidth, BW_cover the coverage-domain bandwidth, W_farm the farm weight, Capacity_farm the farm's rated serviceable bandwidth, Capacity_pop the pop's rated serviceable bandwidth, and Δ the change after the weight adjustment.
  • 3) The result is the weight ratio (i.e. W_farm) of each cluster in each coverage domain.
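The capacity constraints above can be sketched in a few lines. This is a minimal illustration only; the farm names, weights, and bandwidth figures are invented for the example:

```python
# Sketch of the coverage-domain weight adjustment check (hypothetical numbers).
# For each farm: BW is its current bandwidth, W its weight, and
# Capacity its rated serviceable bandwidth (all bandwidths in Mbps).

farms = {
    "farm_a": {"BW": 800.0, "W": 40, "Capacity": 1000.0},
    "farm_b": {"BW": 600.0, "W": 30, "Capacity": 900.0},
    "farm_c": {"BW": 600.0, "W": 30, "Capacity": 700.0},
}
BW_cover = sum(f["BW"] for f in farms.values())  # coverage-domain bandwidth
W_total = sum(f["W"] for f in farms.values())    # ΣW_farm

def apply_weight_change(delta_w):
    """delta_w maps farm -> ΔW_farm. Returns per-farm ΔBW_farm, or None
    if any Capacity_farm >= BW_farm + ΔBW_farm constraint is violated."""
    delta_bw = {name: delta_w.get(name, 0) * BW_cover / W_total
                for name in farms}               # ΔBW_farm = ΔW*BW_cover/ΣW
    for name, f in farms.items():
        if f["Capacity"] < f["BW"] + delta_bw[name]:
            return None                          # adjustment not feasible
    return delta_bw

# Shift 5 weight points from the loaded farm_c to farm_a:
change = apply_weight_change({"farm_a": 5, "farm_c": -5})
```

A real decision server would search over such candidate weight changes for the feasible one minimizing total traffic variation, as described above.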
  • the client 120 sends the original request URL.
  • the central dispatch server 102 will get the address of the client and the requested content.
  • the central dispatch server 102 obtains a list of service clusters by determining the service group to which the customer belongs.
  • the central dispatch server 102 performs a hash (HASH) calculation and a weight random calculation based on the URL, and selects a preferred server cluster in the service cluster list.
  • in the alternate service cluster selection judgment, if an alternate service cluster needs to be selected, an alternate server cluster with some remaining service capacity is selected from the server cluster list.
  • the central dispatch server 102 generates a new jump URL based on the original request URL, the preferred server cluster, and the alternate server cluster, and returns it to the client 120.
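The patent does not spell out how the jump URL encodes the two clusters. One plausible sketch, assuming an invented query-parameter scheme (`alt`) and example IP addresses:

```python
from urllib.parse import urlsplit, quote

def build_jump_url(original_url, preferred_ip, alternate_ip=None):
    """Rewrite the request so the host is the preferred cluster's IP;
    the alternate cluster, if any, rides along as a query parameter.
    (The 'alt' parameter name is an assumption for illustration.)"""
    parts = urlsplit(original_url)
    path = parts.path or "/"
    query = parts.query
    if alternate_ip:
        extra = "alt=" + quote(alternate_ip)
        query = query + "&" + extra if query else extra
    target = "http://" + preferred_ip + path
    return target + "?" + query if query else target

url = build_jump_url("http://video.example.com/movie.flv",
                     "203.0.113.10", alternate_ip="198.51.100.20")
# The central scheduler would place such a URL in a 302 Location header.
```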
  • the preferred and alternate server cluster IPs may be determined, for example, as follows: 1) determine the coverage domain from the requesting IP and the domain name; 2) track the URL's access statistics and determine whether it is a hotspot URL; 3) if the URL is a hotspot URL, pick the preferred IP at random according to the weight ratio among the clusters in the coverage domain; if it is a non-hotspot URL, compute a hash of the URL to obtain the preferred IP;
  • 4) if the URL is a hotspot URL, remove the cluster containing the preferred IP and randomly pick an alternate IP from the remaining clusters according to the weight ratio; if it is a non-hotspot URL, no alternate cluster is selected.
  • considering that hotspot URLs are accessed relatively often, the cache servers 112 generally receive many further accesses after caching them, so load balancing between clusters can be ensured by scheduling the hotspot URLs. Non-hotspot URLs are accessed less often; hashing the URL keeps the assignment stable and reduces content duplication among the cache servers 112.
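The hotspot/non-hotspot selection rule described above can be sketched as follows. The cluster names and weights are invented, and MD5 stands in for whichever hash the scheduler actually uses:

```python
import hashlib
import random

# Hypothetical coverage domain: (cluster name, weight) pairs.
clusters = [("cluster_a", 50), ("cluster_b", 30), ("cluster_c", 20)]

def pick_preferred(url, is_hotspot):
    if is_hotspot:
        # Hotspot URL: weighted random choice keeps traffic in
        # proportion to the cluster weights of the coverage domain.
        names, weights = zip(*clusters)
        return random.choices(names, weights=weights)[0]
    # Non-hotspot URL: a stable hash pins the URL to one cluster,
    # reducing duplicate copies across cache servers.
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return clusters[h % len(clusters)][0]

def pick_alternate(preferred, is_hotspot):
    if not is_hotspot:
        return None  # non-hotspot URLs get no alternate cluster
    # Remove the preferred cluster, then re-draw by weight.
    rest = [(n, w) for n, w in clusters if n != preferred]
    names, weights = zip(*rest)
    return random.choices(names, weights=weights)[0]
```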
  • FIG. 4 shows an edge scheduling server workflow according to an embodiment of the present invention.
  • the client 120 sends a request, which arrives via the redirect from the central dispatch server 102.
  • the edge scheduling server 111 gets the request.
  • the edge scheduling server 111 determines whether the local server cluster is overloaded. If so, in step 404 the edge scheduling server 111 further determines whether the request carries alternate-IP information; if it does, the jump URL is returned in step 405 and the client jumps to the alternate server cluster; if it does not, in step 406 the edge dispatch server 111 serves as a proxy between the client 120 and the local cache server 112. If the local server cluster is not overloaded, the edge dispatch server 111 serves as a proxy between the client 120 and the local cache server 112.
  • the edge scheduler's second-jump judgment works as follows:
  • the edge scheduling server 111 fetches the node's bandwidth data BW once per interval (for example, every 10 s) and substitutes it into the following formula:
  • Request_Allow = max(0, Capacity - BW)/BW_Per_Request
  • Capacity is the bandwidth upper limit of the node, and BW_Per_Request denotes the bandwidth consumed per request.
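A minimal sketch of this judgment, assuming the 10 s interval above and invented bandwidth figures (`BW_Per_Request` is taken here to mean the average bandwidth one request consumes):

```python
def allowed_connections(capacity, bw, bw_per_request):
    """Request_Allow = max(0, Capacity - BW) / BW_Per_Request."""
    return int(max(0.0, capacity - bw) / bw_per_request)

class EdgeThrottle:
    """Serve the first Request_Allow requests of each interval locally;
    redirect the rest to the alternate cluster."""
    def __init__(self, capacity, bw_per_request):
        self.capacity = capacity
        self.bw_per_request = bw_per_request
        self.quota = 0

    def on_bandwidth_sample(self, bw):
        # Called once per interval, e.g. every 10 s.
        self.quota = allowed_connections(self.capacity, bw,
                                         self.bw_per_request)

    def serve_locally(self):
        if self.quota > 0:
            self.quota -= 1
            return True   # serve from the local cluster
        return False      # jump to the alternate cluster

t = EdgeThrottle(capacity=1000.0, bw_per_request=2.0)
t.on_bandwidth_sample(990.0)   # node nearly full: 10 Mbps headroom left
decisions = [t.serve_locally() for _ in range(7)]
```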
  • FIG. 5 shows a client workflow of an embodiment of the present invention.
  • the client 120 sends a request to the central dispatch server 102.
  • the client 120 obtains the 302 jump response returned by the central dispatch server 102.
  • the client 120 sends a request after the jump to the edge dispatch server 111.
  • the edge dispatch server 111 returns the content directly to the client 120, which saves it locally or displays it in its interface.

Abstract

The present invention relates to an HTTP scheduling system for a content delivery network, comprising a central decision server, one or more central scheduling servers, and one or more edge scheduling servers. The central decision server is configured to generate a central decision file based on the bandwidth and load information of the global server clusters. The central scheduling servers are connected to the central decision server and configured to execute the central decision file: upon receiving a client content request, a central scheduling server selects a preferred server cluster and an alternate server cluster according to the client's geographic location and the requested content, and returns to the client a jump address carrying the preferred and alternate server clusters. The edge scheduling servers are deployed in their corresponding edge server clusters; upon receiving a client content request, each edge scheduling server obtains the bandwidth and load of its edge server cluster and judges accordingly whether the request should be directed to the alternate server cluster.

Description

HTTP scheduling system and method for a content delivery network

Technical Field
The present invention relates to content delivery networks, and more particularly to an HTTP scheduling system and method for a content delivery network.
Background Art
With the rapid development of Internet devices and ever-growing traffic, CDNs have become increasingly large in scale, and load balancing has become an important means of controlling cost.
CDN stands for Content Delivery Network. The main task of a CDN is to deliver content from the origin site to the client as quickly as possible. The basic idea of a CDN is to avoid, as far as possible, the bottlenecks and links on the Internet that may affect the speed and stability of data transmission, so that content is delivered faster and more reliably. By placing edge node servers throughout the network to form a content delivery network, a user's access request can be redirected in real time to the nearest and best edge node, based on aggregate information such as network traffic, the load of each edge node, and the distance and response time to the user. The system adds a global scheduling layer on top of the existing network architecture and distributes origin content to the network edge closest to users, so that users can obtain the required content nearby. This relieves congestion on the Internet, improves the response speed of users visiting the website, and solves the slow response caused by small origin egress bandwidth, heavy user traffic, unevenly distributed points of presence, complex carrier networks, and small user access bandwidth.
Current CDN load balancing techniques fall mainly into DNS (Domain Name System) scheduling and HTTP (Hypertext Transfer Protocol) scheduling. DNS scheduling is mainly used for load balancing of images and dynamically accelerated content. HTTP scheduling is based on the 30X response codes of the HTTP protocol and is mainly used for streaming media applications. Since streaming media accounts for a large share of CDN traffic, HTTP scheduling has become an important part of load balancing across CDN service servers and data centers.
HTTP scheduling is divided into a central mode and an edge mode. In the central mode, DNS directs client requests to several groups of scheduling machines. These scheduling machines do not serve content. Instead, a decision machine first analyzes the bandwidth data of all CDN data centers and server clusters and accordingly reallocates cluster traffic in a timely manner, and the scheduling machines carry out the allocation so that the clusters reach load balance. Second, the scheduling machines perform a hash calculation on the requested content and select specific servers, reducing content redundancy between clusters and among servers within a cluster. In the edge mode, DNS assigns client requests to edge server clusters, and the scheduling machine within a server cluster judges the state of its cluster and data center and chooses either to serve the request locally or to redirect it to a backup server cluster, thereby achieving load balance.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a novel HTTP scheduling system and method.
The technical solution adopted by the present invention to solve the above problem is an HTTP scheduling system for a content delivery network, comprising a central decision server, one or more central scheduling servers, and one or more edge scheduling servers. The central decision server is configured to generate a central decision file based on the bandwidth and load information of the global server clusters. The central scheduling servers are connected to the central decision server and configured to execute the central decision file: upon receiving a client content request, a central scheduling server selects a preferred server cluster according to the client's geographic location and the requested content, decides whether to also select an alternate server cluster, and returns to the client a jump address carrying the preferred server cluster and, when an alternate server cluster has been selected, a jump address carrying the alternate server cluster as well. The edge scheduling servers are deployed in their corresponding edge server clusters; upon receiving a client content request, if an alternate server cluster is present, an edge scheduling server obtains the bandwidth and load of its edge server cluster and judges accordingly whether the request should be directed to the alternate server cluster.
In an embodiment of the invention, the central decision server sets the weights of the clusters within each of multiple coverage domains to control the traffic of each cluster, where the coverage domains are pre-divided according to the site whose content the client accesses and the location of the client's physical address.
In an embodiment of the invention, the central scheduling server is configured to perform a hash calculation and a weighted random calculation according to the client request and select a preferred server cluster from the service cluster list; if it decides to select an alternate server cluster, it further selects from the server cluster list an alternate server cluster with some remaining service capacity.
In an embodiment of the invention, the one or more edge scheduling servers are configured, upon receiving a client content request, to judge whether the corresponding edge server cluster is overloaded. If it is, the edge scheduling server checks whether the request carries information about an alternate server cluster: if it does, a jump address is returned and the client jumps to the alternate server cluster; if it does not, the edge scheduling server serves the request as a proxy between the client and the cache servers in the corresponding edge server cluster. If the corresponding edge server cluster is not overloaded, the edge scheduling server likewise acts as a proxy between the client and the cache servers.
In an embodiment of the invention, the one or more edge scheduling servers are configured to fetch the node's bandwidth data at fixed intervals and compute from it the number of locally allowed connections for the next time period; within that period, the first requests up to that connection count are served by the corresponding edge server cluster, and subsequent requests are again redirected to the alternate server cluster.
The present invention also provides an HTTP scheduling method for a content delivery network, comprising the steps of: generating, at a central decision server, a central decision file based on the bandwidth and load information of the global server clusters; executing the central decision file at one or more central scheduling servers connected to the central decision server, which, upon receiving a client content request, select a preferred server cluster according to the client's geographic location and the requested content, decide whether to also select an alternate server cluster, and return to the client a jump address carrying the preferred server cluster and, when an alternate server cluster has been selected, a jump address carrying the alternate server cluster as well; and, at one or more edge scheduling servers deployed in the corresponding edge server clusters and connected to the one or more central scheduling servers, upon receiving a client content request, if an alternate server cluster is present, obtaining the bandwidth and load of the corresponding edge server cluster and judging whether the request should be directed to the alternate server cluster.
In an embodiment of the invention, the step of generating the decision file at the central decision server comprises: setting the weights of the clusters within each of multiple coverage domains to control the traffic of each cluster, where the coverage domains are pre-divided according to the site and address from which the client accesses the content.
In an embodiment of the invention, the following steps are performed at the central scheduling server: obtaining a service cluster list by determining the service group to which the client belongs; performing a hash calculation and a weighted random calculation according to the client request and selecting a preferred server cluster from the service cluster list; and, if an alternate server cluster is to be selected, further selecting from the server cluster list an alternate server cluster with some remaining service capacity.
In an embodiment of the invention, the one or more edge scheduling servers perform the following steps upon receiving a client content request: judging whether the corresponding edge server cluster is overloaded; if it is, checking whether the request carries information about an alternate server cluster: if it does, returning a jump address so that the client jumps to the alternate server cluster; if it does not, serving the request with the edge scheduling server acting as a proxy between the client and the cache servers in the corresponding edge server cluster; and, if the edge server cluster is not overloaded, serving the request with the edge scheduling server acting as a proxy between the client and the cache servers.
In an embodiment of the invention, the one or more edge scheduling servers fetch the node's bandwidth data at fixed intervals and compute from it the number of locally allowed connections for the next time period; within that period, the first requests up to that connection count are served by the corresponding edge server cluster, and subsequent requests are again redirected to the alternate server cluster.
By adopting the above technical solution, the present invention can receive client requests at the central scheduling server and return a preferred server cluster and an alternate server cluster, while the edge server cluster corrects the central scheduler's allocation according to the bandwidth and load of the local server cluster. Through the macro control of central scheduling and the adjustment of edge scheduling, the whole system can achieve good cost/quality while the edge clusters do not become overloaded.
Brief Description of the Drawings
The features and performance of the present invention are further described by the following embodiments and their accompanying drawings.
FIG. 1 shows a network implementation environment according to an embodiment of the invention.
FIG. 2 shows a block diagram of a scheduling system according to an embodiment of the invention.
FIG. 3 shows the workflow of a central scheduling module according to an embodiment of the invention.
FIG. 4 shows the workflow of an edge scheduling module according to an embodiment of the invention.
FIG. 5 shows the workflow of a client module according to an embodiment of the invention.
Embodiments of the Invention
HTTP scheduling is known to be divided into a central mode and an edge mode. The two modes have their own characteristics and have therefore each seen some use in practice, but each also has its own defects. The defect of the central mode is that, before traffic can be scheduled, the mode must collect CDN platform data over a period of time; together with the decision time, this can introduce delay and therefore error. Moreover, owing to the limitations of HTTP scheduling itself, flow control can only guarantee that requests are distributed proportionally; it cannot guarantee that the traffic ratio matches the request ratio. The mode therefore suffers both from error and from imprecise regulation. The defect of the edge mode is that scheduling between clusters is done via DNS, so content cannot be controlled across clusters; data redundancy is high, wasting storage space and bandwidth resources. Furthermore, the controller of this mode is deployed in the edge cluster, so when the cluster is overloaded there is no guarantee that traffic will be directed to the most suitable cluster.
The embodiments of the present invention describe a novel HTTP scheduling method that outperforms both the central mode and the edge mode.
FIG. 1 shows a network implementation environment according to an embodiment of the invention. Referring to FIG. 1, a content delivery network 100 generally includes a central decision server 101, multiple central scheduling servers 102, and multiple edge server clusters 110, all connected via the Internet. Each edge server cluster 110 contains an edge scheduling server 111 and a cache server 112. A central scheduling server 102 may be connected to the central decision server 101 directly, or through an additional network 103.
Many clients 120 can connect to the content delivery network 100 to obtain requested content.
The central decision server 101 is configured to generate a central decision file based on the bandwidth, load information, network conditions, and bandwidth cost information of the global server clusters.
Each central scheduling server 102 is configured to execute the central decision file: upon receiving a client content request, it selects a preferred server cluster according to the client's geographic location and the requested content and returns to the client a jump URL (Uniform Resource Locator) carrying the preferred server cluster. If required, each central scheduling server 102 can simultaneously select an alternate server cluster and return to the client a jump URL carrying the alternate server cluster. A URL is the address of a standard resource on the Internet. The preferred server cluster is the server cluster from which the client 120 requests content first. The alternate server cluster is the server cluster from which the client 120 requests content when the preferred cluster is overloaded or otherwise unable to respond.
The edge scheduling server 111 is configured, upon receiving a client content request, to obtain the bandwidth and load of the local server cluster (i.e. the server cluster where the edge scheduling server 111 resides) and judge accordingly whether the request should be directed to the alternate server cluster. The cache server 112 is responsible for providing the content the client 120 needs.
FIG. 2 shows a block diagram of a scheduling system according to an embodiment of the invention. Referring to FIG. 2, a typical scheduling system includes a central decision server 101, a central scheduling server 102, and an edge scheduling server 111, which serve a client 120. The client 120 requests and tracks content, saves files, and displays the content. A central decision module 201 may be stored and executed in the central decision server 101. When executed, the central decision module 201 configures the central decision server 101 to generate a central decision file based on the bandwidth and load information of the global server clusters and transmit it to the central scheduling servers 102. A central scheduling module 202 may be stored and executed in the central scheduling server 102. When executed, the central scheduling module 202 configures the central scheduling server 102 to execute the central decision file: upon receiving a client content request, it selects a preferred server cluster and an optional alternate server cluster according to the client's geographic location and the requested content, and returns to the client 120 a jump URL carrying the preferred server cluster and the optional alternate server cluster. An edge scheduling module 203 may be stored and executed in the edge scheduling server 111. When executed, the edge scheduling module 203 configures the edge scheduling server 111 to obtain the bandwidth and load of the local server cluster and judge whether the request should be directed to the alternate server cluster, if one exists.
A local load balancing module 204 may also be stored and executed in the edge scheduling server 111. When executed, it configures the edge scheduling server 111 to act as a proxy for the cache server 112.
各个服务器101、102、111可包括处理器、内存(Memory)和存储器(Storage)。各个模块,如中心决策模块201、中心调度模块202、边缘调度模块203和本地负载均衡模块204可以以软件形式储存在存储器中,并可载入到内存中,由处理器执行。
在本系统中,中心调度服务器获取客户请求后,返回了首选服务器集群和备选服务器集群。在作为首选服务器集群的边缘服务器集群中,会根据本地服务器集群的带宽和负载,对中心调度的分配进行校正,例如让客户请求跳转到备选服务器集群。因而本系统通过中心调度的宏观控制和边缘调度的调整,整个系统可以达到成本/质量较优的同时,边缘集群不会超负荷。
FIG. 3 shows the workflow of the central decision and scheduling servers according to an embodiment of the present invention. Referring to FIG. 3, in step 301 the central decision server 101 collects the bandwidth and load of all edge server clusters on the platform and periodically generates a central decision file containing, among other information, the weights for each coverage domain. In step 302, the central decision server 101 sends the file to all central scheduling servers 102. A coverage domain is a combination of a geographic region and a user scope into which traffic is partitioned according to the site whose content the client accesses and the location of the client's address (e.g., IP address). A geographic region covers, for example, a province or a city. A user scope is, for example, a predetermined user group. A coverage domain may contain multiple edge server clusters, i.e., multiple groups. The traffic of each cluster in a coverage domain is controlled through the cluster's weight.
The specific procedure is as follows:
1) Predict a node's bandwidth with the following method:
Obtain the bandwidth samples for the node's most recent time points, discard the largest and smallest values, and use the least-squares method to fit the remaining data to the formula:
bw = a*t + b
Compute a and b, then substitute the time t' one minute ahead to obtain the predicted bandwidth value BW_pop.
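The prediction step above can be sketched in Python; the function name `predict_bandwidth`, the list-of-`(t, bw)` sample format, and dropping exactly one maximum and one minimum sample are illustrative assumptions, while the closed-form least-squares solution for a and b is standard:

```python
def predict_bandwidth(samples, t_future):
    """Fit bw = a*t + b by least squares and extrapolate to t_future.

    samples: list of (t, bw) pairs for the node's most recent time points.
    As the description suggests, the largest and smallest bandwidth
    readings are discarded before fitting.
    """
    trimmed = sorted(samples, key=lambda p: p[1])[1:-1]
    n = len(trimmed)
    sum_t = sum(t for t, _ in trimmed)
    sum_bw = sum(bw for _, bw in trimmed)
    sum_tt = sum(t * t for t, _ in trimmed)
    sum_tbw = sum(t * bw for t, bw in trimmed)
    # Closed-form least-squares estimates of the slope a and intercept b.
    a = (n * sum_tbw - sum_t * sum_bw) / (n * sum_tt - sum_t ** 2)
    b = (sum_bw - a * sum_t) / n
    return a * t_future + b
```

With t' set one minute ahead, the returned value plays the role of BW_pop in the decision procedure.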
2) Traverse all coverage domains so that the following conditions are satisfied:
Definitions: 1. a farm is an edge server cluster and is the basic unit of weight scheduling by the central decision server 101; 2. multiple farms belonging to the same region form a pop, where a pop is a network access point, agreed with the carrier, consisting of a set of server clusters;
Change in a cluster's bandwidth:
ΔBW_farm = ΔW_farm * BW_cover / ΣW_farm
The bandwidth of a network access point is the sum of the bandwidths of its member clusters:
BW_pop = ΣBW_farm
A cluster's new bandwidth must not exceed the cluster's service capacity, keeping cluster traffic within its rated ceiling:
Capacity_farm >= BW_farm + ΔBW_farm
A node's new bandwidth must not exceed the pop's bandwidth ceiling:
Capacity_pop >= BW_pop + ΣΔBW_farm
Under the above constraints, the central decision server tries to find a solution minimizing Σ|ΔBW_pop|, i.e., minimizing the traffic variation across the clusters of a coverage domain. This keeps node bandwidth stable and reduces the drop in cache server 112 hit rate that bandwidth migration between nodes would cause.
Notation: BW_farm denotes the farm bandwidth, BW_cover the coverage-domain bandwidth, W_farm the farm weight, Capacity_farm the farm's rated serviceable bandwidth, Capacity_pop the pop's rated serviceable bandwidth, and Δ the change after weight adjustment;
3) Finally, obtain each cluster's weight ratio (i.e., W_farm) in each coverage domain.
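A minimal sketch of checking the two capacity constraints for a proposed weight change follows; the dict-based farm records, the function name `change_is_feasible`, and the separate `pop_capacity` map are illustrative assumptions (the patent text does not fix data structures, and a full implementation would additionally search for the ΔW that minimizes Σ|ΔBW_pop|):

```python
def change_is_feasible(farms, bw_cover, pop_capacity):
    """Check Capacity_farm >= BW_farm + ΔBW_farm for every farm and
    Capacity_pop >= BW_pop + ΣΔBW_farm for every pop.

    farms: list of dicts with keys
        'w'  (current weight W_farm), 'dw' (proposed change ΔW_farm),
        'bw' (current bandwidth BW_farm), 'capacity' (Capacity_farm),
        'pop' (id of the network access point the farm belongs to).
    bw_cover: coverage-domain bandwidth BW_cover.
    pop_capacity: dict mapping pop id to Capacity_pop.
    """
    total_w = sum(f['w'] for f in farms)
    pop_bw = {}      # BW_pop: sum of member farm bandwidths
    pop_delta = {}   # ΣΔBW_farm per pop
    for f in farms:
        delta = f['dw'] * bw_cover / total_w   # ΔBW_farm
        if f['bw'] + delta > f['capacity']:
            return False                        # farm over its rated ceiling
        pop_bw[f['pop']] = pop_bw.get(f['pop'], 0.0) + f['bw']
        pop_delta[f['pop']] = pop_delta.get(f['pop'], 0.0) + delta
    return all(pop_bw[p] + pop_delta[p] <= pop_capacity[p] for p in pop_bw)
```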
In step 303, the client 120 sends the original request URL.
In step 304, the central scheduling server 102 obtains the client's address and the requested content.
In step 305, the central scheduling server 102 determines the service group the client belongs to and obtains the serving-cluster list.
In step 306, the central scheduling server 102 performs a hash (HASH) computation on the URL and a weighted random computation, and selects a preferred server cluster from the serving-cluster list.
In step 307, it decides whether a backup serving cluster is needed; if so, it selects a backup server cluster from the server-cluster list, ensuring that the backup server cluster has some spare service capacity.
In step 308, the central scheduling server 102 generates a new redirect URL from the original request URL, the preferred server cluster, and the backup server cluster, and returns it to the client 120.
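The patent does not specify how the redirect URL encodes the two clusters; the sketch below is one illustrative scheme in which the preferred cluster replaces the host and the backup cluster travels in a hypothetical `backup` query parameter:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def build_redirect_url(original_url, preferred_ip, backup_ip=None):
    """Rewrite the original request URL so it points at the preferred
    cluster, optionally carrying the backup cluster for the edge
    scheduler to use on overload."""
    parts = urlsplit(original_url)
    query = dict(parse_qsl(parts.query))
    if backup_ip:
        query['backup'] = backup_ip     # hypothetical parameter name
    return urlunsplit((parts.scheme, preferred_ip, parts.path,
                       urlencode(query), parts.fragment))
```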
The preferred and backup server cluster IPs may be determined, for example, as follows:
1) Determine the coverage domain from the IP and domain name of the request;
2) Collect the URL's access statistics and determine whether the URL is a hot URL;
3) If the URL is a hot URL, pick the preferred IP at random according to the weight ratios among the clusters in the coverage domain; if it is not, compute a hash of the URL to obtain the preferred IP;
4) If the URL is a hot URL, remove the cluster holding the preferred IP and then, according to the weight ratios, randomly pick another IP from the candidate backup server clusters;
If the URL is not a hot URL, no backup cluster is selected.
Here, since a hot URL is accessed relatively often, it will generally be accessed many more times after the cache servers 112 have cached it, so load balancing between clusters can be maintained by scheduling the hot URLs. Non-hot URLs are accessed less often; hashing them keeps the assignment stable and reduces content duplication across the cache servers 112.
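The hot/non-hot selection rule can be sketched as follows; MD5 as the hash function, `random.choices` for the weighted draw, and the `(ip, weight)` tuples are illustrative choices not mandated by the text:

```python
import hashlib
import random

def pick_clusters(url, clusters, hot, rng=random):
    """clusters: list of (ip, weight) pairs for the coverage domain.

    Returns (preferred_ip, backup_ip); backup_ip is None for non-hot URLs.
    """
    if not hot:
        # Non-hot URL: hashing keeps the assignment stable so the
        # content is not duplicated across clusters.
        idx = int(hashlib.md5(url.encode()).hexdigest(), 16) % len(clusters)
        return clusters[idx][0], None
    # Hot URL: weighted random pick of the preferred cluster.
    ips = [ip for ip, _ in clusters]
    weights = [w for _, w in clusters]
    preferred = rng.choices(ips, weights=weights, k=1)[0]
    # Backup: remove the preferred cluster, then pick again by weight.
    rest = [(ip, w) for ip, w in clusters if ip != preferred]
    if not rest:
        return preferred, None
    backup = rng.choices([ip for ip, _ in rest],
                         weights=[w for _, w in rest], k=1)[0]
    return preferred, backup
```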
FIG. 4 shows the workflow of the edge scheduling server according to an embodiment of the present invention. Referring to FIG. 4, in step 401 the client 120 sends a request resulting from the central scheduling server 102's redirect. In step 402, the edge scheduling server 111 receives the request. In step 403, the edge scheduling server 111 determines whether the local server cluster is overloaded; if it is, in step 404 the edge scheduling server 111 further determines whether the request carries backup IP information. If it does, in step 405 a redirect URL is returned that sends the client to the backup server cluster; if it does not, in step 406 the edge scheduling server 111 serves the request as a proxy between the client 120 and the local cache servers 112. If the local server cluster is not overloaded, the edge scheduling server 111 likewise serves the request as a proxy between the client 120 and the local cache servers 112.
The edge scheduler decides whether to redirect again as follows:
1) Every fixed interval (denoted interval, e.g., 10 s), the edge scheduling server 111 fetches the node's bandwidth data BW and substitutes it into the formula:
Request_Allow = max(0, Capacity - BW) / BW_Per_Request
(where Capacity is the node's bandwidth ceiling)
to obtain Request_Allow, the number of connections allowed locally during the next interval.
2) Obtain the local server cluster's load Load_current and load threshold Load_max. If Load_current >= Load_max, all requests in the next interval are redirected to the backup IP.
3) In the next interval, the first Request_Allow requests are served locally, and later requests are redirected again to the backup IP.
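The three steps can be condensed into a small sketch; the function names and the integer truncation of Request_Allow are assumptions for illustration:

```python
def plan_next_interval(capacity, bw, bw_per_request,
                       load_current, load_max):
    """Number of requests the local cluster may serve in the next interval.

    Returns 0 when the load threshold is reached, so every request is
    redirected to the backup IP.
    """
    if load_current >= load_max:
        return 0
    # Request_Allow = max(0, Capacity - BW) / BW_Per_Request
    return int(max(0, capacity - bw) / bw_per_request)

def should_serve_locally(request_index, request_allow):
    """The first Request_Allow requests (0-based index) are served
    locally; later ones are redirected again to the backup IP."""
    return request_index < request_allow
```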
FIG. 5 shows the client workflow according to an embodiment of the present invention. Referring to FIG. 5, in step 501 the client 120 sends a request to the central scheduling server 102. In step 502, the client 120 receives the 302 redirect response returned by the central scheduling server 102. In step 503, the client 120 sends the redirected request to the edge scheduling server 111. In step 504, if the response received is another 302 redirect, the process continues from step 505. In step 506, the client receives the content returned directly by the edge scheduling server 111 and saves it locally or displays it in the interface.
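The client loop might look like the sketch below; the injected `http_get` callable and the `('302', url)` / `('200', body)` response tuples are testing conveniences rather than anything specified by the patent:

```python
def fetch_content(url, http_get, max_hops=5):
    """Follow 302 redirects (central scheduler, then possibly the edge
    scheduler) until some server returns the content itself.

    http_get(url) must return either ('302', redirect_url)
    or ('200', body).
    """
    for _ in range(max_hops):
        status, payload = http_get(url)
        if status == '302':
            url = payload            # follow the redirect
            continue
        return payload               # content: save locally or display
    raise RuntimeError('too many redirects')
```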

Claims (10)

  1. An HTTP scheduling system for a content delivery network, comprising:
    a central decision server configured to generate a central decision file according to the bandwidth and load information of the global server clusters;
    one or more central scheduling servers connected to the central decision server, each central scheduling server being configured to execute the central decision file and, on receiving a client content request, to select a preferred server cluster according to the client's geographic location and the requested content, decide whether to also select a backup server cluster, and then return to the client a redirect address carrying the preferred server cluster and, when a backup server cluster has been selected, also return to the client a redirect address carrying the backup server cluster;
    one or more edge scheduling servers disposed in corresponding edge server clusters, each edge scheduling server being configured, on receiving a client content request and when a backup server cluster exists, to obtain the bandwidth and load of the corresponding edge server cluster and decide accordingly whether the request should be directed to the backup server cluster.
  2. The HTTP scheduling system for a content delivery network of claim 1, wherein the central decision server sets the weights of the clusters within each of a plurality of coverage domains to control the traffic of each cluster, the coverage domains being pre-partitioned according to the sites whose content clients access and the locations of the clients' physical addresses.
  3. The HTTP scheduling system for a content delivery network of claim 1, wherein the central scheduling server is configured to perform a hash computation and a weighted random computation on the client request to select a preferred server cluster from the serving-cluster list, and, if it decides to select a backup server cluster, to further select from the server-cluster list a backup server cluster with some spare service capacity.
  4. The HTTP scheduling system for a content delivery network of claim 1, wherein the one or more edge scheduling servers are configured, on receiving a client content request, to determine whether the corresponding edge server cluster is overloaded; if so, to determine whether the request carries backup server cluster information; if it does, to return a redirect address that redirects the client to the backup server cluster; if it does not, to serve the request with the edge scheduling server acting as a proxy between the client and the cache servers in the corresponding edge server cluster; and, if the corresponding edge server cluster is not overloaded, to have the edge scheduling server act as a proxy between the client and the cache servers.
  5. The HTTP scheduling system for a content delivery network of claim 1, wherein the one or more edge scheduling servers are configured to fetch the node's bandwidth data at fixed intervals and compute from it the number of connections allowed locally in the next interval; during that interval, the first requests, up to that connection count, are served by the corresponding edge server cluster, and later requests are redirected again to the backup server cluster.
  6. An HTTP scheduling method for a content delivery network, comprising the following steps:
    generating, at a central decision server, a central decision file according to the bandwidth, load information, network conditions, and cost information of the global server clusters;
    executing the central decision file at one or more central scheduling servers connected to the central decision server: on receiving a client content request, selecting a preferred server cluster according to the client's geographic location and the requested content, deciding whether to also select a backup server cluster, and then returning to the client a redirect address carrying the preferred server cluster and, when a backup server cluster has been selected, also returning to the client a redirect address carrying the backup server cluster;
    at one or more edge scheduling servers disposed in corresponding edge server clusters and connected to the one or more central scheduling servers: on receiving a client content request and when a backup server cluster exists, obtaining the bandwidth and load of the corresponding edge server cluster and deciding whether the request should be directed to the backup server cluster.
  7. The HTTP scheduling method for a content delivery network of claim 6, wherein the step of generating the decision file at the central decision server comprises: setting the weights of the clusters within each of a plurality of coverage domains to control the traffic of each cluster, the coverage domains being pre-partitioned according to the sites whose content clients access and the locations of the clients' physical addresses.
  8. The HTTP scheduling method for a content delivery network of claim 6, wherein the following steps are performed at the central scheduling server:
    determining the service group the client belongs to and obtaining the serving-cluster list;
    performing a hash computation and a weighted random computation on the client request, and selecting a preferred server cluster from the serving-cluster list;
    if a backup server cluster is to be selected, further selecting from the server-cluster list a backup server cluster with some spare service capacity.
  9. The HTTP scheduling method for a content delivery network of claim 6, wherein the one or more edge scheduling servers perform the following steps on receiving a client content request:
    determining whether the corresponding edge server cluster is overloaded;
    if the edge server cluster is overloaded, determining whether the request carries backup server cluster information: if it does, returning a redirect address that redirects the client to the backup server cluster; if it does not, serving the request with the edge scheduling server acting as a proxy between the client and the cache servers in the corresponding edge server cluster;
    if the edge server cluster is not overloaded, having the edge scheduling server act as a proxy between the client and the cache servers.
  10. The HTTP scheduling method for a content delivery network of claim 6, wherein each of the one or more edge scheduling servers fetches the node's bandwidth data at fixed intervals and computes from it the number of connections allowed locally in the next interval; during that interval, the first requests, up to that connection count, are served by the corresponding edge server cluster, and later requests are redirected again to the backup server cluster.
PCT/CN2014/095485 2014-11-11 2014-12-30 HTTP scheduling system and method of content delivery network WO2016074323A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/525,042 US10404790B2 (en) 2014-11-11 2014-12-30 HTTP scheduling system and method of content delivery network
EP14905704.4A EP3211857B1 (en) 2014-11-11 2014-12-30 Http scheduling system and method of content delivery network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410631207.9 2014-11-11
CN201410631207.9A CN104320487B (zh) 2014-11-11 2014-11-11 HTTP scheduling system and method of content delivery network

Publications (1)

Publication Number Publication Date
WO2016074323A1 true WO2016074323A1 (zh) 2016-05-19

Family

ID=52375655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095485 WO2016074323A1 (zh) 2014-11-11 2014-12-30 HTTP scheduling system and method of content delivery network

Country Status (4)

Country Link
US (1) US10404790B2 (zh)
EP (1) EP3211857B1 (zh)
CN (1) CN104320487B (zh)
WO (1) WO2016074323A1 (zh)



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010023424A1 (en) * 2008-08-26 2010-03-04 British Telecommunications Public Limited Company Operation of a content distribution network
CN102148752A (zh) * 2010-12-22 2011-08-10 Huawei Technologies Co., Ltd. Routing implementation method based on content delivery network, and related device and system
CN102195788A (zh) * 2011-05-25 2011-09-21 China United Network Communications Group Co., Ltd. Application-layer multicast system and streaming media data processing method
US20110289214A1 (en) * 2001-06-06 2011-11-24 Akamai Technologies, Inc. Content delivery network map generation using passive measurement data



Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11172023B2 (en) * 2017-09-29 2021-11-09 Wangsu Science & Technology Co., Ltd. Data synchronization method and system
WO2019206033A1 (zh) * 2018-04-25 2019-10-31 Alibaba Group Holding Limited Server configuration method and apparatus
US11431669B2 (en) 2018-04-25 2022-08-30 Alibaba Group Holding Limited Server configuration method and apparatus
CN108737544A (zh) * 2018-05-22 2018-11-02 China United Network Communications Group Co., Ltd. CDN node scheduling method and apparatus
CN108737544B (zh) * 2018-05-22 2021-11-26 China United Network Communications Group Co., Ltd. CDN node scheduling method and apparatus
WO2020168957A1 (zh) * 2019-02-18 2020-08-27 Huawei Technologies Co., Ltd. Method and device for scheduling content delivery network (CDN) edge nodes
US11888958B2 (en) 2019-02-18 2024-01-30 Petal Cloud Technology Co., Ltd. Content delivery network CDN edge node scheduling method and device
CN112491961A (zh) * 2020-11-02 2021-03-12 Wangsu Science & Technology Co., Ltd. Scheduling system and method, and CDN system
CN115002126A (zh) * 2022-04-30 2022-09-02 Suzhou Inspur Intelligent Technology Co., Ltd. Service scheduling method and apparatus based on an edge server cluster
CN115002126B (zh) * 2022-04-30 2024-01-12 Suzhou Inspur Intelligent Technology Co., Ltd. Service scheduling method and apparatus based on an edge server cluster
CN115208766A (zh) * 2022-07-29 2022-10-18 Tianyi Cloud Technology Co., Ltd. CDN bandwidth scheduling method and related apparatus
CN115208766B (zh) * 2022-07-29 2023-11-03 Tianyi Cloud Technology Co., Ltd. CDN bandwidth scheduling method and related apparatus

Also Published As

Publication number Publication date
CN104320487A (zh) 2015-01-28
EP3211857B1 (en) 2019-05-22
US20180288141A1 (en) 2018-10-04
US10404790B2 (en) 2019-09-03
CN104320487B (zh) 2018-03-20
EP3211857A1 (en) 2017-08-30
EP3211857A4 (en) 2018-06-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14905704; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 15525042; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
REEP Request for entry into the european phase (Ref document number: 2014905704; Country of ref document: EP)