WO2019148569A1 - Method and system for sending an acquisition request of a data resource (一种发送数据资源的获取请求的方法和系统) - Google Patents

Method and system for sending an acquisition request of a data resource

Info

Publication number
WO2019148569A1
WO2019148569A1 (application PCT/CN2018/077556, CN2018077556W)
Authority
WO
WIPO (PCT)
Prior art keywords
node
traffic
scheduling policy
traffic scheduling
secondary nodes
Prior art date
Application number
PCT/CN2018/077556
Other languages
English (en)
French (fr)
Inventor
张宇
董曙佳
Original Assignee
网宿科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 网宿科技股份有限公司 filed Critical 网宿科技股份有限公司
Priority to US16/073,549, granted as US11178220B2
Priority to EP18769023.5A, granted as EP3547625B1
Publication of WO2019148569A1

Classifications

    All classifications fall under H04L (transmission of digital information, e.g. telegraphic communication):

    • H04L 67/101: Server selection for load balancing based on network conditions
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 47/215: Flow control; congestion control using token-bucket
    • H04L 43/062: Generation of reports related to network traffic
    • H04L 43/0876: Network utilisation, e.g. volume of load or congestion level
    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/20: Traffic policing
    • H04L 47/822: Collecting or measuring resource availability data
    • H04L 47/83: Admission control; resource allocation based on usage prediction
    • H04L 67/1004: Server selection for load balancing

Definitions

  • the present invention relates to the field of data transmission technologies, and in particular, to a method and system for transmitting an acquisition request of a data resource.
  • the CDN service cluster includes a large number of node servers for storing data resources and accelerating network services.
  • the node server may include an edge node server (which may be simply referred to as an edge node) and a parent node server (which may be simply referred to as a parent node, including a dynamic parent node and a static parent node).
  • the user can send a data resource acquisition request to the CDN service cluster through the terminal, so that an edge node in the CDN service cluster can receive the acquisition request.
  • if the requested data resource is not stored locally, the edge node may select an optimal path for back-to-source processing to obtain the data resource, that is, it sends an acquisition request of the data resource to the corresponding resource server through the dynamic parent node on the optimal path.
  • if at some moment the edge nodes need to fetch a large number of data resources from origin and the corresponding optimal paths all pass through the same dynamic parent node, that dynamic parent node will need to forward a large number of acquisition requests at the same time; its back-to-source efficiency is then reduced by continuous traffic overload, and the quality of the back-to-source acceleration service of the CDN service cluster deteriorates.
  • an embodiment of the present invention provides a method and system for sending an acquisition request of a data resource.
  • the technical solution is as follows:
  • a method for sending an acquisition request for a data resource comprising:
  • when an acquisition request of a first data resource needs to be sent, the first node acquires, from local storage, the traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the corresponding secondary node based on its local traffic load condition;
  • the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes;
  • the first node sends an acquisition request of the first data resource to the target node.
  • the method further includes:
  • a second node configures the RFC2697 token bucket algorithm according to the normal load limit of local traffic;
  • the second node periodically acquires the local historical total traffic in the historical period, and predicts the latest local total traffic of the current period;
  • the second node determines a traffic scheduling policy of the second node according to the local latest total traffic and the execution parameter of the RFC2697 token bucket algorithm.
  • the second node determines, according to the latest local total traffic and the execution parameters of the RFC2697 token bucket algorithm, the traffic scheduling policy of the second node, including:
  • determining that the traffic scheduling policy is full offloading if the total number of available tokens is negative;
  • determining that the traffic scheduling policy is partial offloading if the total number of available tokens is positive and less than the latest local total traffic;
  • determining that the traffic scheduling policy is no offloading if the total number of available tokens is positive and not less than the latest local total traffic.
  • the method further includes:
  • the first node periodically sends a probe message to all secondary nodes
  • the first node receives a probe response fed back by each of the secondary nodes, and determines a network delay between the first node and each of the secondary nodes;
  • the first node sets a priority for each secondary node according to the network delay, wherein a network delay corresponding to a secondary node with a higher priority is smaller.
  • the first node selects a target node among the multiple secondary nodes according to a traffic scheduling policy of the multiple secondary nodes, including:
  • the first node sequentially selects a candidate node among the plurality of secondary nodes in order of priority from high to low;
  • when the traffic scheduling policy of the candidate node is partial offloading and the acquisition request of the first data resource does not meet the preset offloading criterion, or the traffic scheduling policy of the candidate node is no offloading, the candidate node is determined as the target node.
  • the probe response fed back by each secondary node carries that node's traffic scheduling policy;
  • after the first node receives the probe response fed back by each of the secondary nodes, the method further includes:
  • the first node updates the traffic scheduling policy of each of the secondary nodes stored locally based on the traffic scheduling policy carried in the probe response.
  • the method further includes:
  • the first node receives a request response of a second data resource sent by a third node; the first node updates the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
  • a system for transmitting an acquisition request of a data resource includes a plurality of nodes, the plurality of nodes including a first node, and the first node is configured to:
  • the multiple nodes further include a second node, where the second node is used to:
  • the second node is specifically configured to:
  • determine that the traffic scheduling policy is full offloading if the total number of available tokens is negative; that the traffic scheduling policy is partial offloading if the total number of available tokens is positive and less than the latest local total traffic; and that the traffic scheduling policy is no offloading if the total number of available tokens is positive and not less than the latest local total traffic.
  • the first node is further configured to:
  • a priority is set for each of the secondary nodes according to the network delay, wherein a network delay corresponding to a secondary node having a higher priority is smaller.
  • the first node is specifically configured to:
  • when the traffic scheduling policy of the candidate node is partial offloading and the acquisition request of the first data resource does not meet the preset offloading criterion, or the traffic scheduling policy of the candidate node is no offloading, determine the candidate node as the target node.
  • the probe response fed back by each secondary node carries that node's traffic scheduling policy;
  • the first node is further configured to:
  • the multiple nodes further include a third node, where the first node is further configured to:
  • in the embodiment of the present invention, when an acquisition request of a first data resource needs to be sent, the first node acquires, from local storage, the traffic scheduling policies of the multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the corresponding secondary node based on its local traffic load condition; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request of the first data resource to the target node.
  • in this way, each node can send the acquisition request to a secondary node that is not overloaded according to the secondary nodes' traffic scheduling policies, effectively reducing the cases in which a dynamic parent node's back-to-source efficiency drops because of continuous traffic overload, and thus improving the quality of the back-to-source acceleration service of the CDN service cluster.
  • FIG. 1 is a network architecture diagram of a CDN service cluster according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a method for sending a data resource acquisition request according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a request for acquiring a data resource according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for sending an acquisition request of a data resource, which may be implemented by multiple node servers in a CDN service cluster.
  • the network architecture may be as shown in FIG. 1. The node server may include an edge node server (hereinafter abbreviated as an edge node) and multi-level parent node servers (hereinafter referred to as parent nodes), where the edge node is the access node for user requests in the CDN service cluster, and the parent nodes are node servers in the CDN service cluster deployed behind the edge node.
  • the nodes involved in this embodiment do not include static parent nodes. All nodes in this embodiment refer to node servers in the CDN service cluster, excluding terminals and resource servers.
  • the foregoing node server may include a processor, a memory, and a transceiver. The processor may be configured to perform the processing of acquiring a data resource described below, the memory may be used to store the data required for and generated in the following processing, and the transceiver may be used to receive and send the relevant data in the following processing.
  • the function of the foregoing node server may be implemented by a server group formed by multiple servers. In this embodiment, the node server is used as a separate server as an example, and the rest of the situation is similar, and details are not described herein.
  • Step 201 When the acquisition request of the first data resource needs to be sent, the first node acquires a traffic scheduling policy of the multiple secondary nodes corresponding to the resource server to which the first data resource that is stored locally belongs.
  • the first node may be any one of all the edge nodes and dynamic parent nodes that have secondary nodes.
  • the dynamic parent node generates a traffic scheduling policy based on the local traffic load condition, and then feeds the traffic scheduling policy to the upper node in a preset manner.
  • the upper-level node can store it locally so as to select the transmission path according to the traffic scheduling policy when sending a data resource acquisition request.
  • in the case that the first node is an edge node, after receiving a user's acquisition request for a certain data resource (such as the first resource data), if the first resource data is not stored locally, the first node needs to send a corresponding acquisition request through other nodes to the resource server to which the first resource data belongs; the first node may first determine the multiple secondary nodes corresponding to that resource server and then obtain the locally stored traffic scheduling policies of those secondary nodes. If the first node is a dynamic parent node, the first node needs to forward the acquisition request to the resource server to which the first resource data belongs after receiving the acquisition request of the first resource data sent by its upper-level node.
  • the first node may first determine the plurality of secondary nodes corresponding to the resource server to which the first resource data belongs, and then obtain the locally stored traffic scheduling policies of these secondary nodes. It can be understood that, for each resource server, a technician of the CDN service cluster can set, in each edge node and dynamic parent node, the secondary nodes corresponding to that resource server, that is, the current node can send an acquisition request of a data resource to the resource server through any one of these secondary nodes.
  • Step 202 The first node selects a target node among the multiple secondary nodes according to a traffic scheduling policy of the multiple secondary nodes.
  • the first node may select, according to the traffic scheduling policies, a target node for forwarding the acquisition request of the first resource data among the multiple secondary nodes.
  • Step 203 The first node sends a request for acquiring the first data resource to the target node.
  • the first node may send a request for acquiring the first data resource to the target node.
  • the dynamic parent node may use the RFC2697 token bucket to determine the traffic scheduling policy, and the corresponding processing may be as follows: the second node configures the RFC2697 token bucket according to the normal load limit of local traffic; the second node periodically acquires the local historical total traffic in the historical period and predicts the latest local total traffic of the current period; the second node determines the traffic scheduling policy of the second node according to the latest local total traffic and the execution parameters of the RFC2697 token bucket algorithm.
  • the second node may be any one of all the dynamic parent nodes providing the back-to-source acceleration service, that is, it may be any secondary node of any node in the CDN service cluster. The execution parameters of the RFC2697 token bucket algorithm include the token addition rate and the token bucket capacity.
  • the CDN service cluster may also be provided with a configuration server, and the configuration server may determine, based on the machine performance of different dynamic parent nodes (such as the number and frequency of CPUs, the amount of memory, the network card bandwidth, the disk type and rotation speed, etc.), the maximum allowed traffic on each dynamic parent node, that is, the normal load limit of local traffic.
  • the above conventional load ceiling can also be generated autonomously by the dynamic parent node.
  • the RFC2697 token bucket algorithm involves two buckets: a normal bucket (which can be called the C bucket) and an excess bucket (which can be called the E bucket).
  • a token bucket can be regarded as a container for storing a certain number of tokens. Tokens are added to the C bucket at the set token addition rate; after the C bucket is full, tokens are added to the E bucket at the same rate.
  • traffic sent to the token bucket preferentially consumes the tokens in the C bucket, and different amounts of traffic consume different numbers of tokens.
  • in implementation, the second node can set the token addition rate to the above normal load limit, and then set the capacities of the C bucket and the E bucket to the product of the token addition rate and the period duration. For example, if the normal load limit is 100 M/s and the period duration is 1 min, the capacity of each of the C bucket and the E bucket is 6000 M; taking one token as equivalent to 1 M, the token addition rate is 100 tokens per second.
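The two-bucket configuration above can be sketched in code. This is an illustrative sketch under the patent's stated parameters (rate equals the normal load limit, capacity equals rate times period, one token per 1 M of traffic); the class and method names (`TokenBucket`, `refill`, `consume`) are hypothetical, not from the patent, and this is not a full RFC 2697 marker implementation.

```python
class TokenBucket:
    """Sketch of an RFC 2697-style two-bucket (C/E) token bucket."""

    def __init__(self, normal_load_limit_mbps: float, period_s: float):
        # Token addition rate equals the normal load limit (1 token ~ 1 M).
        self.rate = normal_load_limit_mbps
        # C and E bucket capacity = token addition rate x period duration.
        self.capacity = normal_load_limit_mbps * period_s
        self.c_tokens = self.capacity  # normal (C) bucket
        self.e_tokens = self.capacity  # excess (E) bucket

    def refill(self, elapsed_s: float) -> None:
        added = self.rate * elapsed_s
        room_c = self.capacity - self.c_tokens
        to_c = min(added, room_c)
        self.c_tokens += to_c
        # Once the C bucket is full, overflow tokens go to the E bucket.
        self.e_tokens = min(self.capacity, self.e_tokens + (added - to_c))

    def consume(self, traffic_mb: float) -> None:
        # Traffic preferentially consumes C-bucket tokens; the C bucket may
        # go negative, representing the overdraft state described below.
        self.c_tokens -= traffic_mb


bucket = TokenBucket(normal_load_limit_mbps=100, period_s=60)
print(bucket.capacity)  # 6000.0, matching the 100 M/s x 1 min example
```

With the 100 M/s limit and 1 min period from the text, both buckets hold 6000 tokens, as in the worked example.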
  • a traffic server can also be configured in the CDN service cluster.
  • the traffic server is used to record the actual traffic load of each dynamic parent node.
  • the dynamic parent node needs to update the traffic scheduling policy periodically when providing the source acceleration service.
  • in implementation, the second node can periodically obtain the local historical total traffic of the historical periods from the traffic server, and then predict the latest local total traffic of the current period based on these data. Specifically, the least squares method may be used for fitting, or other feasible prediction algorithms may be employed. Then, the second node may determine the traffic scheduling policy of the second node for the current period according to the predicted latest local total traffic and the execution parameters of the RFC2697 token bucket algorithm.
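As one concrete realization of the least-squares fitting the text mentions, the sketch below fits a straight line to per-period historical totals and extrapolates one period ahead. The function name and the linear model are illustrative assumptions; the patent does not prescribe a specific model.

```python
def predict_next(history: list[float]) -> float:
    """Fit traffic = a*t + b over periods t = 0..n-1 by least squares,
    then extrapolate to period t = n (the current period)."""
    n = len(history)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    a = sxy / sxx          # slope of the fitted line
    b = mean_y - a * mean_x  # intercept
    return a * n + b       # predicted total traffic for the upcoming period


print(predict_next([80.0, 90.0, 100.0, 110.0]))  # 120.0 for perfectly linear data
```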
  • the traffic scheduling policy may be divided into full offloading, partial offloading, and no offloading.
  • the process of determining the traffic scheduling policy may be as follows: the second node determines the total number of available tokens in the normal bucket for the current period according to the current numbers of tokens in the normal bucket and the excess bucket of the RFC2697 token bucket algorithm and the token addition rate; if the total number of available tokens is negative, the traffic scheduling policy is determined to be full offloading; if the total number of available tokens is positive and less than the latest local total traffic, the traffic scheduling policy is determined to be partial offloading; if the total number of available tokens is positive and not less than the latest local total traffic, the traffic scheduling policy is determined to be no offloading.
  • in the process of generating the traffic scheduling policy, the second node may determine the current numbers of tokens in the C bucket and the E bucket of the RFC2697 token bucket algorithm, and then calculate the total number of available tokens in the C bucket for the current period according to the token addition rate, that is, the total number of available tokens in the C bucket equals the current number of tokens in the C bucket plus the total number of tokens that can be added during the current period. It should be noted that if the E bucket is not full, the second node is in an overdraft state, and the current number of tokens in the C bucket can be expressed as a negative number.
  • if the total number of available tokens is negative, the traffic scheduling policy can be determined to be full offloading; if the total number of available tokens is positive but less than the latest local total traffic, the available tokens of this period are insufficient to carry all the traffic, indicating that the second node is partially overloaded, and the traffic scheduling policy can be determined to be partial offloading; if the total number of available tokens is positive and not less than the latest local total traffic, the available tokens of this period are sufficient to carry all the traffic, the second node is not overloaded, and the traffic scheduling policy can be determined to be no offloading.
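The three-way decision just described can be condensed into a small function. This is a sketch; the string policy labels (`"full_offload"` etc.) and the function name are hypothetical stand-ins for whatever encoding a real node would use.

```python
def decide_policy(c_tokens_now: float, add_rate: float, period_s: float,
                  predicted_traffic: float) -> str:
    """Map available C-bucket tokens and predicted traffic to a policy."""
    # Total available tokens for the period: current C-bucket count
    # (possibly negative, i.e. overdraft) plus tokens added this period.
    available = c_tokens_now + add_rate * period_s
    if available < 0:
        return "full_offload"      # severely overloaded
    if available < predicted_traffic:
        return "partial_offload"   # tokens cannot carry all predicted traffic
    return "no_offload"            # tokens suffice; node is not overloaded


print(decide_policy(-8000, 100, 60, 5000))  # full_offload
print(decide_policy(1000, 100, 60, 9000))   # partial_offload
print(decide_policy(3000, 100, 60, 5000))   # no_offload
```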
  • the first node may prioritize all its secondary nodes, and the corresponding processing may be as follows: the first node periodically sends a probe message to all secondary nodes; the first node receives the probe response fed back by each secondary node and determines the network delay between the first node and each secondary node; the first node sets a priority for each secondary node according to the network delay, where a secondary node with a higher priority has a smaller network delay.
  • in implementation, the first node may periodically send probe messages to all of its pre-recorded secondary nodes. After receiving a probe message, a secondary node may feed back the corresponding probe response to the first node. The first node may then receive the probe response fed back by each secondary node and determine the network delay between the first node and each secondary node from the sending time of the probe message and the receiving time of each probe response. Further, the first node may set a priority for each secondary node according to the network delay, where a secondary node with a higher priority has a smaller network delay.
  • the first node may preferentially send the acquisition request to the secondary node with a small network delay.
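A minimal sketch of this delay-based ranking is below. `send_probe` is a hypothetical placeholder for the real probe/response exchange (here it just simulates a delay); injecting a `probe` callable makes the ranking easy to exercise with fixed delays.

```python
import random


def send_probe(node: str) -> float:
    """Placeholder for the real probe exchange: returns a network delay in ms."""
    return random.uniform(1.0, 50.0)


def prioritize(nodes: list[str], probe=send_probe) -> list[str]:
    # Measure (here: simulate) the delay to each secondary node once.
    delays = {node: probe(node) for node in nodes}
    # Smaller delay -> higher priority: index 0 is the top-priority node.
    return sorted(nodes, key=lambda n: delays[n])


# With fixed delays, "b" (10 ms) outranks "c" (20 ms) and "a" (30 ms):
fixed = {"a": 30.0, "b": 10.0, "c": 20.0}
print(prioritize(["a", "b", "c"], probe=fixed.get))  # ['b', 'c', 'a']
```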
  • correspondingly, the processing of step 202 may be as follows: the first node selects candidate nodes among the multiple secondary nodes in turn, in order of priority from high to low; when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request of the first data resource does not meet the preset offloading criterion, or the traffic scheduling policy of the candidate node is no offloading, the candidate node is determined as the target node.
  • in implementation, the first node may select one secondary node at a time as the candidate node in descending order of priority. If the traffic scheduling policy of the candidate node is no offloading, the candidate node may be determined as the target node. If the traffic scheduling policy of the candidate node is partial offloading, the first node may determine whether the acquisition request of the first data resource meets the preset offloading criterion; if it does not, the acquisition request does not need to be offloaded, and the candidate node may be determined as the target node. If the traffic scheduling policy of the candidate node is partial offloading and the acquisition request of the first data resource meets the preset offloading criterion, or the traffic scheduling policy of the candidate node is full offloading, the acquisition request needs to be offloaded, and the next secondary node may be selected as the candidate node.
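The candidate-selection loop can be sketched as follows. The policy labels and the `meets_criterion` callback are illustrative assumptions standing in for the per-node policy lookup and the preset offloading criterion check; returning `None` when every candidate is exhausted is also an assumption, since the patent does not specify that case.

```python
def select_target(by_priority, policy, meets_criterion):
    """Walk secondary nodes from highest to lowest priority and pick
    the first one that does not require this request to be offloaded."""
    for node in by_priority:
        p = policy[node]
        if p == "no_offload":
            return node
        if p == "partial_offload" and not meets_criterion(node):
            return node
        # full_offload, or partial_offload with the criterion met:
        # the request must be diverted; try the next candidate.
    return None  # assumption: no suitable secondary node found


nodes = ["n1", "n2", "n3"]  # already sorted by priority, highest first
policies = {"n1": "full_offload", "n2": "partial_offload", "n3": "no_offload"}
print(select_target(nodes, policies, lambda n: True))   # n3
print(select_target(nodes, policies, lambda n: False))  # n2
```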
  • the preset offloading criterion may be any offloading rule preset by technicians of the CDN service cluster, and the offloading criteria of different secondary nodes may be the same or different.
  • for example, the offloading criterion may be a specific offloading ratio: a random number may be generated for each acquisition request, and if the random number is smaller than the ratio, the corresponding acquisition request meets the preset offloading criterion. Further, the domain names of the resource servers may be classified by importance, and the acquisition requests for each level of resource server correspond to a ratio; for example, with three levels v1, v2, and v3, the corresponding offloading ratios may be 100%, 50%, and 0%.
  • the above specific ratios can be set by each secondary node in combination with its local traffic load when generating the traffic scheduling policy.
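The random-number criterion with per-level ratios can be sketched as below. The level table and helper name are hypothetical; the v1/v2/v3 ratios follow the 100%/50%/0% example in the text, and the optional `rnd` parameter only exists to make the check reproducible.

```python
import random

# Offload ratio per domain-importance level (from the v1/v2/v3 example).
OFFLOAD_RATIO = {"v1": 1.00, "v2": 0.50, "v3": 0.00}


def meets_offload_criterion(domain_level, rnd=None):
    """A request meets the criterion (and is diverted) when its random
    number falls below the ratio configured for its domain level."""
    if rnd is None:
        rnd = random.random()  # uniform in [0, 1)
    return rnd < OFFLOAD_RATIO[domain_level]


print(meets_offload_criterion("v3"))        # always False: 0% are offloaded
print(meets_offload_criterion("v1", 0.99))  # True: 100% are offloaded
```

With this scheme, roughly half of v2 requests are diverted while v1 requests are always diverted and v3 requests never are.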
  • the secondary node may send the traffic scheduling policy to the first node by using a probe response or a request response of the data resource, which may be as follows:
  • the first node updates the traffic scheduling policy of each secondary node stored locally based on the traffic scheduling policy carried in the probe response.
  • each secondary node may add the locally generated traffic scheduling policy to the probe response, and then feed back the probe response to the first node. In this way, after receiving the foregoing probe response, the first node may update the traffic scheduling policy of each secondary node stored locally based on the traffic scheduling policy carried therein.
  • the first node receives the request response of the second data resource sent by the third node; the first node updates the traffic scheduling policy of the locally stored third node based on the traffic scheduling policy of the third node carried in the request response.
  • the third node may be any one of all the dynamic parent nodes for providing the return source acceleration service, and may be any secondary node of any node in the CDN service cluster, and may be the same as or different from the second node.
  • in implementation, the third node may receive the request response of the second data resource fed back by the resource server. Thereafter, the third node may add the locally generated traffic scheduling policy to the request response and feed the request response back to the first node. After the first node receives the request response of the second data resource sent by the third node, it may update the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
  • FIG. 3 is a simple logic diagram of the processing flow of this embodiment, where "historical traffic" refers to the local historical total traffic in the historical period, "predicted traffic" is the latest local total traffic predicted for the current period, and "tokens sufficient" means the number of available tokens is not less than the predicted latest local total traffic.
  • in the embodiment of the present invention, when an acquisition request of a first data resource needs to be sent, the first node acquires, from local storage, the traffic scheduling policies of the multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the corresponding secondary node based on its local traffic load condition; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request of the first data resource to the target node.
  • in this way, each node can send the acquisition request to a secondary node that is not overloaded according to the secondary nodes' traffic scheduling policies, effectively reducing the cases in which a dynamic parent node's back-to-source efficiency drops because of continuous traffic overload, and thus improving the quality of the back-to-source acceleration service of the CDN service cluster.
  • an embodiment of the present invention further provides a system for sending an acquisition request of a data resource, where the system includes multiple nodes, the multiple nodes include a first node, and the first node is configured to:
  • the multiple nodes further include a second node, where the second node is used to:
  • the second node is specifically configured to:
  • determine that the traffic scheduling policy is full offloading if the total number of available tokens is negative; that the traffic scheduling policy is partial offloading if the total number of available tokens is positive and less than the latest local total traffic; and that the traffic scheduling policy is no offloading if the total number of available tokens is positive and not less than the latest local total traffic.
  • the first node is further configured to:
  • a priority is set for each of the secondary nodes according to the network delay, wherein a network delay corresponding to a secondary node having a higher priority is smaller.
  • the first node is specifically configured to:
  • when the traffic scheduling policy of the candidate node is partial offloading and the acquisition request of the first data resource does not meet the preset offloading criterion, or the traffic scheduling policy of the candidate node is no offloading, determine the candidate node as the target node.
  • the probe response fed back by each secondary node carries that node's traffic scheduling policy;
  • the first node is further configured to:
  • the multiple nodes further include a third node, where the first node is further configured to:
  • in the embodiment of the present invention, when an acquisition request of a first data resource needs to be sent, the first node acquires, from local storage, the traffic scheduling policies of the multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the corresponding secondary node based on its local traffic load condition; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request of the first data resource to the target node.
  • in this way, each node can send the acquisition request to a secondary node that is not overloaded according to the secondary nodes' traffic scheduling policies, effectively reducing the cases in which a dynamic parent node's back-to-source efficiency drops because of continuous traffic overload, and thus improving the quality of the back-to-source acceleration service of the CDN service cluster.
  • a person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be instructed by a program to execute related hardware, and the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

The present invention discloses a method and system for sending an acquisition request for a data resource, belonging to the technical field of data transmission. The method includes: when an acquisition request for a first data resource needs to be sent, a first node acquires locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request for the first data resource to the target node. With the present invention, cases in which a dynamic parent node's back-to-origin efficiency decreases because of sustained traffic overload can be reduced, improving the quality of the back-to-origin acceleration service of the CDN service cluster.

Description

Method and System for Sending an Acquisition Request for a Data Resource
Technical Field
The present invention relates to the technical field of data transmission, and in particular to a method and system for sending an acquisition request for a data resource.
Background
With the continuous advancement of Internet technology, CDN (Content Delivery Network) services have developed rapidly. A CDN service cluster includes a large number of node servers for storing data resources and accelerating network services. The node servers may include edge node servers (edge nodes for short) and parent node servers (parent nodes for short, including dynamic parent nodes and static parent nodes).
When a user wants to obtain a data resource, the user can send an acquisition request for the data resource to the CDN service cluster through a terminal, so that an edge node in the CDN service cluster receives the acquisition request. If the data resource is not stored locally, the edge node can select an optimal path for back-to-origin processing to obtain the data resource, that is, send the acquisition request for the data resource to the corresponding server through the dynamic parent nodes on the optimal path.
In the process of implementing the present invention, the inventors found that the prior art has at least the following problem:
If, at some moment, an edge node needs to go back to origin for a large number of data resources and the corresponding optimal paths all pass through the same dynamic parent node, that dynamic parent node will need to forward a large number of acquisition requests simultaneously. Its back-to-origin efficiency then decreases because of sustained traffic overload, and the quality of the back-to-origin acceleration service of the CDN service cluster deteriorates.
Summary
To solve the problems of the prior art, embodiments of the present invention provide a method and system for sending an acquisition request for a data resource. The technical solutions are as follows:
In a first aspect, a method for sending an acquisition request for a data resource is provided, the method including:
when an acquisition request for a first data resource needs to be sent, a first node acquiring locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load;
the first node selecting a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and
the first node sending the acquisition request for the first data resource to the target node.
Optionally, the method further includes:
a second node configuring an RFC 2697 token bucket algorithm according to a normal load ceiling of its local traffic;
the second node periodically acquiring its local historical total traffic over past periods and predicting its latest local total traffic for the current period; and
the second node determining the traffic scheduling policy of the second node according to the latest local total traffic and execution parameters of the RFC 2697 token bucket algorithm.
Optionally, the second node determining the traffic scheduling policy of the second node according to the latest local total traffic and the execution parameters of the RFC 2697 token bucket algorithm includes:
the second node determining the total number of available tokens of the normal bucket for the current period according to the current token counts of the normal bucket and the excess bucket of the RFC 2697 token bucket algorithm and the token-adding rate;
if the total number of available tokens is negative, determining that the traffic scheduling policy is full offloading;
if the total number of available tokens is positive and smaller than the latest local total traffic, determining that the traffic scheduling policy is partial offloading; and
if the total number of available tokens is positive and not smaller than the latest local total traffic, determining that the traffic scheduling policy is no offloading.
Optionally, the method further includes:
the first node periodically sending probe messages to all secondary nodes;
the first node receiving a probe response fed back by each of the secondary nodes and determining the network delay between the first node and each secondary node; and
the first node setting a priority for each secondary node according to the network delay, where a secondary node with a higher priority corresponds to a smaller network delay.
Optionally, the first node selecting a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes includes:
the first node selecting candidate nodes among the multiple secondary nodes one by one in descending order of priority; and
when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request for the first data resource does not meet a preset offloading criterion, or when the traffic scheduling policy of the candidate node is no offloading, determining the candidate node as the target node.
Optionally, the probe response fed back by each secondary node carries its respective traffic scheduling policy; and
after the first node receives the probe response fed back by each of the secondary nodes, the method further includes:
the first node updating the locally stored traffic scheduling policy of each secondary node based on the traffic scheduling policy carried in the probe response.
Optionally, the method further includes:
the first node receiving a request response for a second data resource sent by a third node; and
the first node updating the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
In a second aspect, a system for sending an acquisition request for a data resource is provided. The system includes multiple nodes, the multiple nodes include a first node, and the first node is configured to:
when an acquisition request for a first data resource needs to be sent, acquire locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load;
select a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and
send the acquisition request for the first data resource to the target node.
Optionally, the multiple nodes further include a second node, and the second node is configured to:
configure an RFC 2697 token bucket algorithm according to a normal load ceiling of its local traffic;
periodically acquire its local historical total traffic over past periods and predict its latest local total traffic for the current period; and
determine the traffic scheduling policy of the second node according to the latest local total traffic and the execution parameters of the RFC 2697 token bucket algorithm.
Optionally, the second node is specifically configured to:
determine the total number of available tokens of the normal bucket for the current period according to the current token counts of the normal bucket and the excess bucket of the RFC 2697 token bucket algorithm and the token-adding rate;
if the total number of available tokens is negative, determine that the traffic scheduling policy is full offloading;
if the total number of available tokens is positive and smaller than the latest local total traffic, determine that the traffic scheduling policy is partial offloading; and
if the total number of available tokens is positive and not smaller than the latest local total traffic, determine that the traffic scheduling policy is no offloading.
Optionally, the first node is further configured to:
periodically send probe messages to all secondary nodes;
receive a probe response fed back by each of the secondary nodes and determine the network delay between the first node and each secondary node; and
set a priority for each secondary node according to the network delay, where a secondary node with a higher priority corresponds to a smaller network delay.
Optionally, the first node is specifically configured to:
select candidate nodes among the multiple secondary nodes one by one in descending order of priority; and
when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request for the first data resource does not meet a preset offloading criterion, or when the traffic scheduling policy of the candidate node is no offloading, determine the candidate node as the target node.
Optionally, the probe response fed back by each secondary node carries its respective traffic scheduling policy; and
the first node is further configured to:
update the locally stored traffic scheduling policy of each secondary node based on the traffic scheduling policy carried in the probe response.
Optionally, the multiple nodes further include a third node, and the first node is further configured to:
receive a request response for a second data resource sent by the third node; and
update the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
The beneficial effects brought by the technical solutions provided in the embodiments of the present invention are as follows:
In the embodiments of the present invention, when an acquisition request for a first data resource needs to be sent, a first node acquires locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request for the first data resource to the target node. In this way, when sending an acquisition request for a data resource, each node can send the request, according to the secondary nodes' traffic scheduling policies, to a secondary node that is not overloaded with traffic, which effectively reduces the cases in which a dynamic parent node's back-to-origin efficiency drops because of sustained traffic overload, and thus improves the quality of the back-to-origin acceleration service of the CDN service cluster.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of the network architecture of a CDN service cluster according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for sending an acquisition request for a data resource according to an embodiment of the present invention;
FIG. 3 is a logical diagram of sending an acquisition request for a data resource according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for sending an acquisition request for a data resource. The method may be implemented jointly by multiple node servers in a CDN service cluster, and the network architecture may be as shown in FIG. 1. The node servers may include edge node servers (hereinafter edge nodes) and multiple levels of parent node servers (hereinafter parent nodes). An edge node is the access node for user requests in the CDN service cluster, and a parent node is a node deployed at a level behind the edge nodes, including static parent nodes used to cache data resources and dynamic parent nodes used to provide the back-to-origin acceleration service. Static parent nodes are not considered in the rest of this embodiment. All nodes in this embodiment refer to node servers in the CDN service cluster, excluding terminals and resource servers. A node server may include a processor, a memory, and a transceiver: the processor may carry out the processing of sending an acquisition request for a data resource in the following flow, the memory may store the data needed by and produced during the following processing, and the transceiver may receive and send the relevant data in the following processing. In some cases, the functions of a node server may be implemented by a server group formed of multiple servers; this embodiment takes a standalone node server as an example, and the other cases are similar and are not repeated.
The processing flow shown in FIG. 2 is described in detail below with reference to specific implementations. The content may be as follows:
Step 201: when an acquisition request for a first data resource needs to be sent, a first node acquires locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs.
The first node may be any of the edge nodes, or any dynamic parent node that has secondary nodes.
In implementation, when providing the back-to-origin acceleration service, a dynamic parent node can generate a traffic scheduling policy based on its local traffic load and then feed the traffic scheduling policy back to its superior node in a preset way. After obtaining the dynamic parent node's traffic scheduling policy, the superior node can store it locally, so as to select a transmission path according to the traffic scheduling policy when sending an acquisition request for a data resource. Thus, when the first node is an edge node and receives a user's acquisition request for some data resource (such as the first data resource), if the first data resource is not stored locally, the first node needs to send a corresponding acquisition request to the resource server to which the first data resource belongs through other nodes; the first node can first determine the multiple secondary nodes corresponding to that resource server and then acquire the locally stored traffic scheduling policies of those secondary nodes. When the first node is a dynamic parent node and receives an acquisition request for the first data resource from a superior node, it needs to forward the acquisition request to the resource server to which the first data resource belongs through other nodes; likewise, the first node can first determine the multiple secondary nodes corresponding to that resource server and then acquire their locally stored traffic scheduling policies. It can be understood that, for each resource server, the technicians of the CDN service cluster can configure, on each edge node and dynamic parent node, the secondary nodes corresponding to that resource server, i.e., the current node can send an acquisition request for a data resource to the resource server through any of these secondary nodes.
Step 202: the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes.
In implementation, after acquiring the traffic scheduling policies of the multiple secondary nodes, the first node can select, according to these policies, a target node among them for forwarding the acquisition request for the first data resource.
Step 203: the first node sends the acquisition request for the first data resource to the target node.
In implementation, after selecting the target node, the first node can send the acquisition request for the first data resource to the target node.
Optionally, a dynamic parent node can use an RFC 2697 token bucket to determine its traffic scheduling policy. The corresponding processing may be as follows: a second node configures an RFC 2697 token bucket algorithm according to the normal load ceiling of its local traffic; the second node periodically acquires its local historical total traffic over past periods and predicts its latest local total traffic for the current period; and the second node determines the traffic scheduling policy of the second node according to the latest local total traffic and the execution parameters of the RFC 2697 token bucket algorithm.
The second node may be any of the dynamic parent nodes that provide the back-to-origin acceleration service, i.e., any secondary node of any node in the CDN service cluster. The execution parameters of the RFC 2697 token bucket algorithm include the token-adding rate and the capacity of the token buckets.
In implementation, the CDN service cluster may further be provided with a configuration server. Based on each dynamic parent node's machine capabilities, such as the number and frequency of CPUs, memory size, NIC bandwidth, disk type, and rotational speed, the configuration server can determine the maximum traffic generally allowed through each dynamic parent node, i.e., the normal load ceiling of its local traffic. Of course, this normal load ceiling may also be generated by the dynamic parent node itself. Then, taking the second node as an example, after obtaining the normal load ceiling of its local traffic, the second node can configure the RFC 2697 token bucket algorithm according to that ceiling, i.e., set the algorithm's execution parameters. It should be noted that the RFC 2697 token bucket algorithm contains two token buckets, a normal bucket (the C bucket) and an excess bucket (the E bucket). A token bucket can be regarded as a container holding a certain number of tokens. A device first adds tokens to the C bucket at the configured token-adding rate; once the C bucket is full, it continues to add tokens to the E bucket at the same rate. Traffic arriving at the token buckets consumes tokens from the C bucket first, and traffic of different sizes consumes different numbers of tokens. During traffic peaks, if all the tokens in the C bucket are insufficient for the traffic to be transmitted, the node can continue to consume tokens in the E bucket, incurring a debt; later, during traffic troughs, it repays the debt by refilling the E bucket first and only then adding tokens to the C bucket. Accordingly, the second node can set the token-adding rate to the normal load ceiling and set the capacities of both the C bucket and the E bucket to the product of the token-adding rate and the period length. For example, with a normal load ceiling of 100 M/s and a period of 1 min, the capacity of each bucket is 6000 M; taking one token as 1 M, the token-adding rate is 100 tokens/s.
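The two-bucket arrangement just described can be sketched in a few lines. This is an illustrative model under the stated assumptions (fill C first, consume C first, borrow from E as debt, repay E before C), not the full RFC 2697 single-rate three-color marker; the class name `DualTokenBucket` and its methods are invented for illustration.

```python
class DualTokenBucket:
    """Illustrative model of the C (normal) and E (excess) buckets."""

    def __init__(self, rate, period_s):
        self.rate = rate                 # token-adding rate = normal load ceiling
        self.capacity = rate * period_s  # each bucket: rate x period length
        self.c = self.capacity          # C-bucket tokens (may go negative = debt)
        self.e = self.capacity          # E-bucket tokens

    def refill(self, seconds):
        tokens = self.rate * seconds
        # Debts are repaid first: top up the E bucket, then the C bucket.
        put_e = min(tokens, self.capacity - self.e)
        self.e += put_e
        self.c = min(self.capacity, self.c + (tokens - put_e))

    def consume(self, tokens):
        # Traffic consumes C-bucket tokens first, then borrows from the E
        # bucket; any shortfall beyond both is recorded as C-bucket debt.
        from_c = min(tokens, max(self.c, 0))
        self.c -= from_c
        rest = tokens - from_c
        from_e = min(rest, self.e)
        self.e -= from_e
        self.c -= rest - from_e
```

With the figures from the text (ceiling 100 M/s, one token per M, 1-minute period), each bucket holds 6000 tokens.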
The CDN service cluster may further be provided with a traffic server, which records the actual traffic load of each dynamic parent node. When providing the back-to-origin acceleration service, a dynamic parent node needs to update its traffic scheduling policy periodically. Taking the second node as an example, after configuring the RFC 2697 token bucket algorithm, the second node can periodically obtain from the traffic server its local historical total traffic for past periods and, based on this data, predict its latest local total traffic for the current period, for instance by least-squares fitting or by any other feasible prediction algorithm. The second node can then determine its traffic scheduling policy for the current period according to the predicted latest local total traffic and the execution parameters of the RFC 2697 token bucket algorithm.
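The periodic prediction step can be sketched with ordinary least squares, as the text suggests. `predict_next_total` is a hypothetical helper name; any other forecasting method (a moving average, for example) could be substituted.

```python
def predict_next_total(history):
    """Fit a line y = a*x + b to the per-period totals in `history`
    (oldest first) and extrapolate to the next period index."""
    n = len(history)
    if n == 1:
        return float(history[0])
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    slope = sxy / sxx
    # Predicted total for the current period (index n).
    return slope * n + (mean_y - slope * mean_x)
```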
Optionally, the traffic scheduling policy may be full offloading, partial offloading, or no offloading. Accordingly, the policy may be determined as follows: the second node determines the total number of available tokens of the normal bucket for the current period according to the current token counts of the normal bucket and the excess bucket of the RFC 2697 token bucket algorithm and the token-adding rate; if the total number of available tokens is negative, the traffic scheduling policy is determined to be full offloading; if it is positive and smaller than the latest local total traffic, partial offloading; and if it is positive and not smaller than the latest local total traffic, no offloading.
In implementation, when generating the traffic scheduling policy, the second node can first determine the current token counts of the C bucket and the E bucket, and then compute the total number of available tokens of the C bucket for the current period according to the token-adding rate: the available total equals the C bucket's current token count plus the total number of tokens that can be added during the current period. It should be noted that if the E bucket is not full, the second node is in debt, and the C bucket's current token count can be expressed as a negative number. Further, if the available total is negative, the second node would remain in debt even with no traffic load at all this period, meaning it is completely overloaded, so the traffic scheduling policy is determined to be full offloading. If the available total is positive but smaller than the latest local total traffic, the available tokens this period cannot carry all the traffic, meaning the second node is partially overloaded, so the policy is partial offloading. If the available total is positive and not smaller than the latest local total traffic, the available tokens can carry all the traffic, meaning the second node is not overloaded, so the policy is no offloading.
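The three-way decision above can be written down directly. This sketch assumes the C-bucket balance has already been computed (negative when in debt); the constant and function names are illustrative.

```python
FULL_OFFLOAD, PARTIAL_OFFLOAD, NO_OFFLOAD = "full", "partial", "none"

def decide_policy(c_tokens, token_rate, period_s, predicted_total):
    """c_tokens: current C-bucket balance (negative means token debt).
    The available total adds the tokens that can arrive this period."""
    available = c_tokens + token_rate * period_s
    if available < 0:
        return FULL_OFFLOAD      # in debt even with zero load: fully overloaded
    if available < predicted_total:
        return PARTIAL_OFFLOAD   # can carry only part of the predicted traffic
    return NO_OFFLOAD            # predicted traffic fits within available tokens
```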
Optionally, a node can rank all of its secondary nodes by priority. The corresponding processing may be as follows: the first node periodically sends probe messages to all secondary nodes; the first node receives the probe response fed back by each secondary node and determines the network delay between the first node and each secondary node; and the first node sets a priority for each secondary node according to the network delay, where a secondary node with a higher priority corresponds to a smaller network delay.
In implementation, the first node can periodically send probe messages to all the secondary nodes it has recorded in advance. After receiving a probe message, a secondary node can feed a corresponding probe response back to the first node. The first node can then receive the probe response fed back by each of the secondary nodes and determine the network delay between itself and each secondary node from the sending time of the probe message and the receiving time of each probe response. Further, the first node can set a priority for each secondary node according to these network delays, a higher priority corresponding to a smaller delay.
Optionally, the first node can preferentially send the acquisition request to the secondary node with the smallest network delay. Accordingly, the processing of step 202 may be as follows: the first node selects candidate nodes among the multiple secondary nodes one by one in descending order of priority; when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request for the first data resource does not meet the preset offloading criterion, or when the candidate node's traffic scheduling policy is no offloading, the candidate node is determined as the target node.
In implementation, after acquiring the traffic scheduling policies of the multiple secondary nodes, the first node can pick one secondary node at a time as a candidate node, in descending order of priority. If the candidate node's traffic scheduling policy is no offloading, the candidate node can be determined as the target node. If the candidate node's policy is partial offloading, the first node can first check whether the acquisition request for the first data resource meets the preset offloading criterion; if it does not, the request need not be offloaded, and the candidate node can be determined as the target node. If the candidate node's policy is partial offloading and the request does meet the preset offloading criterion, or the candidate node's policy is full offloading, the request needs to be offloaded, and the next secondary node is taken as the candidate. It should be noted that the preset offloading rule can be any rule set in advance by the technicians of the CDN service cluster, and the rules of different secondary nodes may be the same or different. For example, to offload a certain proportion of the acquisition requests, a random number can be drawn for each request; if the random number is smaller than that proportion, the request meets the preset offloading rule. Further, the domain names of the resource servers can be graded by importance, with the acquisition requests of each grade corresponding to a proportion; for example, with three grades v1, v2, and v3, the corresponding offloading proportions could be 100%, 50%, and 0%. Of course, the specific values can be set by each secondary node according to its local traffic load when generating its traffic scheduling policy.
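Putting together the priority order, the per-policy handling, and the random-number offloading criterion, a node's selection loop might look like the following sketch. The tier table and helper names are illustrative, borrowing the v1/v2/v3 proportions from the example above.

```python
import random

# Offloading proportion per domain-name importance tier (illustrative).
SPLIT_RATIO = {"v1": 1.0, "v2": 0.5, "v3": 0.0}

def pick_target(candidates, domain_tier, rng=random.random):
    """candidates: (node, policy) pairs sorted by priority, highest first
    (i.e. smallest network delay first); policy is "none"/"partial"/"full"."""
    for node, policy in candidates:
        if policy == "none":
            return node            # not overloaded: use it directly
        if policy == "partial" and rng() >= SPLIT_RATIO[domain_tier]:
            return node            # request does not meet the offload criterion
        # "full", or a "partial" node offloading this request: try the next one
    return None                    # every candidate asked to offload the request
```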
Optionally, a secondary node can send its traffic scheduling policy to the first node through a probe response or through a request response for a data resource, as in the following two cases:
Case 1: the first node updates the locally stored traffic scheduling policy of each secondary node based on the traffic scheduling policy carried in the probe response.
In implementation, after the first node periodically sends probe messages to all secondary nodes, each secondary node can add its most recently generated local traffic scheduling policy to its probe response and then feed the probe response back to the first node. Having received these probe responses, the first node can update the locally stored traffic scheduling policy of each secondary node based on the policy carried in each response.
Case 2: the first node receives a request response for a second data resource sent by a third node, and the first node updates the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
The third node may be any of the dynamic parent nodes that provide the back-to-origin acceleration service, i.e., any secondary node of any node in the CDN service cluster; it may be the same as or different from the second node above.
In implementation, if the first node has successfully sent an acquisition request for the second data resource to a resource server through the third node, then after some time the third node receives the request response for the second data resource fed back by the resource server. The third node can add its most recently generated local traffic scheduling policy to that request response and then feed the request response back to the first node. After receiving the request response for the second data resource sent by the third node, the first node can update the locally stored traffic scheduling policy of the third node based on the policy carried in the response.
For ease of understanding, FIG. 3 is a simple logical diagram of the processing flow of this embodiment, where "historical traffic" refers to the local historical total traffic over past periods, "predicted traffic" is the predicted latest local total traffic for the current period, and "enough tokens" means that the number of available tokens is not smaller than the predicted latest local total traffic.
In the embodiments of the present invention, when an acquisition request for a first data resource needs to be sent, a first node acquires locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request for the first data resource to the target node. In this way, when sending an acquisition request for a data resource, each node can send the request, according to the secondary nodes' traffic scheduling policies, to a secondary node that is not overloaded with traffic, which effectively reduces the cases in which a dynamic parent node's back-to-origin efficiency drops because of sustained traffic overload, and thus improves the quality of the back-to-origin acceleration service of the CDN service cluster.
Based on the same technical concept, an embodiment of the present invention further provides a system for sending an acquisition request for a data resource. The system includes multiple nodes, the multiple nodes include a first node, and the first node is configured to:
when an acquisition request for a first data resource needs to be sent, acquire locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load;
select a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and
send the acquisition request for the first data resource to the target node.
Optionally, the multiple nodes further include a second node, and the second node is configured to:
configure an RFC 2697 token bucket algorithm according to a normal load ceiling of its local traffic;
periodically acquire its local historical total traffic over past periods and predict its latest local total traffic for the current period; and
determine the traffic scheduling policy of the second node according to the latest local total traffic and the execution parameters of the RFC 2697 token bucket algorithm.
Optionally, the second node is specifically configured to:
determine the total number of available tokens of the normal bucket for the current period according to the current token counts of the normal bucket and the excess bucket of the RFC 2697 token bucket algorithm and the token-adding rate;
if the total number of available tokens is negative, determine that the traffic scheduling policy is full offloading;
if the total number of available tokens is positive and smaller than the latest local total traffic, determine that the traffic scheduling policy is partial offloading; and
if the total number of available tokens is positive and not smaller than the latest local total traffic, determine that the traffic scheduling policy is no offloading.
Optionally, the first node is further configured to:
periodically send probe messages to all secondary nodes;
receive a probe response fed back by each of the secondary nodes and determine the network delay between the first node and each secondary node; and
set a priority for each secondary node according to the network delay, where a secondary node with a higher priority corresponds to a smaller network delay.
Optionally, the first node is specifically configured to:
select candidate nodes among the multiple secondary nodes one by one in descending order of priority; and
when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request for the first data resource does not meet the preset offloading criterion, or when the traffic scheduling policy of the candidate node is no offloading, determine the candidate node as the target node.
Optionally, the probe response fed back by each secondary node carries its respective traffic scheduling policy; and
the first node is further configured to:
update the locally stored traffic scheduling policy of each secondary node based on the traffic scheduling policy carried in the probe response.
Optionally, the multiple nodes further include a third node, and the first node is further configured to:
receive a request response for a second data resource sent by the third node; and
update the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
In the embodiments of the present invention, when an acquisition request for a first data resource needs to be sent, a first node acquires locally stored traffic scheduling policies of multiple secondary nodes corresponding to the resource server to which the first data resource belongs, where each traffic scheduling policy is generated by the respective secondary node based on its local traffic load; the first node selects a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and the first node sends the acquisition request for the first data resource to the target node. In this way, when sending an acquisition request for a data resource, each node can send the request, according to the secondary nodes' traffic scheduling policies, to a secondary node that is not overloaded with traffic, which effectively reduces the cases in which a dynamic parent node's back-to-origin efficiency drops because of sustained traffic overload, and thus improves the quality of the back-to-origin acceleration service of the CDN service cluster.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented in hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (14)

  1. A method for sending an acquisition request for a data resource, characterized in that the method comprises:
    when an acquisition request for a first data resource needs to be sent, acquiring, by a first node, locally stored traffic scheduling policies of multiple secondary nodes corresponding to a resource server to which the first data resource belongs, wherein each traffic scheduling policy is generated by the respective secondary node based on its local traffic load;
    selecting, by the first node, a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and
    sending, by the first node, the acquisition request for the first data resource to the target node.
  2. The method according to claim 1, characterized in that the method further comprises:
    configuring, by a second node, an RFC 2697 token bucket algorithm according to a normal load ceiling of the second node's local traffic;
    periodically acquiring, by the second node, local historical total traffic over past periods, and predicting the latest local total traffic for the current period; and
    determining, by the second node, the traffic scheduling policy of the second node according to the latest local total traffic and execution parameters of the RFC 2697 token bucket algorithm.
  3. The method according to claim 2, characterized in that determining, by the second node, the traffic scheduling policy of the second node according to the latest local total traffic and the execution parameters of the RFC 2697 token bucket algorithm comprises:
    determining, by the second node, a total number of available tokens of the normal bucket for the current period according to current token counts of the normal bucket and the excess bucket of the RFC 2697 token bucket algorithm and the token-adding rate;
    if the total number of available tokens is negative, determining that the traffic scheduling policy is full offloading;
    if the total number of available tokens is positive and smaller than the latest local total traffic, determining that the traffic scheduling policy is partial offloading; and
    if the total number of available tokens is positive and not smaller than the latest local total traffic, determining that the traffic scheduling policy is no offloading.
  4. The method according to claim 1, characterized in that the method further comprises:
    periodically sending, by the first node, probe messages to all secondary nodes;
    receiving, by the first node, a probe response fed back by each of the secondary nodes, and determining a network delay between the first node and each secondary node; and
    setting, by the first node, a priority for each secondary node according to the network delay, wherein a secondary node with a higher priority corresponds to a smaller network delay.
  5. The method according to claim 4, characterized in that selecting, by the first node, a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes comprises:
    selecting, by the first node, candidate nodes among the multiple secondary nodes one by one in descending order of priority; and
    when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request for the first data resource does not meet a preset offloading criterion, or when the traffic scheduling policy of the candidate node is no offloading, determining the candidate node as the target node.
  6. The method according to claim 4, characterized in that the probe response fed back by each secondary node carries the respective traffic scheduling policy; and
    after the first node receives the probe response fed back by each of the secondary nodes, the method further comprises:
    updating, by the first node, the locally stored traffic scheduling policy of each secondary node based on the traffic scheduling policy carried in the probe response.
  7. The method according to claim 1, characterized in that the method further comprises:
    receiving, by the first node, a request response for a second data resource sent by a third node; and
    updating, by the first node, the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
  8. A system for sending an acquisition request for a data resource, characterized in that the system comprises multiple nodes, the multiple nodes comprise a first node, and the first node is configured to:
    when an acquisition request for a first data resource needs to be sent, acquire locally stored traffic scheduling policies of multiple secondary nodes corresponding to a resource server to which the first data resource belongs, wherein each traffic scheduling policy is generated by the respective secondary node based on its local traffic load;
    select a target node among the multiple secondary nodes according to the traffic scheduling policies of the multiple secondary nodes; and
    send the acquisition request for the first data resource to the target node.
  9. The system according to claim 8, characterized in that the multiple nodes further comprise a second node, and the second node is configured to:
    configure an RFC 2697 token bucket algorithm according to a normal load ceiling of the second node's local traffic;
    periodically acquire local historical total traffic over past periods, and predict the latest local total traffic for the current period; and
    determine the traffic scheduling policy of the second node according to the latest local total traffic and execution parameters of the RFC 2697 token bucket algorithm.
  10. The system according to claim 9, characterized in that the second node is specifically configured to:
    determine a total number of available tokens of the normal bucket for the current period according to current token counts of the normal bucket and the excess bucket of the RFC 2697 token bucket algorithm and the token-adding rate;
    if the total number of available tokens is negative, determine that the traffic scheduling policy is full offloading;
    if the total number of available tokens is positive and smaller than the latest local total traffic, determine that the traffic scheduling policy is partial offloading; and
    if the total number of available tokens is positive and not smaller than the latest local total traffic, determine that the traffic scheduling policy is no offloading.
  11. The system according to claim 8, characterized in that the first node is further configured to:
    periodically send probe messages to all secondary nodes;
    receive a probe response fed back by each of the secondary nodes, and determine a network delay between the first node and each secondary node; and
    set a priority for each secondary node according to the network delay, wherein a secondary node with a higher priority corresponds to a smaller network delay.
  12. The system according to claim 11, characterized in that the first node is specifically configured to:
    select candidate nodes among the multiple secondary nodes one by one in descending order of priority; and
    when the traffic scheduling policy of a candidate node is partial offloading and the acquisition request for the first data resource does not meet a preset offloading criterion, or when the traffic scheduling policy of the candidate node is no offloading, determine the candidate node as the target node.
  13. The system according to claim 11, characterized in that the probe response fed back by each secondary node carries the respective traffic scheduling policy; and
    the first node is further configured to:
    update the locally stored traffic scheduling policy of each secondary node based on the traffic scheduling policy carried in the probe response.
  14. The system according to claim 8, characterized in that the multiple nodes further comprise a third node, and the first node is further configured to:
    receive a request response for a second data resource sent by the third node; and
    update the locally stored traffic scheduling policy of the third node based on the traffic scheduling policy of the third node carried in the request response.
PCT/CN2018/077556 2018-02-02 2018-02-28 Method and system for sending an acquisition request for a data resource WO2019148569A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/073,549 US11178220B2 (en) 2018-02-02 2018-02-28 Method and system for transmitting a data resource acquisition request
EP18769023.5A EP3547625B1 (en) 2018-02-02 2018-02-28 Method and system for sending request for acquiring data resource

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810106136.9A 2018-02-02 2018-02-02 Method and system for sending an acquisition request for a data resource
CN201810106136.9 2018-02-02

Publications (1)

Publication Number Publication Date
WO2019148569A1 true WO2019148569A1 (zh) 2019-08-08

Family

ID=63004116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077556 WO2019148569A1 (zh) 2018-02-02 2018-02-28 一种发送数据资源的获取请求的方法和系统

Country Status (4)

Country Link
US (1) US11178220B2 (zh)
EP (1) EP3547625B1 (zh)
CN (1) CN108366020B (zh)
WO (1) WO2019148569A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3633999A1 (en) * 2018-10-05 2020-04-08 InterDigital CE Patent Holdings Method to be implemented at a device able to run one adaptive streaming session, and corresponding device
CN111385315B (zh) * 2018-12-27 2022-12-16 阿里巴巴集团控股有限公司 Peer-to-peer resource download method and apparatus
CN110058941A (zh) * 2019-03-16 2019-07-26 平安城市建设科技(深圳)有限公司 Task scheduling management method, apparatus, device, and storage medium
CN110138756B (zh) 2019-04-30 2021-05-25 网宿科技股份有限公司 Rate limiting method and system
CN113132437B (zh) * 2019-12-31 2024-01-23 中兴通讯股份有限公司 CDN scheduling method, system, device, and storage medium
CN112529400A (zh) * 2020-12-09 2021-03-19 平安科技(深圳)有限公司 Data processing method, apparatus, terminal, and readable storage medium
CN112653736B (zh) * 2020-12-10 2022-05-06 北京金山云网络技术有限公司 Parallel back-to-origin method, apparatus, and electronic device
CN116938837A (zh) * 2022-04-01 2023-10-24 中国移动通信有限公司研究院 Resource scheduling method, apparatus, and device
CN117714755A (zh) * 2022-09-06 2024-03-15 华为云计算技术有限公司 Method, apparatus, and storage medium for allocating network resources

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080049753A1 (en) * 2006-08-22 2008-02-28 Heinze John M System and method for load balancing network resources using a connection admission control engine
CN101741686A (zh) * 2008-11-13 2010-06-16 天津比蒙新帆信息技术有限公司 Traffic identification and control method for P2P networks based on mathematical modeling techniques
CN103888379A (zh) * 2013-12-03 2014-06-25 江苏达科信息科技有限公司 Improved queue scheduling algorithm based on trusted scheduling
CN104518985A (zh) * 2013-09-27 2015-04-15 国家广播电影电视总局广播科学研究院 Service node selection method and terminal in a distributed network environment

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002362568A1 (en) * 2001-09-28 2003-04-07 Savvis Communications Corporation System and method for policy dependent name to address resolutioin.
US7860964B2 (en) 2001-09-28 2010-12-28 Level 3 Communications, Llc Policy-based content delivery network selection
US8180922B2 (en) * 2003-11-14 2012-05-15 Cisco Technology, Inc. Load balancing mechanism using resource availability profiles
US20100223364A1 (en) * 2009-02-27 2010-09-02 Yottaa Inc System and method for network traffic management and load balancing
EP2282458A1 (en) * 2009-07-17 2011-02-09 BRITISH TELECOMMUNICATIONS public limited company Usage policing in data networks
CN101674247B (zh) * 2009-10-21 2015-01-28 中兴通讯股份有限公司 Method and apparatus for policing service traffic
CN102143199A (zh) * 2010-10-19 2011-08-03 华为技术有限公司 Content acquisition method, node, and content network
CN102082693B (zh) * 2011-02-15 2015-05-20 中兴通讯股份有限公司 Network traffic policing method and apparatus
WO2013113181A1 (en) * 2012-01-31 2013-08-08 Telefonaktiebolaget L M Ericsson (Publ) Server selection in communications network with respect to a mobile user
CN103685436B (zh) * 2012-09-26 2017-05-24 联想(北京)有限公司 Data acquisition method and terminal device
US20140337472A1 (en) * 2012-12-13 2014-11-13 Level 3 Communications, Llc Beacon Services in a Content Delivery Framework
CN103945461A (zh) * 2013-01-23 2014-07-23 中兴通讯股份有限公司 Multi-stream data transmission method and apparatus
CN104427005B (zh) * 2013-08-20 2018-01-02 阿里巴巴集团控股有限公司 Method and system for precise request scheduling on a CDN
CN105207947B (zh) * 2015-08-28 2018-12-04 网宿科技股份有限公司 Progressive traffic scheduling method and system with jitter filtering
US10306308B2 (en) * 2015-12-15 2019-05-28 Telefonaktiebolaget Lm Ericsson (Publ) System and method for media delivery using common mezzanine distribution format
CN106549878B (zh) * 2016-10-26 2020-07-07 中国银联股份有限公司 Service offloading method and apparatus
CN107222560A (zh) * 2017-06-29 2017-09-29 珠海市魅族科技有限公司 Multi-node back-to-origin method, apparatus, and storage medium
CN107613030A (zh) * 2017-11-06 2018-01-19 网宿科技股份有限公司 Method and system for processing service requests

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080049753A1 (en) * 2006-08-22 2008-02-28 Heinze John M System and method for load balancing network resources using a connection admission control engine
CN101741686A (zh) * 2008-11-13 2010-06-16 天津比蒙新帆信息技术有限公司 Traffic identification and control method for P2P networks based on mathematical modeling techniques
CN104518985A (zh) * 2013-09-27 2015-04-15 国家广播电影电视总局广播科学研究院 Service node selection method and terminal in a distributed network environment
CN103888379A (zh) * 2013-12-03 2014-06-25 江苏达科信息科技有限公司 Improved queue scheduling algorithm based on trusted scheduling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3547625A4 *

Also Published As

Publication number Publication date
CN108366020A (zh) 2018-08-03
EP3547625B1 (en) 2021-06-23
US20210211490A1 (en) 2021-07-08
US11178220B2 (en) 2021-11-16
EP3547625A1 (en) 2019-10-02
EP3547625A4 (en) 2019-10-02
CN108366020B (zh) 2020-09-18

Similar Documents

Publication Publication Date Title
WO2019148569A1 (zh) Method and system for sending an acquisition request for a data resource
WO2019148568A1 (zh) Method and system for sending an acquisition request for a data resource
US11900098B2 (en) Micro-service management system and deployment method, and related device
EP2975820B1 (en) Reputation-based strategy for forwarding and responding to interests over a content centric network
EP3211857B1 (en) Http scheduling system and method of content delivery network
EP3453148B1 (en) System and method for latency-based queuing
CN106201356B (zh) 一种基于链路可用带宽状态的动态数据调度方法
JP6881575B2 (ja) 資源割当システム、管理装置、方法およびプログラム
WO2017101366A1 (zh) Cdn服务节点的调度方法及服务器
WO2018201856A1 (en) System and method for self organizing data center
EP3456029B1 (en) Network node and method of receiving an http-message
US11805172B1 (en) File access service
CN113301071A (zh) Back-to-origin method, apparatus, and device for a network
CN112929427A (zh) 一种面向低轨卫星边缘计算的服务节点确定方法及装置
WO2022121079A1 (zh) Link aggregation method for a traffic forwarding device, and traffic forwarding device
EP3063969B1 (en) System and method for traffic engineering using link buffer status
CN112087382B (zh) Service routing method and apparatus
CN115580618A (zh) Load balancing method, apparatus, device, and medium
CN109688171B (zh) Cache space scheduling method, apparatus, and system
WO2018000617A1 (zh) Database update method and scheduling server
US10834005B2 (en) Buffer shortage management system
JP6195785B2 (ja) Communication device, server device, and program for storing transferred content
Abdellah et al. A based cloud computing solution using DiffServ architecture for scalability issue in IoT networks with multiple SLA requirements
CN117478604A (zh) Packet forwarding method, system, device, and medium based on dynamically selected links
JP2015049854A (ja) Communication device for storing transferred content

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2018769023

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2018769023

Country of ref document: EP

Effective date: 20180925


NENP Non-entry into the national phase

Ref country code: DE