WO2017101366A1 - Cdn服务节点的调度方法及服务器 - Google Patents

Cdn服务节点的调度方法及服务器

Info

Publication number
WO2017101366A1
WO2017101366A1 PCT/CN2016/088861 CN2016088861W WO2017101366A1 WO 2017101366 A1 WO2017101366 A1 WO 2017101366A1 CN 2016088861 W CN2016088861 W CN 2016088861W WO 2017101366 A1 WO2017101366 A1 WO 2017101366A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
user
cache
nodes
determining
Prior art date
Application number
PCT/CN2016/088861
Other languages
English (en)
French (fr)
Inventor
李洪福
Original Assignee
乐视控股(北京)有限公司
乐视云计算有限公司
Priority date
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司, 乐视云计算有限公司
Priority to US15/246,134 (published as US20170171344A1)
Publication of WO2017101366A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40: Support for services or applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • the present invention relates to the field of Internet technologies, and in particular, to a scheduling method and a server for a CDN service node.
  • CDN stands for Content Delivery Network, that is, a content distribution network.
  • Its goal is to publish a site's content to the network "edge" closest to the user by adding a new layer of network architecture on top of the existing Internet.
  • Users can then obtain the required content nearby, which relieves Internet congestion and improves the response speed when users visit the website.
  • CDN technology is divided into dynamic acceleration and static acceleration technologies.
  • What is currently in widespread use is mostly static acceleration, that is, deploying CDN nodes at the edge of the network.
  • When a user requests a service, the CDN system directs the user to the nearest edge node through scheduling, i.e., a Global Server Load Balance (GSLB) policy, and that node is responsible for processing the user's request.
  • GSLB: Global Server Load Balance.
  • Otherwise, the node initiates a back-to-source request to other nodes or the origin server on the user's behalf, and scheduling finds a back-to-source path.
  • The content requested by the user is obtained along the back-to-source path and then forwarded to the user, completing the processing of the request.
  • In implementing the present invention, the inventor found that a CDN network has many nodes, but sometimes there may be only one uploaded data source, which is especially evident in live streaming. Current practice is that, when the edge node does not hold the content requested by the user, a shortest path is determined by some method and the request goes back to the source, finally finding for the user the origin server that provides the data source.
  • However, the prior art does not consider the case where a cache of the requested content already exists somewhere among the CDN's network-wide nodes. In fact, other users may already be watching the same live video, and the video may already be cached on a CDN node closer to this user. In that case, the user may obtain the data faster from the node that already holds the cache.
  • Taking into account that caches may already exist on nodes across the CDN network, the shortest back-to-source path obtained by some method may not give the shortest access time and may not provide the user with the best service node. Therefore, how to provide the user with a service node with shorter access time, while taking the network-wide CDN node caches into account, and thereby improve the user experience, has become an urgent problem to be solved.
  • The invention provides a scheduling method and a server for a CDN service node, which are used to solve the technical problem in the prior art that the optimal CDN node cannot be scheduled for the user, which degrades the user experience.
  • a scheduling method of a CDN service node including:
  • generating a minimum spanning tree according to the distance metric values between all nodes;
  • receiving a user's access request, and determining the location of the user and the requested content;
  • using the minimum spanning tree, determining the cache node closest to the user that has cached the content;
  • selecting the cache node as the service node that responds to the access request.
  • a scheduling server for a CDN serving node including:
  • a minimum spanning tree determining module, configured to generate a minimum spanning tree according to the distance metric values between all nodes;
  • an access request receiving module, configured to receive a user's access request and determine the location of the user and the requested content;
  • a cache node determining module, configured to determine, by using the minimum spanning tree, the cache node closest to the user that has cached the content;
  • a service node scheduling module, configured to select the cache node as the service node that responds to the access request.
  • In the scheduling method and server for a CDN service node of the embodiments of the present invention, the distances between nodes are determined globally, so that when scheduling a node for a user the scheduling center can determine the node closest to the user directly from the minimum spanning tree, which reduces the response time of scheduling.
  • In addition, by determining which of all the nodes have already cached the video requested by the user's access request as cache nodes, and then determining from the minimum spanning tree the cache node closest to the user, the degradation of service quality caused in the prior art by the response delay of going directly back to the source is avoided.
  • FIG. 1 is a flowchart of an embodiment of a scheduling method of a CDN service node according to the present invention
  • FIG. 2 is a flowchart of another embodiment of a scheduling method of a CDN serving node according to the present invention.
  • FIG. 3 is a flowchart of still another embodiment of a scheduling method of a CDN service node according to the present invention.
  • FIG. 4 is a schematic diagram of an embodiment of a scheduling server of a CDN serving node according to the present invention.
  • FIG. 5 is a schematic diagram of an embodiment of a cache node determining module in the present invention.
  • FIG. 6 is a schematic diagram of another embodiment of a cache node determining module in the present invention.
  • FIG. 7 is a system architecture diagram of a scheduling method and a scheduling server of a CDN service node according to the present invention.
  • FIG. 8 is a schematic structural view of an embodiment of an electronic device according to the present invention.
  • the invention is applicable to a wide variety of general purpose or special purpose computing system environments or configurations.
  • the invention may be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.
  • a scheduling method of a CDN service node includes:
  • S11: the scheduling center determines the distance metric values between nodes according to the historical data transmission quality between the nodes;
  • S12: the scheduling center generates a minimum spanning tree according to the distance metric values between all nodes; the distance metric value between each pair of nodes serves as the weight between those two nodes, and a minimum spanning tree over all nodes is obtained with a specific algorithm, which may be any minimum spanning tree algorithm, for example the Prim algorithm or the Kruskal algorithm (two algorithms are listed here, but the choice is not limited to these two);
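  • For illustration only, the following is a minimal Python sketch of building such a minimum spanning tree with the Kruskal algorithm from the pairwise distance metric values; the names (`build_mst`, `distance`) and the edge-list representation are assumptions of this sketch, not part of the patent.

```python
def build_mst(nodes, distance):
    """nodes: iterable of node ids; distance: dict mapping an unordered pair (a, b) to its metric value."""
    parent = {n: n for n in nodes}

    def find(n):                                   # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    mst = []                                       # resulting edges as (a, b, weight)
    for (a, b), w in sorted(distance.items(), key=lambda kv: kv[1]):
        ra, rb = find(a), find(b)
        if ra != rb:                               # edge connects two components: keep it
            parent[ra] = rb
            mst.append((a, b, w))
            if len(mst) == len(parent) - 1:        # spanning tree complete
                break
    return mst
```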
  • S13: the scheduling center receives the user's access request and determines the location of the user and the requested content, where the location information is information about the region where the user is located, and the requested content is feature information of the video requested by the user, for example the name of the requested video;
  • S14: the scheduling center uses the minimum spanning tree to determine the cache node closest to the user that has cached the content; the minimum spanning tree over all nodes has already been obtained in the preceding steps, and a node that has cached the content requested by the user is then selected from the minimum spanning tree;
  • S15: the scheduling center selects the cache node as the service node that responds to the access request.
  • In this embodiment, the scheduling center determines the distances between nodes globally, so that when the scheduling center schedules a node for a user, the node closest to the user can be determined directly from the minimum spanning tree, which reduces the response time of scheduling.
  • In addition, the scheduling center determines which of all the nodes have already cached the video requested by the user's access request as cache nodes, and then determines from the minimum spanning tree the cache node closest to the user, thereby avoiding the degradation of service quality caused in the prior art by the response delay of going directly back to the source.
  • In embodiments of the present invention, a minimum spanning tree may be generated from the graph formed by all nodes according to the historical data transmission rates, round-trip times, and packet loss rates between the nodes.
  • In some embodiments, the historical data transmission quality used by the scheduling center to determine the inter-node distance metric values includes at least one of the data transmission rate, the round-trip time, and the packet loss rate.
  • In addition, the scheduling center generating a minimum spanning tree based on the distance metric values between all nodes includes:
  • the scheduling center assigns a first weight, a second weight, and a third weight to the reciprocal of the data transmission rate, the round-trip time, and the packet loss rate, respectively; the scheduling center performs a weighted summation of the reciprocal of the data transmission rate, the round-trip time, and the packet loss rate to obtain the distance metric value between the nodes; and the scheduling center generates the minimum spanning tree based on the distance metric values between the nodes.
  • The tunable first, second, and third weights correspond to how strongly the reciprocal of the data transmission rate, the round-trip time, and the packet loss rate each influence the inter-node distance metric, and the three weights sum to 1; that is, the weights are normalized. This makes it convenient to adjust their proportions in real time according to how much each of the three factors affects the metric distance, so that the distance metric value between nodes is obtained as accurately as possible and the distances between nodes are determined more accurately.
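  • As a hedged illustration of the weighted summation described above, the sketch below combines the reciprocal of the data transmission rate, the round-trip time, and the packet loss rate with three normalized weights; the default weight values are placeholders, and in practice the three quantities would also need to be scaled to comparable units, which the patent leaves open.

```python
def distance_metric(rate, rtt, loss, w1=0.4, w2=0.4, w3=0.2):
    """rate: data transmission rate; rtt: round-trip time; loss: packet loss rate.
    The default weights are placeholders; the text only requires w1 + w2 + w3 == 1."""
    assert abs((w1 + w2 + w3) - 1.0) < 1e-9, "the three weights are normalized to sum to 1"
    return w1 * (1.0 / rate) + w2 * rtt + w3 * loss
```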
  • In this embodiment, the scheduling center measures the distance between two nodes by jointly considering the download rate, the round-trip time, and the packet loss rate between them. The download rate measures the speed of data transmission between the two nodes; the larger the download rate, the closer the two nodes, so the download rate is inversely proportional to the distance between them. The round-trip time is the time for one complete communication between the two nodes; the shorter the round-trip time, the closer the two nodes. The packet loss rate measures the integrity of the information transmitted when the two nodes communicate; the larger the packet loss rate, the less complete the transmitted information, and thus the larger the distance between the two nodes.
  • In this embodiment, the data transmission rate and the round-trip time are obtained by direct monitoring.
  • Simply put, the round-trip time is the time that elapses from when the sender starts sending data until it receives an acknowledgement from the receiver.
  • Round-trip time is an important performance indicator in computer networks: it is the total delay from when the sender starts transmitting data until the sender receives the acknowledgement from the receiver (the receiver sends the acknowledgement immediately upon receiving the data).
  • The RTT (round-trip time) value is determined by three parts: the propagation time of the links, the processing time at the end systems, and the queuing and processing time in router buffers.
  • The packet loss rate (loss tolerance, or packet loss rate) is the ratio of the number of packets lost in a test to the number of data packets sent.
  • It is calculated as: [(input packets - output packets) / input packets] * 100%.
  • In this embodiment, the packet loss rate is the data sent by the first node minus the data received by the second node, divided by the data sent by the first node, multiplied by one hundred percent.
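  • The same formula as a one-line sketch, with assumed parameter names:

```python
def packet_loss_rate(sent_packets, received_packets):
    """[(input packets - output packets) / input packets] * 100%, per the formula above."""
    return (sent_packets - received_packets) / sent_packets * 100.0
```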
  • In some embodiments, the scheduling center using the minimum spanning tree to determine the cache node closest to the user that has cached the content includes:
  • the scheduling center queries, according to the content, all nodes for the multiple cache nodes that have already cached the requested content;
  • the scheduling center allocates the corresponding nearest service node according to the location of the user;
  • the scheduling center determines whether the nearest service node is a cache node; if so, it is determined to be the cache node closest to the user; otherwise, the scheduling center selects, in the minimum spanning tree, the cache node closest to that nearest service node. Determining whether the nearest service node is a cache node specifically means determining whether the requested content is cached on the nearest service node, the requested content being the content corresponding to the user's access request.
  • In this embodiment, the scheduling center queries, according to the content (the video content requested by the user), all nodes for the multiple nodes that have already cached the requested video and treats them as cache nodes; that is, all cache nodes in the minimum spanning tree are determined in one pass for the subsequent determination of the nearest cache node that serves the user. This avoids going directly back to the source when the service node closest to the user has not cached the requested video, which would delay the service provided to the user and harm the user experience.
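  • A minimal sketch of this first selection strategy, assuming the MST edge list produced by the earlier `build_mst` sketch and a precomputed set of cache nodes; all helper names are illustrative, not taken from the patent.

```python
from collections import defaultdict
import heapq

def tree_distances(mst, source):
    """Dijkstra over the MST edge list, returning the tree distance from source to every node."""
    adj = defaultdict(list)
    for a, b, w in mst:
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist[n]:
            continue
        for m, w in adj[n]:
            if d + w < dist.get(m, float("inf")):
                dist[m] = d + w
                heapq.heappush(heap, (d + w, m))
    return dist

def pick_cache_node_all_at_once(mst, nearest_node, cache_nodes):
    """cache_nodes: set of nodes already known to cache the requested content."""
    if nearest_node in cache_nodes:        # the nearest service node already caches the content
        return nearest_node
    if not cache_nodes:                    # nothing cached anywhere: caller falls back to the origin
        return None
    dist = tree_distances(mst, nearest_node)
    return min(cache_nodes, key=lambda n: dist.get(n, float("inf")))
```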
  • In other embodiments, the scheduling center using the minimum spanning tree to determine the cache node closest to the user that has cached the content includes:
  • the scheduling center allocates the corresponding nearest service node according to the location of the user;
  • the scheduling center determines, according to the content, whether that nearest service node has cached the content; if so, it is determined to be the cache node closest to the user; otherwise, the scheduling center selects, in the minimum spanning tree, the service node next nearest to the nearest service node (because the distances between the service nodes in the minimum spanning tree are already determined, service nodes are selected in order from near to far until a cache node is found) and repeats the determination until the nearest cache node is determined.
  • This embodiment also provides a way for the scheduling center to determine, from the minimum spanning tree, the nearest cache node that serves the user, and it avoids going directly back to the source when the service node closest to the user has not cached the requested video, which would delay the service provided to the user and harm the user experience.
  • The difference from the previous embodiment is that the scheduling center does not determine all cache nodes that have cached the requested video up front; instead, it selects from the minimum spanning tree the service node closest to the user and checks whether that service node is a cache node. If it is not, it selects the next nearest service node to the user and checks again.
  • In this way, service nodes are examined in order from near to far until the cache node is determined.
  • This way of judging avoids the redundant computation of determining all cache nodes at once: if n cache nodes are determined but only one of them is the optimal cache node, the computation spent determining the other n-1 cache nodes is redundant, which wastes resources and introduces some delay. By contrast, selecting and judging nodes one by one means that once the cache node is determined, no further redundant computation for other cache nodes is needed.
  • Therefore, this embodiment of the invention saves computation time, shortens the time needed to schedule a cache node and provide service to the user, and improves the user experience.
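  • A minimal sketch of this incremental strategy, reusing `tree_distances` from the previous sketch; `has_cached` stands in for an assumed cache-state lookup that the patent does not specify.

```python
def pick_cache_node_incrementally(mst, nearest_node, has_cached):
    """has_cached(node) -> bool is an assumed lookup of whether a node caches the content."""
    dist = tree_distances(mst, nearest_node)       # distances along the minimum spanning tree
    for node in sorted(dist, key=dist.get):        # nearest service node first, then next nearest, ...
        if has_cached(node):
            return node
    return None                                    # no node caches the content: fall back to the origin
```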
  • a related function module can be implemented by a hardware processor.
  • an embodiment of the present invention further provides a scheduling server of a CDN serving node, including:
  • a minimum spanning tree determining module, configured to generate a minimum spanning tree according to the distance metric values between all nodes;
  • an access request receiving module, configured to receive a user's access request and determine the location of the user and the requested content;
  • a cache node determining module, configured to determine, by using the minimum spanning tree determined by the minimum spanning tree determining module, the cache node closest to the user that has cached the content;
  • a service node scheduling module, configured to select the cache node determined by the cache node determining module as the service node that responds to the access request.
  • In this embodiment, the scheduling server determines the distances between nodes globally, so that the scheduling center (the scheduling server is the scheduling center, or the scheduling server is only one or more servers within the scheduling center) can, when scheduling a node for a user, determine the node closest to the user directly from the minimum spanning tree, which reduces the response time of scheduling.
  • In addition, the scheduling server determines which of all the nodes have already cached the video requested by the user's access request as cache nodes and then determines from the minimum spanning tree the cache node closest to the user, thereby avoiding the technical problem in the prior art of degraded service quality caused by the response delay of going directly back to the source.
  • the scheduling server of the CDN service node may be a separate server or a server cluster, and each of the foregoing modules may be a separate server or a server cluster.
  • In that case, the interaction between the modules is embodied as interaction between the servers or server clusters corresponding to the modules, and those servers or server clusters together constitute the scheduling server of the present invention.
  • Specifically, the scheduling server formed by the servers or server clusters corresponding to the modules includes:
  • a minimum spanning tree determining server or server cluster, configured to generate a minimum spanning tree according to the distance metric values between all nodes;
  • an access request receiving server or server cluster, configured to receive a user's access request and determine the location of the user and the requested content;
  • a cache node determining server or server cluster, configured to determine, by using the minimum spanning tree determined by the minimum spanning tree determining server or server cluster, the cache node closest to the user that has cached the content;
  • a service node scheduling server or server cluster, configured to select the cache node determined by the cache node determining server or server cluster as the service node that responds to the access request.
  • In an alternative embodiment, several of the above modules may together form one server or server cluster. For example, the minimum spanning tree determining module constitutes a first server or first server cluster, the access request receiving module constitutes a second server or second server cluster, and the cache node determining module and the service node scheduling module together constitute a third server or third server cluster.
  • In that case, the interaction between the above modules appears as interaction between the first through third servers or between the first through third server clusters, and the first through third servers or the first through third server clusters together constitute the scheduling server of the present invention.
  • In embodiments of the present invention, the scheduling server may further include a distance metric value module, configured to determine the inter-node distance metric values according to the historical data transmission quality between the nodes.
  • In this embodiment, the distance metric value module is a separate server or server cluster, and together with the separate servers or server clusters corresponding to the minimum spanning tree determining module, the access request receiving module, the cache node determining module, and the service node scheduling module it constitutes the scheduling server; the interaction between the modules that constitute the scheduling server then appears as interaction between the separate servers or server clusters corresponding to those modules.
  • Specifically, the scheduling server formed by the servers or server clusters corresponding to the modules includes:
  • a distance metric value server or server cluster, configured to determine the inter-node distance metric values according to the historical data transmission quality between the nodes;
  • a minimum spanning tree determining server or server cluster, configured to generate a minimum spanning tree according to the distance metric values between all nodes;
  • an access request receiving server or server cluster, configured to receive a user's access request and determine the location of the user and the requested content;
  • a cache node determining server or server cluster, configured to determine, by using the minimum spanning tree determined by the minimum spanning tree determining server or server cluster, the cache node closest to the user that has cached the content;
  • a service node scheduling server or server cluster, configured to select the cache node determined by the cache node determining server or server cluster as the service node that responds to the access request.
  • In an alternative embodiment, several of the above modules may together form one server or server cluster.
  • For example, the minimum spanning tree determining module and the distance metric value module together constitute a first server or first server cluster, the access request receiving module constitutes a second server or second server cluster, and the cache node determining module and the service node scheduling module together constitute a third server or third server cluster.
  • In that case, the interaction between the above modules appears as interaction between the first through third servers or between the first through third server clusters, and the first through third servers or the first through third server clusters together constitute the scheduling server of the present invention.
  • In embodiments of the present invention, a minimum spanning tree may be generated from the graph formed by all nodes according to the historical data transmission rates, round-trip times, and packet loss rates between the nodes.
  • In some embodiments, the historical data transmission quality used to determine the inter-node distance metric values includes at least one of the data transmission rate, the round-trip time, and the packet loss rate.
  • In this embodiment, the distance between two nodes is measured by jointly considering the download rate, the round-trip time, and the packet loss rate between them. The download rate measures the speed of data transmission between the two nodes; the larger the download rate, the closer the two nodes, so the download rate is inversely proportional to the distance between them. The round-trip time is the time for one complete communication between the two nodes; the shorter the round-trip time, the closer the two nodes. The packet loss rate measures the integrity of the information transmitted when the two nodes communicate; the larger the packet loss rate, the less complete the transmitted information, i.e., the larger the distance between the two nodes. This makes the finally determined distance value between two nodes more reliable, provides a more reliable scheduling basis for content distribution in the CDN system, and guarantees the quality of service to the user, which helps improve the user experience.
  • In some embodiments, the cache node determining module includes:
  • a multi-cache-node determining unit, configured to query, according to the content, all nodes for the multiple cache nodes that have already cached the requested content;
  • a nearest node determining unit, configured to allocate the corresponding nearest service node according to the location of the user;
  • a nearest cache node determining unit, configured to determine whether the nearest service node determined by the nearest node determining unit is one of the multiple cache nodes determined by the multi-cache-node determining unit; if so, it is determined to be the cache node closest to the user; otherwise, the cache node closest to that nearest service node is selected in the minimum spanning tree.
  • In this embodiment, the cache node determining module may be one server or server cluster, and each of its units may be a separate server or server cluster; in that case, the interaction between the units appears as interaction between the servers or server clusters corresponding to the units, and those servers or server clusters together constitute the cache node determining module used to form the scheduling server of the present invention.
  • In this embodiment, the multiple nodes among all nodes that have already cached the requested video are queried as cache nodes according to the content (the video content requested by the user); that is, all cache nodes in the minimum spanning tree are determined in one pass for the subsequent determination of the nearest cache node that serves the user, which avoids going directly back to the source when the service node closest to the user has not cached the requested video and thereby delaying the service provided to the user and harming the user experience.
  • In other embodiments, the cache node determining module includes:
  • a nearest node determining unit, configured to allocate the corresponding nearest service node according to the location of the user;
  • a nearest cache node determining unit, configured to determine, according to the content, whether the nearest service node determined by the nearest node determining unit has cached the content; if so, it is determined to be the cache node closest to the user; otherwise, the service node next nearest to the nearest service node is selected in the minimum spanning tree in turn and the determination is repeated until the nearest cache node is determined.
  • In this embodiment, the cache node determining module may be one server or server cluster, and each of its units may be a separate server or server cluster; in that case, the interaction between the units appears as interaction between the servers or server clusters corresponding to the units, and those servers or server clusters together constitute the cache node determining module used to form the scheduling server of the present invention.
  • This embodiment also provides a server that determines, from the minimum spanning tree, the nearest cache node that serves the user, and it avoids going directly back to the source when the service node closest to the user has not cached the requested video, which would delay the service provided to the user and harm the user experience.
  • The difference from the previous embodiment is that the nearest cache node determining unit in this embodiment does not determine all cache nodes that have cached the requested video up front; instead, it selects from the minimum spanning tree the service node closest to the user and judges whether that service node is a cache node; if not, it further determines the service node next nearest to the user and judges whether it is a cache node, so that service nodes are selected and judged in order from near to far until the cache node is determined.
  • a related function module can be implemented by a hardware processor.
  • As shown in FIG. 7, the system architecture diagram 700 for implementing the scheduling method and scheduling server of a CDN service node of the present invention includes a scheduling center 710, a CDN node group 720, and a client 730, where the scheduling center 710 includes scheduling servers 711-71j and the CDN node group 720 includes CDN nodes 721-72i.
  • In this system framework, the user sends an access request (for example, a video access request) to the scheduling center through the client 730; the scheduling center parses the received access request to determine the location of the user and the requested content, uses the minimum spanning tree generated in advance from information uploaded by the CDN node group 720, such as the reciprocal of the data transmission rate, the round-trip time, and the packet loss rate between nodes, to determine the cache node closest to the user that has cached the content, and finally selects that cache node as the service node that responds to the access request, the minimum spanning tree being generated based on the distance metric values between all nodes in the CDN node group 720.
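  • Tying the earlier sketches together, the following is a hedged end-to-end illustration of how a scheduling server in the scheduling center 710 might handle one request; the request format, the pre-allocated nearest service node, and the `has_cached` lookup are all assumptions of this sketch, not details specified by the patent.

```python
def schedule(request, nearest_node, nodes, history, has_cached):
    """request: {"location": ..., "content": ...}; nearest_node: the service node already
    allocated for the user's location; history: {(a, b): (rate, rtt, loss)} measurements."""
    distance = {pair: distance_metric(*q) for pair, q in history.items()}
    mst = build_mst(nodes, distance)               # in practice built ahead of time, not per request
    content = request["content"]
    cache_node = pick_cache_node_incrementally(
        mst, nearest_node, lambda n: has_cached(n, content))
    return cache_node                              # None means no cache exists and the request goes back to the source
```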
  • The embodiments of the present invention further provide a computer-readable non-transitory storage medium storing one or more programs that include execution instructions, which can be read and executed by an electronic device (including but not limited to a computer, a server, or a network device, etc.) to perform the relevant steps of the above method embodiments, for example:
  • determining the distance metric values between nodes;
  • generating a minimum spanning tree according to the distance metric values between all nodes;
  • receiving a user's access request, and determining the location of the user and the requested content;
  • using the minimum spanning tree, determining the cache node closest to the user that has cached the content;
  • selecting the cache node as the service node that responds to the access request.
  • FIG. 8 is a schematic structural diagram of an embodiment of an electronic device 800 (including but not limited to a computer, a server, or a network device, etc.); the specific embodiments of the present application do not limit the specific implementation of the electronic device 800.
  • the electronic device 800 can include:
  • a processor 810, a communications interface 820, a memory 830, and a communication bus 840, wherein:
  • the processor 810, the communication interface 820, and the memory 830 communicate with one another via the communication bus 840.
  • the communication interface 820 is configured to communicate with a network element such as a client.
  • the processor 810 is configured to execute the program 832 in the memory 830, and specifically perform the related steps in the foregoing method embodiments.
  • program 832 can include program code, the program code including computer operating instructions.
  • The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • CPU: central processing unit.
  • ASIC: application-specific integrated circuit.
  • the scheduling server in the above implementation includes:
  • a memory for storing computer operating instructions
  • a processor, configured to execute the computer operating instructions stored in the memory, so as to perform:
  • determining the distance metric values between nodes;
  • generating a minimum spanning tree according to the distance metric values between all nodes;
  • receiving a user's access request, and determining the location of the user and the requested content;
  • using the minimum spanning tree, determining the cache node closest to the user that has cached the content;
  • selecting the cache node as the service node that responds to the access request.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention provides a scheduling method for a CDN service node, including: determining distance metric values between nodes; generating a minimum spanning tree according to the distance metric values between all nodes; receiving a user's access request and determining the location of the user and the requested content; using the minimum spanning tree, determining the cache node closest to the user that has cached the content; and selecting the cache node as the service node that responds to the access request. A corresponding scheduling server is also provided. The scheduling method and server for a CDN service node of the present invention avoid the technical problem in the prior art of degraded service quality caused by the response delay of going directly back to the source.

Description

CDN服务节点的调度方法及服务器 技术领域
本发明涉及互联网技术领域,特别涉及一种CDN服务节点的调度方法及服务器。
背景技术
CDN的全称是Content Delivery Network,即内容分发网络。其目的是通过在现有的Internet中增加一层新的网络架构,将网站的内容发布到最接近用户的网络“边缘”。使用户可以就近取得所需的内容,解决Internet网络拥塞状况,提高用户访问网站的响应速度。
CDN技术分为动态加速和静态加速两种技术。目前普遍使用的多是静态加速,即在网络的边缘部署CDN节点。当有用户请求某项服务时,CDN系统通过调度,即全局负载均衡(Global Server Load Balance,GSLB)策略将用户定向到距它最近的一个边缘节点,该节点负责处理用户的请求。如果用户请求的内容在该节点上有缓存且有效,将缓存的内容发给该用户。否则,该节点会代理用户向其他节点或者源站服务器发起回源请求,调度寻找回源路径。根据回源路径取得用户请求的内容再转发给用户,完成这次请求的处理。
发明人在实现本发明的过程中发现,CDN网络中有很多个节点,但有时上传的数据源可能只有一个,特别在直播时尤其明显。现在一般作法是如果在边缘节点没有用户请求的内容时,根据某种方法确定最短路径而进行回源,最终为用户找到提供数据源的源站服务器。但是,现有技术并未考虑CDN全网节点中已经存在所请求内容的缓存的情况。实际上,可能已经有其它的用户在访问同一个直播视频,并且已经将视频缓存到了离本用户更近的一个CDN节点上了。这时用户如果到已经缓存的节点上获取数据可能会更快。这样来看综合CND全网结点已经存在缓存的情况,调度根据某种方法得到的最短回源路径,其访问时间可能并不是最短的,不能为用户提供最佳服务节点。 因此,如何在考虑CDN全网节点缓存的情况下,为用户提供一个访问时间更短的服务节点,提升用户体验,已经成为一个亟需解决的问题。
发明内容
本发明提供一种CDN服务节点的调度方法及服务器,用于解决现有技术中不能为用户调度最优CDN节点,从而影响用户体验的技术问题。
根据本发明的一个方面,提供了一种CDN服务节点的调度方法,包括:
根据所有节点间的各个距离度量值生成最小生成树;
接收用户的访问请求,确定用户所在的位置和请求的内容;
利用所述最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
选取所述缓存节点作为响应所述访问请求的服务节点。
根据本发明的另一个方面,提供一种CDN服务节点的调度服务器,包括:
最小生成树确定模块,用于根据所有节点间的各个距离度量值生成最小生成树;
访问请求接收模块,用于接收用户的访问请求,确定用户所在的位置和请求的内容;
缓存节点确定模块,用于利用所述最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
服务节点调度模块,用于选取所述缓存节点作为响应所述访问请求的服务节点。
本发明实施例的CDN服务节点的调度方法及服务器,从全局上确定下了各个节点之间的距离,使得调度中心为用户调度节点时可以直接根据最小生成树确定距离用户最近的节点,减少了调度的反应时间;此外通过确定所有的节点中已经缓存了用户访问请求所请求的视频节点为缓存节点,再根据最小生成树确定距离用户最近的缓存节点,避免了现有技术中因直接回源造成的响应时间的延时而引发的服务质量的降低。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对实施例描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明的CDN服务节点的调度方法的一实施例的流程图;
图2为本发明的CDN服务节点的调度方法的另一实施例的流程图;
图3为本发明的CDN服务节点的调度方法的又一实施例的流程图;
图4为本发明的CDN服务节点的调度服务器的一实施例的示意图;
图5为本发明中的缓存节点确定模块的一实施例的示意图;
图6为本发明中的缓存节点确定模块的另一实施例的示意图;
图7为本实施本发明的CDN服务节点的调度方法及调度服务器的系统架构图;
图8为本发明的电子设备的一实施例的结构示意图。
具体实施例
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
本发明可用于众多通用或专用的计算系统环境或配置中。例如:个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的消费电子设备、网络PC、小型计算 机、大型计算机、包括以上任何系统或设备的分布式计算环境等等。
本发明可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本发明,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”,不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
如图1所示,本发明的一实施例的CDN服务节点的调度方法,包括:
S11、调度中心根据节点间的历史数据传输质量确定节点间距离度量值;
S12、调度中心根据所有节点间的各个距离度量值生成最小生成树,两两节点间的距离度量值为两两节点之间的权重,根据特定的算法得到关于所有节点的最小生成树,其中特定算法可以是任何一种计算最小生成树的算法,例如,普里姆算法(Prim算法)、Kruskal算法,此处列举两种算法,但并不限于所列的两种算法;
S13、调度中心接收用户的访问请求,确定用户所在的位置和请求的内容,其中位置信息为用户所在地域的信息,请求的内容为用户请求的视频的特征信息,例如用户请求的视频的名称等;
S14、调度中心利用所述最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;根据步骤S13已经得到了关于所有节点的最小生成树,然后从最小生成树中选择出缓存了用户请求的内容的节点;
S15、调度中心选取所述缓存节点作为响应所述访问请求的服务节点。
本实施例中,调度中心从全局上确定下了各个节点之间的距离,使得调度中心为用户调度节点时可以直接根据最小生成树确定距离用户最近的节点,减少了调度的反应时间。此外,调度中心通过确定所有的节点中已经缓存了用户访问请求所请求的视频节点为缓存节点,再根据最小生成树确定距离用户最近的缓存节点,避免了现有技术中因直接回源造成的响应时间的延时而引发的服务质量的降低。在本发明的实施例中,可以根据所有节点间历史的数据传输速率、往返时间和丢包率将所有节点构成的图生成最小生成树。
在一些实施例中,调度中心根据节点间的历史数据传输质量确定节点间距离度量值中的历史数据传输质量包括数据传输速率、往返时间和丢包率中的至少一者。此外,调度中心根据所有节点间的各个距离度量值生成最小生成树包括:
调度中心对所述数据传输速率的倒数、往返时间和丢包率分别赋予第一权重、第二权重、第三权重;调度中心对数据传输速率的倒数、往返时间和丢包率进行加权求和得到节点之间的距离度量值;调度中心根据节点之间的距离度量值生成最小生成树。其中调度中心根据数据传输速率的倒数、往返时间和丢包率对节点间的距离度量的影响的大小对应的可调第一权重、第二权重、第三权重,且三者之和为1,即对三个权重之间进行了归一化处理,以便于实时的根据影响节点之间度量距离的三个因素(数据传输速率的倒数、往返时间和丢包率)对度量距离影响的大小对其比重进行调整,更加合理的调整数据传输速率的倒数、往返时间和丢包率三者之间的比重,以得到尽量准确的节点间的距离度量值,从而更加准确的确定各个节点间的距离。
本实施例中调度中心通过综合考虑两节点间的下载速率、往返时间和丢包率来度量两节点之间的距离(其中下载速率为两个节点之间进行数据传输的速度的衡量,下载速率越大说明两节点之间的距离越近,所以下载速率与两节点之间的距离成反比;往返时间为两节点之间进行一次完整的通信的时间,往返时间越短说明两节点间距离越近;丢包率为两节点之间通信时传输信息的完整性的度量,丢包率越大则表明两节点之间传输信息的越不完整,即两节点间的距离越大),使得最终确定的两节点之间的距离值更可靠,从而能够为CDN系统进行内容的分发提供更可靠的调度依据,保证对用户的服务质量,从而有助于提升用户体验。
本实施例中的数据传输速率和往返时间直接进行监测得到。其中,往返时间简单来说就是发送方从发送数据开始,到收到来自接受方的确认信息所经历的时间。往返时间在计算机网络中是一个重要的性能指标,表示从发送端发送数据开始,到发送端收到来自接收端的确认(接收端收到数据后便立即发送确认),总共经历的时延。RTT(Round-trip Time往返时间)值由三个部分决定:即链路的传播时间、末端系统的处理时间以及路由器的缓存中的排队和处理时间。丢包率(Loss Tolerance或Packet Loss Rate)是指测试中所丢失数据包数量占所发送数据组的比率,计算方法是:“[(输入报文-输出报文)/输入报文]*100%”。本实施例中的丢包率为第一节点发送的数据减去第二节点接收到的数据除以第一节点发送的数据乘以百分百。
如图2所述,在一些实施例中调度中心利用最小生成树,确定距用户最近的缓存了所述内容的缓存节点包括:
S21、调度中心根据内容查询所有节点中已经缓存有被请求内容的多个缓存节点;
S22、调度中心根据用户的位置分配相应的最近的服务节点;
S23、调度中心判断最近的服务节点是否为缓存节点,如果是则确定为距所述用户最近的缓存节点;否则调度中心在最小生成树中选择距离最近节点最近的缓存节点。判断最近的服务节点是否为缓存节点具体通过判断最近的服务节点中是否缓存有被请求内容,被请求内容是相应于用户的访问请求的内容。
本实施例中调度中心根据内容(用户请求的视频内容)查询所有节点中已经缓存有被请求视频的多个节点作为缓存节点,即一次性确定最小生成树中所有的缓存节点以供后续确定为用户提供服务的最近的缓存节点,避免了在距离用户最近的服务节点没有缓存被请求视频时直接回源而造成为用户提供服务延迟,影响用户体验的情况的发生。
如图3所述,在一些实施例中调度中心利用最小生成树,利用最小生成树,确定距用户最近的缓存了内容的缓存节点包括:
S31、调度中心根据用户的位置分配相应的最近的服务节点;
S32、调度中心根据内容判断所述最近的服务节点是否缓存有所述内容,如果是则确定为距用户最近的缓存节点;否则调度中心在最小生成树中依次选择距离最近的服务节点次近的服务节点(因为最下生成树中各个服务节点间的距离是已经确定的,所以由近到远依次选择服务节点进行判断直至确定缓存节点)并进行所述判断,直至确定最近的缓存节点。
本实施例也提供了一种调度中心从最小生成树中确定缓存节点为用户提供服务的最近的缓存节点的方法,避免了在距离用户最近的服务节点没有缓存被请求视频时直接回源而造成为用户提供服务延迟,影响用户体验的情况的发生。与上一实施例不同点在于,本实施例中调度中心不是直接确定出所有缓存了被请求视频的缓存节点,而是逐一的从最小生成树中选择距离用户最近的服务节点,然后在判断该服务节点是否为缓存节点。如果不是,则进一步确定距离用户次最近的服务节点,并进行判断是否为缓存节点。这样按照上述步骤依次由近到远一次选择距离用户最近的服务节点进行判断,直到确定缓存节点。这样的判断方法避免了一次性确定所有缓存节点而造成的计算上的冗余浪费。因为,如果确定出了n个缓存节点,但最终实际只有一个是最优的缓存节点,那么确定其它n-1个缓存节点进行的计算即为冗余计算,造成了浪费,并且产生了一定的时延。相反本实施例中通过逐一选择,逐一判断的方式,在确定缓存节点后,就不再需要进行确定其它缓存节点的冗余计算。因此,本发明实施例节省了计算时间,缩短了为用户调度缓存节点并提供服务的时间,提升了用户体验。
本发明实施例中可以通过硬件处理器(hardware processor)来实现相关功能模块。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作合并。但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制。因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有 详述的部分,可以参见其他实施例的相关描述。
如图4所示,本发明的实施例还提供一种CDN服务节点的调度服务器,其包括:
最小生成树确定模块,用于根据所有节点间的各个距离度量值生成最小生成树;
访问请求接收模块,用于接收用户的访问请求,确定用户所在的位置和请求的内容;
缓存节点确定模块,用于利用所述最小生成树确定模块确定的最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
服务节点调度模块,用于选取所述缓存节点确定模块确定的缓存节点作为响应所述访问请求的服务节点。
本实施例中,调度服务器从全局上确定下了各个节点之间的距离,使得调度中心(调度服务器就是调度中心,或者调度服务器只是调度中心中的一个或者多个服务器)为用户调度节点时可以直接根据最小生成树确定距离用户最近的节点,减少了调度的反应时间。此外调度服务器通过确定所有的节点中已经缓存了用户访问请求所请求的视频节点为缓存节点,再根据最小生成树确定距离用户最近的缓存节点,避免了现有技术中因直接回源造成的响应时间的延时而引发的服务质量的降低的技术问题。
在本实施例中,所述CDN服务节点的调度服务器可以为单独的服务器或者服务器集群,上述各模块可以为单独的服务器或者服务器集群,此时,各模块之间的交互体现为各模块所对应的服务器或者服务器集群之间的交互,各模块所对应的服务器或者服务器集群共同构成了本发明的调度服务器。
具体地,各模块所对应的服务器或者服务器集群共同构成的调度服务器包括:
最小生成树确定服务器或者服务器集群,用于根据所有节点间的各个距离度量值生成最小生成树;
访问请求接收服务器或者服务器集群,用于接收用户的访问请求,确定用户所在的位置和请求的内容;
缓存节点确定服务器或者服务器集群,用于利用所述最小生成树确定服务器或者服务器集群确定的最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
服务节点调度服务器或者服务器集群,用于选取所述缓存节点确定服务器或者服务器集群确定的缓存节点作为响应所述访问请求的服务节点。
在一种替代实施例中,可以是上述多个模块中的几个模块共同组成一个服务器或者服务器集群。例如:最小生成树确定模块构成第一服务器或者第一服务器集群,访问请求接收模块构成第二服务器或者第二服务器集群,缓存节点确定模块和服务节点调度模块共同构成第三服务器或者第三服务器集群。
此时,上述模块之间的交互表现为第一服务器至第三服务器之间的交互或者第一服务器集群至第三服务器集群之间的交互,所述第一服务器至第三服务器或第一服务器集群至第三服务器集群共同构成本发明的调度服务器。
在本发明的实施例中,该调度服务器还可以包括:距离度量值模块,用于根据节点间的历史数据传输质量确定节点间距离度量值。
在本实施例中,距离度量值模块为一个单独的服务器或者服务器集群,并且与最小生成树确定模块、访问请求接收模块、缓存节点确定模块和服务节点调度模块分别对应的单独的服务器或者服务器集群共同构成调度服务器,此时构成调度服务器的各个模块之间的交互表现为各个模块所对应的单独的服务器或者服务器之间的交互。
具体地,各模块所对应的服务器或者服务器集群共同构成的调度服务器包括:
距离度量值服务器或者服务器集群,用于根据节点间的历史数据传输质量确定节点间距离度量值。
最小生成树确定服务器或者服务器集群,用于根据所有节点间的各个距离度量值生成最小生成树;
访问请求接收服务器或者服务器集群,用于接收用户的访问请求,确定用户所在的位置和请求的内容;
缓存节点确定服务器或者服务器集群,用于利用所述最小生成树确定服务器或者服务器集群确定的最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
服务节点调度服务器或者服务器集群,用于选取所述缓存节点确定服务器或者服务器集群确定的缓存节点作为响应所述访问请求的服务节点。
在一种替代实施例中,可以是上述多个模块中的几个模块共同组成一个服务器或者服务器集群。例如:最小生成树确定模块和距离度量值模块共同构成第一服务器或者第一服务器集群,访问请求接收模块构成第二服务器或者第二服务器集群,缓存节点确定模块和服务节点调度模块共同构成第三服务器或者第三服务器集群。
此时,上述模块之间的交互表现为第一服务器至第三服务器之间的交互或者第一服务器集群至第三服务器集群之间的交互,所述第一服务器至第三服务器或第一服务器集群至第三服务器集群共同构成本发明的调度服务器。
在本发明的实施例中,可以根据所有节点间历史的数据传输速率、往返时间和丢包率将所有节点构成的图生成最小生成树。
在一些实施例中,根据节点间的历史数据传输质量确定节点间距离度量值中的历史数据传输质量包括数据传输速率、往返时间和丢包率中的至少一者。本实施例中通过综合考虑两节点间的下载速率、往返时间和丢包率来度量两节点之间的距离(其中下载速率为两个节点之间进行数据传输的速度的衡量,下载速率越大说明两节点之间的距离越近,所以下载速率与两节点之间的距离成反比。往返时间为两节点之间进行一次完整的通信的时间,往返时间越短说明两节点间距离越近。丢包率为两节点之间通信时传输信息的完整性的度量,丢包率越大则表明两节点之间传输信息的越不完整,即两节点间的距离越大),使得最终确定的两节点之间的距离值更可靠,从而能够为CDN系统进行内容的分发提供更可靠的调度依据,保证对用户的服务质量,从而有助于提升用户体验。
如图5所示,在一些实施例中,缓存节点确定模块包括:
多缓存节点确定单元,用于根据所述内容查询所有节点中已经缓存有被请求内容的多个缓存节点;
最近节点确定单元,用于根据所述用户的位置分配相应的最近的服务节点;
最近缓存节点确定单元,用于判断所述最近节点确定单元确定的最近的服务节点是否为多缓存节点确定单元所确定的多个缓存节点之一,如果是则确定为距所述用户最近的缓存节点;否则在最小生成树中选择距离所述最近节点最近的缓存节点。
在本实施例中,缓存节点确定模块可以为一个服务器或者服务器集群,其中每个单元可以是单独的服务器或者服务器集群,此时,上述单元之间的交互表现为各单元所对应的服务器或者服务器集群之间的交互,所述多个服务器或者服务器集群共同构成上述缓存节点确定模块以用于构成本发明的调度服务器。
在一种替代实施例中,可以是上述多个单元中的几个单元共同组成一个服务器或者服务器集群。
本实施例中根据内容(用户请求的视频内容)查询所有节点中已经缓存有被请求视频的多个节点作为缓存节点,即一次性确定最小生成树中所有的缓存节点以供后续确定为用户提供服务的最近的缓存节点,避免了在距离用户最近的服务节点没有缓存被请求视频时直接回源而造成为用户提供服务延迟,影响用户体验的情况的发生。
如图6所示,在一些实施例中,缓存节点确定模块包括:
最近节点确定单元,用于根据所述用户的位置分配相应的最近的服务节点;
最近缓存节点确定单元,用于根据所述内容判断所述最近节点确定单元所确定的最近的服务节点是否缓存有所述内容,如果是则确定为距所述用户最近的缓存节点;否则在最小生成树中依次选择距离所述最近的服务节点次近的服务节点并进行所述判断,直至确定最近的缓存节点。
在本实施例中,缓存节点确定模块可以为一个服务器或者服务器集群,其中每个单元可以是单独的服务器或者服务器集群,此时,上述单元之间的交互表现为各单元所对应的服务器或者服务器集群之间的交互,所述多个服 务器或者服务器集群共同构成上述缓存节点确定模块以用于构成本发明的调度服务器。
在一种替代实施例中,可以是上述多个单元中的几个单元共同组成一个服务器或者服务器集群。
本实施例也提供了一种从最小生成树中确定缓存节点为为用户提供服务的最近的缓存节点的服务器,避免了在距离用户最近的服务节点没有缓存被请求视频时直接回源而造成为用户提供服务延迟,影响用户体验的情况的发生。与上一实施例不同点在于,本实施例中的最近缓存节点确定单元不是直接确定出所有缓存了被请求视频的缓存节点,而是逐一的从最小生成树中选择距离用户最近的服务节点,然后在判断该服务节点是否为缓存节点,如果不是,则进一步确定距离用户次最近的服务节点,并进行判断是否为缓存节点,这样按照上述步骤依次由近到远一次选择距离用户最近的服务节点进行判断,直到确定缓存节点,这样的判断方法避免了一次性确定所有缓存节点而造成的计算上的冗余浪费,因为如果确定出了n个缓存节点,但最终实际只有一个是最优的缓存节点,那么确定其它n-1个缓存节点进行的计算即为冗余计算,造成了浪费,并且产生了一定的时延;相反本实施例中通过逐一选择,逐一判断的方式,在确定缓存节点后,就不再需要进行确定其它缓存节点的冗余计算,从而节省了计算时间,从而缩短了为用户调度缓存节点并提供服务的时间,提升了用户体验。
本发明实施例中可以通过硬件处理器(hardware processor)来实现相关功能模块。
如图7所示,为本实施本发明的实施例的CDN服务节点的调度方法及调度服务器的系统架构图700,包括调度中心710、CDN节点组720和客户端730,其中度中心710包括了调度服务器711-71j,CDN节点组720包括CDN节点721-72i。在本系统框架中,用户通过客户端730发送访问请求(例如,视频访问请求)至调度中心,调度中心解析接收到的访问请求以确定用户所在的位置和请求的内容,并利用预先基于CDN节点组720上传的节点间的数据传输速率的倒数、往返时间和丢包率等信息生成的最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点,最后选取所述缓存节点作为响应 所述访问请求的服务节点,其中最小生成树基于CDN节点组720中的所有节点间的各个距离度量值生成,选取所述缓存节点作为响应所述访问请求的服务节点。
本发明实施例还提供一种计算机可读的非瞬时性存储介质,所述存储介质中存储有一个或多个包括执行指令的程序,所述执行指令能够被电子设备(包括但不限于计算机,服务器,或者网络设备等)读取并执行,以用于执行上述方法实施例中的相关步骤,例如:
确定节点间距离度量值;
根据所有节点间的各个距离度量值生成最小生成树;
接收用户的访问请求,确定用户所在的位置和请求的内容;
利用所述最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
选取所述缓存节点作为响应所述访问请求的服务节点。
如图8所示,为本发明的电子设备800(包括但不限于计算机,服务器,或者网络设备等)的一实施例的结构示意图,本申请具体实施例并不对电子设备800的具体实现做限定。如图8所示,该电子设备800可以包括:
处理器(processor)810、通信接口(Communications Interface)820、存储器(memory)830、以及通信总线840。其中:
处理器810、通信接口820、以及存储器830通过通信总线840完成相互间的通信。
通信接口820,用于与比如客户端等的网元通信。
处理器810,用于执行存储器830中的程序832,具体可以执行上述方法实施例中的相关步骤。
具体地,程序832可以包括程序代码,所述程序代码包括计算机操作指令。
处理器810可能是一个中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本申请实施例的一个或多个集成电路。
上述实施中的调度服务器包括:
存储器,用于存放计算机操作指令;
处理器,用于执行所述存储器存储的计算机操作指令,以执行:
确定节点间距离度量值;
根据所有节点间的各个距离度量值生成最小生成树;
接收用户的访问请求,确定用户所在的位置和请求的内容;
利用所述最小生成树,确定距所述用户最近的缓存了所述内容的缓存节点;
选取所述缓存节点作为响应所述访问请求的服务节点。
以上所描述的方法实施例仅仅是示意性的,其中所述作为分离部件说明的单元或者模块可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
通过以上的实施例的描述,本领域的技术人员可以清楚地了解到各实施例可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件。基于这样的理解,上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行各个实施例或者实施例的某些部分所述的方法。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通 过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
最后应说明的是:以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (10)

  1. A scheduling method for a CDN service node, comprising:
    determining distance metric values between nodes;
    generating a minimum spanning tree according to the distance metric values between all nodes;
    receiving a user's access request, and determining the location of the user and the requested content;
    using the minimum spanning tree, determining the cache node closest to the user that has cached the content; and
    selecting the cache node as the service node that responds to the access request.
  2. The scheduling method for a CDN service node according to claim 1, wherein using the minimum spanning tree to determine the cache node closest to the user that has cached the content comprises:
    querying, according to the content, all service nodes for the multiple cache nodes that have already cached the requested content;
    allocating the corresponding nearest service node according to the location of the user; and
    determining whether the nearest service node is a cache node; if so, determining it to be the cache node closest to the user; otherwise, selecting, in the minimum spanning tree, the cache node closest to the nearest service node.
  3. The scheduling method for a CDN service node according to claim 1, wherein using the minimum spanning tree to determine the cache node closest to the user that has cached the content comprises:
    allocating the corresponding nearest service node according to the location of the user; and
    determining, according to the content, whether the nearest service node has cached the content; if so, determining it to be the cache node closest to the user; otherwise, selecting, in the minimum spanning tree, the service node next nearest to the nearest service node in turn and repeating the determination until the nearest cache node is determined.
  4. The scheduling method for a CDN service node according to any one of claims 1 to 3, wherein the historical data transmission quality comprises at least one of a data transmission rate, a round-trip time, and a packet loss rate.
  5. The scheduling method for a CDN service node according to any one of claims 1 to 3, comprising:
    determining the distance metric values between nodes according to the historical data transmission quality between the nodes.
  6. A scheduling server for a CDN service node, comprising:
    a minimum spanning tree determining module, configured to generate a minimum spanning tree according to the distance metric values between all nodes;
    an access request receiving module, configured to receive a user's access request and determine the location of the user and the requested content;
    a cache node determining module, configured to determine, by using the minimum spanning tree, the cache node closest to the user that has cached the content; and
    a service node scheduling module, configured to select the cache node as the service node that responds to the access request.
  7. The scheduling server for a CDN service node according to claim 6, wherein the cache node determining module comprises:
    a multi-cache-node determining unit, configured to query, according to the content, all service nodes for the multiple cache nodes that have already cached the requested content;
    a nearest node determining unit, configured to allocate the corresponding nearest service node according to the location of the user; and
    a nearest cache node determining unit, configured to determine whether the nearest service node is a cache node; if so, determine it to be the cache node closest to the user; otherwise, select, in the minimum spanning tree, the cache node closest to the nearest service node.
  8. The scheduling server for a CDN service node according to claim 6, wherein the cache node determining module comprises:
    a nearest node determining unit, configured to allocate the corresponding nearest service node according to the location of the user; and
    a nearest cache node determining unit, configured to determine, according to the content, whether the nearest service node has cached the content; if so, determine it to be the cache node closest to the user; otherwise, select, in the minimum spanning tree, the service node next nearest to the nearest service node in turn and repeat the determination until the nearest cache node is determined.
  9. The scheduling server for a CDN service node according to any one of claims 6 to 8, wherein the historical data transmission quality comprises at least one of a data transmission rate, a round-trip time, and a packet loss rate.
  10. The scheduling server for a CDN service node according to any one of claims 6 to 8, further comprising:
    a distance metric value module, configured to determine the distance metric values between nodes according to the historical data transmission quality between the nodes.
PCT/CN2016/088861 2015-12-15 2016-07-06 Cdn服务节点的调度方法及服务器 WO2017101366A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/246,134 US20170171344A1 (en) 2015-12-15 2016-08-24 Scheduling method and server for content delivery network service node

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510931364.6A CN105897845A (zh) 2015-12-15 2015-12-15 Cdn服务节点的调度方法及服务器
CN201510931364.6 2015-12-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/246,134 Continuation US20170171344A1 (en) 2015-12-15 2016-08-24 Scheduling method and server for content delivery network service node

Publications (1)

Publication Number Publication Date
WO2017101366A1 true WO2017101366A1 (zh) 2017-06-22

Family

ID=57002420

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088861 WO2017101366A1 (zh) 2015-12-15 2016-07-06 Cdn服务节点的调度方法及服务器

Country Status (2)

Country Link
CN (1) CN105897845A (zh)
WO (1) WO2017101366A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677464A (zh) * 2019-09-09 2020-01-10 深圳市网心科技有限公司 边缘节点设备、内容分发系统、方法、计算机设备及介质
CN112866060A (zh) * 2021-01-25 2021-05-28 湖南快乐阳光互动娱乐传媒有限公司 服务器响应时长获取方法及装置

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846613A (zh) * 2016-09-18 2018-03-27 中兴通讯股份有限公司 视频获取方法、平台和系统,终端,调度和缓存子系统
CN106357792B (zh) * 2016-10-10 2019-09-06 网宿科技股份有限公司 节点选路方法及系统
CN108076350A (zh) * 2016-11-14 2018-05-25 中国科学院声学研究所 一种基于路由器协同缓存的视频服务系统及方法
CN106656674A (zh) * 2016-12-29 2017-05-10 北京爱奇艺科技有限公司 一种数据回源的调度方法及装置
EP3606005B1 (en) * 2017-04-26 2022-04-13 Huawei Technologies Co., Ltd. Redirection method, control plane network element, aggregation user plane network element, content server and terminal device
CN107911722B (zh) * 2017-10-31 2020-06-16 贝壳找房(北京)科技有限公司 一种内容分发网络调度方法、装置、电子设备及计算机可读存储介质
CN110740146B (zh) * 2018-07-18 2020-06-26 贵州白山云科技股份有限公司 一种调度缓存节点的方法、装置及计算机网络系统
CN109151067B (zh) * 2018-10-17 2020-11-03 广东广信通信服务有限公司 一种结合缓存和cdn访问数据的方法及装置
CN110661879B (zh) * 2019-10-12 2023-03-24 北京奇艺世纪科技有限公司 节点调度方法、装置、系统、调度服务器及终端设备
CN111601178B (zh) * 2020-05-26 2022-09-23 维沃移动通信有限公司 视频数据处理方法、装置和电子设备
CN114285788B (zh) * 2020-09-18 2023-06-20 华为技术有限公司 一种设备连接方法、装置和设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1322094A1 (en) * 2001-12-21 2003-06-25 Castify Holdings, Ltd Process for selecting a server in a content delivery network
CN101860720A (zh) * 2009-04-10 2010-10-13 中兴通讯股份有限公司 内容定位方法及内容分发网络节点
CN102118376A (zh) * 2010-01-06 2011-07-06 中兴通讯股份有限公司 内容分发网络服务器及内容下载方法
CN102137087A (zh) * 2010-09-15 2011-07-27 华为技术有限公司 业务处理方法、对已分发的内容进行调整的方法和业务节点
WO2015084589A1 (en) * 2013-12-06 2015-06-11 Fastly, Inc. Return path selection for content delivery
CN105450753A (zh) * 2015-11-27 2016-03-30 浪潮(北京)电子信息产业有限公司 一种数据获取方法、目录服务器及分布式文件系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101640699A (zh) * 2009-08-21 2010-02-03 深圳创维数字技术股份有限公司 P2p流媒体系统及其中的流媒体下载方法
CN102333130A (zh) * 2011-10-31 2012-01-25 北京蓝汛通信技术有限责任公司 一种访问缓存服务器的方法、系统及缓存智能调度器
CN104303489A (zh) * 2012-04-30 2015-01-21 Nec欧洲有限公司 在网络中执行dns解析的方法、内容分发系统和用于在内容分发系统中进行部署的客户端终端
US9317223B2 (en) * 2012-12-17 2016-04-19 International Business Machines Corporation Method and apparatus for automated migration of data among storage centers

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1322094A1 (en) * 2001-12-21 2003-06-25 Castify Holdings, Ltd Process for selecting a server in a content delivery network
CN101860720A (zh) * 2009-04-10 2010-10-13 中兴通讯股份有限公司 内容定位方法及内容分发网络节点
CN102118376A (zh) * 2010-01-06 2011-07-06 中兴通讯股份有限公司 内容分发网络服务器及内容下载方法
CN102137087A (zh) * 2010-09-15 2011-07-27 华为技术有限公司 业务处理方法、对已分发的内容进行调整的方法和业务节点
WO2015084589A1 (en) * 2013-12-06 2015-06-11 Fastly, Inc. Return path selection for content delivery
CN105450753A (zh) * 2015-11-27 2016-03-30 浪潮(北京)电子信息产业有限公司 一种数据获取方法、目录服务器及分布式文件系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677464A (zh) * 2019-09-09 2020-01-10 深圳市网心科技有限公司 边缘节点设备、内容分发系统、方法、计算机设备及介质
CN112866060A (zh) * 2021-01-25 2021-05-28 湖南快乐阳光互动娱乐传媒有限公司 服务器响应时长获取方法及装置

Also Published As

Publication number Publication date
CN105897845A (zh) 2016-08-24

Similar Documents

Publication Publication Date Title
WO2017101366A1 (zh) Cdn服务节点的调度方法及服务器
US10404790B2 (en) HTTP scheduling system and method of content delivery network
US20170171344A1 (en) Scheduling method and server for content delivery network service node
WO2017181587A1 (zh) Cdn网络中的节点管理方法和电子设备
US9667739B2 (en) Proxy-based cache content distribution and affinity
US10044797B2 (en) Load balancing of distributed services
US7953887B2 (en) Asynchronous automated routing of user to optimal host
US20140188801A1 (en) Method and system for intelligent load balancing
US10171610B2 (en) Web caching method and system for content distribution network
CN106230992B (zh) 一种负载均衡方法和负载均衡节点
WO2017096837A1 (zh) 节点间距离的度量方法及系统
US11277342B2 (en) Lossless data traffic deadlock management system
JP6793498B2 (ja) データストア装置およびデータ管理方法
US9491067B2 (en) Timeout for identifying network device presence
US11995469B2 (en) Method and system for preemptive caching across content delivery networks
US11755381B1 (en) Dynamic selection of where to execute application code in a distributed cloud computing network
JP6339974B2 (ja) Api提供システムおよびapi提供方法
Kontogiannis et al. ALBL: an adaptive load balancing algorithm for distributed web systems
US11579915B2 (en) Computing node identifier-based request allocation
KR20100038800A (ko) 캐시서버에 저장된 데이터 갱신방법, 그 캐시서버 및 컨텐츠 제공시스템
WO2018000617A1 (zh) 一种数据库的更新方法及调度服务器
US12095847B2 (en) Distributed content distribution network
US9213735B1 (en) Flow control in very large query result sets using a release message to confirm that a client computer is ready to receive the data associated with a data collection operation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16874428

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16874428

Country of ref document: EP

Kind code of ref document: A1