WO2022057001A1 - 一种设备纳管方法、系统及纳管集群 - Google Patents

一种设备纳管方法、系统及纳管集群

Info

Publication number
WO2022057001A1
WO2022057001A1 (PCT/CN2020/122548, CN2020122548W)
Authority
WO
WIPO (PCT)
Prior art keywords
edge machine
edge
managed cluster
resource
cluster
Prior art date
Application number
PCT/CN2020/122548
Other languages
English (en)
French (fr)
Inventor
王文庭
李寿景
Original Assignee
网宿科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 网宿科技股份有限公司 filed Critical 网宿科技股份有限公司
Publication of WO2022057001A1 publication Critical patent/WO2022057001A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/10 Active monitoring, e.g. heartbeat, ping or trace-route

Definitions

  • The present application relates to the field of Internet technologies, and in particular to a device management method, a device management system, and a managed cluster.
  • A current Content Delivery Network (CDN) system usually deploys a large number of edge machines in order to serve users around the world.
  • During operation, these edge machines may have many idle resources.
  • For example, edge machines serving as standby nodes usually do not run services, and only run services for a period of time when the primary node fails or needs to be urgently replaced.
  • As another example, some edge machines serve fewer customers or process less business data, and therefore run in a low-load mode for long periods. In view of this, how to manage a large number of edge machines so as to effectively utilize their idle resources has become a problem to be solved in CDN.
  • The purpose of the present application is to provide a device management method, system, and managed cluster that can uniformly manage a large number of edge machines and effectively utilize the idle resources in those edge machines.
  • One aspect of the present application provides a device management method. The method includes: an edge machine reports resource information to a managed cluster; the managed cluster stores the resource information in a database, updates a scheduling list according to the resource information (the edge machines in the scheduling list have idle resources), and reports the data in the database to a central management platform; the central management platform receives a user's application request and, according to the data reported by each managed cluster, schedules the application request to a target managed cluster, so that the application request is executed by an edge machine under the target managed cluster.
  • Another aspect of the present application further provides a device management system.
  • The system includes a central management platform, a managed cluster, and an edge machine, wherein: the edge machine is configured to report resource information to the managed cluster and to execute the application requests scheduled by the managed cluster; the managed cluster is configured to store the resource information in a database, update the scheduling list according to the resource information (the edge machines in the scheduling list have idle resources), report the data in the database to the central management platform, receive application requests issued by the central management platform, and schedule those application requests to edge machines in the scheduling list;
  • the central management platform is configured to receive application requests from users and, according to the data reported by each managed cluster, schedule the application requests to a target managed cluster, so that the edge machines under the target managed cluster execute the application requests.
  • Another aspect of the present application further provides a managed cluster. The managed cluster includes: a scheduling list update unit, configured to receive resource information reported by edge machines and update the scheduling list according to the resource information, the edge machines in the scheduling list having idle resources; a resource reporting unit, configured to store the resource information in a database and report the data in the database to the central management platform, so that the central management platform determines, according to the reported data, the managed cluster that will receive application requests; and a request scheduling unit, configured to receive the application requests issued by the central management platform and schedule them to edge machines in the scheduling list, so that the application requests are executed by those edge machines.
  • In the technical solutions provided by one or more embodiments of the present application, the management of a large number of edge machines and the utilization of their idle resources are achieved through the coordinated operation of the central management platform, the managed clusters, and the edge machines.
  • Specifically, in addition to running normal CDN services, edge machines can use their idle resources to process users' application requests.
  • An edge machine can report its own resource information to its managed cluster in real time.
  • By analyzing that resource information, the managed cluster can generate and update the scheduling list.
  • The edge machines in the scheduling list are edge machines with idle resources; besides handling normal CDN services, they can also process users' application requests.
  • The managed cluster can store the collected resource information in a database and further report the data in the database to the central management platform.
  • By parsing the data reported by the managed clusters, the central management platform can initially schedule a user's application request to a target managed cluster, which further dispatches it to a subordinate edge machine for final processing.
  • As can be seen from the above, through the central management platform and the managed clusters, the present application can manage a large number of edge machines in a unified manner and analyze the edge machines' real-time resource information, so that edge machines can use idle resources to process other users' application requests while still handling normal CDN services, thereby making effective use of idle resources.
  • FIG. 1 is a schematic diagram of the architecture of a device management system provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of each component in a device management system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of functional modules of a managed cluster provided by an embodiment of the present application.
  • FIG. 4 is a step diagram of a device management method provided by an embodiment of the present application.
  • The present application provides a device management system; please refer to FIG. 1 and FIG. 2.
  • The system includes a central management platform, managed clusters, and edge machines.
  • The edge machines are used to process normal Content Delivery Network (CDN) services as well as users' application requests.
  • According to the type of CDN services they process, their geographic location, or the network operators they support, a large number of edge machines can be divided into multiple managed clusters, with different managed clusters managing different edge machines.
  • For example, in FIG. 1 the edge machines are partitioned by geographic location. Specifically, edge machines in the corresponding locations can be managed through managed clusters for regions such as East China and Central China.
  • For example, the managed cluster for East China can manage the Shanghai machine room and the Jinan machine room, while the managed cluster for Central China can manage the Kaifeng machine room and the Shandong machine room.
  • The central management platform manages all managed clusters, and the information collected by the managed clusters is eventually aggregated at the central management platform.
  • In one embodiment, an edge management client (edgecore) may be installed on the edge machine, and correspondingly an edge management server (cloudcore) may be installed in the managed cluster to which the edge machine belongs.
  • the edge management client can collect resource information in the edge machine in real time, and the resource information can be the current hardware resource utilization rate of the edge machine.
  • the hardware resource usage rate may be CPU usage rate, memory usage rate, and the like.
  • the resource information may also be a resource distribution period preset by the edge machine.
  • the CDN services processed by edge machines are usually relatively fixed, and the data volume of the services usually only varies within a controllable range.
  • For example, an edge machine may be mainly responsible for accelerating the traffic of a shopping platform: at ordinary times the platform's traffic is fairly stable, but when holidays arrive the traffic may surge. By performing a historical, aggregated analysis of the CDN services on the edge machine, the resource distribution period can be obtained.
  • the resource distribution period can indicate the resource utilization rate in different periods, and by analyzing the resource utilization rate, the busy period and idle period of the edge machine can be determined. Specifically, the average resource utilization during busy periods may be higher than a certain threshold, while the average resource utilization during idle periods may be lower than a certain threshold. In this way, by identifying the current moment, it is possible to know whether the edge machine is in a busy period or an idle period. In addition, resource information can also characterize whether edge machines are currently available.
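  • Purely as an illustration of how a preset resource distribution period might be evaluated at runtime, the sketch below classifies the current hour as busy or idle; the hourly-bucket representation, the 0.7 threshold, and all type and field names are assumptions, since the patent does not prescribe a data format.

```go
package main

import (
	"fmt"
	"time"
)

// ResourceDistribution is a hypothetical per-hour utilization profile (values
// in 0.0-1.0) derived from historical analysis of the CDN services on an edge
// machine; the patent does not prescribe any particular representation.
type ResourceDistribution struct {
	HourlyUtilization [24]float64
	BusyThreshold     float64 // hours at or above this utilization count as busy
}

// IsBusy reports whether the edge machine is in a busy period at time t.
func (d ResourceDistribution) IsBusy(t time.Time) bool {
	return d.HourlyUtilization[t.Hour()] >= d.BusyThreshold
}

func main() {
	profile := ResourceDistribution{BusyThreshold: 0.7}
	// Assume evenings are busy for a shopping-platform accelerator.
	for h := 19; h <= 23; h++ {
		profile.HourlyUtilization[h] = 0.85
	}
	fmt.Println("busy now?", profile.IsBusy(time.Now()))
}
```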
  • the edge management client can detect the running state of the edge machine, or can collect the running log of the edge machine, and the running state or the running log can represent whether the edge machine is currently in an available state.
  • the resource information can also include some other information about the edge machine, such as the model of the edge machine, the IP address of the edge machine, the failure rate of the edge machine, etc.
  • In short, the resource information of an edge machine can be used to characterize the current operating status of the edge machine and the environment in which it is located; the possibilities are not enumerated exhaustively here.
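  • The patent does not define a concrete wire format for the reported resource information; as a hedged illustration only, a report combining the kinds of fields mentioned above (hardware usage, distribution period, availability, model, IP address, failure rate) could be modeled as follows, with every field name being an assumption.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ResourceInfo is a hypothetical report payload; none of these field names
// come from the patent, which only lists the kinds of information involved.
type ResourceInfo struct {
	MachineID    string  `json:"machine_id"`
	Model        string  `json:"model"` // e.g. "amd64"
	IP           string  `json:"ip"`
	CPUUsage     float64 `json:"cpu_usage"`      // current CPU utilization, 0.0-1.0
	MemUsage     float64 `json:"mem_usage"`      // current memory utilization, 0.0-1.0
	InIdlePeriod bool    `json:"in_idle_period"` // derived from the preset resource distribution period
	Available    bool    `json:"available"`      // derived from running state or logs
	FailureRate  float64 `json:"failure_rate"`
}

func main() {
	info := ResourceInfo{
		MachineID: "edge-shanghai-001", Model: "amd64", IP: "10.0.0.12",
		CPUUsage: 0.35, MemUsage: 0.42, InIdlePeriod: true, Available: true,
	}
	b, _ := json.MarshalIndent(info, "", "  ")
	fmt.Println(string(b))
}
```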
  • the edge machine can report its own resource information to the managed cluster.
  • The edge management server in the managed cluster can establish a long-lived connection with the edge management client, and the resource information reported by the edge machine is received through the edge management server.
  • a database for storing resource information can be deployed in the managed cluster, and the database can be flexibly selected according to actual needs.
  • For example, in one practical application, the database may be the key/value store etcd.
  • By calling a preset data interface (k8s-apiserver), the edge management server can write the resource information reported by the edge machines into the database. In this way, by collecting the resource information from the edge machines in real time, the edge management server keeps the resource information of each edge machine up to date in the database.
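  • The patent only states that the edge management client reports resource information to the edge management server over a long-lived connection and that the server writes it into the database. The periodic HTTP POST below is a simplified stand-in for that mechanism, not the actual edgecore/cloudcore protocol; the endpoint URL, reporting interval, and payload fields are all assumptions.

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// report is a trimmed-down resource report; the field names are assumptions.
type report struct {
	MachineID string  `json:"machine_id"`
	CPUUsage  float64 `json:"cpu_usage"`
	MemUsage  float64 `json:"mem_usage"`
	Available bool    `json:"available"`
}

// collect is a placeholder for real metric collection on the edge machine.
func collect() report {
	return report{MachineID: "edge-jinan-007", CPUUsage: 0.28, MemUsage: 0.51, Available: true}
}

func main() {
	const serverURL = "http://cloudcore.example:10002/v1/resource" // hypothetical endpoint
	ticker := time.NewTicker(10 * time.Second)                     // reporting interval is an assumption
	defer ticker.Stop()

	for range ticker.C {
		body, _ := json.Marshal(collect())
		resp, err := http.Post(serverURL, "application/json", bytes.NewReader(body))
		if err != nil {
			log.Printf("report failed: %v", err) // retry on the next tick
			continue
		}
		resp.Body.Close()
	}
}
```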
  • the managed cluster may generate a scheduling list according to the collected resource information.
  • the scheduling list may include edge machines with idle resources, and subsequently, one or more edge machines may be selected from the scheduling list, and the user's application request may be processed through the idle resources of the selected edge machines.
  • The idle resources of an edge machine refer to the portion of its resources not used for CDN services. An edge machine usually devotes some of its resources to normal CDN services; apart from that portion, the remaining resources can be treated as idle resources and used to process other users' application requests.
  • the managed cluster can determine which edge machines available for scheduling should be included in the scheduling list by analyzing the resource information reported by the edge machines.
  • the managed cluster can identify the current hardware resource utilization of each edge machine, and then add edge machines whose hardware resource utilization is lower than a certain threshold to the scheduling list. For example, a managed cluster can add edge machines with hardware resource utilization below 50% to the scheduling list.
  • In addition, the resource distribution period in which an edge machine currently falls can be identified. If the edge machine is currently in an idle period, it can be added to the scheduling list; if it is currently in a busy period, it can be removed from the scheduling list.
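  • A minimal sketch of how the scheduling list might be maintained from the reported information, assuming the 50% utilization rule from the example above and a precomputed busy-period flag; the data structures are invented for illustration.

```go
package main

import "fmt"

type machineState struct {
	CPUUsage   float64
	MemUsage   float64
	BusyPeriod bool
	Available  bool
}

// updateSchedulingList rebuilds the set of schedulable machines: a machine is
// listed when it is available, not in a busy period, and its hardware
// utilization is below the threshold (0.5 here, matching the 50% example).
func updateSchedulingList(states map[string]machineState, threshold float64) map[string]bool {
	list := make(map[string]bool)
	for id, s := range states {
		util := s.CPUUsage
		if s.MemUsage > util {
			util = s.MemUsage // take the higher of CPU and memory usage
		}
		if s.Available && !s.BusyPeriod && util < threshold {
			list[id] = true
		}
	}
	return list
}

func main() {
	states := map[string]machineState{
		"edge-a": {CPUUsage: 0.30, MemUsage: 0.40, BusyPeriod: false, Available: true},
		"edge-b": {CPUUsage: 0.20, MemUsage: 0.10, BusyPeriod: true, Available: true},  // busy period: excluded
		"edge-c": {CPUUsage: 0.75, MemUsage: 0.60, BusyPeriod: false, Available: true}, // over threshold: excluded
	}
	fmt.Println(updateSchedulingList(states, 0.5)) // map[edge-a:true]
}
```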
  • the managed cluster may schedule the user's application request to one or more of the edge machines, so as to process the user's application request through the idle resources of these edge machines.
  • A user's application request can be a request other than normal CDN services.
  • Using idle resources to process users' application requests does not, on the one hand, conflict with normal CDN services, and on the other hand it makes the fullest use of the edge machine's resources.
  • However, since the traffic of CDN services may change over time, the idle resources of edge machines also change constantly. To avoid affecting normal CDN services, the overall load of an edge machine must not become too high, and when CDN traffic surges the edge machine must be able to evict the traffic of application requests.
  • In view of this, a managed cluster can set a resource overrun threshold for the edge machines it manages.
  • The resource overrun threshold can be a relatively high resource utilization rate, and it is used to decide whether to continue scheduling users' application requests to the edge machine.
  • For example, the resource overrun threshold set by the managed cluster for an edge machine can be 80%. When the amount of resources currently used by the edge machine reaches 80%, this indicates that the edge machine's load is currently high, and the managed cluster may stop scheduling application requests to that edge machine.
  • The managed cluster can monitor the amount of resources used by each edge machine in real time. If, over a certain period of time, the amount of resources used by an edge machine stays below the resource overrun threshold, the managed cluster can resume scheduling application requests to that edge machine.
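  • The suspend-and-resume behaviour around the resource overrun threshold could be sketched as follows; the 80% figure comes from the example above, while the length of the recovery window and the type and field names are assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// overrunGuard decides whether new application requests may be scheduled to an
// edge machine, based on the resource overrun threshold described above.
type overrunGuard struct {
	threshold    float64       // e.g. 0.8 for the 80% example
	recoveryWait time.Duration // how long usage must stay below the threshold before resuming
	suspended    bool
	belowSince   time.Time
}

func (g *overrunGuard) observe(usage float64, now time.Time) {
	switch {
	case usage >= g.threshold:
		g.suspended = true
		g.belowSince = time.Time{} // reset the recovery window
	case g.suspended:
		if g.belowSince.IsZero() {
			g.belowSince = now
		}
		if now.Sub(g.belowSince) >= g.recoveryWait {
			g.suspended = false // usage stayed low long enough: resume scheduling
		}
	}
}

func (g *overrunGuard) canSchedule() bool { return !g.suspended }

func main() {
	g := &overrunGuard{threshold: 0.8, recoveryWait: 5 * time.Minute}
	now := time.Now()
	g.observe(0.85, now)                     // exceeds threshold: scheduling suspended
	g.observe(0.60, now.Add(6*time.Minute))  // first low sample starts the recovery window
	g.observe(0.55, now.Add(12*time.Minute)) // low for at least 5 minutes: scheduling resumes
	fmt.Println("can schedule:", g.canSchedule())
}
```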
  • If, after scheduling to an edge machine has been suspended, the amount of resources used by the edge machine keeps growing, this indicates a surge in normal CDN traffic. In that case, to avoid affecting CDN services, the application-request workloads on the edge machine need to be evicted so that more resources are left to handle the surging CDN traffic. To this end, the managed cluster can set an overrun eviction policy for edge machines.
  • In the overrun eviction policy, an eviction threshold can be set that is higher than the above-mentioned resource overrun threshold. For example, if the resource overrun threshold is 80%, the eviction threshold can be 90%.
  • When the amount of resources currently used by the edge machine reaches the eviction threshold, the managed cluster can evict the application services scheduled to that edge machine, so as to increase the resources available on it.
  • In practice, application services can also be evicted according to their priorities, for example in ascending order of priority, so that the lowest-priority services are evicted first.
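  • Eviction under the overrun eviction policy might proceed as sketched below, assuming the 90% eviction threshold from the example and evicting the lowest-priority application services first until the projected usage falls back under the threshold; the resource accounting model is invented for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

type appService struct {
	Name     string
	Priority int     // lower value = lower priority, evicted first
	Usage    float64 // fraction of machine resources the service currently occupies
}

// evictUntilBelow removes scheduled application services in ascending priority
// order until the machine's projected usage drops below the eviction threshold.
// It returns the names of the evicted services.
func evictUntilBelow(used float64, threshold float64, services []appService) []string {
	sort.Slice(services, func(i, j int) bool { return services[i].Priority < services[j].Priority })
	var evicted []string
	for _, s := range services {
		if used < threshold {
			break
		}
		used -= s.Usage
		evicted = append(evicted, s.Name)
	}
	return evicted
}

func main() {
	services := []appService{
		{Name: "batch-transcode", Priority: 1, Usage: 0.10},
		{Name: "ml-inference", Priority: 3, Usage: 0.15},
		{Name: "log-compaction", Priority: 2, Usage: 0.05},
	}
	// Current usage of 0.93 has crossed the 0.90 eviction threshold.
	fmt.Println(evictUntilBelow(0.93, 0.90, services)) // [batch-transcode]
}
```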
  • Because of the resource overrun threshold and the overrun eviction policy, an application request originally expected to be scheduled to an edge machine may end up not being executed by that edge machine.
  • In that case, the managed cluster can select another edge machine from the scheduling list and reschedule the application request to that other edge machine, ensuring that the request is still executed normally.
  • In practice, the managed cluster may include a controller and a scheduler. The controller can monitor the status information of the applications running on an edge machine and, according to the edge machine's current idle resources, control the number of replicas of each application on that edge machine.
  • The status information of an application may include the amount of resources it occupies, how long it has been running, its running progress, and similar details. The same application may need to run multiple replicas at the same time.
  • In that case, the controller can dynamically adjust the number of replicas according to the edge machine's remaining idle resources, so that idle resources are fully used without affecting normal CDN services.
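  • As a rough illustration of the controller's replica adjustment: the patent only says that the number of application replicas is controlled according to the remaining idle resources, so the per-replica cost model and headroom below are assumptions.

```go
package main

import "fmt"

// desiredReplicas works in integer percentage points of machine capacity to
// avoid float rounding: the idle capacity left after reserving a safety
// headroom is divided by the cost of one replica, capped at a configured maximum.
func desiredReplicas(idlePct, headroomPct, perReplicaPct, max int) int {
	usable := idlePct - headroomPct
	if usable <= 0 || perReplicaPct <= 0 {
		return 0
	}
	n := usable / perReplicaPct
	if n > max {
		n = max
	}
	return n
}

func main() {
	// 55% idle, keep 15% headroom, each replica needs roughly 10%: 4 replicas fit.
	fmt.Println(desiredReplicas(55, 15, 10, 8)) // 4
}
```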
  • The above-mentioned scheduler can parse the data in the database to determine the current idle resources of each edge machine, select a target edge machine from the scheduling list according to those idle resources, and dispatch the application request issued by the central management platform to the target edge machine.
  • the scheduler can monitor the idle resources of each edge machine in real time by parsing the data in the database etcd.
  • the edge machine with the most idle resources currently in the scheduling list can be used as the target edge machine.
  • A target edge machine can also be selected based on the resource requirements of the application request.
  • For example, an application request may require a certain ratio of CPU to memory among the idle resources, or may require a particular machine model (such as amd64). In that case, the resource requirements of the application request can be identified, and an edge machine that meets those requirements is selected from the scheduling list as the target edge machine.
  • Of course, in practice, target edge machines can be selected according to further criteria, which are not listed one by one here.
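  • Target selection in the scheduler could combine the two strategies above: first filter by the request's requirements (machine model, minimum free CPU and memory), then prefer the machine with the most idle resources. The concrete criteria and the combined free-resource score below are illustrative assumptions.

```go
package main

import "fmt"

type edgeMachine struct {
	ID      string
	Model   string  // e.g. "amd64"
	FreeCPU float64 // idle CPU fraction
	FreeMem float64 // idle memory fraction
}

type appRequest struct {
	Model      string // required machine model, "" means no requirement
	MinFreeCPU float64
	MinFreeMem float64
}

// pickTarget selects from the scheduling list the machine that satisfies the
// request's requirements and has the most idle resources (CPU plus memory here).
func pickTarget(list []edgeMachine, req appRequest) (edgeMachine, bool) {
	var best edgeMachine
	found := false
	for _, m := range list {
		if req.Model != "" && m.Model != req.Model {
			continue
		}
		if m.FreeCPU < req.MinFreeCPU || m.FreeMem < req.MinFreeMem {
			continue
		}
		if !found || m.FreeCPU+m.FreeMem > best.FreeCPU+best.FreeMem {
			best, found = m, true
		}
	}
	return best, found
}

func main() {
	list := []edgeMachine{
		{ID: "edge-a", Model: "amd64", FreeCPU: 0.6, FreeMem: 0.3},
		{ID: "edge-b", Model: "arm64", FreeCPU: 0.8, FreeMem: 0.8},
		{ID: "edge-c", Model: "amd64", FreeCPU: 0.5, FreeMem: 0.7},
	}
	target, ok := pickTarget(list, appRequest{Model: "amd64", MinFreeCPU: 0.4, MinFreeMem: 0.4})
	fmt.Println(ok, target.ID) // true edge-c
}
```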
  • Referring to FIG. 2, in one embodiment, a resource reporting client (agent-edge) may be installed in the managed cluster, and a resource reporting server (agent-manager) may be installed in the central management platform. The resource reporting client reports the data in the managed cluster's etcd database to the central management platform.
  • In practice, the resource reporting client can first filter and aggregate the data in the database: for example, it can remove duplicate records and compile statistics such as the resources occupied by applications on the edge machines, the current running status of each edge machine, and each edge machine's model, and then report the aggregated data to the central management platform.
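  • The filtering and statistics step performed by the resource reporting client before data leaves the cluster might look like the simplified de-duplicate-and-summarize pass below; the record layout and summary fields are assumptions.

```go
package main

import "fmt"

type record struct {
	MachineID string
	Model     string
	Available bool
	AppUsage  float64 // resources occupied by applications on this machine
}

type clusterSummary struct {
	Machines      int
	Available     int
	TotalAppUsage float64
	ModelCounts   map[string]int
}

// summarize keeps the latest record per machine (dropping duplicates) and
// aggregates the figures that get reported to the central management platform.
func summarize(records []record) clusterSummary {
	latest := make(map[string]record)
	for _, r := range records {
		latest[r.MachineID] = r // later records overwrite earlier duplicates
	}
	s := clusterSummary{ModelCounts: make(map[string]int)}
	for _, r := range latest {
		s.Machines++
		if r.Available {
			s.Available++
		}
		s.TotalAppUsage += r.AppUsage
		s.ModelCounts[r.Model]++
	}
	return s
}

func main() {
	records := []record{
		{MachineID: "edge-a", Model: "amd64", Available: true, AppUsage: 0.2},
		{MachineID: "edge-a", Model: "amd64", Available: true, AppUsage: 0.3}, // duplicate, newer
		{MachineID: "edge-b", Model: "arm64", Available: false, AppUsage: 0.0},
	}
	fmt.Printf("%+v\n", summarize(records))
}
```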
  • the resource reporting server in the central management platform can receive the data reported by the resource reporting client, and store the data in the database of the central management platform.
  • the database of the central management platform can be a persistent storage system.
  • the database may be a Remote Dictionary Server (Redis for short) database.
  • the data received by the resource reporting server can be written into the persistent database.
  • An external interface (pontus) and a global scheduler (global-scheduler) may also be provided in the central management platform. When the external interface is called, it obtains the data reported by the managed clusters from the central management platform's Redis database.
  • the global scheduler may receive the user's application request, and by calling the external interface, query the managed cluster that can currently receive the application request, and schedule the application request to the corresponding managed cluster. Specifically, after receiving the user's application request, the global scheduler can obtain the data reported by each currently managed cluster through the external interface, and analyze the data to determine the managed cluster with idle resources. Then, the application request may be initially scheduled to the corresponding managed cluster, and the managed cluster will further schedule the application request to the edge machine for processing.
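  • Putting the pieces together at the top level, the global scheduler's initial placement decision could be sketched as below. The external-interface call is replaced by an in-memory list of cluster summaries, and both the summary fields and the most-idle-capacity selection rule are assumptions, since the patent does not specify the criteria.

```go
package main

import (
	"errors"
	"fmt"
)

// clusterReport is what a hypothetical external interface would return for one
// managed cluster, summarized from the data that cluster reported.
type clusterReport struct {
	Name         string
	IdleCapacity float64 // aggregate idle resources across schedulable edge machines
	Schedulable  int     // number of edge machines currently in the scheduling list
}

// pickCluster chooses the target managed cluster for an application request:
// any cluster with schedulable machines qualifies, and the one with the most
// aggregate idle capacity wins.
func pickCluster(reports []clusterReport) (string, error) {
	best := ""
	bestIdle := -1.0
	for _, r := range reports {
		if r.Schedulable == 0 {
			continue
		}
		if r.IdleCapacity > bestIdle {
			best, bestIdle = r.Name, r.IdleCapacity
		}
	}
	if best == "" {
		return "", errors.New("no managed cluster has idle resources")
	}
	return best, nil
}

func main() {
	reports := []clusterReport{
		{Name: "east-china", IdleCapacity: 12.5, Schedulable: 9},
		{Name: "central-china", IdleCapacity: 20.1, Schedulable: 14},
	}
	target, err := pickCluster(reports)
	fmt.Println(target, err) // central-china <nil>
}
```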
  • the present application further provides a managed cluster, where the managed cluster includes:
  • a scheduling list updating unit configured to receive resource information reported by an edge machine, and update a scheduling list according to the resource information, wherein the edge machines in the scheduling list have idle resources;
  • a resource reporting unit configured to store the resource information in the database, and report the data in the database to the central management platform, so that the central management platform determines the managed cluster for receiving the application request according to the reported data ;
  • the request scheduling unit is configured to receive the application request issued by the central management platform, and schedule the application request to the edge machine in the scheduling list, so as to execute the application request through the edge machine.
  • the managed cluster further includes a controller and a scheduler, wherein:
  • The controller is used to monitor the status information of the applications running on the edge machine and, according to the edge machine's current idle resources, control the number of replicas of each application on the edge machine;
  • the scheduler is configured to parse the data in the database to determine the current idle resources of each edge machine, and filter out the target edge machine from the scheduling list according to the determined idle resources, and assign the The application request issued by the central management platform is dispatched to the target edge machine.
  • the managed cluster is further configured to parse the resource information reported by the edge machine to determine the resource distribution period of the edge machine; wherein, if the resource distribution period of the edge machine represents a busy period, The edge machine is removed from the scheduling list; if the resource distribution period of the edge machine represents an idle period, the edge machine is added to the scheduling list.
  • The managed cluster is further configured to set a resource overrun threshold for the edge machine, wherein, when the amount of resources currently used by the edge machine reaches the resource overrun threshold, the scheduler in the managed cluster stops scheduling application requests to that edge machine.
  • The managed cluster is further configured to set an overrun eviction policy for the edge machine, wherein, when the amount of resources currently used by the edge machine reaches the eviction threshold represented by the overrun eviction policy, the managed cluster evicts the application services scheduled to the edge machine, so as to increase the amount of resources available on it.
  • the present application further provides a device management method, and the method includes the following steps.
  • S1: The edge machine reports resource information to the managed cluster.
  • S3: The managed cluster stores the resource information in a database, updates the scheduling list according to the resource information (the edge machines in the scheduling list have idle resources), and reports the data in the database to the central management platform.
  • S5: The central management platform receives the user's application request and, according to the data reported by each managed cluster, schedules the application request to a target managed cluster, so that the application request is executed by an edge machine under the target managed cluster.
  • the resource information includes at least one of a current hardware resource usage rate of the edge machine, a resource distribution period preset by the edge machine, and whether the edge machine is currently available.
  • an edge management client is installed in the edge machine, and the edge management client is used to report resource information to the managed cluster;
  • an edge management server is installed in the managed cluster; the edge management server is configured to receive the resource information reported by the edge management client and to store it in the database by calling a preset data interface.
  • the managed cluster includes a controller and a scheduler, and the method further includes:
  • the controller monitors the status information of the applications running on the edge machine and, according to the edge machine's current idle resources, controls the number of replicas of each application on the edge machine;
  • the scheduler parses the data in the database to determine the current idle resources of each edge machine, selects a target edge machine from the scheduling list according to the determined idle resources, and schedules the application request issued by the central management platform to the target edge machine.
  • the method further includes:
  • the managed cluster parses the resource information reported by the edge machine to determine the resource distribution period of the edge machine; if the resource distribution period indicates a busy period, the edge machine is removed from the scheduling list, and if it indicates an idle period, the edge machine is added to the scheduling list.
  • the method further includes:
  • the managed cluster sets a resource overrun threshold for the edge machine, wherein, when the amount of resources currently used by the edge machine reaches the resource overrun threshold, the scheduler in the managed cluster stops scheduling application requests to that edge machine.
  • the method further includes:
  • the managed cluster sets an overrun eviction policy for the edge machine, and when the amount of resources currently used by the edge machine reaches the eviction threshold represented by the overrun eviction policy, the managed cluster evicts the application services scheduled to the edge machine, so as to increase the amount of resources available on it.
  • the method further includes:
  • when an application request scheduled to the edge machine cannot be executed by that edge machine, the managed cluster determines another edge machine from the scheduling list and reschedules the application request to that other edge machine.
  • a resource reporting client is installed in the managed cluster and is used to report the data in the database to the central management platform; a resource reporting server is installed in the central management platform and is configured to receive the data reported by the resource reporting client and store it in the central management platform's database.
  • the central management platform further includes an external interface and a global scheduler, wherein:
  • when the external interface is called, it obtains the data reported by the managed clusters from the database of the central management platform; and
  • the global scheduler is configured to receive an application request from a user and, by invoking the external interface, query which managed cluster can currently receive the application request and schedule the application request to the corresponding managed cluster.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-persistent memory in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present application discloses a device management method, system, and managed cluster. The method includes: an edge machine reports resource information to a managed cluster; the managed cluster stores the resource information in a database, updates a scheduling list according to the resource information, the edge machines in the scheduling list having idle resources, and reports the data in the database to a central management platform; the central management platform receives a user's application request and, according to the data reported by each managed cluster, schedules the application request to a target managed cluster, so that the application request is executed by an edge machine under the target managed cluster.

Description

一种设备纳管方法、系统及纳管集群
交叉引用
本申请要求于2020年09月15日递交的名称为“一种设备纳管方法、系统及纳管集群”、申请号为202010967450.3的中国专利申请的优先权,其通过引用被全部并入本申请。
技术领域
本申请涉及互联网技术领域,特别涉及一种设备纳管方法、系统及纳管集群。
背景技术
当前的内容分发网络(Content Delivery Network,简称为CDN)系统为了能够为全球的用户提供服务,通常会部署海量的边缘机器。这些边缘机器在运行过程中,可能会存在较多的闲置资源。例如,作为备用节点的边缘机器通常不运行业务,只有在主节点出故障或者主节点需要紧急替换的时候才会运行一段时间业务。又例如,部分边缘机器服务的客户较少,或者需要处理的业务数据较少,因此长期处于低负载的运行模式。鉴于此,如何对海量的边缘机器进行管理,从而有效地利用边缘机器中的闲置资源,成为CDN中待解决的一个问题。
发明内容
本申请的目的在于提供一种设备纳管方法、系统及纳管集群,能够统一地对海量的边缘机器进行管理,并且能够有效地利用边缘机器中的闲置资源。
为实现上述目的,本申请一方面提供一种设备纳管方法,所述方法包括:边缘机器向纳管集群上报资源信息;所述纳管集群在数据库中存储所述资源信息,并根据所述资源信息更新调度列表,所述调度列表中的边缘机器具备闲置资源,并将所述数据库中的数据上报至中心管理平台;所述中心管理平台接收 用户的应用请求,并根据各个纳管集群上报的数据,将所述应用请求调度至目标纳管集群,以通过所述目标纳管集群下的边缘机器执行所述应用请求。
为实现上述目的,本申请另一方面还提供一种设备纳管系统,所述系统包括中心管理平台、纳管集群和边缘机器,其中:所述边缘机器,用于向所述纳管集群上报资源信息,并执行所述纳管集群调度的应用请求;所述纳管集群,用于在数据库中存储所述资源信息,并根据所述资源信息更新调度列表,所述调度列表中的边缘机器具备闲置资源;将所述数据库中的数据上报至所述中心管理平台,并接收所述中心管理平台下发的应用请求,并将所述应用请求调度至所述调度列表中的边缘机器;所述中心管理平台,用于接收用户的应用请求,并根据各个纳管集群上报的数据,将所述应用请求调度至目标纳管集群,以通过所述目标纳管集群下的边缘机器执行所述应用请求。
为实现上述目的,本申请另一方面还提供一种纳管集群,所述纳管集群包括:调度列表更新单元,用于接收边缘机器上报的资源信息,并根据所述资源信息更新调度列表,其中,所述调度列表中的边缘机器具备闲置资源;资源上报单元,用于在数据库中存储所述资源信息,并将所述数据库中的数据上报至中心管理平台,以使得所述中心管理平台根据上报的数据确定用于接收应用请求的纳管集群;请求调度单元,用于接收中心管理平台下发的应用请求,并将所述应用请求调度至所述调度列表中的边缘机器,以通过边缘机器执行所述应用请求。
由上可见,本申请一个或者多个实施例提供的技术方案,可以通过中心管理平台、纳管集群和边缘机器的协同运作,来实现海量边缘机器的管理和闲置资源的利用。具体地,边缘机器除了可以运行正常的CDN业务,还可以将闲置资源用来处理用户的应用请求。边缘机器可以实时将自身的资源信息上报至所属的纳管集群,纳管集群通过分析该资源信息,可以生成并更新调度列表,在调度列表中的边缘机器都可以是具备闲置资源的边缘机器,这些边缘机器在处理正常的CDN业务之余,还可以处理用户的应用请求。纳管集群可以将收集到的资源信息存储在数据库中,并将数据库中的数据进一步上报至中心管理平台。中心管理平台通过解析纳管集群上报的数据,可以将用户的应用请求初步调度至目标纳管集群中,并交由目标纳管集群进一步地调度至下级的边缘机器中,以通过边缘机器最终处理该应用请求。由上可见,本申请通过中心管理平 台和纳管集群,可以统一管理海量的边缘机器,并通过对边缘机器实时的资源信息进行分析,从而可以让边缘机器在处理正常的CDN业务之余,能够利用闲置资源处理其他用户的应用请求,从而有效地利用了闲置资源。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种设备纳管系统的架构示意图;
图2是本申请实施例提供的一种设备纳管系统中各个组件的结构示意图;
图3是本申请实施例提供的一种纳管集群的功能模块示意图;
图4是本申请实施例提供的一种设备纳管方法的步骤图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请具体实施例及相应的附图对本申请技术方案进行清楚、完整地描述。显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请提供一种设备纳管系统,请参阅图1和图2,所述系统包括中心管理平台、纳管集群和边缘机器。其中,边缘机器用于处理正常的内容分发网络(Content Delivery Network,简称为CDN)业务和用户的应用请求。根据处理的CDN业务的类型,或者根据边缘机器所处的地理位置,或者根据边缘机器所支持的运营商等,海量的边缘机器可以被划分至多个纳管集群中,由不同的纳管集群对边缘机器进行管理。例如,在图1中,可以按照地理位置对边缘机器进行划分。具体地,可以通过华东、华中等区域的纳管集群,对相应地理位置处的边缘机器进行管理。例如,华东的纳管集群,可以对上海机房、济南机房进行管理;而华中的纳管集群,则可以对开封机房、山东机房进行管理。中心管理平台则可以对所有的纳管集群进行管理,纳管集群中收集的信息,最终都 可以汇总于中心管理平台。
在一个实施例中,边缘机器中可以安装边缘管理客户端(edgecore),对应地,边缘机器所属的纳管集群中可以安装边缘管理服务端(cloudcore)。边缘管理客户端可以实时采集边缘机器中的资源信息,该资源信息可以是边缘机器当前的硬件资源使用率。例如,该硬件资源使用率可以是CPU使用率、内存使用率等。资源信息还可以是边缘机器预先设定的资源分布时段。具体地,边缘机器处理的CDN业务通常比较固定,业务的数据量通常只会在可控的范围内变动。例如,某个边缘机器主要负责对购物平台的流量进行加速,在平时,购物平台的流量都比较稳定,而当进入节假日时,购物平台的流量可能会出现突增的情况。通过对边缘机器上的CDN业务进行历史汇总分析,可以得出资源分布时段。该资源分布时段可以表明不同时段的资源利用率,通过分析资源利用率的多少,可以确定出边缘机器的繁忙时段和空闲时段。具体地,繁忙时段的平均资源利用率可以高于某个阈值,而空闲时段的平均资源利用率可以低于某个阈值。这样,通过识别当前时刻,便可以知晓边缘机器是处于繁忙时段还是处于空闲时段。此外,资源信息还可以表征边缘机器当前是否可用。边缘管理客户端可以探测边缘机器的运行状态,或者可以收集边缘机器的运行日志,该运行状态或者运行日志可以表征边缘机器当前是否处于可用状态。当然,在实际应用中,资源信息还可以包括边缘机器的一些其它信息,例如边缘机器的型号、边缘机器的IP地址、边缘机器的故障率等等,边缘机器的资源信息可以用于表征边缘机器当前的运行状态和边缘机器所处的环境,这里就不一一例举了。
在本实施例中,通过边缘管理客户端,边缘机器可以将自身的资源信息上报给纳管集群。纳管集群中的边缘管理服务端,可以与边缘管理客户端建立长连接,从而通过边缘管理服务端来接收边缘机器上报的资源信息。请参阅图2,纳管集群中可以部署用于存储资源信息的数据库,该数据库可以根据实际需求灵活选择。例如在一个实际应用示例中,该数据库可以是键值对(key/value)关系型数据库etcd。通过调用预设数据接口k8s-apiserver,边缘管理服务端可以将边缘机器上报的资源信息写入数据库中。这样,边缘管理服务器通过实时收集边缘机器中的资源信息,可以在数据库中保持更新各个边缘机器的资源信息。
在本实施例中,纳管集群根据收集到的资源信息,可以生成调度列表。该调度列表中可以包含具备闲置资源的边缘机器,后续,可以从该调度列表中 选择一个或者多个边缘机器,并通过选择的边缘机器的闲置资源来处理用户的应用请求。其中,边缘机器的闲置资源,可以指未处理CDN业务的那部分资源。在边缘机器中,通常都会有一部分资源处理正常的CDN业务,除去这一部分资源,其它的资源可以作为闲置资源,用于处理其他用户的应用请求。
具体地,纳管集群可以通过解析边缘机器上报的资源信息来确定调度列表中应当包含哪些可供调度的边缘机器。一方面,纳管集群可以识别各个边缘机器当前的硬件资源利用率,然后将硬件资源利用率低于某个阈值的边缘机器添加至调度列表中。例如,纳管集群可以将硬件资源利用率低于50%的边缘机器添加至调度列表中。此外,还可以识别边缘机器当前所处的资源分布时段。如果边缘机器当前处于空闲时段,则可以将边缘机器添加至调度列表中。而如果边缘机器当前处于繁忙时段,则可以将边缘机器从调度列表中剔除。
在一个实施例中,针对调度列表中的边缘机器,纳管集群可以将用户的应用请求调度至其中的一个或者多个边缘机器,以通过这些边缘机器的闲置资源来处理用户的应用请求。其中,用户的应用请求,可以是区别于正常CDN业务以外的请求,利用闲置资源处理用户的应用请求,一方面不会与正常的CDN业务发生冲突,另一方面也可以最大化地利用边缘机器的资源。然而,由于CDN业务的流量可能会随着时间的推移发生变化,从而导致边缘机器的闲置资源也会不断变化。为了不影响正常的CDN业务,需要保证边缘机器的总体负载不会过高,并且在CDN业务的流量突增时,边缘机器要能够驱逐应用请求的流量。鉴于此,纳管集群可以为纳管的边缘机器设定资源跑高阈值。该资源跑高阈值可以是一个相对较高的资源利用率,该资源跑高阈值可以用来判断是否继续向该边缘机器调度用户的应用请求。举例来说,纳管集群为边缘机器设定的资源跑高阈值可以是80%,当边缘机器当前已使用的资源量达到80%时,表征边缘机器目前的负载较高,此时纳管集群可以中止向该边缘机器调度应用请求。
在本实施例中，纳管集群可以实时监控边缘机器已使用的资源量，如果在一定的时长内，边缘机器已使用的资源量始终低于该资源跑高阈值，那么纳管集群可以恢复向该边缘机器调度应用请求。
在一个实施例中,如果在中止向边缘机器调度应用请求之后,边缘机器已使用的资源量依然在不断增长,说明正常的CDN业务的流量出现了突增情况,此时,为了不影响CDN业务,需要对边缘机器中应用请求的业务进行驱逐,以 保留更多的资源用来处理突增的CDN业务。具体地,纳管集群可以为边缘机器设定跑高驱逐策略,在该跑高驱逐策略中,可以设置比上述的资源跑高阈值更高的驱逐阈值。例如,资源跑高阈值为80%,那么驱逐阈值就可以是90%。这样,当边缘机器当前已使用的资源量达到所述跑高驱逐策略表征的驱逐阈值时,纳管集群可以对调度至该边缘机器中的应用业务进行驱逐,以提高该边缘机器中可使用的资源量。在实际应用中,也可以按照应用的优先级对应用业务进行驱逐。例如,可以按照优先级从低到高的顺序依次驱逐应用业务。
在一个实施例中,由于存在资源跑高阈值和跑高驱逐策略,原本预计调度至边缘机器的应用请求可能无法被边缘机器执行,此时,纳管集群可以再次从所述调度列表中确定另一个边缘机器,并将所述应用请求重新调度至所述另一个边缘机器处,以保证应用请求能够被正常执行。
在实际应用中,纳管集群中可以包括控制器和调度器,其中,控制器可以监控边缘机器中运行的应用的状态信息,并根据边缘机器当前的闲置资源,控制边缘机器中应用的副本数量。具体地,应用的状态信息可以包括应用所占的资源量、应用运行的时长、应用运行的进度等一系列信息。相同的应用,可能需要同时运行多个副本,此时,控制器可以根据边缘机器剩余的闲置资源,动态地控制应用的副本数量,以达到充分利用闲置资源,并不影响正常CDN业务的目的。
上述的调度器可以解析数据库中的数据,以确定各个边缘机器当前的闲置资源,并根据确定的所述闲置资源,从所述调度列表中筛选出目标边缘机器,并将所述中心管理平台下发的应用请求调度至所述目标边缘机器处。具体地,调度器可以通过解析数据库etcd中的数据,从而实时监控各个边缘机器的闲置资源。在筛选目标边缘机器时,可以将调度列表中当前闲置资源最多的边缘机器作为目标边缘机器。此外,还可以根据应用请求的资源要求来筛选出目标边缘机器。举例来说,应用请求对于闲置资源中CPU资源和内存资源有一定的比例要求,或者应用请求对于边缘机器的型号(例如amd64型号)有要求,那么此时可以识别应用请求对应的资源需求,并从调度列表中将符合该资源需求的边缘机器作为筛选出的目标边缘机器。当然,在实际应用中还可以根据更多的信息来筛选目标边缘机器,这里便不再一一例举。
请参阅图2,在一个实施例中,纳管集群中可以安装资源上报客户端 (agent-edge),中心管理平台中可以安装资源上报服务端(agent-manager),其中,资源上报客户端可以将纳管集群的数据库etcd中的数据上报至中心管理平台。当然,在实际应用中,资源上报客户端可以先对数据库中的数据进行筛选和统计,例如可以去除数据库中的重复数据,并统计边缘机器中应用所占的资源信息,以及各个边缘机器当前的运行状态、各个边缘机器的型号等信息,并将统计的数据上报给中心管理平台。
中心管理平台中的资源上报服务端,可以接收所述资源上报客户端上报的数据,并将所述数据存储至所述中心管理平台的数据库中。在实际应用中,中心管理平台的数据库可以是持久化的存储系统。例如,该数据库可以是远程数据服务(Remote Dictionary Server,简称为Redis)数据库。资源上报服务端接收到的数据,均可以写入该持久化的数据库中。
请参阅图2,在一个实施例中,中心管理平台内还可以设置对外接口(pontus)和全局调度器(global-scheduler),其中,对外接口被调用时,可以从所述中心管理平台的数据库Redis中获取纳管集群上报的数据。全局调度器可以接收用户的应用请求,并通过调用所述对外接口,查询当前可接收所述应用请求的纳管集群,并将所述应用请求调度至对应的纳管集群处。具体地,全局调度器在接收到用户的应用请求后,可以通过对外接口获取当前各个纳管集群上报的数据,并通过分析这些数据,确定出具备闲置资源的纳管集群。然后,可以将该应用请求初步调度至对应的纳管集群中,并由纳管集群进一步地将应用请求调度至边缘机器中进行处理。
请参阅图3,本申请还提供一种纳管集群,所述纳管集群包括:
调度列表更新单元,用于接收边缘机器上报的资源信息,并根据所述资源信息更新调度列表,其中,所述调度列表中的边缘机器具备闲置资源;
资源上报单元,用于在数据库中存储所述资源信息,并将所述数据库中的数据上报至中心管理平台,以使得所述中心管理平台根据上报的数据确定用于接收应用请求的纳管集群;
请求调度单元,用于接收中心管理平台下发的应用请求,并将所述应用请求调度至所述调度列表中的边缘机器,以通过边缘机器执行所述应用请求。
在一个实施例中,所述纳管集群中还包括控制器和调度器,其中:
所述控制器,用于监控所述边缘机器中运行的应用的状态信息,并根据所 述边缘机器当前的闲置资源,控制所述边缘机器中应用的副本数量;
所述调度器,用于解析所述数据库中的数据,以确定各个边缘机器当前的闲置资源,并根据确定的所述闲置资源,从所述调度列表中筛选出目标边缘机器,并将所述中心管理平台下发的应用请求调度至所述目标边缘机器处。
在一个实施例中,所述纳管集群还用于解析所述边缘机器上报的资源信息,以确定所述边缘机器的资源分布时段;其中,若所述边缘机器的资源分布时段表征繁忙时段,将所述边缘机器从所述调度列表中剔除;若所述边缘机器的资源分布时段表征空闲时段,将所述边缘机器添加至所述调度列表中。
在一个实施例中,所述纳管集群还用于为所述边缘机器设定资源跑高阈值,其中,当所述边缘机器当前已使用的资源量达到所述资源跑高阈值,所述纳管集群中的调度器中止向所述边缘机器调度应用请求。
在一个实施例中,所述纳管集群还用于为所述边缘机器设定跑高驱逐策略,当所述边缘机器当前已使用的资源量达到所述跑高驱逐策略表征的驱逐阈值时,所述纳管集群对调度至所述边缘机器中的应用业务进行驱逐,以提高所述边缘机器中可使用的资源量。
基于相同的发明构思,请参阅图4,本申请还提供一种设备纳管方法,所述方法包括以下步骤。
S1:边缘机器向纳管集群上报资源信息。
S3:所述纳管集群在数据库中存储所述资源信息,并根据所述资源信息更新调度列表,所述调度列表中的边缘机器具备闲置资源,并将所述数据库中的数据上报至中心管理平台。
S5:所述中心管理平台接收用户的应用请求,并根据各个纳管集群上报的数据,将所述应用请求调度至目标纳管集群,以通过所述目标纳管集群下的边缘机器执行所述应用请求。
在一个实施例中,所述资源信息包括边缘机器当前的硬件资源使用率、所述边缘机器预先设定的资源分布时段、所述边缘机器当前是否可用中的至少一种。
在一个实施例中,所述边缘机器中安装有边缘管理客户端,所述边缘管理客户端用于向所述纳管集群上报资源信息;所述纳管集群中安装有边缘管理服务端,所述边缘管理服务端用于接收所述边缘管理客户端上报的资源信息, 并通过调用预设数据接口将所述资源信息存储至所述数据库中。
在一个实施例中,所述纳管集群中包括控制器和调度器,所述方法还包括:
所述控制器监控所述边缘机器中运行的应用的状态信息,并根据所述边缘机器当前的闲置资源,控制所述边缘机器中应用的副本数量;
所述调度器解析所述数据库中的数据,以确定各个边缘机器当前的闲置资源,并根据确定的所述闲置资源,从所述调度列表中筛选出目标边缘机器,并将所述中心管理平台下发的应用请求调度至所述目标边缘机器处。
在一个实施例中，所述方法还包括：
所述纳管集群解析所述边缘机器上报的资源信息,以确定所述边缘机器的资源分布时段;其中,若所述边缘机器的资源分布时段表征繁忙时段,将所述边缘机器从所述调度列表中剔除;若所述边缘机器的资源分布时段表征空闲时段,将所述边缘机器添加至所述调度列表中。
在一个实施例中,所述方法还包括:
所述纳管集群为所述边缘机器设定资源跑高阈值,其中,当所述边缘机器当前已使用的资源量达到所述资源跑高阈值,所述纳管集群中的调度器中止向所述边缘机器调度应用请求。
在一个实施例中,所述方法还包括:
所述纳管集群为所述边缘机器设定跑高驱逐策略,当所述边缘机器当前已使用的资源量达到所述跑高驱逐策略表征的驱逐阈值时,所述纳管集群对调度至所述边缘机器中的应用业务进行驱逐,以提高所述边缘机器中可使用的资源量。
在一个实施例中,所述方法还包括:
当调度至所述边缘机器的应用请求无法被所述边缘机器执行时,所述纳管集群再次从所述调度列表中确定另一个边缘机器,并将所述应用请求重新调度至所述另一个边缘机器处。
在一个实施例中,所述纳管集群中安装有资源上报客户端,所述资源上报客户端用于将所述数据库中的数据上报至所述中心管理平台;所述中心管理平台中安装有资源上报服务端,所述资源上报服务端用于接收所述资源上报客户端上报的数据,并将所述数据存储至所述中心管理平台的数据库中。
在一个实施例中,所述中心管理平台中还包括对外接口和全局调度器,其中:
所述对外接口被调用时,从所述中心管理平台的数据库中获取所述纳管集群上报的数据;
所述全局调度器,用于接收用户的应用请求,并通过调用所述对外接口,查询当前可接收所述应用请求的纳管集群,并将所述应用请求调度至对应的纳管集群处。
由上可见,本申请一个或者多个实施例提供的技术方案,可以通过中心管理平台、纳管集群和边缘机器的协同运作,来实现海量边缘机器的管理和闲置资源的利用。具体地,边缘机器除了可以运行正常的CDN业务,还可以将闲置资源用来处理用户的应用请求。边缘机器可以实时将自身的资源信息上报至所属的纳管集群,纳管集群通过分析该资源信息,可以生成并更新调度列表,在调度列表中的边缘机器都可以是具备闲置资源的边缘机器,这些边缘机器在处理正常的CDN业务之余,还可以处理用户的应用请求。纳管集群可以将收集到的资源信息存储在数据库中,并将数据库中的数据进一步上报至中心管理平台。中心管理平台通过解析纳管集群上报的数据,可以将用户的应用请求初步调度至目标纳管集群中,并交由目标纳管集群进一步地调度至下级的边缘机器中,以通过边缘机器最终处理该应用请求。由上可见,本申请通过中心管理平台和纳管集群,可以统一管理海量的边缘机器,并通过对边缘机器实时的资源信息进行分析,从而可以让边缘机器在处理正常的CDN业务之余,能够利用闲置资源处理其他用户的应用请求,从而有效地利用了闲置资源。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,针对设备和方法的实施例来说,均可以参照前述系统的实施例的介绍对照解释。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(Ramdom Access Memory,简称为RAM)和/或非易失性内存等形式,如只读存储器(Read-Only Memory,简称为ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(Phase-Change Ramdom Access Memory,简称为PRAM)、静态随机存取存储器(Static Ramdom Access Memory,简称为SRAM)、动态随机存取存储器(Dynamic Ramdom Access Memory,简称为DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,简称为EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(Compact Disc Read-Only Memory,简称为CD-ROM)、 数字多功能光盘(Digital Video Disc,简称为DVD)或其他光学存储、磁盒式磁带,磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (16)

  1. 一种设备纳管方法,包括:
    边缘机器向纳管集群上报资源信息;
    所述纳管集群在数据库中存储所述资源信息,并根据所述资源信息更新调度列表,所述调度列表中的边缘机器具备闲置资源,并将所述数据库中的数据上报至中心管理平台;
    所述中心管理平台接收用户的应用请求,并根据各个纳管集群上报的数据,将所述应用请求调度至目标纳管集群,以通过所述目标纳管集群下的边缘机器执行所述应用请求。
  2. 根据权利要求1所述的方法,其中,所述资源信息包括边缘机器当前的硬件资源使用率、所述边缘机器预先设定的资源分布时段、所述边缘机器当前是否可用中的至少一种。
  3. 根据权利要求1所述的方法,其中,所述边缘机器中安装有边缘管理客户端,所述边缘管理客户端用于向所述纳管集群上报资源信息;
    所述纳管集群中安装有边缘管理服务端,所述边缘管理服务端用于接收所述边缘管理客户端上报的资源信息,并通过调用预设数据接口将所述资源信息存储至所述数据库中。
  4. 根据权利要求1所述的方法,其中,所述纳管集群中包括控制器和调度器,所述方法还包括:
    所述控制器监控所述边缘机器中运行的应用的状态信息,并根据所述边缘机器当前的闲置资源,控制所述边缘机器中应用的副本数量;
    所述调度器解析所述数据库中的数据,以确定各个边缘机器当前的闲置资源,并根据确定的所述闲置资源,从所述调度列表中筛选出目标边缘机器,并将所述中心管理平台下发的应用请求调度至所述目标边缘机器处。
  5. 根据权利要求1或4所述的方法，其中，所述方法还包括：
    所述纳管集群解析所述边缘机器上报的资源信息,以确定所述边缘机器的 资源分布时段;其中,若所述边缘机器的资源分布时段表征繁忙时段,将所述边缘机器从所述调度列表中剔除;若所述边缘机器的资源分布时段表征空闲时段,将所述边缘机器添加至所述调度列表中。
  6. 根据权利要求1或4所述的方法,其中,所述方法还包括:
    所述纳管集群为所述边缘机器设定资源跑高阈值,其中,当所述边缘机器当前已使用的资源量达到所述资源跑高阈值,所述纳管集群中的调度器中止向所述边缘机器调度应用请求。
  7. 根据权利要求1或4所述的方法,其中,所述方法还包括:
    所述纳管集群为所述边缘机器设定跑高驱逐策略,当所述边缘机器当前已使用的资源量达到所述跑高驱逐策略表征的驱逐阈值时,所述纳管集群对调度至所述边缘机器中的应用业务进行驱逐,以提高所述边缘机器中可使用的资源量。
  8. 根据权利要求1或4所述的方法,其中,所述方法还包括:
    当调度至所述边缘机器的应用请求无法被所述边缘机器执行时,所述纳管集群再次从所述调度列表中确定另一个边缘机器,并将所述应用请求重新调度至所述另一个边缘机器处。
  9. 根据权利要求1所述的方法,其中,所述纳管集群中安装有资源上报客户端,所述资源上报客户端用于将所述数据库中的数据上报至所述中心管理平台;
    所述中心管理平台中安装有资源上报服务端,所述资源上报服务端用于接收所述资源上报客户端上报的数据,并将所述数据存储至所述中心管理平台的数据库中。
  10. 根据权利要求1或9所述的方法,其中,所述中心管理平台中还包括对外接口和全局调度器,其中:
    所述对外接口被调用时,从所述中心管理平台的数据库中获取所述纳管集 群上报的数据;
    所述全局调度器,用于接收用户的应用请求,并通过调用所述对外接口,查询当前可接收所述应用请求的纳管集群,并将所述应用请求调度至对应的纳管集群处。
  11. 一种设备纳管系统,包括中心管理平台、纳管集群和边缘机器,其中:
    所述边缘机器,用于向所述纳管集群上报资源信息,并执行所述纳管集群调度的应用请求;
    所述纳管集群,用于在数据库中存储所述资源信息,并根据所述资源信息更新调度列表,所述调度列表中的边缘机器具备闲置资源;将所述数据库中的数据上报至所述中心管理平台,并接收所述中心管理平台下发的应用请求,并将所述应用请求调度至所述调度列表中的边缘机器;
    所述中心管理平台,用于接收用户的应用请求,并根据各个纳管集群上报的数据,将所述应用请求调度至目标纳管集群,以通过所述目标纳管集群下的边缘机器执行所述应用请求。
  12. 一种纳管集群,包括:
    调度列表更新单元,用于接收边缘机器上报的资源信息,并根据所述资源信息更新调度列表,其中,所述调度列表中的边缘机器具备闲置资源;
    资源上报单元,用于在数据库中存储所述资源信息,并将所述数据库中的数据上报至中心管理平台,以使得所述中心管理平台根据上报的数据确定用于接收应用请求的纳管集群;
    请求调度单元,用于接收中心管理平台下发的应用请求,并将所述应用请求调度至所述调度列表中的边缘机器,以通过边缘机器执行所述应用请求。
  13. 根据权利要求12所述的纳管集群,其中,所述纳管集群中还包括控制器和调度器,其中:
    所述控制器,用于监控所述边缘机器中运行的应用的状态信息,并根据所述边缘机器当前的闲置资源,控制所述边缘机器中应用的副本数量;
    所述调度器,用于解析所述数据库中的数据,以确定各个边缘机器当前的 闲置资源,并根据确定的所述闲置资源,从所述调度列表中筛选出目标边缘机器,并将所述中心管理平台下发的应用请求调度至所述目标边缘机器处。
  14. 根据权利要求12或13所述的纳管集群,其中,所述纳管集群还用于解析所述边缘机器上报的资源信息,以确定所述边缘机器的资源分布时段;其中,若所述边缘机器的资源分布时段表征繁忙时段,将所述边缘机器从所述调度列表中剔除;若所述边缘机器的资源分布时段表征空闲时段,将所述边缘机器添加至所述调度列表中。
  15. 根据权利要求12或13所述的纳管集群,其中,所述纳管集群还用于为所述边缘机器设定资源跑高阈值,其中,当所述边缘机器当前已使用的资源量达到所述资源跑高阈值,所述纳管集群中的调度器中止向所述边缘机器调度应用请求。
  16. 根据权利要求12或13所述的纳管集群,其中,所述纳管集群还用于为所述边缘机器设定跑高驱逐策略,当所述边缘机器当前已使用的资源量达到所述跑高驱逐策略表征的驱逐阈值时,所述纳管集群对调度至所述边缘机器中的应用业务进行驱逐,以提高所述边缘机器中可使用的资源量。
PCT/CN2020/122548 2020-09-15 2020-10-21 一种设备纳管方法、系统及纳管集群 WO2022057001A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010967450.3A CN112272201B (zh) 2020-09-15 2020-09-15 一种设备纳管方法、系统及纳管集群
CN202010967450.3 2020-09-15

Publications (1)

Publication Number Publication Date
WO2022057001A1 true WO2022057001A1 (zh) 2022-03-24

Family

ID=74348766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122548 WO2022057001A1 (zh) 2020-09-15 2020-10-21 一种设备纳管方法、系统及纳管集群

Country Status (2)

Country Link
CN (1) CN112272201B (zh)
WO (1) WO2022057001A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117155933A (zh) * 2023-10-31 2023-12-01 北京比格大数据有限公司 一种多集群纳管方法及平台、设备及存储介质
WO2024088025A1 (zh) * 2022-10-25 2024-05-02 中电信数智科技有限公司 一种基于多维数据的5gc网元自动化纳管方法及装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590324B (zh) * 2021-07-30 2022-12-13 广东省机电设备招标中心有限公司 一种面向云边端协同计算的启发式任务调度方法和系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170452A (zh) * 2007-11-30 2008-04-30 中国电信股份有限公司 增强管理能力的内容分发网络业务提供点系统及所属网络
US20130198755A1 (en) * 2012-01-31 2013-08-01 Electronics And Telecommunications Research Institute Apparatus and method for managing resources in cluster computing environment
CN104320487A (zh) * 2014-11-11 2015-01-28 网宿科技股份有限公司 内容分发网络的http调度系统和方法
CN104461740A (zh) * 2014-12-12 2015-03-25 国家电网公司 一种跨域集群计算资源聚合和分配的方法
CN110688213A (zh) * 2018-07-05 2020-01-14 深圳先进技术研究院 一种基于边缘计算的资源管理方法、系统及电子设备
CN111611074A (zh) * 2020-05-14 2020-09-01 北京达佳互联信息技术有限公司 一种集群资源的调度方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150421B (zh) * 2006-09-22 2011-05-04 华为技术有限公司 一种分布式内容分发方法、边缘服务器和内容分发网
US8180720B1 (en) * 2007-07-19 2012-05-15 Akamai Technologies, Inc. Content delivery network (CDN) cold content handling
CN105681387A (zh) * 2015-11-26 2016-06-15 乐视云计算有限公司 一种直播视频的上传方法、装置及系统
CN110633144A (zh) * 2019-08-23 2019-12-31 成都华为技术有限公司 一种边缘云的融合管理的方法及装置
CN111176697B (zh) * 2020-01-02 2024-02-13 广州虎牙科技有限公司 服务实例部署方法、数据处理方法及集群联邦
CN111262906B (zh) * 2020-01-08 2021-05-25 中山大学 分布式边缘计算服务系统下的移动用户终端任务卸载方法
CN111638935B (zh) * 2020-04-15 2022-07-01 阿里巴巴集团控股有限公司 镜像管理方法、网络系统、设备以及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170452A (zh) * 2007-11-30 2008-04-30 中国电信股份有限公司 增强管理能力的内容分发网络业务提供点系统及所属网络
US20130198755A1 (en) * 2012-01-31 2013-08-01 Electronics And Telecommunications Research Institute Apparatus and method for managing resources in cluster computing environment
CN104320487A (zh) * 2014-11-11 2015-01-28 网宿科技股份有限公司 内容分发网络的http调度系统和方法
CN104461740A (zh) * 2014-12-12 2015-03-25 国家电网公司 一种跨域集群计算资源聚合和分配的方法
CN110688213A (zh) * 2018-07-05 2020-01-14 深圳先进技术研究院 一种基于边缘计算的资源管理方法、系统及电子设备
CN111611074A (zh) * 2020-05-14 2020-09-01 北京达佳互联信息技术有限公司 一种集群资源的调度方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024088025A1 (zh) * 2022-10-25 2024-05-02 中电信数智科技有限公司 一种基于多维数据的5gc网元自动化纳管方法及装置
CN117155933A (zh) * 2023-10-31 2023-12-01 北京比格大数据有限公司 一种多集群纳管方法及平台、设备及存储介质
CN117155933B (zh) * 2023-10-31 2024-02-27 北京比格大数据有限公司 一种多集群纳管方法及平台、设备及存储介质

Also Published As

Publication number Publication date
CN112272201B (zh) 2022-05-27
CN112272201A (zh) 2021-01-26

Similar Documents

Publication Publication Date Title
WO2022057001A1 (zh) 一种设备纳管方法、系统及纳管集群
WO2020253347A1 (zh) 一种容器集群管理方法、装置及系统
US10042772B2 (en) Dynamic structural management of a distributed caching infrastructure
US10284650B2 (en) Method and system for dynamic handling in real time of data streams with variable and unpredictable behavior
CN112199194A (zh) 基于容器集群的资源调度方法、装置、设备和存储介质
CN106452818B (zh) 一种资源调度的方法和系统
US20160210061A1 (en) Architecture for a transparently-scalable, ultra-high-throughput storage network
WO2018072687A1 (zh) 一种资源调度的方法、装置和过滤式调度器
EP3002924B1 (en) Stream-based object storage solution for real-time applications
US9479413B2 (en) Methods and policies to support a quality-of-storage network
US8706858B2 (en) Method and apparatus for controlling flow of management tasks to management system databases
CN110677274A (zh) 一种基于事件的云网络服务调度方法及装置
TW202133055A (zh) 透過多層次相關性建立系統資源預測及資源管理模型的方法
CN112579304A (zh) 基于分布式平台的资源调度方法、装置、设备及介质
US20210406053A1 (en) Rightsizing virtual machine deployments in a cloud computing environment
CN111381957B (zh) 面向分布式平台的服务实例精细化调度方法及系统
CN114003377A (zh) 一种基于es服务的内存熔断方法、装置、设备及可读介质
WO2024164894A1 (zh) 流量控制与数据复制方法、节点、系统及存储介质
US10348814B1 (en) Efficient storage reclamation for system components managing storage
CN103561092A (zh) 私有云环境下管理资源的方法及装置
CN114296891A (zh) 任务的调度方法、系统、计算设备、存储介质及程序产品
CN115617468A (zh) 一种租户的资源管理方法及租户管理系统
CN115562933A (zh) 作业监控数据的处理方法及装置、存储介质、电子设备
US20240340343A1 (en) Data processing system and method and device
Zhao et al. Yadoop: an elastic resource management solution of yarn

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20953858

Country of ref document: EP

Kind code of ref document: A1