WO2021259064A1 - Cluster scaling method and system, scaling control terminal, and medium - Google Patents

Cluster scaling method and system, scaling control terminal, and medium (集群的缩扩容方法及系统、缩扩容控制终端和介质)

Info

Publication number
WO2021259064A1
Authority
WO
WIPO (PCT)
Prior art keywords
performance data
virtual host
target cluster
cluster
cloud platform
Prior art date
Application number
PCT/CN2021/099171
Other languages
English (en)
French (fr)
Inventor
张宙
王寿林
马友昌
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Priority to JP2022577666A (published as JP2023530996A)
Priority to EP21828729.0A (published as EP4167085A4)
Priority to US18/011,667 (published as US20230236866A1)
Publication of WO2021259064A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45575Starting, stopping, suspending or resuming virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/505 Clusters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/06Authentication

Definitions

  • the present disclosure relates to, but is not limited to, the field of communication technology.
  • The present disclosure provides a cluster scaling method, including: acquiring performance data of a target cluster; judging, according to the performance data, whether the target cluster needs to be expanded or scaled down; when it is determined that the target cluster needs to be expanded, controlling a cloud platform to create a first virtual host and adding the first virtual host to the target cluster; and when it is determined that the target cluster needs to be scaled down, controlling the cloud platform to remove a second virtual host from the target cluster.
  • The present disclosure also provides a scaling control terminal, including: one or more processors; and a storage device configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any method described herein.
  • The present disclosure also provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements any of the methods described herein.
  • The present disclosure also provides a cluster scaling system, including a scaling control terminal and a cloud platform, where the scaling control terminal is any scaling control terminal described herein.
  • FIG. 1 is a flowchart of a cluster scaling method provided by the present disclosure;
  • FIG. 2 is a flowchart of an exemplary implementation of step S1 in FIG. 1;
  • FIG. 3 is a flowchart of an exemplary implementation of step S101 in FIG. 2;
  • FIG. 4 is a flowchart of an exemplary implementation of step S3 in FIG. 1;
  • FIG. 5 is a flowchart of a cluster scaling method provided by the present disclosure;
  • FIG. 6 is a structural block diagram of a cluster scaling system provided by the present disclosure;
  • FIG. 7 is a structural block diagram of a performance data collector provided by the present disclosure.
  • first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. Therefore, without departing from the teachings of the present disclosure, the first element, the first component, or the first module discussed below may be referred to as the second element, the second component, or the second module.
  • In order to cope with the scaling pressure caused by business fluctuations, a system is generally deployed in the form of a cluster.
  • For a legacy (stock) cluster system, especially a legacy Kubernetes cluster system, the underlying layer runs on bare metal and there is no possibility of adding new hardware; at the same time, the legacy cloud platform it uses offers incomplete support and compatibility, which makes scaling difficult.
  • Cluster scaling at this stage is limited to formulating scaling strategies based on the processor and memory margins of each node in the cluster; adaptive scaling based on business changes is not possible, and there are also legacy systems that run on bare metal and have no dynamic expansion capability.
  • The cluster scaling method, scaling control terminal, computer-readable medium, and cluster scaling system provided in the present disclosure can be used to make scaling decisions based on the acquired performance data and to use virtualization capabilities to create or remove the corresponding nodes on a cloud platform, thereby achieving dynamic scaling of the cluster.
  • FIG. 1 is a flowchart of a cluster scaling method provided by the present disclosure. As shown in FIG. 1, the method may include steps S1 to S4.
  • step S1 the performance data of the target cluster is obtained.
  • step S1 the performance data of the target cluster is obtained from different dimensions and levels, and the performance data can characterize the characteristic index of the target cluster, and provide historical data for the construction of the expansion strategy and the shrinking strategy.
  • the method before acquiring the performance data of the target cluster, the method further includes: analyzing the deployment form of the target cluster in stock, and acquiring its docking authentication information and information record format during business operation; wherein, the docking authentication information may include Secure Shell (SSH) information and authentication information of each host.
  • step S2 it is determined whether the target cluster needs to be expanded or reduced based on the performance data.
  • the capacity expansion strategy and contraction strategy of the target cluster constructed according to the performance data are used to determine whether expansion or contraction is needed.
  • In step S2, when it is determined that the target cluster needs to be expanded, step S3 is executed; when it is determined that the target cluster needs to be scaled down, step S4 is executed; when it is determined that the target cluster needs neither expansion nor scale-down, step S1 continues to be executed.
  • step S3 the cloud platform is controlled to create a first virtual host, and the first virtual host is added to the target cluster.
  • While the cloud platform is controlled to create the first virtual host, it is also controlled to create virtual resources corresponding to the first virtual host.
  • The virtual resources include image resources, virtual port resources, cloud storage resources, and the like.
  • After the cloud platform creates the first virtual host and the corresponding virtual resources, the cloud platform is notified to report the identifier of the first virtual host and the virtual resource information, and the reported identifier and virtual resource information are persisted: they are saved locally as a persistent information file through a file database such as BoltDB or SQLite, or as a file in a format such as JSON, CSV, or XML, and stored in a persistent information table.
  • After the first virtual host is started, the corresponding support kit is sent to it and the first virtual host is controlled to install the kit.
  • the scaling control terminal corresponding to the cluster scaling method can be installed on the cloud platform side.
  • step S4 the cloud platform is controlled to remove the second virtual host in the target cluster.
  • the second virtual host is removed through a RESTful interface method or a command line interface method.
  • The second virtual host is the virtual host most recently added to the target cluster.
  • The present disclosure provides a cluster scaling method that can be used to judge, according to the acquired performance data, whether scaling is needed, and to achieve dynamic scaling of the cluster by controlling the cloud platform to create or remove virtual hosts. The method is therefore applicable not only to ordinary legacy systems but also to legacy systems that run on bare metal and have no dynamic expansion capability.
  • Fig. 2 is a flowchart of an exemplary implementation method of step S1 in Fig. 1.
  • the target cluster is a Kubernetes cluster; as shown in FIG. 2, step S1, the step of obtaining performance data of the target cluster, may include step S101 and step S102.
  • step S101 real-time performance data of the target cluster is acquired.
  • real-time performance data includes operating system performance data, Kubernetes performance data, and business performance data.
  • Operating system performance data can include system logs, processor information, memory information, disk usage, disk performance indicators, etc.
  • Kubernetes performance data may include information about the nodes in the cluster, Deployment information, Replication Controller (RC) information, Replica Set (RS) information, container information, and the like.
  • business performance data may include application logs and other performance data when the business is running.
  • step S102 data cleaning is performed on the real-time performance data, and aggregated to generate performance data.
  • Data cleaning may include: deleting obviously abnormal data, such as percentage-type data with a value exceeding 100%; marking potentially abnormal data, such as data whose year-on-year deviation from history is greater than 15%; for strictly-judged data types, also deleting the other data reported by the same task, to ensure data consistency; and, where multiple records are reported for a single point in time, de-duplicating to keep one record or merging the records according to the strategy. After that, the cleaned real-time performance data is saved in the form of temporary files to a temporary folder or cache.
  • During aggregation, at every preset interval, such as 5 minutes, a read-only tag is added to the temporary files or to the real-time performance data in the cache, the data of different physical nodes and different Kubernetes nodes is aggregated into one or more records in a unified format, and a timestamp is added; after aggregation is completed, the read-only tag is removed and the performance data is generated. If the processing operation times out without completing, the read-only tag is likewise removed.
  • The performance data generated after aggregation is persisted: it is saved locally as a persistent information file through a file database such as BoltDB or SQLite, or as a file in a format such as JSON or CSV, and stored in a persistent information table.
  • Fig. 3 is a flowchart of an exemplary implementation method of step S101 in Fig. 2.
  • the real-time performance data is acquired in a non-intrusive manner; as shown in FIG. 3, step S101, the step of acquiring real-time performance data of the target cluster, may include step S1011 to step S1013.
  • step S1011 the operating system performance data is acquired through the first non-invasive manner.
  • The first non-intrusive manner includes at least one of the remote terminal protocol (Telnet) manner, the Secure Shell protocol manner, the secure file transfer protocol (SSH File Transfer Protocol, SFTP) manner, the File Transfer Protocol (FTP) manner, and the RESTful interface manner.
  • The non-intrusive manner used in the present disclosure differs from the existing practice of collecting feature indicator information and performance data in an intrusive manner. For example, to collect data about network connectivity between nodes, the traditional approach is to have agent software (Agent) reside on each host and to update the topology information of the entire cluster through the up/down state of the links between the agents; with the non-intrusive manner applied in the present disclosure, the same data can be collected, for example, by using system commands on each node over a Secure Shell tunnel to probe the connectivity state of the links.
  • step S1012 the Kubernetes performance data is acquired through a second non-invasive manner.
  • the second non-intrusive manner includes at least one of a RESTful interface manner and a command-line interface (Command-Line Interface, CLI for short) manner.
  • step S1013 the business performance data is obtained from the data storage volume.
  • the data storage volume includes a given path storage volume (HostPath) or a persistent storage volume (Persistent Volume, PV), etc.;
  • Obtaining the business performance data from the data storage volume means remotely pulling, according to the corresponding Kubernetes specification, business performance data in a business-defined custom format from the data storage volume.
  • the application log is also read from the persistent information file or obtained from the log system (such as the ELK system) interface.
  • the present disclosure provides a method for scaling down and expanding a cluster.
  • The method can be used to acquire, for a Kubernetes cluster, operating system performance data, Kubernetes performance data, and business performance data in a non-intrusive manner. With such multi-dimensional performance data, the cluster can be scaled based on hardware information such as deployed resource margins while also being dynamically scaled according to business changes.
  • Fig. 4 is a flowchart of an exemplary implementation method of step S3 in Fig. 1.
  • Before the step of adding the first virtual host to the target cluster, the method further includes: step S301, adding the first virtual host to a node resource pool, and registering the first virtual host on a node resource pool manager.
  • the registration of the first virtual host is performed through a RESTful interface.
  • In step S3, the step of adding the first virtual host to the target cluster may include: step S302, adding the first virtual host to the target cluster through a RESTful interface or a command-line interface according to the node information allocated by the node resource pool manager after registration is completed.
  • the node information may include the node IP address and the node label.
  • step S3 after the step of adding the first virtual host to the target cluster, the method further includes: step S303, persisting node information, and storing it in a persistent information table.
  • The node information is saved locally as a persistent information file through a file database such as BoltDB or SQLite, or as a file in a format such as JSON, CSV, or XML, and stored in the persistent information table.
  • Fig. 5 is a flowchart of a method for shrinking and expanding a cluster provided by the present disclosure.
  • the method shown in FIG. 5 is proposed on the basis of the method shown in FIG. 1.
  • steps S1 to S4 please refer to the above description.
  • the method may further include step S5 and step S6.
  • step S5 the second virtual host is removed from the node resource pool, and the second virtual host is deregistered on the node resource pool manager.
  • the logout of the second virtual host is performed through a RESTful interface.
  • step S6 query the node information corresponding to the second virtual host from the persistent information table, and control the cloud platform to reclaim the second virtual host.
  • the resource information corresponding to the second virtual host is also queried from the persistence information table, and the cloud platform is controlled to delete the virtualized resource corresponding to the second virtual host.
  • the present disclosure provides a method for shrinking and expanding a cluster, which can be used to correspondingly shrink and expand a node resource pool.
  • Fig. 6 is a structural block diagram of a cluster shrinking and expanding system provided by the present disclosure.
  • the scaling system includes a scaling control terminal and a cloud platform.
  • the scaling control terminal includes a performance data collector, a performance data storage, a decision maker, a policy model and an executor;
  • the cloud platform can create and manage multiple virtual hosts;
  • The corresponding target cluster is a Kubernetes cluster that exists in a legacy Kubernetes system, runs on bare metal at its bottom layer, and includes multiple nodes.
  • the performance data collector first collects cluster performance data in a non-invasive manner, and transmits it to the performance data storage as historical data for storage.
  • Fig. 7 is a structural block diagram of a performance data collector provided by the present disclosure.
  • the performance data collector includes a data aggregation module, a data cleaning module, a business collection module, a Kubernetes collection module, an operating system collection module, and a collection task management module.
  • The collection task management module is configured to construct collection tasks according to real-time user configuration or preset task specifications, and to manage and distribute the collection tasks of the business collection module, the Kubernetes collection module, and the operating system collection module; the business collection module is configured to collect the cluster's business performance data and application logs by accessing the cluster's business services; the Kubernetes collection module is configured to collect the cluster's Kubernetes performance data; the operating system collection module is configured to collect the cluster's operating system performance data through the remote terminal protocol (Telnet), Secure Shell (SSH), secure file transfer protocol (SFTP), or RESTful interface manner; the data cleaning module is configured to clean the collected real-time performance data; and the data aggregation module is configured to aggregate the cleaned real-time performance data to generate the performance data.
  • the decision maker judges whether the cluster needs to be scaled and expanded according to the performance data and the scaling strategy generated by the strategy model.
  • The executor performs the scaling of the cluster according to the decision result of the decision maker. When it is determined that the cluster needs to be expanded, the executor controls the cloud platform to create a virtual host or designates an already-created idle virtual host, adds the virtual host to the node resource pool and, after completing the registration on the node resource pool manager, adds the virtual host to the cluster to achieve expansion; when it is determined that the cluster needs to be scaled down, the executor controls the cloud platform to remove the virtual host that was added to the cluster last, removes it from the node resource pool at the same time, and deregisters it on the node resource pool manager to achieve scale-down.
  • The present disclosure also provides a scaling control terminal, including: one or more processors; and a storage device configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any method in the foregoing embodiments.
  • the present disclosure also provides a computer-readable medium on which a computer program is stored, where the program is executed by a processor to implement any method in the above-mentioned embodiments.
  • the present disclosure also provides a cluster scaling system, including: a scaling control terminal and a cloud platform; wherein the scaling control terminal adopts the scaling control terminal in the above-mentioned embodiment.
  • the present disclosure provides a cluster scaling method, a scaling control terminal, a computer readable medium, and a cluster scaling system.
  • The cluster scaling method can be applied to the scaling control terminal: scaling decisions are made according to the acquired performance data, and virtualization capabilities are used to create or remove the corresponding nodes on the cloud platform, achieving dynamic scaling of the cluster and solving the problem that a cluster cannot be adaptively scaled according to business changes. The method is also applicable to legacy systems that run on bare metal and lack dynamic expansion capability, allowing them to be scaled accordingly.
  • Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
  • As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other storage technologies, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or Any other medium used to store desired information and that can be accessed by a computer.
  • Communication media usually contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Disclosed are a scaling control terminal, a computer-readable medium, and a cluster scaling system. A cluster scaling method includes: acquiring performance data of a target cluster (S1); judging, according to the performance data, whether the target cluster needs to be expanded or scaled down (S2); when it is determined that the target cluster needs to be expanded, controlling a cloud platform to create a first virtual host and adding the first virtual host to the target cluster (S3); and when it is determined that the target cluster needs to be scaled down, controlling the cloud platform to remove a second virtual host from the target cluster (S4).

Description

Cluster scaling method and system, scaling control terminal, and medium
Cross-Reference to Related Application
This application claims priority to Chinese patent application No. 202010589248.1, filed with the Chinese Patent Office on June 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to, but is not limited to, the field of communication technology.
Background
With the development of communication and network technologies, the load requirements of most communication systems and network systems have become higher, and scaling needs arise as the business changes. For example, for the core network management domain in 5G communication, the large-scale deployment of 5G network slices means that the management domain has to manage more virtualized service units in order to cope with faster slice changes; for a service platform providing microservices, because of the way applications are deployed in microservices, the platform likewise has to manage more virtualized service units in order to meet application deployment needs.
Summary
In a first aspect, the present disclosure provides a cluster scaling method, including: acquiring performance data of a target cluster; judging, according to the performance data, whether the target cluster needs to be expanded or scaled down; when it is determined that the target cluster needs to be expanded, controlling a cloud platform to create a first virtual host and adding the first virtual host to the target cluster; and when it is determined that the target cluster needs to be scaled down, controlling the cloud platform to remove a second virtual host from the target cluster.
In a second aspect, the present disclosure further provides a scaling control terminal, including: one or more processors; and a storage device configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any method described herein.
In a third aspect, the present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processor, implements any method described herein.
In a fourth aspect, the present disclosure further provides a cluster scaling system, including a scaling control terminal and a cloud platform, where the scaling control terminal is any scaling control terminal described herein.
Brief Description of the Drawings
FIG. 1 is a flowchart of a cluster scaling method provided by the present disclosure;
FIG. 2 is a flowchart of an exemplary implementation of step S1 in FIG. 1;
FIG. 3 is a flowchart of an exemplary implementation of step S101 in FIG. 2;
FIG. 4 is a flowchart of an exemplary implementation of step S3 in FIG. 1;
FIG. 5 is a flowchart of a cluster scaling method provided by the present disclosure;
FIG. 6 is a structural block diagram of a cluster scaling system provided by the present disclosure;
FIG. 7 is a structural block diagram of a performance data collector provided by the present disclosure.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of the present disclosure, the cluster scaling method, scaling control terminal, computer-readable medium, and cluster scaling system provided by the present disclosure are described in detail below with reference to the accompanying drawings.
Example embodiments are described more fully below with reference to the accompanying drawings, but the example embodiments may be embodied in different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will further be understood that the terms "comprise" and/or "made of", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. Thus, a first element, first component, or first module discussed below could be termed a second element, second component, or second module without departing from the teachings of the present disclosure.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will further be understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
With the development of communication and network technologies, the load requirements of most communication systems and network systems have become higher, and scaling needs arise as the business changes. For example, for the core network management domain in 5G communication, the large-scale deployment of 5G network slices means that the management domain has to manage more virtualized service units in order to cope with faster slice changes; for a service platform providing microservices, because of the way applications are deployed in microservices, the platform likewise has to manage more virtualized service units in order to meet application deployment needs.
Business changes affect the system load in real time, which is mostly reflected in functions such as performance statistics, alarms, and resource topology. Among these, the impact on performance statistics is the most direct: the system needs more nodes to store and compute performance indicator data.
In order to cope with the scaling pressure caused by business fluctuations, a system is generally deployed in the form of a cluster. For a legacy (stock) cluster system, in particular a legacy Kubernetes cluster system, the underlying layer runs on bare metal and there is no possibility of adding new hardware; at the same time, the legacy cloud platform it uses offers incomplete support and compatibility, which makes scaling difficult.
Cluster scaling at this stage is limited to formulating scaling strategies based on the processor and memory margins of each node in the cluster; adaptive scaling based on business changes is not possible, and there are also legacy systems that run on bare metal and have no dynamic expansion capability.
The cluster scaling method, scaling control terminal, computer-readable medium, and cluster scaling system provided by the present disclosure can be used to make scaling decisions based on the acquired performance data and to use virtualization capabilities to create or remove the corresponding nodes on a cloud platform, thereby achieving dynamic scaling of the cluster.
FIG. 1 is a flowchart of a cluster scaling method provided by the present disclosure. As shown in FIG. 1, the method may include steps S1 to S4.
In step S1, performance data of the target cluster is acquired.
In step S1, the performance data of the target cluster is acquired from different dimensions and levels; the performance data characterizes the feature indicators of the target cluster and provides historical data for constructing the expansion strategy and the scale-down strategy.
In some implementations, before acquiring the performance data of the target cluster, the method further includes: analyzing the deployment form of the legacy target cluster, and acquiring its docking authentication information and the information record format used while the business runs, where the docking authentication information may include Secure Shell (SSH) information and authentication information of each host.
In step S2, whether the target cluster needs to be expanded or scaled down is judged according to the performance data.
Whether expansion or scale-down is needed is judged through the expansion strategy and scale-down strategy of the target cluster constructed from the performance data.
In step S2, when it is determined that the target cluster needs to be expanded, step S3 is executed; when it is determined that the target cluster needs to be scaled down, step S4 is executed; when it is determined that the target cluster needs neither expansion nor scale-down, execution continues with step S1.
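As a minimal sketch (not taken from the disclosure itself), the loop formed by steps S1 to S4 might look as follows in Python; the metric name, the two thresholds, the polling interval, and the collect_performance_data, scale_out and scale_in helpers are all assumptions made for illustration.

```python
import time

POLL_INTERVAL_S = 300        # assumed polling period, e.g. 5 minutes
EXPAND_THRESHOLD = 0.80      # hypothetical expansion trigger (average CPU utilization)
SHRINK_THRESHOLD = 0.30      # hypothetical scale-down trigger


def scaling_loop(collect_performance_data, scale_out, scale_in):
    """Steps S1-S4: acquire performance data, decide, then expand, shrink, or keep polling."""
    while True:
        perf = collect_performance_data()      # step S1: acquire performance data
        cpu = perf["avg_cpu_utilization"]      # assumed field produced by aggregation
        if cpu > EXPAND_THRESHOLD:             # step S2: expansion needed
            scale_out()                        # step S3: create and add a first virtual host
        elif cpu < SHRINK_THRESHOLD:           # step S2: scale-down needed
            scale_in()                         # step S4: remove a second virtual host
        time.sleep(POLL_INTERVAL_S)            # otherwise continue collecting (back to S1)
```

In such a sketch the two callables would be bound to the executor described later, and the thresholds would come from the expansion and scale-down strategies built from the historical performance data.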
In step S3, the cloud platform is controlled to create a first virtual host, and the first virtual host is added to the target cluster.
In some implementations, while the cloud platform is controlled to create the first virtual host, it is also controlled to create virtual resources corresponding to the first virtual host, including image resources, virtual port resources, cloud storage resources, and the like. In some implementations, after the cloud platform has created the first virtual host and the corresponding virtual resources, the cloud platform is notified to report the identifier of the first virtual host and the virtual resource information, and the reported identifier and virtual resource information are persisted: they are saved locally as a persistent information file through a file database such as BoltDB or SQLite, or as a file in a format such as JSON, CSV, or XML, and stored in a persistent information table.
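As a sketch only of what such a persistent information table might look like (the schema and field names below are assumptions, not taken from the disclosure), the reported virtual host identifier and resource information could be kept in a local SQLite file:

```python
import json
import sqlite3


def persist_host_record(db_path, host_id, resource_info):
    """Save a reported virtual host identifier and its resource information locally."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS persistent_info "
        "(host_id TEXT PRIMARY KEY, resources TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO persistent_info (host_id, resources) VALUES (?, ?)",
        (host_id, json.dumps(resource_info)),
    )
    conn.commit()
    conn.close()


# Hypothetical usage:
# persist_host_record("scaler.db", "vm-0001",
#                     {"image": "centos-7", "ports": ["port-a1"], "volume_gb": 40})
```

A JSON, CSV, or XML file written with the standard library would serve the same purpose; SQLite is used here only because it ships with Python.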
In some implementations, after the first virtual host is started, the corresponding support kit is sent to it and the first virtual host is controlled to install the kit.
In some implementations, the scaling control terminal corresponding to the cluster scaling method may be installed on the cloud platform side.
In step S4, the cloud platform is controlled to remove the second virtual host from the target cluster.
In some implementations, the second virtual host is removed through a RESTful interface or a command-line interface.
In some implementations, the second virtual host is the virtual host most recently added to the target cluster.
The present disclosure provides a cluster scaling method that can be used to judge, according to the acquired performance data, whether scaling is needed, and to achieve dynamic scaling of the cluster by controlling the cloud platform to create or remove virtual hosts. The method is therefore applicable not only to ordinary legacy systems but also to legacy systems that run on bare metal and have no dynamic expansion capability.
FIG. 2 is a flowchart of an exemplary implementation of step S1 in FIG. 1. Exemplarily, the target cluster is a Kubernetes cluster; as shown in FIG. 2, step S1 of acquiring the performance data of the target cluster may include step S101 and step S102.
In step S101, real-time performance data of the target cluster is acquired.
The real-time performance data includes operating system performance data, Kubernetes performance data, and business performance data. The operating system performance data may include system logs, processor information, memory information, disk usage, disk performance indicators, and the like; the Kubernetes performance data may include information about the nodes in the cluster, Deployment information, Replication Controller (RC) information, Replica Set (RS) information, container information, and the like; the business performance data may include application logs and other performance data generated while the business runs.
In step S102, the real-time performance data is cleaned and aggregated to generate the performance data.
Data cleaning may include: deleting obviously abnormal data, such as percentage-type data with a value exceeding 100%; marking potentially abnormal data, such as data whose year-on-year deviation from history is greater than 15%; for strictly-judged data types, also deleting the other data reported by the same task, to ensure data consistency; and, where multiple records are reported for a single point in time, de-duplicating to keep one record or merging the records according to the strategy. After that, the cleaned real-time performance data is saved in the form of temporary files to a temporary folder or cache.
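The cleaning rules above could be sketched as follows; the record layout (metric, value, percent and timestamp fields) and the comparison against last year's value are illustrative assumptions:

```python
def clean_records(records, last_year_values):
    """Apply the cleaning rules: drop impossible values, flag outliers, de-duplicate."""
    cleaned, seen = [], set()
    for rec in records:
        # delete obviously abnormal data, e.g. percentage-type values above 100%
        if rec.get("percent") and rec["value"] > 100:
            continue
        # mark potentially abnormal data, e.g. year-on-year deviation greater than 15%
        prev = last_year_values.get(rec["metric"])
        if prev and abs(rec["value"] - prev) / prev > 0.15:
            rec["suspect"] = True
        # where several records share one point in time, keep only the first one
        key = (rec["metric"], rec["timestamp"])
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned
```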
During data aggregation, at every preset interval, such as 5 minutes, a read-only tag is added to the temporary files or to the real-time performance data in the cache, the data of different physical nodes and different Kubernetes nodes is aggregated into one or more records in a unified format, and a timestamp is added; after aggregation is completed, the read-only tag is removed and the performance data is generated. In addition, if the processing operation times out without completing, the read-only tag is likewise removed.
In some implementations, the performance data generated after aggregation is persisted: it is saved locally as a persistent information file through a file database such as BoltDB or SQLite, or as a file in a format such as JSON or CSV, and stored in a persistent information table.
FIG. 3 is a flowchart of an exemplary implementation of step S101 in FIG. 2. Exemplarily, the real-time performance data is acquired in a non-intrusive manner; as shown in FIG. 3, step S101 of acquiring the real-time performance data of the target cluster may include steps S1011 to S1013.
In step S1011, the operating system performance data is acquired in a first non-intrusive manner.
The first non-intrusive manner includes at least one of the remote terminal protocol (Telnet) manner, the Secure Shell protocol manner, the secure file transfer protocol (SSH File Transfer Protocol, SFTP) manner, the File Transfer Protocol (FTP) manner, and the RESTful interface manner.
The non-intrusive manner applied in the present disclosure differs from the existing practice of collecting feature indicator information and performance data in an intrusive manner. For example, to collect data about network connectivity between nodes, the traditional approach is to have agent software (Agent) reside on each host and to update the topology information of the entire cluster through the up/down state of the links between the agents; with the non-intrusive manner applied in the present disclosure, the same data can be collected, for example, by using system commands on each node over a Secure Shell tunnel to probe the connectivity state of the links.
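A minimal sketch of this non-intrusive idea, using the third-party paramiko library to run an ordinary system command over an SSH tunnel (the ping probe and the host and credential parameters are assumptions made for illustration):

```python
import paramiko


def probe_link(host, user, key_file, peer_ip):
    """Run a system command on `host` over SSH to test its connectivity to `peer_ip`."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)
    try:
        _, stdout, _ = client.exec_command(f"ping -c 1 -W 1 {peer_ip}")
        return stdout.channel.recv_exit_status() == 0   # exit code 0 means the link is up
    finally:
        client.close()
```

No agent has to be installed on the node; only the SSH information already gathered during the docking authentication step is needed.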
In step S1012, the Kubernetes performance data is acquired in a second non-intrusive manner.
The second non-intrusive manner includes at least one of a RESTful interface manner and a command-line interface (CLI) manner.
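For illustration only, the RESTful route could go through the official Kubernetes Python client and the CLI route through kubectl; neither tool is prescribed by the disclosure, and a reachable kubeconfig is assumed:

```python
import json
import subprocess

from kubernetes import client, config


def nodes_via_rest():
    """RESTful manner: list node names through the Kubernetes API server."""
    config.load_kube_config()          # assumes kubeconfig credentials are available
    return [n.metadata.name for n in client.CoreV1Api().list_node().items]


def deployments_via_cli(namespace="default"):
    """CLI manner: fetch the same kind of data through kubectl."""
    out = subprocess.run(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["items"]
```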
In step S1013, the business performance data is acquired from a data storage volume.
The data storage volume includes a given-path storage volume (HostPath), a persistent storage volume (Persistent Volume, PV), or the like; acquiring the business performance data from the data storage volume means remotely pulling, according to the corresponding Kubernetes specification, business performance data in a business-defined custom format from the data storage volume.
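One possible sketch of remotely pulling a business log from a hostPath or PV-backed directory over SFTP, again with paramiko; the remote path and the port are assumptions:

```python
import paramiko


def pull_business_log(host, user, key_file, remote_path, local_path):
    """Fetch a business-defined log file from a data storage volume on a node."""
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user,
                      pkey=paramiko.RSAKey.from_private_key_file(key_file))
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.get(remote_path, local_path)   # e.g. a file under the volume's host directory
    finally:
        sftp.close()
        transport.close()
```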
In some implementations, the application logs are also read from the persistent information file or obtained through the interface of a log system (such as an ELK system).
The present disclosure provides a cluster scaling method that can be used to acquire, for a Kubernetes cluster, its operating system performance data, Kubernetes performance data, and business performance data in a non-intrusive manner. With such multi-dimensional performance data, the cluster can be scaled based on hardware information such as deployed resource margins while also being dynamically scaled according to business changes.
FIG. 4 is a flowchart of an exemplary implementation of step S3 in FIG. 1. Exemplarily, in step S3, before the step of adding the first virtual host to the target cluster, the method further includes: step S301, adding the first virtual host to a node resource pool, and registering the first virtual host on a node resource pool manager.
The registration of the first virtual host is performed through a RESTful interface.
In step S3, the step of adding the first virtual host to the target cluster may include: step S302, adding the first virtual host to the target cluster through a RESTful interface or a command-line interface according to the node information allocated by the node resource pool manager after registration is completed.
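The disclosure does not fix a concrete mechanism beyond a RESTful interface or a command-line interface; one assumed sketch is to run a pre-generated kubeadm join command on the new host over SSH and then attach the allocated label with kubectl (the join command, the label, and the use of the node IP as the node name are all assumptions):

```python
import subprocess

import paramiko


def add_node(node_ip, node_label, ssh_user, key_file, join_cmd):
    """Join a newly created virtual host to the cluster, then apply its allocated label."""
    # CLI on the new host: run the (hypothetical, pre-generated) kubeadm join command
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(node_ip, username=ssh_user, key_filename=key_file)
    try:
        _, stdout, _ = ssh.exec_command(join_cmd)
        stdout.channel.recv_exit_status()          # wait for the join command to finish
    finally:
        ssh.close()
    # CLI on the control side: apply the label allocated by the node resource pool manager,
    # assuming the node registers under its IP address (e.g. node_label = "pool=autoscaled")
    subprocess.run(["kubectl", "label", "node", node_ip, node_label], check=True)
```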
The node information may include the node IP address, node labels, and the like.
In step S3, after the step of adding the first virtual host to the target cluster, the method further includes: step S303, persisting the node information and storing it in the persistent information table.
The node information is saved locally as a persistent information file through a file database such as BoltDB or SQLite, or as a file in a format such as JSON, CSV, or XML, and stored in the persistent information table.
FIG. 5 is a flowchart of a cluster scaling method provided by the present disclosure. The method shown in FIG. 5 is proposed on the basis of the method shown in FIG. 1; for steps S1 to S4, refer to the description above. As shown in FIG. 5, after step S4 of controlling the cloud platform to remove the second virtual host from the target cluster, the method may further include step S5 and step S6.
In step S5, the second virtual host is removed from the node resource pool, and the second virtual host is deregistered on the node resource pool manager.
The deregistration of the second virtual host is performed through a RESTful interface.
In step S6, the node information corresponding to the second virtual host is queried from the persistent information table, and the cloud platform is controlled to reclaim the second virtual host.
In some implementations, the resource information corresponding to the second virtual host is also queried from the persistent information table, and the cloud platform is controlled to delete the virtualized resources corresponding to the second virtual host.
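A rough sketch of this scale-down path under the same assumptions: the node is drained and deleted through kubectl, and the cloud platform is then asked to reclaim the host and its resources through a placeholder client, since the disclosure does not name a specific cloud platform API:

```python
import subprocess


def scale_in(node_name, cloud_client, persistent_info):
    """Remove the most recently added node, then have the cloud platform reclaim it."""
    # remove the second virtual host from the target cluster (CLI manner)
    subprocess.run(["kubectl", "drain", node_name,
                    "--ignore-daemonsets", "--delete-emptydir-data"], check=True)
    subprocess.run(["kubectl", "delete", "node", node_name], check=True)
    # look up the node and resource records persisted earlier, then reclaim them
    record = persistent_info[node_name]                 # assumed persistent information table
    cloud_client.delete_server(record["server_id"])     # hypothetical cloud platform call
    for volume_id in record.get("volumes", []):
        cloud_client.delete_volume(volume_id)           # hypothetical cloud platform call
```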
The present disclosure provides a cluster scaling method that can be used to correspondingly scale the node resource pool.
The cluster scaling method provided by the present disclosure is described in detail below in connection with a practical application.
FIG. 6 is a structural block diagram of a cluster scaling system provided by the present disclosure. As shown in FIG. 6, the scaling system includes a scaling control terminal and a cloud platform. The scaling control terminal includes a performance data collector, a performance data storage, a decision maker, a policy model, and an executor; the cloud platform can create and manage multiple virtual hosts; the corresponding target cluster is a Kubernetes cluster that exists in a legacy Kubernetes system, runs on bare metal at its bottom layer, and includes multiple nodes.
First, the performance data collector collects the cluster performance data in a non-intrusive manner and transmits it to the performance data storage, where it is stored as historical data.
FIG. 7 is a structural block diagram of a performance data collector provided by the present disclosure. As shown in FIG. 7, the performance data collector includes a data aggregation module, a data cleaning module, a business collection module, a Kubernetes collection module, an operating system collection module, and a collection task management module.
The collection task management module is configured to construct collection tasks according to real-time user configuration or preset task specifications, and to manage and distribute the collection tasks of the business collection module, the Kubernetes collection module, and the operating system collection module; the business collection module is configured to collect the cluster's business performance data and application logs by accessing the cluster's business services; the Kubernetes collection module is configured to collect the cluster's Kubernetes performance data; the operating system collection module is configured to collect the cluster's operating system performance data through the remote terminal protocol (Telnet), Secure Shell (SSH), secure file transfer protocol (SFTP), or RESTful interface manner; the data cleaning module is configured to clean the collected real-time performance data; and the data aggregation module is configured to aggregate the cleaned real-time performance data to generate the performance data.
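As an illustration of how the collection task management module might dispatch work to the three collection modules (the collect(spec) interface and the task specification fields are assumptions):

```python
class CollectionTaskManager:
    """Builds collection tasks from a task specification and dispatches them to the modules."""

    def __init__(self, os_module, k8s_module, business_module):
        self.modules = {
            "os": os_module,
            "kubernetes": k8s_module,
            "business": business_module,
        }

    def run(self, task_specs):
        results = []
        for spec in task_specs:                    # e.g. {"module": "os", "interval": 300}
            collector = self.modules[spec["module"]]
            results.extend(collector.collect(spec))
        return results
```

The cleaned and aggregated output of these tasks is what the performance data storage keeps as historical data.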
Referring back to FIG. 6, the decision maker judges, according to the performance data and the scaling strategy generated by the policy model, whether the cluster needs to be scaled; finally, the executor performs the scaling of the cluster according to the decision result of the decision maker. When it is determined that the cluster needs to be expanded, the executor controls the cloud platform to create a virtual host or designates an already-created idle virtual host, then adds the virtual host to the node resource pool and, after completing the registration on the node resource pool manager, adds the virtual host to the cluster to achieve expansion. When it is determined that the cluster needs to be scaled down, the executor controls the cloud platform to remove the virtual host that was added to the cluster last, removes it from the node resource pool at the same time, and deregisters it on the node resource pool manager to achieve scale-down.
The present disclosure further provides a scaling control terminal, including: one or more processors; and a storage device configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any method in the above embodiments.
The present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processor, implements any method in the above embodiments.
The present disclosure further provides a cluster scaling system, including a scaling control terminal and a cloud platform, where the scaling control terminal is the scaling control terminal in the above embodiments.
The present disclosure provides a cluster scaling method, a scaling control terminal, a computer-readable medium, and a cluster scaling system. The cluster scaling method may be applied to the scaling control terminal: scaling decisions are made according to the acquired performance data, and virtualization capabilities are used to create or remove the corresponding nodes on the cloud platform, achieving dynamic scaling of the cluster and solving the problem that a cluster cannot be adaptively scaled according to business changes. The method is also applicable to legacy systems that run on bare metal and lack dynamic expansion capability, allowing them to be scaled accordingly.
Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units in the apparatuses, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, it is well known to those of ordinary skill in the art that communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to those skilled in the art, features, characteristics, and/or elements described in connection with a particular embodiment may be used alone, or may be used in combination with features, characteristics, and/or elements described in connection with other embodiments, unless expressly stated otherwise. Accordingly, those skilled in the art will understand that various changes in form and details may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Claims (10)

  1. A cluster scaling method, comprising:
    acquiring performance data of a target cluster;
    judging, according to the performance data, whether the target cluster needs to be expanded or scaled down;
    when it is determined that the target cluster needs to be expanded, controlling a cloud platform to create a first virtual host, and adding the first virtual host to the target cluster; and
    when it is determined that the target cluster needs to be scaled down, controlling the cloud platform to remove a second virtual host from the target cluster.
  2. The method according to claim 1, wherein the target cluster is a Kubernetes cluster, and the step of acquiring the performance data of the target cluster comprises:
    acquiring real-time performance data of the target cluster, the real-time performance data comprising operating system performance data, Kubernetes performance data, and business performance data; and
    cleaning and aggregating the real-time performance data to generate the performance data.
  3. The method according to claim 2, wherein the real-time performance data is acquired in a non-intrusive manner; and
    the step of acquiring the real-time performance data of the target cluster comprises:
    acquiring the operating system performance data in a first non-intrusive manner, the first non-intrusive manner comprising at least one of a remote terminal protocol manner, a Secure Shell protocol manner, a secure file transfer protocol manner, a File Transfer Protocol manner, and a RESTful interface manner;
    acquiring the Kubernetes performance data in a second non-intrusive manner, the second non-intrusive manner comprising at least one of a RESTful interface manner and a command-line interface manner; and
    acquiring the business performance data from a data storage volume.
  4. The method according to claim 1, wherein before the step of adding the first virtual host to the target cluster, the method further comprises:
    adding the first virtual host to a node resource pool, and registering the first virtual host on a node resource pool manager; and
    the step of adding the first virtual host to the target cluster comprises:
    adding the first virtual host to the target cluster through a RESTful interface or a command-line interface according to node information allocated by the node resource pool manager after the registration is completed.
  5. The method according to claim 4, further comprising:
    persisting the node information and storing it in a persistent information table.
  6. The method according to claim 5, wherein after the step of controlling the cloud platform to remove the second virtual host from the target cluster, the method further comprises:
    removing the second virtual host from the node resource pool, and deregistering the second virtual host on the node resource pool manager; and
    querying, from the persistent information table, the node information corresponding to the second virtual host, and controlling the cloud platform to reclaim the second virtual host.
  7. The method according to claim 1, wherein the second virtual host is the virtual host most recently added to the target cluster.
  8. A scaling control terminal, comprising:
    one or more processors; and
    a storage device configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 7.
  9. A computer-readable medium storing a computer program, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 7.
  10. A cluster scaling system, comprising a scaling control terminal and a cloud platform,
    wherein the scaling control terminal is the scaling control terminal according to claim 8.
PCT/CN2021/099171 2020-06-24 2021-06-09 集群的缩扩容方法及系统、缩扩容控制终端和介质 WO2021259064A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022577666A JP2023530996A (ja) 2020-06-24 2021-06-09 クラスタの容量縮小・拡張方法及びシステム、容量縮小・拡張制御端末、及び媒体
EP21828729.0A EP4167085A4 (en) 2020-06-24 2021-06-09 METHOD AND SYSTEM FOR REDUCTION/INCREASE OF CAPACITY OF A CLUSTER, TERMINAL FOR CONTROLLING CAPACITY REDUCTION/INCREASE AND MEDIUM
US18/011,667 US20230236866A1 (en) 2020-06-24 2021-06-09 Capacity reduction and capacity expansion method and system for cluster, capacity reduction and capacity expansion control terminal, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010589248.1A CN113835824A (zh) 2020-06-24 2020-06-24 集群的缩扩容方法及系统、缩扩容控制终端和介质
CN202010589248.1 2020-06-24

Publications (1)

Publication Number Publication Date
WO2021259064A1 true WO2021259064A1 (zh) 2021-12-30

Family

ID=78964667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/099171 WO2021259064A1 (zh) 2020-06-24 2021-06-09 集群的缩扩容方法及系统、缩扩容控制终端和介质

Country Status (5)

Country Link
US (1) US20230236866A1 (zh)
EP (1) EP4167085A4 (zh)
JP (1) JP2023530996A (zh)
CN (1) CN113835824A (zh)
WO (1) WO2021259064A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361281A (zh) * 2022-08-19 2022-11-18 浙江极氪智能科技有限公司 一种多个云集群节点扩容的处理方法、装置、设备及介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108469989A (zh) * 2018-03-13 2018-08-31 广州西麦科技股份有限公司 一种基于集群性能的反馈式自动扩缩容方法及系统
CN109062666A (zh) * 2018-07-27 2018-12-21 浪潮电子信息产业股份有限公司 一种虚拟机集群管理方法及相关装置
CN109739549A (zh) * 2018-12-28 2019-05-10 武汉长光科技有限公司 一种基于微服务的设备性能采集方法
CN110401695A (zh) * 2019-06-12 2019-11-01 北京因特睿软件有限公司 云资源动态调度方法、装置和设备
CN110557267A (zh) * 2018-05-30 2019-12-10 中国移动通信集团浙江有限公司 基于网络功能虚拟化nfv的容量修改方法及装置
EP3584998A1 (en) * 2017-03-24 2019-12-25 Huawei Technologies Co., Ltd. Method for virtual machine capacity expansion and reduction and virtual management device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10079711B1 (en) * 2014-08-20 2018-09-18 Pure Storage, Inc. Virtual file server with preserved MAC address

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3584998A1 (en) * 2017-03-24 2019-12-25 Huawei Technologies Co., Ltd. Method for virtual machine capacity expansion and reduction and virtual management device
CN108469989A (zh) * 2018-03-13 2018-08-31 广州西麦科技股份有限公司 一种基于集群性能的反馈式自动扩缩容方法及系统
CN110557267A (zh) * 2018-05-30 2019-12-10 中国移动通信集团浙江有限公司 基于网络功能虚拟化nfv的容量修改方法及装置
CN109062666A (zh) * 2018-07-27 2018-12-21 浪潮电子信息产业股份有限公司 一种虚拟机集群管理方法及相关装置
CN109739549A (zh) * 2018-12-28 2019-05-10 武汉长光科技有限公司 一种基于微服务的设备性能采集方法
CN110401695A (zh) * 2019-06-12 2019-11-01 北京因特睿软件有限公司 云资源动态调度方法、装置和设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4167085A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361281A (zh) * 2022-08-19 2022-11-18 浙江极氪智能科技有限公司 一种多个云集群节点扩容的处理方法、装置、设备及介质
CN115361281B (zh) * 2022-08-19 2023-09-22 浙江极氪智能科技有限公司 一种多个云集群节点扩容的处理方法、装置、设备及介质

Also Published As

Publication number Publication date
CN113835824A (zh) 2021-12-24
US20230236866A1 (en) 2023-07-27
JP2023530996A (ja) 2023-07-20
EP4167085A4 (en) 2023-11-22
EP4167085A1 (en) 2023-04-19

Similar Documents

Publication Publication Date Title
KR101638436B1 (ko) 클라우드 스토리지 및 그의 관리 방법
JP4462969B2 (ja) フェイルオーバクラスタシステム及びフェイルオーバ方法
US11086898B2 (en) Token-based admission control for replicated writes
CN103763383B (zh) 一体化云存储系统及其存储方法
CN111522636B (zh) 应用容器的调整方法、调整系统、计算机可读介质及终端设备
US20160283129A1 (en) Method, apparatus, and system for calculating identification threshold to distinguish cold data and hot data
WO2022007552A1 (zh) 处理节点的管理方法、配置方法及相关装置
US9684467B2 (en) Management of pinned storage in flash based on flash-to-disk capacity ratio
CN102882909B (zh) 云计算服务监控系统及方法
US20130024421A1 (en) File storage system for transferring file to remote archive system
CN108848170B (zh) 一种基于nagios监控的雾集群管理系统与方法
CN103532731A (zh) 一种防止虚拟机网络配置丢失的方法和装置
WO2021259064A1 (zh) 集群的缩扩容方法及系统、缩扩容控制终端和介质
WO2014114089A1 (zh) 分布式文件系统优化负载均衡的方法及系统
US10235062B1 (en) Selection of computer resources to perform file operations in a power-efficient manner
CN108200151B (zh) 一种分布式存储系统中ISCSI Target负载均衡方法和装置
CN102904917A (zh) 海量图片的处理系统及其方法
CN114296909A (zh) 一种根据kubernetes事件的节点自动扩容缩容方法及系统
US8819481B2 (en) Managing storage providers in a clustered appliance environment
US20230205638A1 (en) Active-active storage system and data processing method thereof
US9836329B2 (en) Decentralized processing of worker threads
CN116112569A (zh) 微服务调度方法及管理系统
CN115604294A (zh) 一种管理存储资源的方法及装置
US11537634B2 (en) Methods for hierarchical propagation in tree structures and devices thereof
JP2007257645A (ja) 資産情報の一元管理を行うコンピュータシステム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21828729

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022577666

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021828729

Country of ref document: EP

Effective date: 20230111

NENP Non-entry into the national phase

Ref country code: DE