CN115129423A - Resource management method, device, equipment and storage medium

Resource management method, device, equipment and storage medium

Info

Publication number
CN115129423A
CN115129423A
Authority
CN
China
Prior art keywords
service
resource
target
resource pool
application
Prior art date
Legal status
Pending
Application number
CN202210734666.4A
Other languages
Chinese (zh)
Inventor
钱存峰
Current Assignee
Shanghai Envision Innovation Intelligent Technology Co Ltd
Envision Digital International Pte Ltd
Original Assignee
Shanghai Envision Innovation Intelligent Technology Co Ltd
Envision Digital International Pte Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Envision Innovation Intelligent Technology Co Ltd, Envision Digital International Pte Ltd filed Critical Shanghai Envision Innovation Intelligent Technology Co Ltd
Priority to CN202210734666.4A priority Critical patent/CN115129423A/en
Publication of CN115129423A publication Critical patent/CN115129423A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a resource management method, apparatus, device, and storage medium, and relates to the field of computer technologies. The method comprises the following steps: acquiring service demand information, where the service demand information is used to indicate the resource demand condition of a target service; creating a target resource pool for the target service based on the service demand information, where the target resource pool is used to provide the business service of the target service; establishing a connection relationship between the target resource pool and at least two service nodes, where the service nodes are hosts of application instances for running the target service; and adding the service nodes corresponding to the target resource pool to an application orchestration cluster, where the application orchestration cluster is used to perform logic management on application instances from the service dimension. That is, a corresponding resource pool is created according to the service requirement, so that the application orchestration cluster performs logic management on the application instances corresponding to the service from the service dimension, thereby scheduling complex services and improving the management efficiency of the application orchestration cluster for service applications.

Description

Resource management method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for resource management.
Background
Kubernetes (K8s) is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. A K8s cluster consists of a set of machines called nodes, on which the containerized applications managed by K8s run.
The native scheduling of K8s includes scheduling resource units (Pods) to run on nodes with sufficient resources, spreading Pods across different nodes to balance cluster node resources, scheduling Pods onto a specified range of worker nodes based on label configurations, keeping Pods off specified nodes based on anti-affinity, and so on.
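For reference, the label-based scheduling and anti-affinity mentioned above can be sketched with standard Kubernetes commands; the node label key disktype and the Pod label app=web below are illustrative values, not part of this application:
# Label a worker node so that Pods can be restricted to it (illustrative label key/value)
kubectl label node worker-1 disktype=ssd
# Create a Pod that is scheduled only onto nodes carrying that label,
# and that is kept apart from other Pods of the same app via anti-affinity
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
spec:
  nodeSelector:
    disktype: ssd
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: nginx
EOF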
However, scheduling the resources of complex services within the same cluster has limitations. For example, different services require different hardware resource ratios, different node packages, and service applications in different regions, and the native scheduling cannot meet the requirements of such upper-layer complex services. That is, resource management based on the native scheduling of K8s has certain limitations.
Disclosure of Invention
The embodiments of the application provide a resource management method, apparatus, device, and storage medium, which can extend the scheduling of an application orchestration cluster to complex services. The technical scheme is as follows:
In one aspect, a resource management method is provided, the method comprising:
acquiring service demand information, wherein the service demand information is used for indicating the resource demand condition of a target service;
creating a target resource pool for the target service based on the service demand information, wherein the target resource pool is used for providing service of the target service;
establishing a connection relation between the target resource pool and at least two service nodes, wherein the service nodes are hosts of application instances for running the target service;
and adding the service nodes corresponding to the target resource pool into an application orchestration cluster, wherein the application orchestration cluster is used for carrying out logic management on application instances from a service dimension.
In another aspect, an apparatus for resource management is provided, the apparatus comprising:
an acquisition module, configured to acquire service demand information, wherein the service demand information is used for indicating the resource demand condition of a target service;
a creating module, configured to create a target resource pool for the target service based on the service demand information, where the target resource pool is used to provide a service of the target service;
an establishing module, configured to establish a connection relation between the target resource pool and at least two service nodes, wherein the service nodes are hosts of application instances for running the target service;
and a deployment module, configured to add the service nodes corresponding to the target resource pool into an application orchestration cluster, wherein the application orchestration cluster is used for carrying out logic management on application instances from a service dimension.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the resource management method according to any of the embodiments of the present application.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the resource management method described in any of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the resource management method described in any of the above embodiments.
The technical scheme provided by the application at least comprises the following beneficial effects:
and establishing a corresponding resource pool according to the resource demand condition of the service, establishing a connection relation between the resource pool and a host of the application example for running the service, and then carrying out logic management on the application example corresponding to the service from the service dimension by using the application arranging cluster, thereby realizing the scheduling of complex service and improving the management efficiency of the application arranging cluster on service application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a resource management method provided in an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a control module provided in an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a resource management method provided by another exemplary embodiment of the present application;
FIG. 5 is a block diagram of a business resource application framework provided by an exemplary embodiment of the present application;
FIG. 6 is a business model diagram provided by an exemplary embodiment of the present application;
FIG. 7 is a block diagram of a service resource application framework provided by another exemplary embodiment of the present application;
FIG. 8 is a block diagram of a resource management device according to an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a resource management device according to another exemplary embodiment of the present application;
fig. 10 is a schematic structural diagram of a server according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
kubernets (K8s) is a portable, extensible, open source platform for managing containerized workloads and services that facilitates declarative configuration and automation. The K8s realizes application deployment through containers, each container is isolated from each other, each container has a file system, processes between the containers cannot influence each other, and computing resources can be distinguished. The container can be deployed quickly, migration can be achieved among operating systems of different clouds and different versions due to the fact that the container is decoupled with underlying facilities and a machine file system, and application in the cloud platform can be better monitored and managed through the K8 s.
A container: the System is provided with a file System, a Central Processing Unit (CPU), a memory, a process control and the like, is separated from a basic framework, can be transplanted across cloud and OS (Operating System) release versions, has a widened isolation attribute, and can share the OS among application programs.
In conjunction with the above term explanations, the implementation environment of the present application is described schematically. Referring to fig. 1, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown. The implementation environment includes: a control end 110, a deployment end 120, and a communication network 130.
The control end 110 is a terminal device for controlling the deployment end 120 to perform service deployment. The control terminal 110 includes various types of terminal devices such as a mobile phone, a tablet computer, a desktop computer, and a laptop computer. Illustratively, the control end 110 is provided with a target application capable of implementing deployment and application call of the K8s application cluster, where the target application may be an independent application program, a web application, or an applet in a host program, and is not limited herein.
The deployment end 120 is used for providing the backend service of K8s application cluster deployment. The deployment end 120 receives the service deployment request from the control end 110, creates a corresponding resource pool according to the service deployment request, connects at least two service nodes to the resource pool, adds the service nodes to the K8s application cluster, and provides the service resources when the control end 110 or another terminal applies for them.
It should be noted that the deployment end 120 may be disposed in a server, where the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Cloud Technology is a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data. Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied in the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support: background services of technical network systems, such as video websites, picture websites, and other web portals, require a large amount of computing and storage resources. With the development of the internet industry, each item of data may have its own identification mark and need to be transmitted to a background system for logic processing; data of different levels are processed separately, and all kinds of industry data require strong backend system support, which can only be realized through cloud computing.
In some embodiments, the servers described above may also be implemented as nodes in a blockchain system. The Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain, which is essentially a decentralized database, is a chain of data blocks associated using cryptography; each data block contains information about a batch of network transactions and is used to verify the validity of that information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform may comprise processing modules such as user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintenance of public/private key generation (account management), key management, and maintenance of the correspondence between users' real identities and blockchain addresses (authority management), and, when authorized, supervises and audits the transactions of certain real identities and provides rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices and is used to verify the validity of service requests and to record valid requests to storage after consensus on them is completed; for a new service request, the basic service first performs interface adaptation analysis and authentication processing (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication) after encryption, and records and stores it. The smart contract module is responsible for registering, issuing, triggering, and executing contracts; developers can define contract logic through a programming language, publish it to the blockchain (contract registration), trigger execution by keys or other events according to the logic of the contract clauses to complete the contract logic, and also upgrade or cancel contracts. The operation monitoring module is mainly responsible for deployment, configuration modification, and contract setting during product release, cloud adaptation, and the visual output of real-time status during product operation.
Illustratively, the control end 110 and the deployment end 120 are connected via a communication network 130.
Referring to fig. 2, a resource management method according to an embodiment of the present application is shown. In this embodiment, the method is applied to the deployment end shown in fig. 1, and the method includes:
step 201, obtaining service requirement information.
The service requirement information is used for indicating the resource requirement condition of the target service.
In some embodiments, the service requirement information is carried in a service deployment request, and the service deployment request is sent to the deployment end by the control end.
In some embodiments, the service requirement information includes at least one of hardware resource requirements, node package requirements, region information, user quantity information, and the like.
The hardware resource requirement is used to indicate resource requirement information for hardware such as the CPU, memory, external storage, and Input/Output (I/O) devices, as well as resource allocation information among multiple pieces of hardware, such as the CPU-to-memory (CPU-Mem) allocation ratio.
The node package requirement is used to indicate node requirement information for the application instances running the target service, such as at least one of node quantity requirement information, node size requirement information, and node function combination information. An application instance indicates an actual use process of the application: for example, when a user needs to complete a certain function through the application orchestration cluster, the containerized application (resource unit, Pod) corresponding to that function needs to run, and each run corresponds to one application instance.
The region information is used to indicate the region requirement information of the host for running the application instance, and illustratively, the region information may be determined according to the location information of the control end, or may be determined according to the location information of the application resource calling end indicated by the control end.
The user quantity information is used for indicating the quantity condition of the application resource calling end, and the deployment end can determine the node quantity deployed for the target service according to the user quantity information so as to ensure the reasonable distribution of the application resource.
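As an illustration only, a service deployment request carrying such service requirement information could be expressed roughly as in the sketch below; the field names (hardware, nodePackage, region, userCount) and their values are hypothetical and are not prescribed by this application:
# Hypothetical service-deployment request prepared by the control end (field names are illustrative)
cat <<EOF > service-a-request.yaml
service: service-a
hardware:
  cpu: "32"          # total vCPU demand
  memory: 128Gi      # total memory demand
  cpuMemRatio: "1:4" # CPU-to-memory allocation ratio
nodePackage:
  count: 3           # number of service nodes
  size: 8c32g        # per-node specification
region: cn-shanghai  # region of the hosts running the application instances
userCount: 5000      # expected number of application resource callers
EOF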
Step 202, a target resource pool is created for the target service based on the service requirement information.
The target resource pool is used for providing business services of the target business.
In some embodiments, the deployment end creates a target resource pool for the target service, and in one example, the creation of the target resource pool is implemented by creating a corresponding resource pool identifier for the target resource pool, where the resource pool identifier is used to uniquely identify the target resource pool.
In some embodiments, the deployment terminal determines information such as the number of service nodes, the functions of the service nodes, the distribution of the service nodes, and the like required by the target resource pool according to the service requirement information in the target service.
Illustratively, a service node function is determined according to the node package requirement, where the service node function is used to indicate a function of an application instance implementing the target service, so that service nodes with the corresponding function are determined from a service node library according to the service node function. A service node is a host of an application instance for running the target service, and the host is the hardware environment provided when a containerized application runs the application instance; for example, a service node is a Node in K8s. In one example, the service requirement information carries a function identifier corresponding to the target service, and the deployment end queries the service node library according to the function identifier to obtain at least two service nodes, where the functions implemented by the at least two service nodes may be the same or different, which is not limited herein.
Illustratively, the number of service nodes is determined according to at least one of the hardware resource requirement and the user number information. In one example, the deployment end determines the total number of service nodes in the target resource pool and the number of service nodes of each function to determine the resource pool size of the target resource pool.
Step 203, establishing a connection relationship between the target resource pool and at least two service nodes.
After the deployment end determines at least two service nodes according to the service demand information, a connection relation needs to be established between the target resource pool and the at least two service nodes.
Optionally, at least two service nodes are determined based on the service demand information; and resource pool labels corresponding to the target resource pool are added to the at least two service nodes through a first label marking command, where nodes with the same label attribute of the resource pool label belong to the same resource pool. In one example, a target resource pool pool-a is created for service A; taking service node node-1 as an example, the first label marking command is as follows:
# Node Add Label Command
kubectl label node node-1 pool.kubernetes.io/pool-a="true"
In some embodiments, at least two service nodes are determined based on the service demand information; node identifiers corresponding to the at least two service nodes are acquired; and a mapping list is generated from the node identifiers and the resource pool identifier of the target resource pool. That is, the mapping list is used to record the correspondence between service nodes and resource pools.
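A minimal sketch of assembling such a mapping list from the labeled nodes, assuming the pool.kubernetes.io/pool-a label from the example above, could be:
# List the node identifiers that carry the pool-a label and pair them with the pool identifier
kubectl get nodes -l pool.kubernetes.io/pool-a="true" -o jsonpath='{.items[*].metadata.name}'
# Example output (illustrative): node-1 node-2 node-3
# A mapping-list entry could then be recorded as: pool-a -> [node-1, node-2, node-3]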
Step 204, adding the service nodes corresponding to the target resource pool to the application orchestration cluster.
The application orchestration cluster described above is used to logically manage application instances from the business dimension. The application orchestration cluster may be a K8s cluster, or may be another cluster capable of implementing application management, which is not limited herein.
Schematically, fig. 3 shows a schematic diagram of a control module 300 in a K8s cluster. The control module 300 includes a first resource pool 310 corresponding to service A and a second resource pool 320 corresponding to service B, where the first resource pool 310 includes three service nodes, Node-1, Node-2, and Node-3, and the second resource pool 320 includes three service nodes, Node-4, Node-5, and Node-6; that is, the first resource pool 310 and the second resource pool 320 may select nodes of different packages. The service nodes in the first resource pool 310 are all marked with a first resource pool label 311, and the service nodes in the second resource pool 320 are all marked with a second resource pool label 321.
To sum up, the resource management method provided in the embodiment of the present application creates a corresponding resource pool according to the resource requirement condition of the service, and establishes a connection relationship between the resource pool and the host of the application instance for running the service, and then the application orchestration cluster can perform logic management on the application instance corresponding to the service from the service dimension, thereby implementing scheduling of the complex service and improving the management efficiency of the application orchestration cluster on the service application.
Referring to fig. 4, a resource management method according to an embodiment of the present application is shown, where in the embodiment of the present application, a process of applying for a service resource after a target resource pool is created is schematically described, and the method includes:
step 401, receiving a service resource application request.
The service resource application request is used for applying for the resources of the target service.
Illustratively, steps 401 to 403 are performed after steps 201 to 204, that is, after the creation of the target resource pool is completed and the service nodes in the target resource pool have been added to the application orchestration cluster, the control end or another terminal may apply to the application orchestration cluster for the resources of the target service through the service resource application request, so as to implement the business service of the target service.
In some embodiments, the deployment end further includes a resource Pool Management Service (Pool Management Service) module, where the resource Pool Management Service module is configured to manage resource information of the target resource Pool. Illustratively, after step 204, the deployment end connects the target resource pool to the resource pool management service module.
In some embodiments, the resource pool information is provided by the resource pool management service module to a service layer, where the service layer may be a control end or other terminal, or may be a gateway server between the control end or other terminal and a deployment end, which is not limited herein.
Illustratively, a service resource application request sent by a service layer is received, in some embodiments, the service resource application request includes a service attribute of a target service, and the target service is a service combination determined by the service layer according to resource pool information and used for implementing tenant requirements.
Optionally, the resource pool information includes static information and dynamic information. The static information includes at least one of resource pool limitation amount information and total hardware resource information, where the resource pool limitation amount information is used to limit the request amount served by the target resource pool, for example Request & Limit information, and the total hardware resource information is used to indicate the total hardware resources that the target resource pool can provide. The dynamic information includes at least one of resource pool request amount information, resource pool metering information, resource pool real-time operation information, resource pool oversell information, and hardware resource information. The resource pool request amount information may be determined according to the number of application service requests processed by the target resource pool. The resource pool metering information is used to determine CPU/memory usage. The resource pool real-time operation information may be used to determine which service nodes in the target resource pool are in a running state. The resource pool oversell information is used to determine whether the target resource pool is in an oversold state. The hardware resource information includes at least one of hardware-related information such as the hardware resource scheduling condition on the service nodes, Graphics Processing Unit (GPU) usage, and memory usage.
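The application does not prescribe how the resource pool limitation amount is enforced; one possible way to express a Request & Limit amount for a pool in Kubernetes terms is a ResourceQuota on the business namespace, sketched below, where the namespace name ns-a1 (taken from the business model in fig. 6) and the quota values are assumptions:
# Hypothetical quota that caps the total requests and limits of Pods in the business namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pool-a-quota
  namespace: ns-a1
spec:
  hard:
    requests.cpu: "24"
    requests.memory: 96Gi
    limits.cpu: "32"
    limits.memory: 128Gi
EOF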
In some embodiments, the deployment end further includes an Application Programming Interface Server (API Server), which provides create, read, update, and delete operations for K8s resource objects and serves as the data bus of the entire system, where the resource objects include, but are not limited to, Pods, Replication Controllers (RC), and the like.
Schematically, fig. 5 shows a schematic diagram of a service resource application framework provided by an exemplary embodiment, where a deployment end 510 includes a resource pool management service module 511 and a control module (Kubernetes Master) 512, the control module 512 further includes an API Server 513, and the control module 512 includes a resource pool A 514 and a resource pool B 515. A service layer 520 obtains resource pool information from the resource pool management service module 511, determines a service combination according to the resource pool information, and sends a service resource application request to the API Server 513 in the control module 512, so as to apply for the service resources.
Step 402, creating at least one service namespace corresponding to the target service, where the service namespace includes at least one resource unit.
The resource unit is used for implementing an application instance of the target service. In one example, the resource unit is a Pod in the K8s cluster.
In some embodiments, the resource unit corresponds to a namespace, where the namespace is used to isolate resources and provide a scope. For example, when a Pod is accessed through a Kubernetes Service, if the namespace of the Service is not specified correctly, the Pod cannot be associated with the Service through labels. Illustratively, Pods in the same namespace are all instances of the same business; a business may have multiple namespaces, but a namespace cannot belong to two businesses. For example, fig. 6 shows a schematic diagram of a service model, in which Node1 and Node2 belong to a resource pool A 610 corresponding to service A, Node3 belongs to a resource pool B 620 corresponding to service B, and namespace A1 (NS-A1) 611 and namespace A2 (NS-A2) 612 also belong to service A, so the Pods in these two namespaces can be dispatched to Node1 and/or Node2 in the resource pool A 610; namespace B1 (NS-B1) 621 belongs to service B, so the Pods in that namespace can be dispatched to Node3 in the resource pool B 620.
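Following the model in fig. 6, a sketch of creating the namespace NS-A1 for service A and running a Pod in it is shown below; the label key business.kubernetes.io/name is an assumption used only for illustration, and the dispatch of the Pod to the pool-a nodes is handled by the scheduling policy described in the following paragraphs rather than by this snippet:
# Create the namespace for service A and mark which business it belongs to (label key is illustrative)
kubectl create namespace ns-a1
kubectl label namespace ns-a1 business.kubernetes.io/name=service-a
# A Pod created in this namespace is an application instance of service A
kubectl run app-instance-1 --image=nginx -n ns-a1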
In some embodiments, the deployment terminal creates a corresponding namespace according to the service resource application request, and does not mark a namespace label corresponding to the namespace on the resource unit.
In some embodiments, the deployment end further includes an admission network callback (Admission Webhook) module. When the resource unit corresponds to a namespace label, the Admission Webhook intercepts the resource unit in the namespace and adds a scheduling policy (Patch) to the namespace label of the resource unit through a second label marking command, where the scheduling policy indicates that the resource unit is scheduled to the service nodes corresponding to the target service.
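The effect of such a scheduling policy can be illustrated as follows; this is a sketch of the result only, not the webhook implementation itself, and the JSON patch shown in the comment is merely one form such a Patch could take:
# JSON patch a scheduling policy could correspond to (illustrative):
#   [{"op": "add", "path": "/spec/nodeSelector", "value": {"pool.kubernetes.io/pool-a": "true"}}]
# Effective Pod spec after the patch is injected at admission time:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-instance-2
  namespace: ns-a1
spec:
  nodeSelector:
    pool.kubernetes.io/pool-a: "true"   # pins the Pod to the nodes of the target resource pool
  containers:
    - name: app
      image: nginx
EOF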
And step 403, scheduling the resource units in the service name space to the service nodes.
In some embodiments, the Pod is dispatched to the service node of the target resource pool by the API server directly according to the service resource application request.
In some embodiments, when the deployment end further includes an Admission Webhook, the Admission Webhook attaches a scheduling policy according to the namespace label to which the Pod belongs, so that the service resource is scheduled to a Node of the target resource pool; that is, the resource is created in the corresponding resource pool, and the scheduling attribute is set through the label attribute.
Fig. 7 shows a schematic diagram of a service resource application framework provided by another exemplary embodiment, where a control module (Kubernetes Master) 710 includes an Admission Webhook 711, an API Server 712, a resource pool A 713, and a resource pool B 714. A service layer 720 sends a service request 701 to the control module 710, where the service request 701 is used to create a Pod; the Admission Webhook 711 intercepts the service request, marks a corresponding scheduling policy 702 according to the namespace label to which the Pod created by the service request 701 belongs, and then sends the scheduling policy 702 to the API Server 712, and the API Server 712 schedules the Pod to a Node in the resource pool according to the service request 701.
To sum up, the resource management method provided in this embodiment of the present application creates a corresponding namespace according to a service resource application request corresponding to a target service, and schedules resources in the namespace to service nodes corresponding to a target resource pool, so as to implement application and creation processes of service resources, implement scheduling of complex services, and improve management efficiency of an application orchestration cluster on service applications.
Referring to fig. 8, a block diagram of a resource management device according to an exemplary embodiment of the present application is shown, where the device includes the following modules:
an obtaining module 810, configured to obtain service requirement information, where the service requirement information is used to indicate a resource requirement condition of a target service;
a creating module 820, configured to create a target resource pool for the target service based on the service demand information, where the target resource pool is used to provide a service of the target service;
an establishing module 830, configured to establish a connection relationship between the target resource pool and at least two service nodes, where the service nodes are hosts of application instances for running the target service;
a deployment module 840, configured to add the service node corresponding to the target resource pool to an application orchestration cluster, where the application orchestration cluster is configured to perform logic management on an application instance from a service dimension.
In some embodiments, as shown in FIG. 9, the creation module 820 further comprises:
a determining unit 821, configured to determine the at least two service nodes based on the service requirement information;
a first adding unit 822, configured to add resource pool labels corresponding to the target resource pool to the at least two service nodes through a first label marking command, where nodes with the same label attribute of the resource pool labels belong to the same resource pool.
In some embodiments, the obtaining module 810 is further configured to receive a service resource application request, where the service resource application request is used to apply for a resource of the target service;
the creating module 820 is further configured to create at least one service name space corresponding to the target service, where the service name space includes at least one resource unit, and the resource unit is used to complete an application instance of the target service;
the deployment module 840 is further configured to schedule the resource unit in the service namespace to the service node.
In some embodiments, the apparatus further comprises:
a sending module 850, configured to provide the resource pool information to the service layer through the resource pool management service module;
the obtaining module 810 is further configured to receive the service resource application request sent by the service layer, where the service resource application request includes a service attribute of the target service, and the target service is a service combination determined by the service layer according to the resource pool information and used for implementing tenant requirements.
In some embodiments, the establishing module 830 is further configured to connect the target resource pool to the resource pool management service module, where the resource pool management service module is configured to manage resource information of the target resource pool.
In some embodiments, the resource pool information includes at least one of resource pool request amount information, resource pool limit amount information, resource pool metering information, resource pool real-time operation information, resource pool over-sell information, and hardware resource information.
In some embodiments, the resource units correspond to namespace tags;
the establishing module 830 further comprises:
an intercepting unit 831, configured to intercept the resource unit in the namespace;
a second adding unit 832, configured to add, through a second tag marking command, a scheduling policy to the namespace tag of the resource unit, where the scheduling policy indicates that the resource unit is scheduled to a service node corresponding to the target service.
To sum up, the resource management method provided in the embodiment of the present application creates a corresponding resource pool according to the resource requirement condition of the service, and establishes a connection relationship between the resource pool and the host of the application instance for running the service, and then the application orchestration cluster can perform logic management on the application instance corresponding to the service from the service dimension, thereby implementing scheduling of the complex service and improving the management efficiency of the application orchestration cluster on the service application.
It should be noted that: the resource management apparatus provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the resource management apparatus and the resource management method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 10 shows a schematic structural diagram of a server provided in an exemplary embodiment of the present application. Specifically, the server includes the following components.
The server 1000 includes a Central Processing Unit (CPU) 1001, a system Memory 1004 including a Random Access Memory (RAM) 1002 and a Read Only Memory (ROM) 1003, and a system bus 1005 connecting the system Memory 1004 and the Central Processing Unit 1001. The server 1000 also includes a mass storage device 1006 for storing an operating system 1013, application programs 1014, and other program modules 1015.
The mass storage device 1006 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1006 and its associated computer-readable media provide non-volatile storage for the server 1000. That is, the mass storage device 1006 may include a computer-readable medium (not shown) such as a hard disk or Compact disk Read Only Memory (CD-ROM) drive.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 1004 and mass storage device 1006 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 1000 may also be operated through a remote computer connected via a network such as the Internet. That is, the server 1000 may be connected to the network 1012 through a network interface unit 1011 connected to the system bus 1005, or the network interface unit 1011 may be used to connect to another type of network or a remote computer system (not shown).
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.
Embodiments of the present application further provide a computer device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the resource management method provided by the above method embodiments. Optionally, the computer device may be a terminal or a server.
Embodiments of the present application further provide a computer-readable storage medium having at least one instruction, at least one program, a code set, or an instruction set stored thereon, which is loaded and executed by a processor to implement the resource management method provided by the above method embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the resource management method described in any of the above embodiments.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for resource management, the method comprising:
acquiring service demand information, wherein the service demand information is used for indicating the resource demand condition of a target service;
creating a target resource pool for the target service based on the service demand information, wherein the target resource pool is used for providing service of the target service;
establishing a connection relation between the target resource pool and at least two service nodes, wherein the service nodes are hosts of application instances for running the target service;
and adding the service node corresponding to the target resource pool into an application orchestration cluster, wherein the application orchestration cluster is used for carrying out logic management on an application instance from a service dimension.
2. The method of claim 1, wherein the creating a target resource pool for the target service based on the service demand information comprises:
determining the at least two service nodes based on the service demand information;
and adding resource pool labels corresponding to the target resource pool to the at least two service nodes through a first label marking command, wherein the nodes with the same label attribute of the resource pool labels belong to the same resource pool.
3. The method according to any of claims 1 to 2, wherein after adding the service node corresponding to the target resource pool to the application orchestration cluster, further comprising:
receiving a service resource application request, wherein the service resource application request is used for applying for the resources of the target service;
creating at least one service namespace corresponding to the target service, wherein the service namespace comprises at least one resource unit, and the resource unit is used for implementing an application instance of the target service;
and scheduling the resource units in the service namespace to the service nodes.
4. The method of claim 3, wherein the receiving a service resource application request comprises:
providing resource pool information to a service layer through a resource pool management service module;
and receiving the service resource application request sent by the service layer, wherein the service resource application request comprises the service attribute of the target service, and the target service is a service combination which is determined by the service layer according to the resource pool information and is used for realizing the tenant requirement.
5. The method of claim 4, further comprising:
and connecting the target resource pool to the resource pool management service module, wherein the resource pool management service module is used for managing the resource information of the target resource pool.
6. The method of claim 5, wherein the resource pool information comprises at least one of resource pool request amount information, resource pool limit amount information, resource pool metering information, resource pool real-time operation information, resource pool over-selling information, and hardware resource information.
7. The method of claim 3, wherein the resource unit corresponds to a namespace tag;
after the creating at least one namespace corresponding to the target service, the method further comprises:
intercepting the resource units in the namespace;
and adding a scheduling policy to the namespace label of the resource unit through a second label marking command, wherein the scheduling policy indicates that the resource unit is scheduled to a service node corresponding to the target service.
8. An apparatus for resource management, the apparatus comprising:
an acquisition module, configured to acquire service demand information, wherein the service demand information is used for indicating the resource demand condition of a target service;
a creating module, configured to create a target resource pool for the target service based on the service demand information, where the target resource pool is used to provide a service of the target service;
an establishing module, configured to establish a connection relation between the target resource pool and at least two service nodes, wherein the service nodes are hosts of application instances for running the target service;
and a deployment module, configured to add the service nodes corresponding to the target resource pool into an application orchestration cluster, wherein the application orchestration cluster is used for carrying out logic management on application instances from a service dimension.
9. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a method of resource management according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein at least one program code, the program code being loaded and executed by a processor to implement the resource management method according to any one of claims 1 to 7.
CN202210734666.4A 2022-06-27 2022-06-27 Resource management method, device, equipment and storage medium Pending CN115129423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210734666.4A CN115129423A (en) 2022-06-27 2022-06-27 Resource management method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210734666.4A CN115129423A (en) 2022-06-27 2022-06-27 Resource management method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115129423A true CN115129423A (en) 2022-09-30

Family

ID=83379708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210734666.4A Pending CN115129423A (en) 2022-06-27 2022-06-27 Resource management method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115129423A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115328663A (en) * 2022-10-10 2022-11-11 亚信科技(中国)有限公司 Method, device, equipment and storage medium for scheduling resources based on PaaS platform
CN116954822A (en) * 2023-07-26 2023-10-27 中科驭数(北京)科技有限公司 Container arranging system and use method thereof

Similar Documents

Publication Publication Date Title
US11700296B2 (en) Client-directed placement of remotely-configured service instances
US9432350B2 (en) System and method for intelligent workload management
US8065676B1 (en) Automated provisioning of virtual machines for a virtual machine buffer pool and production pool
US9965724B2 (en) System and method for determining fuzzy cause and effect relationships in an intelligent workload management system
US10102018B2 (en) Introspective application reporting to facilitate virtual machine movement between cloud hosts
US20120239825A1 (en) Intercloud Application Virtualization
US9270703B1 (en) Enhanced control-plane security for network-accessible services
AU2014209611B2 (en) Instance host configuration
CN109313564A (en) For supporting the server computer management system of the highly usable virtual desktop of multiple and different tenants
CN115129423A (en) Resource management method, device, equipment and storage medium
US8966025B2 (en) Instance configuration on remote platforms
CN103685608A (en) Method and device for automatically configuring IP (Internet Protocol) address of security virtual machine
US10182104B1 (en) Automatic propagation of resource attributes in a provider network according to propagation criteria
CN112256439B (en) Service directory dynamic updating system and method based on cloud computing resource pool
Grandinetti Pervasive cloud computing technologies: future outlooks and interdisciplinary perspectives: future outlooks and interdisciplinary perspectives
US20240160488A1 (en) Dynamic microservices allocation mechanism
Saravanakumar et al. An Efficient On-Demand Virtual Machine Migration in Cloud Using Common Deployment Model.
CN109286617B (en) Data processing method and related equipment
US11886921B2 (en) Serverless runtime container allocation
Feng et al. Elastic stream cloud (ESC): A stream-oriented cloud computing platform for Rich Internet Application
JP2024501005A (en) Management method and device for container clusters
Nagaprasad et al. Reviewing some platforms in cloud computing
WO2023274014A1 (en) Storage resource management method, apparatus, and system for container cluster
Guo Introduction to cloud computing
Pham et al. Autonomic fine-grained replication and migration at component level on multicloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination