WO2021227954A1 - Processing of application access requests based on a container cluster - Google Patents

Processing of application access requests based on a container cluster

Info

Publication number
WO2021227954A1
WO2021227954A1 (PCT/CN2021/092172)
Authority
WO
WIPO (PCT)
Prior art keywords
application
computing
memory area
node
trusted memory
Prior art date
Application number
PCT/CN2021/092172
Other languages
English (en)
French (fr)
Inventor
吴秉哲
陈超超
王力
Original Assignee
支付宝(杭州)信息技术有限公司 (Alipay (Hangzhou) Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay (Hangzhou) Information Technology Co., Ltd.
Publication of WO2021227954A1 publication Critical patent/WO2021227954A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45575Starting, stopping, suspending or resuming virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Definitions

  • One or more embodiments of this specification relate to the field of computer technology, and in particular to a method and device for processing application access requests based on a container cluster.
  • One or more embodiments of this specification describe a method and device for processing application access requests based on a container cluster, which can effectively process application access requests.
  • a method for processing application access requests based on a container cluster includes: receiving a user's access request for the first application; sending, to each of the partial computing nodes, a request for obtaining the usage amount of the trusted memory area; receiving the usage amount of the trusted memory area returned by each computing node; determining, based on the usage amounts, the remaining amount of the trusted memory area of each computing node; and, if the remaining amounts of the trusted memory areas of all of the computing nodes are less than a predetermined threshold, performing capacity expansion for the first application.
  • the expansion includes starting, in the trusted memory area of the memory of other computing nodes than the partial computing nodes, the newly created container corresponding to the first application; the access request is allocated to the other computing nodes, and the other computing nodes respond to the access request.
  • a method for processing application access requests based on a container cluster includes: receiving a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent when the master node receives a user's access request for the first application; obtaining the usage amount of the trusted memory area of the first computing node; and returning the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion including starting a newly created container corresponding to the first application in the trusted memory area of the memory of the other computing nodes, and so that the master node allocates the access request to the other computing nodes, which respond to the access request.
  • an apparatus for processing application access requests based on a container cluster includes: a receiving unit configured to receive a user's access request for the first application; a sending unit configured to send, to each of the partial computing nodes, a request for obtaining the usage amount of a trusted memory area; the receiving unit further configured to receive the usage amount of the trusted memory area returned by each computing node; a determining unit configured to determine, based on the usage amounts received by the receiving unit, the remaining amount of the trusted memory area of each computing node; a capacity expansion unit configured to perform capacity expansion for the first application if the remaining amounts of the trusted memory areas of the computing nodes determined by the determining unit are all less than a predetermined threshold, the expansion including starting a newly created container corresponding to the first application in the trusted memory area of the memory of other computing nodes than the partial computing nodes; and an allocating unit configured to allocate the access request received by the receiving unit to the other computing nodes, the other computing nodes responding to the access request.
  • an apparatus for processing application access requests based on a container cluster includes: a receiving unit configured to receive a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent by the master node when it receives a user's access request for the first application; an obtaining unit configured to obtain the usage amount of the trusted memory area of the first computing node; and a sending unit configured to return the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion including starting a newly created container corresponding to the first application in the trusted memory area of the memory of other computing nodes than the partial computing nodes, and so that the master node allocates the access request to the other computing nodes, which respond to the access request.
  • a computer storage medium is provided with a computer program stored thereon, and when the computer program is executed in a computer, the computer is caused to execute the method of the first aspect or the second aspect.
  • a computing device including a memory and a processor, the memory stores executable code, and the processor implements the method of the first aspect or the second aspect when the executable code is executed by the processor.
  • the container cluster-based application access request processing method and device provided by one or more embodiments of this specification, when a user's access request for the first application is received, first collect the usage of the trusted memory area of each computing node on which the first application is deployed, and further determine the remaining amount of the trusted memory area of each computing node.
  • if the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application.
  • the expansion here includes: starting a newly created container corresponding to the first application in a trusted memory area of the memory of computing nodes other than those on which the first application is deployed. After that, the access request is distributed to the other computing nodes, and the other computing nodes respond to the access request. As a result, the processing efficiency of access requests for the first application can be greatly improved.
  • FIG. 1 is a schematic diagram of the container cluster provided in this specification
  • FIG. 2 is a flowchart of the application deployment method provided in this specification
  • FIG. 3 is a flowchart of a method for processing application access requests based on a container cluster provided by an embodiment of this specification
  • FIG. 4 is a flowchart of a method for processing application access requests based on a container cluster according to another embodiment of this specification
  • FIG. 5 is a schematic diagram of a container cluster-based application access request processing apparatus provided by an embodiment of this specification
  • Fig. 6 is a schematic diagram of a container cluster-based application access request processing apparatus provided by another embodiment of this specification.
  • the master node in the container cluster receives the user's access request for the first application, sends requests for obtaining the usage of the trusted memory area to the computing nodes on which the first application is deployed, and receives the usage returned by each of them.
  • based on the received usage, the remaining amount of the trusted memory area of each computing node is determined. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than the predetermined threshold, capacity expansion is performed for the first application.
  • the expansion includes: starting a newly created container corresponding to the first application in the trusted memory area of the memory of computing nodes other than those on which the first application is deployed. After that, the access request is distributed to the other computing nodes, and the other computing nodes respond to the access request. As a result, the processing efficiency of access requests for the first application can be greatly improved.
  • FIG. 1 is a schematic diagram of the container cluster provided in this specification.
  • the container cluster can be managed by Kubernetes (k8s), a container orchestration tool, so the container cluster may also be referred to as a k8s container cluster.
  • the container cluster may include several hosts, one of which is the master node, and the other hosts are all computing nodes.
  • the master node is used to manage several computing nodes.
  • the memory of each of the computing nodes includes a trusted memory area; the trusted memory area here is EPC memory, which has a size limit of 128 MB.
  • a first application is deployed in some of the several computing nodes, and the first application is a containerized application, and its corresponding container runs in a trusted memory area of the memory of the several computing nodes.
  • the containerized application here refers to an application running in a container. That is, there is a one-to-one relationship between the container and the application.
  • the containers in the k8s container cluster are managed by groups. Specifically, in the same computing node, multiple closely related containers are usually divided into a group. This group of containers constitutes the basic scheduling unit of the container cluster: pod. It should be understood that at least one pod runs on a computing node. For each of the above computing nodes, the following components are also running on it: Kubelet, Proxy, and Docker daemon. The three components are used to manage the life cycle of the Pod on the computing node (for example, create a pod or destroy a pod, etc.), and process application access requests.
  • the following components are running on the above-mentioned master node: etcd, API Server, Controller Manager, and Scheduler.
  • the latter three components constitute the master control center of the container cluster, which performs management functions for the entire cluster such as resource management, pod scheduling, elastic scaling, security control, system monitoring, and error correction.
  • application deployment can be performed in a container cluster, and in addition, access requests for deployed applications can be processed.
  • FIG. 1 is only an example of a container cluster.
  • for the computing nodes on which the first application is deployed, ordinary applications may also be deployed on them, but these ordinary applications run in ordinary memory.
  • for each of these computing nodes, only the deployed first application will run in the trusted memory area of its memory; that is, the first application will exclusively occupy the trusted memory area.
  • on the other computing nodes, i.e., those on which the first application is not deployed, ordinary applications (applications running in ordinary memory) may be deployed.
  • FIG. 2 is a flowchart of the application deployment method provided in this specification. As shown in Figure 2, the method may include the following steps:
  • Step 202 The master node receives an application deployment request.
  • the application deployment request may include a container image corresponding to the first application.
  • the container image here can be obtained by the developer packaging the first application and its dependency packages using Docker (an open-source application container engine).
  • the above application deployment request may also include the configuration file of the container image.
  • the configuration file here can be used to define container parameters, such as the container's CPU usage and storage resource usage.
  • Step 204 Select part of the computing nodes on which the first application is deployed from the plurality of computing nodes at least according to the resource occupancy of the plurality of computing nodes.
  • the resource usage here may include, but is not limited to, CPU usage, memory usage, and storage resource usage.
  • the master node may select a computing node whose resource usage meets a predetermined condition from a number of computing nodes through its master control center as a partial computing node for deploying the first application.
  • the predetermined conditions here may include, but are not limited to, the CPU occupancy rate being less than the first threshold, the memory usage being less than the second threshold, and the storage resource occupancy rate being less than the third threshold, and so on.
  • the first threshold, the second threshold, and the third threshold here are set according to empirical values.
  • the master node may, through its master control center, select the partial computing nodes for deploying the first application from the several computing nodes according to the resource usage of the computing nodes and the above-mentioned configuration file.
  • for example, the remaining CPU and remaining storage resources of the computing nodes can be determined from their CPU occupancy rates and storage resource occupancy rates. Then, from the computing nodes, those whose remaining CPU is greater than the CPU usage defined in the configuration file and whose remaining storage resources are greater than the storage resource usage defined in the configuration file are selected as the partial computing nodes on which the first application is deployed.
  • Step 206 The master node sends the container image to each of the partial computing nodes, so that each of the partial computing nodes starts the corresponding container of the first application by running the container image, and runs in the started corresponding container The first application.
  • the corresponding container of the above-mentioned first application refers to a Docker container. It should be noted that, based on the image files corresponding to different applications, there will not be any interfaces between the started Docker containers, that is, the Docker containers are isolated from each other. In addition, the first application mentioned above runs in a Docker container, just like it runs on a real physical machine.
  • after each of the partial computing nodes starts the corresponding container of the first application and runs the first application in it, the deployment of the first application in the container cluster is completed.
  • since the first application corresponds to one container on each of these computing nodes and runs in that container, the first application may also be referred to as a containerized application.
  • Fig. 3 is a flowchart of a method for processing application access requests based on a container cluster provided by an embodiment of this specification.
  • the method execution subject may be a device with processing capability: a server, a system, or a host.
  • for example, it can be the master node in FIG. 1.
  • the method may specifically include:
  • Step 302 Receive a user's access request for the first application.
  • the access request may include the unique identification of the first application. Therefore, based on the unique identifier, the first application requested to be accessed by the user can be determined.
  • Step 304 Send a request for obtaining the usage amount of the trusted memory area to each of the computing nodes in the partial computing nodes.
  • since the first application is deployed on only some of the computing nodes, the partial computing nodes on which the first application is deployed may first be selected from the several computing nodes in the container cluster.
  • each computing node on which the first application is deployed can be selected from the N computing nodes, computing node 1 through computing node N. Assuming that computing node i and computing node j both have the first application deployed, computing node i and computing node j can be selected as the foregoing partial computing nodes, where i and j are both positive integers, 1 ≤ i ≤ N, and 1 ≤ j ≤ N.
  • after receiving the above-mentioned acquisition request, each computing node can obtain the usage amount of its own trusted memory area by calling the hardware interface of the trusted memory area, and return the obtained result to the master node.
  • the hardware interface of the trusted memory area here is the SGX interface, which is usually also called an SGX driver.
  • Step 306 Receive the usage amount of the trusted memory area returned by each computing node.
  • Step 308 Determine the remaining amount of the trusted memory area of each computing node based on each received usage amount.
  • taking any first computing node among the computing nodes as an example, the remaining amount of the trusted memory area of the first computing node may be obtained as the difference between the upper limit of its trusted memory area usage (for example, 128 MB) and the corresponding usage amount.
  • Step 310 If the remaining amount of the trusted memory area of each computing node is less than the predetermined threshold, then expand the capacity for the first application.
  • the expansion includes: starting the newly created container corresponding to the first application in the trusted memory area of the memory of other computing nodes than the partial computing nodes.
  • a corresponding predetermined threshold can be set for each computing node in advance.
  • the predetermined threshold corresponding to each computing node may be the same or different. Taking the same predetermined threshold value corresponding to each computing node as an example, it can be set according to the type of the first application deployed in the container cluster.
  • the step of determining whether the remaining amount of the trusted memory area of each computing node is less than a predetermined threshold may include: determining the maximum remaining amount from the remaining amount of the trusted memory area of each computing node. It is judged whether the maximum remaining amount is less than a predetermined threshold. If so, it is determined that the remaining amount of the trusted memory area of each computing node is less than the predetermined threshold.
  • the newly created container mentioned in step 310 can be obtained by duplicating the pod corresponding to the first application's container on a computing node where it is already deployed; in other words, it can be obtained by generating, on the other computing nodes, a copy of the pod containing the corresponding container of the first application.
  • the number of the above-mentioned other computing nodes can be one or more, and the specific number can be set by the master control center of the master node in combination with target information (for example, the current usage and predetermined usage of the trusted memory of each computing node).
  • Step 312: The access request is allocated to the other computing nodes, and the other computing nodes respond to the access request.
  • the access request may be allocated to one of the other computing nodes.
  • one of the other computing nodes here may be randomly selected.
  • the above applies to the case where the remaining amounts of the trusted memory areas of all the computing nodes on which the first application is deployed are less than the predetermined threshold; when the remaining amount of the trusted memory area of at least one of these computing nodes is not less than the predetermined threshold, the computing node corresponding to the largest remaining amount among the at least one computing node is used as the target computing node that responds to the access request, and the access request is sent to the target computing node.
  • after receiving the access request, the target computing node can process the access request and return the processing result of the access request to the master node. After that, the master node forwards the processing result to the user.
  • as in the foregoing example, assuming that computing node i corresponds to the maximum remaining amount and that this maximum remaining amount is not less than the predetermined threshold, the access request can be sent to computing node i, which processes the access request and returns the processing result of the access request to the master node.
  • the container cluster-based application access request processing method provided by an embodiment of this specification, when a user's access request for the first application is received, first collects the usage of the trusted memory area of each computing node on which the first application is deployed, and further determines the remaining amount of the trusted memory area of each computing node. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application.
  • the expansion here includes: starting a new container corresponding to the first application in a trusted memory area of the memory of other computing nodes except the computing node where the first application is deployed. After that, the access request is distributed to other computing nodes, and the other computing nodes respond to the access request. In this way, it is possible to quickly respond to the user's access request, thereby improving the user experience.
  • Fig. 4 is a flowchart of a method for processing application access requests based on a container cluster according to another embodiment of this specification.
  • the method execution subject may be a device with processing capability: a server, a system, or a host.
  • for example, it may be any first computing node among the partial computing nodes on which the first application is deployed in FIG. 1.
  • the method may specifically include:
  • Step 402 Receive a request for obtaining the usage amount of the trusted memory area sent by the master node.
  • the acquisition request may be sent by the master node when receiving the user's access request for the first application.
  • Step 404 Obtain the usage amount of the trusted memory area of the first computing node.
  • the first computing node may obtain the usage amount of its trusted memory area by calling the hardware interface of the trusted memory area.
  • the hardware interface of the trusted memory area here is the SGX interface, which is usually also called an SGX driver.
  • Step 406 Return the usage amount of the trusted memory area to the master node.
  • the master node may determine the remaining amount of the trusted memory area of each computing node based on the received usage amounts. Taking any first computing node among the computing nodes as an example, the remaining amount of the trusted memory area of the first computing node may be obtained as the difference between the upper limit of its trusted memory area usage (for example, 128 MB) and the corresponding usage amount.
  • the master node can determine whether the remaining amount of the trusted memory area of each computing node is less than a predetermined threshold.
  • a corresponding predetermined threshold can be set for each computing node in advance.
  • the predetermined threshold corresponding to each computing node may be the same or different. Taking the same predetermined threshold value corresponding to each computing node as an example, it can be set according to the type of the first application deployed in the container cluster.
  • the step of determining whether the remaining amount of the trusted memory area of each computing node is less than a predetermined threshold may include: determining the maximum remaining amount from the remaining amount of the trusted memory area of each computing node. It is judged whether the maximum remaining amount is less than a predetermined threshold. If so, it is determined that the remaining amount of the trusted memory area of each computing node is less than the predetermined threshold.
  • when the master node determines that the remaining amounts of the trusted memory areas of all the computing nodes are less than a predetermined threshold, it expands the capacity of the first application.
  • the expansion includes: starting a new container corresponding to the first application in a trusted memory area of the memory of other computing nodes except for some computing nodes.
  • the new container mentioned here can be obtained by duplicating the pod corresponding to the first application's container on a computing node where it is already deployed; in other words, it can be obtained by generating, on the other computing nodes, a copy of the pod containing the corresponding container of the first application.
  • the number of other computing nodes mentioned above can be one or more, and the specific number can be set by the master control center of the master node in combination with target information (for example, the current usage and predetermined usage of the trusted memory of each computing node).
  • the master node can distribute the access request to other computing nodes, and the other computing nodes can respond to the access request.
  • the access request may be allocated to one of the other computing nodes.
  • one of the other computing nodes here may be randomly selected.
  • the above applies to the case where the remaining amounts of the trusted memory areas of all the computing nodes on which the first application is deployed are less than the predetermined threshold; when the remaining amount of the trusted memory area of at least one of these computing nodes is not less than the predetermined threshold, the computing node corresponding to the largest remaining amount among the at least one computing node is used as the target computing node that responds to the access request, and the access request is sent to the target computing node.
  • assuming that the first computing node is the target computing node corresponding to the maximum remaining amount, the first computing node can receive the access request and, after receiving it, process the access request and return the processing result of the access request to the master node. After that, the master node forwards the processing result to the user.
  • the container cluster-based application access request processing method provided by an embodiment of this specification, when a user's access request for the first application is received, first collects the usage of the trusted memory area of each computing node on which the first application is deployed, and further determines the remaining amount of the trusted memory area of each computing node. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application.
  • the expansion here includes: starting a new container corresponding to the first application in the trusted memory area of the memory of other computing nodes except the computing node where the first application is deployed. After that, the access request is distributed to other computing nodes, and the other computing nodes respond to the access request. In this way, it is possible to quickly respond to the user's access request, thereby improving the user experience.
  • an embodiment of this specification also provides an application access request processing device based on a container cluster.
  • the container cluster includes a master node and several computing nodes.
  • the master node is used to manage the several computing nodes.
  • a first application is deployed in some of the several computing nodes; the first application is a containerized application, and its corresponding container runs in a trusted memory area of the memory of those computing nodes.
  • the device is set on the master node, as shown in FIG. 5, the device may include:
  • the receiving unit 502 is configured to receive a user's access request for the first application.
  • the sending unit 504 is configured to send a request for obtaining the usage amount of the trusted memory area to each computing node in some computing nodes.
  • the receiving unit 502 is also configured to receive the usage amount of the trusted memory area returned by each computing node. Among them, the usage amount of the trusted memory area of each computing node is obtained by each computing node by calling the hardware interface of the trusted memory area.
  • the determining unit 506 is configured to determine the remaining amount of the trusted memory area of each computing node based on the usage amount received by the receiving unit 502.
  • the capacity expansion unit 508 is configured to perform capacity expansion for the first application if the remaining amount of the trusted memory area of each computing node determined by the determining unit 506 is less than a predetermined threshold.
  • the expansion includes: starting a new container corresponding to the first application in a trusted memory area of the memory of other computing nodes except for some computing nodes.
  • the remaining amount of the trusted memory area of each computing node is less than the predetermined threshold value includes: the maximum remaining amount of the remaining amount of the trusted memory area of each computing node is less than the predetermined threshold value.
  • the allocating unit 510 is configured to allocate the access request received by the receiving unit 502 to other computing nodes, and the other computing nodes will respond to the access request.
  • the sending unit 504 is further configured to: if the remaining amount of the trusted memory area of at least one of the computing nodes is not less than a predetermined threshold, use the computing node corresponding to the largest remaining amount among the at least one computing node as the target computing node that responds to the access request, and send the access request to the target computing node.
  • the device may further include: a selecting unit (not shown in the figure).
  • the receiving unit 502 is further configured to receive an application deployment request, where the application deployment request includes a container image corresponding to the first application.
  • the selection unit is used to select some computing nodes from a number of computing nodes at least according to the resource occupancy conditions of a number of computing nodes.
  • the sending unit 504 is further configured to send the container image to each of the computing nodes selected by the selecting unit, so that each of these computing nodes starts the corresponding container of the first application by running the container image, and runs the first application in the started corresponding container.
  • the apparatus for processing application access requests based on a container cluster provided in an embodiment of the present specification can realize rapid response to user access requests, thereby improving user experience.
  • an embodiment of this specification also provides an application access request processing device based on a container cluster.
  • the container cluster includes a master node and several computing nodes.
  • the master node is used to manage the several computing nodes.
  • a first application is deployed in some of the several computing nodes; the first application is a containerized application, and its corresponding container runs in a trusted memory area of the memory of those computing nodes.
  • the device is set on any first computing node among the above-mentioned partial computing nodes. As shown in FIG. 6, the device may include:
  • the receiving unit 602 is configured to receive a request for obtaining the usage amount of the trusted memory area sent by the master node, and the obtaining request is sent by the master node when receiving the user's access request for the first application.
  • the obtaining unit 604 is configured to obtain the usage amount of the trusted memory area of the first computing node.
  • the obtaining unit 604 is specifically configured to: call the hardware interface of the trusted memory area to obtain the usage amount of the trusted memory area of the first computing node.
  • the sending unit 606 is configured to return the usage amount of the trusted memory area to the master node, so that the master node performs capacity expansion for the first application when it determines that the remaining amount of the trusted memory area of some computing nodes is less than a predetermined threshold.
  • the expansion includes: starting a new container corresponding to the first application in a trusted memory area of the memory of other computing nodes except for some computing nodes. And it makes the master node distribute the access request to other computing nodes, and the other computing nodes respond to the access request.
  • the first computing node corresponds to the maximum remaining amount among the foregoing remaining amounts
  • the apparatus may further include: a processing unit (not shown in the figure).
  • the receiving unit 602 is further configured to receive an access request for the first application sent by the master node.
  • the processing unit is configured to process the access request received by the receiving unit 602, and return the corresponding processing result to the master node.
  • the device may further include: an operating unit (not shown in the figure).
  • the receiving unit 602 is further configured to receive the container image of the first application sent by the master node.
  • the running unit is configured to run the container image in the trusted memory area of the first computing node to start the corresponding container of the first application.
  • the running unit is also used to run the first application in the started corresponding container.
  • the apparatus for processing application access requests based on a container cluster provided in an embodiment of the present specification can realize rapid response to user access requests, thereby improving user experience.
  • the embodiments of this specification provide a computer-readable storage medium on which a computer program is stored, and when the computer program is executed in a computer, the computer is caused to execute the method shown in FIG. 3 or FIG. 4.
  • the embodiment of the present specification provides a computing device, including a memory and a processor, the memory stores executable code, and when the processor executes the executable code, it implements the method shown in FIG. 3 or FIG. 4.
  • the steps of the method or algorithm described in conjunction with the disclosure of this specification can be implemented in a hardware manner, or can be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules, which can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in the ASIC.
  • the ASIC may be located in the server.
  • the processor and the storage medium may also exist as discrete components in the server.
  • the functions described in the present invention can be implemented by hardware, software, firmware, or any combination thereof.
  • these functions can be stored in a computer-readable medium or transmitted as one or more instructions or codes on the computer-readable medium.
  • the computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiments of this specification provide a method and apparatus for processing application access requests based on a container cluster, the container cluster including a master node and several computing nodes. A first application is deployed on some of the computing nodes, and the container corresponding to the first application runs in a trusted memory area. The method is executed by the master node and includes: receiving a user's access request for the first application; sending, to each of those computing nodes, a request for obtaining the usage of the trusted memory area; receiving the returned usage of the trusted memory area; determining, based on the received usage, the remaining amount of the trusted memory area of each computing node; if the remaining amounts are all less than a predetermined threshold, performing capacity expansion for the first application, which includes starting a newly created container of the first application in the trusted memory areas of other computing nodes; and allocating the access request to the other computing nodes, which respond to the access request. In this way, processing of access requests for private data in a trusted execution environment can be realized.

Description

Processing of application access requests based on a container cluster. Technical Field
One or more embodiments of this specification relate to the field of computer technology, and in particular to a method and apparatus for processing application access requests based on a container cluster.
Background
In conventional technology, applications deployed in a container cluster usually run in ordinary memory. The container cluster here is used to provide services to users based on the applications deployed in it. Therefore, the service response mechanism of a conventional container cluster, that is, its mechanism for responding to application access requests, usually considers only CPU occupancy and ordinary memory usage.
Summary
One or more embodiments of this specification describe a method and apparatus for processing application access requests based on a container cluster, which can process application access requests effectively.
In a first aspect, a method for processing application access requests based on a container cluster is provided, including: receiving a user's access request for the first application; sending, to each of the partial computing nodes, a request for obtaining the usage amount of the trusted memory area; receiving the usage amount of the trusted memory area returned by each computing node; determining, based on the usage amounts, the remaining amount of the trusted memory area of each computing node; if the remaining amounts of the trusted memory areas of all of the computing nodes are less than a predetermined threshold, performing capacity expansion for the first application, the expansion including starting, in the trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application; and allocating the access request to the other computing nodes, the other computing nodes responding to the access request.
In a second aspect, a method for processing application access requests based on a container cluster is provided, including: receiving a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent by the master node when it receives a user's access request for the first application; obtaining the usage amount of the trusted memory area of the first computing node; and returning the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion including starting, in the trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application, and so that the master node allocates the access request to the other computing nodes, the other computing nodes responding to the access request.
In a third aspect, an apparatus for processing application access requests based on a container cluster is provided, including: a receiving unit configured to receive a user's access request for the first application; a sending unit configured to send, to each of the partial computing nodes, a request for obtaining the usage amount of the trusted memory area; the receiving unit being further configured to receive the usage amount of the trusted memory area returned by each computing node; a determining unit configured to determine, based on the usage amounts received by the receiving unit, the remaining amount of the trusted memory area of each computing node; a capacity expansion unit configured to, if the remaining amounts of the trusted memory areas of the computing nodes determined by the determining unit are all less than a predetermined threshold, perform capacity expansion for the first application, the expansion including starting, in the trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application; and an allocating unit configured to allocate the access request received by the receiving unit to the other computing nodes, the other computing nodes responding to the access request.
In a fourth aspect, an apparatus for processing application access requests based on a container cluster is provided, including: a receiving unit configured to receive a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent by the master node when it receives a user's access request for the first application; an obtaining unit configured to obtain the usage amount of the trusted memory area of the first computing node; and a sending unit configured to return the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion including starting, in the trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application, and so that the master node allocates the access request to the other computing nodes, the other computing nodes responding to the access request.
In a fifth aspect, a computer storage medium is provided, on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute the method of the first aspect or the second aspect.
In a sixth aspect, a computing device is provided, including a memory and a processor, the memory storing executable code, and the processor, when executing the executable code, implementing the method of the first aspect or the second aspect.
The method and apparatus for processing application access requests based on a container cluster provided by one or more embodiments of this specification, upon receiving a user's access request for the first application, first collect the usage of the trusted memory area of each computing node on which the first application is deployed, and further determine the remaining amount of the trusted memory area of each computing node. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application. The expansion here includes: starting, in the trusted memory area of the memory of computing nodes other than those on which the first application is deployed, a newly created container corresponding to the first application. After that, the access request is allocated to the other computing nodes, and the other computing nodes respond to the access request. As a result, the processing efficiency of access requests for the first application can be greatly improved.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of this specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this specification, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the container cluster provided in this specification;
FIG. 2 is a flowchart of the application deployment method provided in this specification;
FIG. 3 is a flowchart of a method for processing application access requests based on a container cluster provided by an embodiment of this specification;
FIG. 4 is a flowchart of a method for processing application access requests based on a container cluster provided by another embodiment of this specification;
FIG. 5 is a schematic diagram of an apparatus for processing application access requests based on a container cluster provided by an embodiment of this specification;
FIG. 6 is a schematic diagram of an apparatus for processing application access requests based on a container cluster provided by another embodiment of this specification.
Detailed Description
The solutions provided in this specification are described below with reference to the accompanying drawings.
Before the solutions provided in this specification are described, the inventive concept behind them is explained as follows.
As can be seen from the background, in conventional technology applications in a container cluster run in ordinary memory. However, with the widespread adoption of SGX (Software Guard Extensions, a set of software protection technologies provided by Intel on top of its CPU hardware) and the increasing security requirements of some applications, the inventors of this application propose introducing SGX technology into the container cluster. For example, an application deployed in the container cluster that has relatively high security requirements can run in SGX secure memory (i.e., EPC memory, a hardware-protected region provided by SGX within physical memory). However, because EPC memory is limited to 128 MB, effectively managing access requests for an application running in secure memory (hereinafter referred to as the first application) becomes a problem to be solved.
To achieve effective management of access requests for the first application, the inventors of this application propose the following. The master node in the container cluster receives a user's access request for the first application, sends requests for obtaining the usage of the trusted memory area to the computing nodes in the container cluster on which the first application is deployed, receives the usage of the trusted memory area returned by each of those computing nodes, and determines, based on the received usage, the remaining amount of the trusted memory area of each computing node. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application. The expansion includes: starting, in the trusted memory area of the memory of computing nodes other than those on which the first application is deployed, a newly created container corresponding to the first application. After that, the access request is allocated to the other computing nodes, and the other computing nodes respond to the access request. In this way, the processing efficiency of access requests for the first application can be greatly improved.
The above is the inventive concept provided by this specification, from which the present solution can be obtained; the solution is described in detail below.
FIG. 1 is a schematic diagram of the container cluster provided in this specification. The container cluster can be managed by Kubernetes (k8s), a container orchestration tool, so the container cluster may also be called a k8s container cluster. In FIG. 1, the container cluster may include several hosts, one of which is the master node and the others of which are computing nodes. The master node is used to manage the computing nodes. The memory of each computing node includes a trusted memory area; the trusted memory area here is EPC memory, which has a size limit of 128 MB. In addition, a first application is deployed on some of the computing nodes; the first application is a containerized application, and its corresponding containers run in the trusted memory areas of the memory of those computing nodes. A containerized application here refers to an application running in a container; that is, there is a one-to-one relationship between container and application.
It should be noted that the containers in a k8s container cluster are managed in groups. Specifically, within the same computing node, multiple closely related containers are usually grouped together. Such a group of containers constitutes the basic scheduling unit of the container cluster: the pod. It should be understood that at least one pod runs on each computing node. On each of the above computing nodes, the following components also run: Kubelet, Proxy, and the Docker daemon. These three components are responsible for managing the life cycle of the pods on the computing node (for example, creating or destroying pods) and for processing application access requests.
In addition, the following components run on the master node: etcd, API Server, Controller Manager, and Scheduler. The latter three components constitute the master control center of the container cluster, which performs management functions for the entire cluster such as resource management, pod scheduling, elastic scaling, security control, system monitoring, and error correction.
It should be noted that the components of the master node and of the computing nodes mentioned above are all commonly used components of a k8s container cluster, so their functions are not described in detail here.
In short, based on the components on the master node and the computing nodes, applications can be deployed in the container cluster, and access requests for deployed applications can also be processed.
It should be understood that FIG. 1 is only one example of a container cluster. In practical applications, ordinary applications may also be deployed on the computing nodes on which the first application is deployed, but these ordinary applications run in ordinary memory. That is to say, for each computing node in FIG. 1, only the deployed first application runs in the trusted memory area of its memory; in other words, the first application exclusively occupies the trusted memory area. In addition, ordinary applications (i.e., applications running in ordinary memory) may be deployed on the computing nodes in FIG. 1 other than those on which the first application is deployed.
The deployment process of the above first application in the container cluster shown in FIG. 1 is described below.
FIG. 2 is a flowchart of the application deployment method provided in this specification. As shown in FIG. 2, the method may include the following steps.
Step 202: The master node receives an application deployment request.
The application deployment request may include a container image corresponding to the first application. The container image here may be obtained by the developer packaging the first application and its dependency packages using Docker (an open-source application container engine).
In addition, the application deployment request may also include a configuration file for the container image. The configuration file here can be used to define container parameters, such as the container's CPU usage and storage resource usage.
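As an illustration of what such a deployment request might carry, the minimal sketch below models the request as a plain data structure; the field names (app_name, image, cpu_request, storage_request) are hypothetical and only mirror the parameters mentioned in the text, not an interface defined by this specification.

```python
from dataclasses import dataclass

@dataclass
class DeploymentRequest:
    """Hypothetical shape of the application deployment request described above."""
    app_name: str          # identifier of the first application
    image: str             # container image packaging the app and its dependency packages
    cpu_request: float     # CPU usage defined in the configuration file (cores)
    storage_request: int   # storage resource usage defined in the configuration file (MB)

# Example of a request the master node might receive for the first application.
request = DeploymentRequest(
    app_name="first-app",
    image="registry.example.com/first-app:1.0",
    cpu_request=0.5,
    storage_request=512,
)
```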
Step 204: At least according to the resource occupancy of the several computing nodes, select, from the several computing nodes, the partial computing nodes on which the first application is to be deployed.
The resource usage here may include, but is not limited to, CPU occupancy rate, memory usage, and storage resource occupancy rate.
In one example, the master node may, through its master control center, select from the several computing nodes those whose resource usage meets predetermined conditions as the partial computing nodes on which the first application is deployed. The predetermined conditions here may include, but are not limited to, the CPU occupancy rate being less than a first threshold, the memory usage being less than a second threshold, the storage resource occupancy rate being less than a third threshold, and so on. The first threshold, second threshold, and third threshold here are set according to empirical values.
In another example, the master node may, through its master control center, select the partial computing nodes on which the first application is deployed from the several computing nodes according to the resource usage of the computing nodes and the above-mentioned configuration file.
For example, the remaining CPU and remaining storage resources of the computing nodes can first be determined from their CPU occupancy rates and storage resource occupancy rates. Then, from the several computing nodes, those whose remaining CPU is greater than the CPU usage defined in the configuration file and whose remaining storage resources are greater than the storage resource usage defined in the configuration file are selected as the partial computing nodes on which the first application is deployed.
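A minimal sketch of this selection rule follows, assuming the master control center already holds per-node CPU and storage occupancy figures; the helper name and data layout are illustrative and not part of the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NodeResources:
    name: str
    cpu_total: float     # total CPU cores on the node
    cpu_used: float      # currently occupied cores
    storage_total: int   # total storage (MB)
    storage_used: int    # occupied storage (MB)

def select_deployment_nodes(nodes: List[NodeResources],
                            cpu_request: float,
                            storage_request: int) -> List[str]:
    """Pick nodes whose remaining CPU and storage both exceed the amounts
    defined in the container image's configuration file."""
    selected = []
    for node in nodes:
        cpu_remaining = node.cpu_total - node.cpu_used
        storage_remaining = node.storage_total - node.storage_used
        if cpu_remaining > cpu_request and storage_remaining > storage_request:
            selected.append(node.name)
    return selected
```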
Step 206: The master node sends the container image to each of the partial computing nodes, so that each of the partial computing nodes starts the corresponding container of the first application by running the container image and runs the first application in the started corresponding container.
The corresponding container of the first application refers to a Docker container. It should be noted that there is no interface between Docker containers started from image files corresponding to different applications; that is, the Docker containers are isolated from one another. In addition, the first application runs in its Docker container just as it would on a real physical machine.
After each of the partial computing nodes has started the corresponding container of the first application and run the first application in it, the deployment of the first application in the container cluster is complete. Since the first application corresponds to one container on each of these computing nodes and runs in that container, the first application may also be referred to as a containerized application.
The above describes the deployment process of the first application in the container cluster shown in FIG. 1; the process of a user accessing the first application is described below.
FIG. 3 is a flowchart of a method for processing application access requests based on a container cluster provided by an embodiment of this specification. The execution subject of the method may be a device with processing capability: a server, a system, or a host; for example, it may be the master node in FIG. 1. As shown in FIG. 3, the method may specifically include the following steps.
Step 302: Receive a user's access request for the first application.
In one example, the access request may include the unique identifier of the first application, so that the first application the user requests to access can be determined based on the unique identifier.
Step 304: Send a request for obtaining the usage amount of the trusted memory area to each of the partial computing nodes.
It should be understood that, since the first application is deployed only on some of the computing nodes of the container cluster, before step 304 is performed, the partial computing nodes on which the first application is deployed may first be selected from the computing nodes of the container cluster.
With reference to FIG. 1, the computing nodes on which the first application is deployed can be selected from the N computing nodes, computing node 1 through computing node N. Assuming that computing node i and computing node j both have the first application deployed, computing node i and computing node j can be selected as the aforementioned partial computing nodes, where i and j are both positive integers, 1 ≤ i ≤ N, and 1 ≤ j ≤ N.
After receiving the above acquisition request, each computing node can obtain the usage amount of its own trusted memory area by calling the hardware interface of the trusted memory area and return the obtained result to the master node. The hardware interface of the trusted memory area here is the SGX interface, which is usually also called the SGX driver.
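The per-node reporting step could look like the sketch below. The function query_epc_usage_mb() is only a placeholder for whatever call the SGX driver actually exposes on a given platform; it is an assumed name, not a documented interface, and the reply format is likewise illustrative.

```python
EPC_LIMIT_MB = 128  # upper bound of the trusted memory area mentioned in the text

def query_epc_usage_mb() -> int:
    """Placeholder for the call into the SGX driver (the hardware interface of the
    trusted memory area). The real mechanism is platform-specific."""
    raise NotImplementedError("query the SGX driver here")

def handle_usage_request() -> dict:
    """What a computing node might return to the master node for step 304."""
    used = query_epc_usage_mb()
    return {"epc_used_mb": used, "epc_limit_mb": EPC_LIMIT_MB}
```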
Step 306: Receive the usage amount of the trusted memory area returned by each computing node.
Step 308: Determine the remaining amount of the trusted memory area of each computing node based on the received usage amounts.
Taking any first computing node among the computing nodes as an example, the remaining amount of the trusted memory area of the first computing node may be obtained as the difference between the upper limit of its trusted memory area usage (for example, 128 MB) and the corresponding usage amount.
Step 310: If the remaining amounts of the trusted memory areas of all of the computing nodes are less than the predetermined threshold, perform capacity expansion for the first application. The expansion includes: starting, in the trusted memory area of the memory of computing nodes other than the partial computing nodes, a newly created container corresponding to the first application.
In this specification, a corresponding predetermined threshold may be set for each computing node in advance. The predetermined thresholds corresponding to the computing nodes may be the same or different. Taking the case where the computing nodes share the same predetermined threshold as an example, it can be set according to, for example, the type of the first application deployed in the container cluster.
In one example, the step of judging whether the remaining amounts of the trusted memory areas of all the computing nodes are less than the predetermined threshold may include: determining the maximum remaining amount among the remaining amounts of the trusted memory areas of the computing nodes, and judging whether this maximum remaining amount is less than the predetermined threshold. If so, it is determined that the remaining amounts of the trusted memory areas of all the computing nodes are less than the predetermined threshold.
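Putting steps 308 and 310 together, a sketch of the master-node check might be the following; the 128 MB cap comes from the text, while the threshold value, node names, and report layout are example assumptions.

```python
from typing import Dict

EPC_LIMIT_MB = 128

def remaining_epc(usage_by_node: Dict[str, int], limit_mb: int = EPC_LIMIT_MB) -> Dict[str, int]:
    """Remaining trusted memory per node = usage upper limit - reported usage."""
    return {node: limit_mb - used for node, used in usage_by_node.items()}

def needs_expansion(usage_by_node: Dict[str, int], threshold_mb: int) -> bool:
    """Every node is below the threshold iff even the largest remaining amount is."""
    remaining = remaining_epc(usage_by_node)
    return max(remaining.values()) < threshold_mb

# Example: nodes i and j both report heavy EPC usage.
usage = {"node-i": 120, "node-j": 118}
print(needs_expansion(usage, threshold_mb=16))  # True: 8 MB and 10 MB are both < 16 MB
```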
It should be noted that the newly created container mentioned in step 310 can be obtained by duplicating the pod corresponding to the first application's container on a computing node where it is already deployed; in other words, it can be obtained by generating, on the other computing nodes, a copy of the pod containing the corresponding container of the first application.
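One way to realize this pod-copy step is sketched below with the Kubernetes Python client, under the assumption that cloning an existing pod object and pinning it to a chosen node is acceptable; it is an illustrative sketch, not the mechanism prescribed by the patent, and real deployments would more commonly scale a Deployment or adjust scheduling constraints instead.

```python
from kubernetes import client, config

def clone_pod_to_node(pod_name: str, namespace: str, target_node: str) -> None:
    """Create a copy of an existing pod on a specific node (illustrative only)."""
    config.load_kube_config()          # or load_incluster_config() when run on the master
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod(pod_name, namespace)

    # Strip server-assigned fields so the object can be re-created.
    pod.metadata.resource_version = None
    pod.metadata.uid = None
    pod.metadata.creation_timestamp = None
    pod.status = None

    pod.metadata.name = f"{pod_name}-copy-{target_node}"
    pod.spec.node_name = target_node   # pin the copy to the chosen other computing node

    v1.create_namespaced_pod(namespace=namespace, body=pod)
```

For this sketch to work in practice, the container image must already be pullable on the target node and that node must support running the container in its trusted memory area.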
In addition, the number of the other computing nodes may be one or more, and the specific number can be set by the master control center of the master node in combination with target information (for example, the current usage and predetermined usage of the trusted memory of each computing node).
Step 312: Allocate the access request to the other computing nodes, and have the other computing nodes respond to the access request.
It should be understood that when there are multiple other computing nodes, the access request may be allocated to one of them. In one example, this one other computing node may be selected at random.
The above describes the case where the remaining amounts of the trusted memory areas of all the computing nodes on which the first application is deployed are less than the predetermined threshold. When the remaining amount of the trusted memory area of at least one of these computing nodes is not less than the predetermined threshold, the computing node corresponding to the maximum remaining amount among the at least one computing node is used as the target computing node that responds to the access request, and the access request is sent to the target computing node.
After receiving the access request, the target computing node can process the access request and return the processing result of the access request to the master node. The master node then forwards the processing result to the user.
As in the foregoing example, assuming that of computing node i and computing node j, computing node i corresponds to the maximum remaining amount, and that this maximum remaining amount is not less than the predetermined threshold, the access request can be sent to computing node i, which processes the access request and returns the processing result of the access request to the master node.
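The routing rule of steps 310 to 312, i.e. serve from the deployed node with the most trusted memory left when possible and otherwise expand and pick one of the new nodes (randomly in this example), can be sketched as follows; node names and the threshold are the example values used above.

```python
import random
from typing import Dict, List

def choose_target_node(remaining_by_node: Dict[str, int],
                       threshold_mb: int,
                       expansion_candidates: List[str]) -> str:
    """Return the node that should answer the access request.

    remaining_by_node: remaining trusted memory of nodes already running the app.
    expansion_candidates: other nodes on which a new container could be started.
    """
    best_node = max(remaining_by_node, key=remaining_by_node.get)
    if remaining_by_node[best_node] >= threshold_mb:
        return best_node                      # enough trusted memory: no expansion needed
    # Otherwise expand onto one (or more) of the other nodes and route there.
    return random.choice(expansion_candidates)

# Example with the nodes from the text: both deployed nodes are nearly full.
print(choose_target_node({"node-i": 10, "node-j": 8}, threshold_mb=16,
                         expansion_candidates=["node-k"]))  # -> "node-k"
```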
In summary, in the method for processing application access requests based on a container cluster provided by an embodiment of this specification, when a user's access request for the first application is received, the usage of the trusted memory area of each computing node on which the first application is deployed is first collected, and the remaining amount of the trusted memory area of each computing node is then determined. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application. The expansion here includes: starting, in the trusted memory area of the memory of computing nodes other than those on which the first application is deployed, a newly created container corresponding to the first application. After that, the access request is allocated to the other computing nodes, and the other computing nodes respond to the access request. In this way, the user's access request can be responded to quickly, which improves the user experience.
FIG. 4 is a flowchart of a method for processing application access requests based on a container cluster provided by another embodiment of this specification. The execution subject of the method may be a device with processing capability: a server, a system, or a host; for example, it may be any first computing node among the partial computing nodes in FIG. 1 on which the first application is deployed. As shown in FIG. 4, the method may specifically include the following steps.
Step 402: Receive a request, sent by the master node, for obtaining the usage amount of the trusted memory area.
The acquisition request may be sent by the master node when it receives a user's access request for the first application.
Step 404: Obtain the usage amount of the trusted memory area of the first computing node.
In one example, the first computing node may obtain the usage amount of its trusted memory area by calling the hardware interface of the trusted memory area. The hardware interface of the trusted memory area here is the SGX interface, which is usually also called the SGX driver.
Step 406: Return the usage amount of the trusted memory area to the master node.
After receiving the usage amounts of the trusted memory areas returned by the computing nodes, the master node may determine the remaining amount of the trusted memory area of each computing node based on the received usage amounts. Taking any first computing node among the computing nodes as an example, the remaining amount of the trusted memory area of the first computing node may be obtained as the difference between the upper limit of its trusted memory area usage (for example, 128 MB) and the corresponding usage amount.
The master node may then judge whether the remaining amounts of the trusted memory areas of all the computing nodes are less than the predetermined threshold. It should be noted that, in this specification, a corresponding predetermined threshold may be set for each computing node in advance. The predetermined thresholds corresponding to the computing nodes may be the same or different. Taking the case where the computing nodes share the same predetermined threshold as an example, it can be set according to, for example, the type of the first application deployed in the container cluster.
In one example, the step of judging whether the remaining amounts of the trusted memory areas of all the computing nodes are less than the predetermined threshold may include: determining the maximum remaining amount among the remaining amounts of the trusted memory areas of the computing nodes, and judging whether this maximum remaining amount is less than the predetermined threshold. If so, it is determined that the remaining amounts of the trusted memory areas of all the computing nodes are less than the predetermined threshold.
Then, when the master node determines that the remaining amounts of the trusted memory areas of all the computing nodes are less than the predetermined threshold, it performs capacity expansion for the first application. The expansion includes: starting, in the trusted memory area of the memory of computing nodes other than the partial computing nodes, a newly created container corresponding to the first application.
The newly created container mentioned here can be obtained by duplicating the pod corresponding to the first application's container on a computing node where it is already deployed; in other words, it can be obtained by generating, on the other computing nodes, a copy of the pod containing the corresponding container of the first application.
It should be noted that the number of the other computing nodes may be one or more, and the specific number can be set by the master control center of the master node in combination with target information (for example, the current usage and predetermined usage of the trusted memory of each computing node).
Finally, the master node can allocate the access request to the other computing nodes, and the other computing nodes respond to the access request.
It should be understood that when there are multiple other computing nodes, the access request may be allocated to one of them. In one example, this one other computing node may be selected at random.
The above describes the case where the remaining amounts of the trusted memory areas of all the computing nodes on which the first application is deployed are less than the predetermined threshold. When the remaining amount of the trusted memory area of at least one of these computing nodes is not less than the predetermined threshold, the computing node corresponding to the maximum remaining amount among the at least one computing node is used as the target computing node that responds to the access request, and the access request is sent to the target computing node.
Assuming that the first computing node is the target computing node corresponding to the maximum remaining amount, the first computing node can receive the access request and, after receiving it, process the access request and return the processing result of the access request to the master node. The master node then forwards the processing result to the user.
In summary, in the method for processing application access requests based on a container cluster provided by an embodiment of this specification, when a user's access request for the first application is received, the usage of the trusted memory area of each computing node on which the first application is deployed is first collected, and the remaining amount of the trusted memory area of each computing node is then determined. If the remaining amounts of the trusted memory areas of all of these computing nodes are less than a predetermined threshold, capacity expansion is performed for the first application. The expansion here includes: starting, in the trusted memory area of the memory of computing nodes other than those on which the first application is deployed, a newly created container corresponding to the first application. After that, the access request is allocated to the other computing nodes, and the other computing nodes respond to the access request. In this way, the user's access request can be responded to quickly, which improves the user experience.
Corresponding to the above method for processing application access requests based on a container cluster, an embodiment of this specification also provides an apparatus for processing application access requests based on a container cluster. The container cluster includes a master node and several computing nodes. The master node is used to manage the several computing nodes. A first application is deployed on some of the several computing nodes; the first application is a containerized application, and its corresponding container runs in a trusted memory area of the memory of those computing nodes. The apparatus is provided on the master node and, as shown in FIG. 5, may include:
a receiving unit 502, configured to receive a user's access request for the first application;
a sending unit 504, configured to send a request for obtaining the usage amount of the trusted memory area to each of the partial computing nodes;
the receiving unit 502 being further configured to receive the usage amount of the trusted memory area returned by each computing node, where the usage amount of each computing node's trusted memory area is obtained by that computing node by calling the hardware interface of the trusted memory area;
a determining unit 506, configured to determine the remaining amount of the trusted memory area of each computing node based on the usage amounts received by the receiving unit 502;
a capacity expansion unit 508, configured to perform capacity expansion for the first application if the remaining amounts of the trusted memory areas of the computing nodes determined by the determining unit 506 are all less than a predetermined threshold, the expansion including starting, in the trusted memory area of the memory of computing nodes other than the partial computing nodes, a newly created container corresponding to the first application, where the remaining amounts of all the computing nodes being less than the predetermined threshold includes the maximum remaining amount among those remaining amounts being less than the predetermined threshold;
an allocating unit 510, configured to allocate the access request received by the receiving unit 502 to the other computing nodes, the other computing nodes responding to the access request.
The sending unit 504 is further configured to, if the remaining amount of the trusted memory area of at least one of the computing nodes is not less than the predetermined threshold, use the computing node corresponding to the maximum remaining amount among the at least one computing node as the target computing node that responds to the access request, and send the access request to the target computing node.
Optionally, the apparatus may further include a selecting unit (not shown in the figure). The receiving unit 502 is further configured to receive an application deployment request, the application deployment request including a container image corresponding to the first application. The selecting unit is configured to select the partial computing nodes from the several computing nodes at least according to the resource occupancy of the several computing nodes. The sending unit 504 is further configured to send the container image to each of the partial computing nodes selected by the selecting unit, so that each of these computing nodes starts the corresponding container of the first application by running the container image and runs the first application in the started corresponding container.
The functions of the functional modules of the apparatus in the above embodiment of this specification can be implemented through the steps of the above method embodiment; therefore, the specific working process of the apparatus provided by this embodiment of the specification is not repeated here.
The apparatus for processing application access requests based on a container cluster provided by an embodiment of this specification can respond to a user's access request quickly, thereby improving the user experience.
Corresponding to the above method for processing application access requests based on a container cluster, an embodiment of this specification also provides an apparatus for processing application access requests based on a container cluster. The container cluster includes a master node and several computing nodes. The master node is used to manage the several computing nodes. A first application is deployed on some of the several computing nodes; the first application is a containerized application, and its corresponding container runs in a trusted memory area of the memory of those computing nodes. The apparatus is provided on any first computing node among the partial computing nodes and, as shown in FIG. 6, may include:
a receiving unit 602, configured to receive a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent by the master node when it receives a user's access request for the first application;
an obtaining unit 604, configured to obtain the usage amount of the trusted memory area of the first computing node;
the obtaining unit 604 being specifically configured to call the hardware interface of the trusted memory area to obtain the usage amount of the trusted memory area of the first computing node;
a sending unit 606, configured to return the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion including starting, in the trusted memory area of the memory of computing nodes other than the partial computing nodes, a newly created container corresponding to the first application, and so that the master node allocates the access request to the other computing nodes, which respond to the access request.
Optionally, the first computing node corresponds to the maximum remaining amount among the above remaining amounts, and the apparatus may further include a processing unit (not shown in the figure).
The receiving unit 602 is further configured to receive the access request for the first application sent by the master node.
The processing unit is configured to process the access request received by the receiving unit 602 and return the corresponding processing result to the master node.
Optionally, the apparatus may further include a running unit (not shown in the figure). The receiving unit 602 is further configured to receive the container image of the first application sent by the master node. The running unit is configured to run the container image in the trusted memory area of the first computing node to start the corresponding container of the first application.
The running unit is further configured to run the first application in the started corresponding container.
The functions of the functional modules of the apparatus in the above embodiment of this specification can be implemented through the steps of the above method embodiment; therefore, the specific working process of the apparatus provided by this embodiment of the specification is not repeated here.
The apparatus for processing application access requests based on a container cluster provided by an embodiment of this specification can respond to a user's access request quickly, thereby improving the user experience.
In another aspect, an embodiment of this specification provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute the method shown in FIG. 3 or FIG. 4.
In another aspect, an embodiment of this specification provides a computing device including a memory and a processor, the memory storing executable code; when the processor executes the executable code, the method shown in FIG. 3 or FIG. 4 is implemented.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiments are basically similar to the method embodiments, their description is relatively brief, and for relevant parts reference may be made to the description of the method embodiments.
The steps of the methods or algorithms described in connection with the disclosure of this specification can be implemented in hardware or by a processor executing software instructions. The software instructions can consist of corresponding software modules, which can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from and write information to the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a server. Of course, the processor and the storage medium may also exist as discrete components in a server.
Those skilled in the art should be aware that, in one or more of the above examples, the functions described in the present invention can be implemented by hardware, software, firmware, or any combination thereof. When implemented in software, these functions can be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and can still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
The specific implementations described above further explain the objectives, technical solutions, and beneficial effects of this specification in detail. It should be understood that the above are only specific implementations of this specification and are not intended to limit the protection scope of this specification. Any modification, equivalent replacement, improvement, or the like made on the basis of the technical solutions of this specification shall fall within the protection scope of this specification.

Claims (20)

  1. A method for processing application access requests based on a container cluster, the container cluster comprising a master node and several computing nodes; the master node being configured to manage the several computing nodes; a first application being deployed on some of the several computing nodes, the first application being a containerized application whose corresponding container runs in a trusted memory area of the memory of the partial computing nodes; the method being executed by the master node and comprising:
    receiving a user's access request for the first application;
    sending, to each of the partial computing nodes, a request for obtaining the usage amount of the trusted memory area;
    receiving the usage amount of the trusted memory area returned by each of the computing nodes;
    determining, based on the usage amounts, the remaining amount of the trusted memory area of each of the computing nodes;
    if the remaining amounts of the trusted memory areas of all of the computing nodes are less than a predetermined threshold, performing capacity expansion for the first application, the expansion comprising starting, in a trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application;
    allocating the access request to the other computing nodes, and having the other computing nodes respond to the access request.
  2. The method according to claim 1, wherein the usage amount of the trusted memory area of each computing node is obtained by that computing node by calling a hardware interface of the trusted memory area.
  3. The method according to claim 1, wherein the remaining amounts of the trusted memory areas of all of the computing nodes being less than the predetermined threshold comprises: the maximum remaining amount among the remaining amounts of the trusted memory areas of the computing nodes being less than the predetermined threshold.
  4. The method according to claim 1, further comprising:
    if the remaining amount of the trusted memory area of at least one of the computing nodes is not less than the predetermined threshold, using the computing node corresponding to the maximum remaining amount among the at least one computing node as the target computing node that responds to the access request, and sending the access request to the target computing node.
  5. The method according to claim 1, wherein the first application is deployed through the following steps:
    receiving an application deployment request, the application deployment request comprising a container image corresponding to the first application;
    selecting the partial computing nodes from the several computing nodes at least according to the resource occupancy of the several computing nodes;
    sending the container image to each of the partial computing nodes, so that each of the computing nodes starts the corresponding container of the first application by running the container image and runs the first application in the started corresponding container.
  6. A method for processing application access requests based on a container cluster, the container cluster comprising a master node and several computing nodes; the master node being configured to manage the several computing nodes; a first application being deployed on some of the several computing nodes, the first application being a containerized application whose corresponding container runs in a trusted memory area of the memory of the several computing nodes; the method being executed by any first computing node among the partial computing nodes and comprising:
    receiving a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent by the master node when it receives a user's access request for the first application;
    obtaining the usage amount of the trusted memory area of the first computing node;
    returning the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion comprising starting, in a trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application; and
    causing the master node to allocate the access request to the other computing nodes, the other computing nodes responding to the access request.
  7. The method according to claim 6, wherein obtaining the usage amount of the trusted memory area of the first computing node comprises:
    calling a hardware interface of the trusted memory area to obtain the usage amount of the trusted memory area of the first computing node.
  8. The method according to claim 6, wherein the first computing node corresponds to the maximum remaining amount among the remaining amounts; the method further comprising:
    receiving the access request for the first application sent by the master node;
    processing the access request and returning the corresponding processing result to the master node.
  9. The method according to claim 6, wherein the first application is deployed on the first computing node through the following steps:
    receiving the container image of the first application sent by the master node;
    running the container image in the trusted memory area of the first computing node to start the corresponding container of the first application;
    running the first application in the started corresponding container.
  10. An apparatus for processing application access requests based on a container cluster, the container cluster comprising a master node and several computing nodes; the master node being configured to manage the several computing nodes; a first application being deployed on some of the several computing nodes, the first application being a containerized application whose corresponding container runs in a trusted memory area of the memory of the partial computing nodes; the apparatus being provided on the master node and comprising:
    a receiving unit, configured to receive a user's access request for the first application;
    a sending unit, configured to send, to each of the partial computing nodes, a request for obtaining the usage amount of the trusted memory area;
    the receiving unit being further configured to receive the usage amount of the trusted memory area returned by each of the computing nodes;
    a determining unit, configured to determine, based on the usage amounts received by the receiving unit, the remaining amount of the trusted memory area of each of the computing nodes;
    a capacity expansion unit, configured to perform capacity expansion for the first application if the remaining amounts of the trusted memory areas of the computing nodes determined by the determining unit are all less than a predetermined threshold, the expansion comprising starting, in a trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application;
    an allocating unit, configured to allocate the access request received by the receiving unit to the other computing nodes, the other computing nodes responding to the access request.
  11. The apparatus according to claim 10, wherein the usage amount of the trusted memory area of each computing node is obtained by that computing node by calling a hardware interface of the trusted memory area.
  12. The apparatus according to claim 10, wherein the remaining amounts of the trusted memory areas of all of the computing nodes being less than the predetermined threshold comprises: the maximum remaining amount among the remaining amounts of the trusted memory areas of the computing nodes being less than the predetermined threshold.
  13. The apparatus according to claim 10, wherein
    the sending unit is further configured to, if the remaining amount of the trusted memory area of at least one of the computing nodes is not less than the predetermined threshold, use the computing node corresponding to the maximum remaining amount among the at least one computing node as the target computing node that responds to the access request, and send the access request to the target computing node.
  14. The apparatus according to claim 10, further comprising a selecting unit;
    the receiving unit being further configured to receive an application deployment request, the application deployment request comprising a container image corresponding to the first application;
    the selecting unit being configured to select the partial computing nodes from the several computing nodes at least according to the resource occupancy of the several computing nodes;
    the sending unit being further configured to send the container image to each of the partial computing nodes selected by the selecting unit, so that each of these computing nodes starts the corresponding container of the first application by running the container image and runs the first application in the started corresponding container.
  15. An apparatus for processing application access requests based on a container cluster, the container cluster comprising a master node and several computing nodes; the master node being configured to manage the several computing nodes; a first application being deployed on some of the several computing nodes, the first application being a containerized application whose corresponding container runs in a trusted memory area of the memory of the several computing nodes; the apparatus being provided on any first computing node among the partial computing nodes and comprising:
    a receiving unit, configured to receive a request, sent by the master node, for obtaining the usage amount of the trusted memory area, the request being sent by the master node when it receives a user's access request for the first application;
    an obtaining unit, configured to obtain the usage amount of the trusted memory area of the first computing node;
    a sending unit, configured to return the usage amount of the trusted memory area to the master node, so that the master node, when determining that the remaining amounts of the trusted memory areas of the partial computing nodes are all less than a predetermined threshold, performs capacity expansion for the first application, the expansion comprising starting, in a trusted memory area of the memory of other computing nodes than the partial computing nodes, a newly created container corresponding to the first application, and so that the master node allocates the access request to the other computing nodes, the other computing nodes responding to the access request.
  16. The apparatus according to claim 15, wherein the obtaining unit is specifically configured to:
    call a hardware interface of the trusted memory area to obtain the usage amount of the trusted memory area of the first computing node.
  17. The apparatus according to claim 15, wherein the first computing node corresponds to the maximum remaining amount among the remaining amounts; the apparatus further comprising a processing unit;
    the receiving unit being further configured to receive the access request for the first application sent by the master node;
    the processing unit being configured to process the access request received by the receiving unit and return the corresponding processing result to the master node.
  18. The apparatus according to claim 15, further comprising a running unit;
    the receiving unit being further configured to receive the container image of the first application sent by the master node;
    the running unit being configured to run the container image in the trusted memory area of the first computing node to start the corresponding container of the first application;
    the running unit being further configured to run the first application in the started corresponding container.
  19. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed in a computer, the computer is caused to execute the method of any one of claims 1-9.
  20. A computing device, comprising a memory and a processor, the memory storing executable code, wherein the processor, when executing the executable code, implements the method of any one of claims 1-9.
PCT/CN2021/092172 2020-05-09 2021-05-07 基于容器集群的应用访问请求处理 WO2021227954A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010384200.7 2020-05-09
CN202010384200.7A CN111290838B (zh) 2020-05-09 2020-05-09 基于容器集群的应用访问请求处理方法及装置

Publications (1)

Publication Number Publication Date
WO2021227954A1 true WO2021227954A1 (zh) 2021-11-18

Family

ID=71017389

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092172 WO2021227954A1 (zh) 2020-05-09 2021-05-07 基于容器集群的应用访问请求处理

Country Status (2)

Country Link
CN (1) CN111290838B (zh)
WO (1) WO2021227954A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114143315A (zh) * 2021-11-30 2022-03-04 阿里巴巴(中国)有限公司 边缘云系统、主机访问方法及设备
CN115269198A (zh) * 2022-08-10 2022-11-01 抖音视界有限公司 基于服务器集群的访问请求处理方法及相关设备
CN116055562A (zh) * 2022-10-26 2023-05-02 北京蔚领时代科技有限公司 一种云游戏存储空间自动扩容方法及装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111290838B (zh) * 2020-05-09 2020-08-18 支付宝(杭州)信息技术有限公司 基于容器集群的应用访问请求处理方法及装置
CN111831447B (zh) * 2020-07-16 2024-04-30 中国民航信息网络股份有限公司 一种基于性能监控的应用弹性扩容方法及装置
CN117130718A (zh) * 2022-05-18 2023-11-28 中兴通讯股份有限公司 内存管理方法、网络设备及计算机可读存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160026819A1 (en) * 2014-07-25 2016-01-28 Fiberlink Communications Corporation Use case driven granular application and browser data loss prevention controls
CN105933391A (zh) * 2016-04-11 2016-09-07 青岛海信传媒网络技术有限公司 一种节点扩容方法、装置及系统
CN106934303A (zh) * 2015-12-29 2017-07-07 大唐高鸿信安(浙江)信息科技有限公司 基于可信芯片的可信操作系统创建可信进程的系统及方法
CN107392011A (zh) * 2017-08-22 2017-11-24 致象尔微电子科技(上海)有限公司 一种内存页转移方法
CN107786358A (zh) * 2016-08-29 2018-03-09 中兴通讯股份有限公司 分布式系统及该分布式系统的扩容方法
CN108021823A (zh) * 2017-12-04 2018-05-11 北京元心科技有限公司 基于可信执行环境无痕运行应用程序的方法、装置和终端
CN110289982A (zh) * 2019-05-17 2019-09-27 平安科技(深圳)有限公司 容器应用的扩容方法、装置、计算机设备及存储介质
CN111290838A (zh) * 2020-05-09 2020-06-16 支付宝(杭州)信息技术有限公司 基于容器集群的应用访问请求处理方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015138245A1 (en) * 2014-03-08 2015-09-17 Datawise Systems, Inc. Methods and systems for converged networking and storage
CN108572867A (zh) * 2017-03-09 2018-09-25 株式会社日立制作所 为应用部署分布式容器集群且执行该应用的方法和装置
CN110782122B (zh) * 2019-09-16 2023-11-24 腾讯大地通途(北京)科技有限公司 数据处理方法、装置及电子设备

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160026819A1 (en) * 2014-07-25 2016-01-28 Fiberlink Communications Corporation Use case driven granular application and browser data loss prevention controls
CN106934303A (zh) * 2015-12-29 2017-07-07 大唐高鸿信安(浙江)信息科技有限公司 基于可信芯片的可信操作系统创建可信进程的系统及方法
CN105933391A (zh) * 2016-04-11 2016-09-07 青岛海信传媒网络技术有限公司 一种节点扩容方法、装置及系统
CN107786358A (zh) * 2016-08-29 2018-03-09 中兴通讯股份有限公司 分布式系统及该分布式系统的扩容方法
CN107392011A (zh) * 2017-08-22 2017-11-24 致象尔微电子科技(上海)有限公司 一种内存页转移方法
CN108021823A (zh) * 2017-12-04 2018-05-11 北京元心科技有限公司 基于可信执行环境无痕运行应用程序的方法、装置和终端
CN110289982A (zh) * 2019-05-17 2019-09-27 平安科技(深圳)有限公司 容器应用的扩容方法、装置、计算机设备及存储介质
CN111290838A (zh) * 2020-05-09 2020-06-16 支付宝(杭州)信息技术有限公司 基于容器集群的应用访问请求处理方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114143315A (zh) * 2021-11-30 2022-03-04 阿里巴巴(中国)有限公司 边缘云系统、主机访问方法及设备
CN115269198A (zh) * 2022-08-10 2022-11-01 抖音视界有限公司 基于服务器集群的访问请求处理方法及相关设备
CN116055562A (zh) * 2022-10-26 2023-05-02 北京蔚领时代科技有限公司 一种云游戏存储空间自动扩容方法及装置

Also Published As

Publication number Publication date
CN111290838A (zh) 2020-06-16
CN111290838B (zh) 2020-08-18

Similar Documents

Publication Publication Date Title
WO2021227954A1 (zh) 基于容器集群的应用访问请求处理
WO2018149221A1 (zh) 一种设备管理方法及网管系统
US10942795B1 (en) Serverless call distribution to utilize reserved capacity without inhibiting scaling
US10956185B2 (en) Threading as a service
US11188391B1 (en) Allocating resources to on-demand code executions under scarcity conditions
EP3073374B1 (en) Thread creation method, service request processing method and related device
JP5965552B2 (ja) バーチャルマシーンのホットマイグレーションを実現する方法、装置及びシステム
WO2015196931A1 (zh) 基于磁盘io的虚拟资源分配方法及装置
JP5510556B2 (ja) 仮想マシンのストレージスペースおよび物理ホストを管理するための方法およびシステム
CN109564528B (zh) 分布式计算中计算资源分配的系统和方法
JP2015144020A5 (zh)
CN113037794B (zh) 计算资源配置调度方法、装置及系统
JP2006178969A (ja) 動作不能なマスタ作業負荷管理プロセスを代替するシステムおよび方法
US11010190B2 (en) Methods, mediums, and systems for provisioning application services
WO2021227999A1 (zh) 云计算服务系统和方法
CN110750336B (zh) 一种OpenStack虚拟机内存热扩容方法
CN111061432B (zh) 一种业务迁移方法、装置、设备及可读存储介质
WO2018107945A1 (zh) 一种实现硬件资源分配的方法、装置及存储介质
CN107566470B (zh) 云数据系统中管理虚拟机的方法和装置
CN113382077A (zh) 微服务调度方法、装置、计算机设备和存储介质
WO2021013185A1 (zh) 虚机迁移处理及策略生成方法、装置、设备及存储介质
CN106844035B (zh) 一种实现云服务器资源释放或恢复的方法及装置
WO2019196926A1 (zh) 一种设备切片的处理方法、装置及计算机可读存储介质
CN116467066A (zh) 数仓资源调配方法、装置、电子设备以及存储介质
CN115220993A (zh) 进程监控方法、装置、车辆及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21802911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21802911

Country of ref document: EP

Kind code of ref document: A1