CN113656142A - Container group pod-based processing method, related system and storage medium - Google Patents


Publication number: CN113656142A
Authority: CN (China)
Prior art keywords: pod, function, service, micro, target
Legal status: Granted
Application number: CN202110808331.8A
Other languages: Chinese (zh)
Other versions: CN113656142B (en)
Inventors: 方振芳, 迟建春
Current Assignee: Huawei Technologies Co., Ltd.
Original Assignee: Huawei Technologies Co., Ltd.
Filing and legal events:
Application filed by Huawei Technologies Co., Ltd.
Priority to CN202110808331.8A (granted as CN113656142B)
Publication of CN113656142A
Priority to PCT/CN2022/104955 (WO2023284688A1)
Application granted; publication of CN113656142B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/455 — Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating-system execution engines
    • G06F 9/45533 — Hypervisors; virtual machine monitors
    • G06F 9/45558 — Hypervisor-specific management and integration aspects
    • G06F 2009/45562 — Creating, deleting, cloning virtual machine instances

Abstract

An embodiment of this application provides a processing method based on a container group (pod), a related system, and a storage medium. The method comprises the following steps: a first microservice receives a request to acquire a pod from a second microservice, where the request carries function metadata of a first function; according to the function metadata of the first function, the first microservice obtains, from the pods it manages, description information of a target pod, where the description information of the target pod matches the function metadata of the first function and the target pod is a pod in which a function has previously run; the first microservice returns the description information of the target pod to the second microservice in response to the request. With this method, a pod in which a function has already run is reused as the target pod, which saves the time of regenerating a pod and effectively reduces pod cold-start latency.

Description

Container group pod-based processing method, related system and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a processing method based on a container group pod, a related system, and a storage medium.
Background
Function as a Service (FaaS) is an event-driven compute execution model that implements serverless computing. It offers fully automatic, elastic, provider-managed horizontal scaling, which helps developers reduce operations and development costs. Developers only need to write a simple event-handling function to build their own service; everything else is handled by the platform. FaaS users do not need to think about scaling at all, so improving the agility of scaling has become one of the biggest technical challenges for a FaaS platform.
In industry practice of FaaS architectures, implementations based on the container cluster management system Kubernetes generally use the containers in a pod as the function execution environment. Pod start-up time is typically above 3 seconds, which seriously harms FaaS agility.
A common current approach is to create a generic pod pool holding pre-started generic pods. When a function request arrives, a pod-pool management microservice takes one pod from the pool and specializes it for the function, which then executes; the pod is deleted after the function finishes. The pod-pool management microservice reduces function cold-start latency by refilling the pool with new generic pods. However, under concurrent function starts the pre-started pods in the pool are exhausted quickly, and subsequent functions still go through the pod cold-start process, so the function cold-start latency problem is not well solved.
Disclosure of Invention
This application discloses a processing method based on a container group (pod), a related system, and a storage medium, which can effectively reduce pod cold-start latency.
In a first aspect, an embodiment of this application provides a processing method based on a container group (pod), comprising: a first microservice receives a request to acquire a pod from a second microservice, where the request carries function metadata of a first function; according to the function metadata of the first function, the first microservice obtains, from the pods it manages, description information of a target pod, where the description information of the target pod matches the function metadata of the first function and the target pod is a pod in which a function has previously run; the first microservice returns the description information of the target pod to the second microservice in response to the request.
A pod managed by the first microservice can be understood as a pod whose description information the first microservice is entitled to obtain; the first microservice can also clean the pod, call an interface to delete it, and so on.
In this embodiment, the first microservice obtains, according to the function metadata of the first function, the description information of a target pod that matches that metadata from the pods it manages, where the target pod is a pod in which a function has previously run. Compared with the prior art, which directly deletes a pod once its function has run, obtaining the target pod from such pods saves the time of regenerating a pod and effectively reduces pod cold-start latency.
In addition, this scheme can reduce the latency of concurrent function starts and control the total amount of system resources occupied. It is implemented as a general technique on top of an existing resource management platform, avoiding extra development and maintenance costs.
The description information of the target pod refers to static or dynamic information about the pod, such as the pod's resource specification and the manager it belongs to, for example the CPU size and memory size of the target pod; this embodiment does not specifically limit it.
The function metadata of the first function refers to data describing information such as the function's attributes and the resource specification needed to run it, for example the CPU size and memory size required by the first function.
That the description information of the target pod matches the function metadata of the first function can be understood as: the CPU size of the target pod matches the CPU size required by the first function, and the memory size of the target pod matches the memory size required by the first function. Here, "matches" can be understood as: the CPU size of the target pod is not smaller than the CPU size required by the first function, and the memory size of the target pod is not smaller than the memory size required by the first function.
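This matching rule can be sketched in a few lines of Python. The names (`PodInfo`, `FunctionMetadata`, `matches`, `pick_target_pod`) and the resource units are illustrative assumptions, not part of the application; the application only fixes the "not smaller than" comparison on CPU and memory.

```python
# Minimal sketch, assuming millicore/MiB units: a pod matches a function
# when its CPU and memory are each not smaller than what the function needs.
from dataclasses import dataclass

@dataclass
class PodInfo:
    cpu_millicores: int   # CPU size of the pod (part of its description info)
    memory_mib: int       # memory size of the pod

@dataclass
class FunctionMetadata:
    cpu_millicores: int   # CPU size required by the function
    memory_mib: int       # memory size required by the function

def matches(pod: PodInfo, fn: FunctionMetadata) -> bool:
    # "Not smaller than" on both dimensions.
    return (pod.cpu_millicores >= fn.cpu_millicores
            and pod.memory_mib >= fn.memory_mib)

def pick_target_pod(managed_pods, fn):
    # Return the description info of the first managed pod that matches,
    # or None so the caller can fall back to a cold start.
    for pod in managed_pods:
        if matches(pod, fn):
            return pod
    return None
```

A caller would pass the recycled pods managed by the first microservice as `managed_pods`; a `None` result means no recycled pod fits and a new pod must be generated.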
As an optional implementation, the target pod is obtained by the first microservice cleaning up, in a pod where a function has run, the function's residual temporary files and the residual data in memory.
Cleaning the function's residual temporary files and residual in-memory data in a pod that has already run a function allows the pod to be recycled: the next start does not need to generate a new pod and therefore needs no cold-start process, effectively reducing cold-start latency.
As an optional implementation, the method further includes: the first microservice cleans or deletes a pod in which a function has run according to at least one of the description information of that pod, the state of the first microservice, and the total system resource occupancy.
The scheme is not limited to cleaning or deleting such a pod according to only the above information; the decision may also be based on other information, which this scheme does not specifically limit.
The description information of a pod in which a function has run can be used to decide, by pod category, whether to clean it, specifically according to the pod's CPU threshold, memory threshold, the user (tenant) information to which the function belongs, and so on; other information may of course also be used, which this scheme does not specifically limit.
The state of the first microservice indicates at least one of the number and types of pods managed by the first microservice and the total resources occupied by those pods.
The total system resource occupancy indicates at least one of the CPU size and the memory size occupied by a function running system, where the function running system comprises the first microservice and the second microservice.
The function running system may further include other microservices; this scheme does not specifically limit it.
For example, the function running system may be a FaaS system implementing serverless computing, which may be FaaS based on the Kubernetes platform or on another platform. Of course, it may also be another system implementing serverless computing; this scheme does not specifically limit it.
An occupancy amount in this scheme may be an actual amount or a relative amount, for example expressed as a percentage; this scheme does not specifically limit it.
As an optional implementation, if a pod in which a function has run is not cleaned, a first interface is called to delete that pod. The first interface may be, for example, a Kubernetes interface.
As an optional implementation, when the total system resource occupancy is smaller than a first preset value and the number of pods obtained by cleaning in the first microservice is not larger than a second preset value, the first microservice cleans the function's residual temporary files and the residual in-memory data in the pod in which the function has run.
As an optional implementation, when the total system resource occupancy is greater than the first preset value, or the number of pods obtained by cleaning in the first microservice is greater than the second preset value, the first microservice calls the first interface to delete the pod in which the function has run.
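The two preset-value checks above can be folded into one decision function. This is a sketch under assumptions: the name `decide` and the string results are illustrative (the application fixes the conditions, not an API), and since the text leaves the exact-equality boundary unspecified, the sketch falls through to deletion there.

```python
# Minimal sketch of the clean-or-delete decision for a pod that has run
# a function, based on the two preset values described above.
def decide(total_occupancy: float, recycled_count: int,
           first_preset: float, second_preset: int) -> str:
    if total_occupancy < first_preset and recycled_count <= second_preset:
        # System has headroom and the recycled pool is not over-full:
        # clean the pod (residual temp files + in-memory data) and keep it.
        return "clean"
    # Occupancy too high or too many recycled pods already:
    # call the first interface (e.g. a Kubernetes delete) to remove the pod.
    return "delete"
```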
As an optional implementation, when the survival duration of the target pod is reached, the first microservice calls the first interface to delete the target pod, where the survival duration of the target pod is related to at least one of the description information of the pod in which the function has run, the state of the first microservice, and the total system resource occupancy.
Deleting the pod when its survival duration is reached further prevents idle pods from wasting resources.
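A sketch of this survival-duration (time-to-live) rule follows. The class and function names are assumptions, and `delete_via_first_interface` stands in for the real first interface (e.g. a Kubernetes API call); the application only specifies that an expired pod is deleted.

```python
# Minimal sketch: drop recycled pods whose survival duration has elapsed.
class RecycledPod:
    def __init__(self, name: str, ttl_seconds: float, now: float):
        self.name = name
        # The TTL may be derived from the pod's description info, the
        # microservice's state, or the total resource occupancy.
        self.expires_at = now + ttl_seconds

def reap_expired(pods, now, delete_via_first_interface):
    kept = []
    for pod in pods:
        if now >= pod.expires_at:
            delete_via_first_interface(pod.name)  # survival duration reached
        else:
            kept.append(pod)                      # still alive: keep managing
    return kept
```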
As an optional implementation, the target pod stores the program code of the first function, so that the target pod can run the first function, i.e., execute the program code of the first function.
This saves the time of downloading the program code (the function package), further saving pod specialization time and effectively improving pod business efficiency.
It should be understood that the program code of the first function refers to code implementing the functionality of the first function; that is, any code usable to implement the first function's functionality can be considered its program code, and this application does not limit its specific content or implementation. For example, it may be the source code of the first function, or the source code of the first function exposed in the form of an interface.
As an optional implementation, the target pod keeps alive the network connection with a first pod.
Here, the first pod previously established a network connection with the pod in which the function ran. With this measure, when communication with a third microservice in the first pod is needed, it can proceed directly without re-establishing the network connection, effectively improving business processing efficiency. For example, the third microservice may be a function repository microservice: the target pod communicates with the function repository microservice in the first pod to download a function's program code, and so on.
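The connection-keeping idea above amounts to caching an established connection instead of re-dialing it. The following is a minimal sketch under assumptions: `ConnectionCache`, `dial`, and the address format are illustrative, not from the application.

```python
# Minimal sketch: reuse the kept-alive connection to the first pod
# (e.g. to reach the function repository microservice) instead of
# re-establishing it on every call.
class ConnectionCache:
    def __init__(self, dial):
        self._dial = dial     # dial(address) -> connection object (assumed)
        self._conns = {}

    def get(self, address: str):
        # Cache hit: the earlier connection is still alive, reuse it.
        conn = self._conns.get(address)
        if conn is None:
            # Cache miss: pay the connection-setup cost once.
            conn = self._dial(address)
            self._conns[address] = conn
        return conn
```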
In a second aspect, an embodiment of this application provides a pod processing apparatus, comprising: a receiving module, configured to receive a request to acquire a pod from a second microservice, where the request carries function metadata of a first function; an obtaining module, configured to obtain, from the pods managed by the apparatus and according to the function metadata of the first function, description information of a target pod, where the description information of the target pod matches the function metadata of the first function and the target pod is a pod in which a function has previously run; and a sending module, configured to return the description information of the target pod to the second microservice in response to the request.
In this embodiment, the first microservice obtains, according to the function metadata of the first function, the description information of a target pod that matches that metadata from the pods it manages, where the target pod is a pod in which a function has previously run. Compared with the prior art, which directly deletes a pod once its function has run, obtaining the target pod from such pods saves the time of regenerating a pod and effectively reduces pod cold-start latency.
In addition, this scheme can reduce the latency of concurrent function starts and control the total amount of system resources occupied. It is implemented as a general technique on top of an existing resource management platform, avoiding extra development and maintenance costs.
The description information of the target pod refers to static or dynamic information about the pod, such as the pod's resource specification and the manager it belongs to, for example the CPU size and memory size of the target pod; this embodiment does not specifically limit it.
The function metadata of the first function refers to data describing information such as the function's attributes and the resource specification needed to run it, for example the CPU size and memory size required by the first function.
That the description information of the target pod matches the function metadata of the first function can be understood as: the CPU size of the target pod matches the CPU size required by the first function, and the memory size of the target pod matches the memory size required by the first function. Here, "matches" can be understood as: the CPU size of the target pod is not smaller than the CPU size required by the first function, and the memory size of the target pod is not smaller than the memory size required by the first function.
As an optional implementation, the target pod is obtained by cleaning up, in a pod where a function has run, the function's residual temporary files and the residual data in memory.
Cleaning these residues allows the pod to be recycled: the next start does not need to generate a new pod and therefore needs no cold-start process, effectively reducing cold-start latency.
As an optional implementation, the apparatus further includes a first processing module, configured to clean or delete a pod in which a function has run according to at least one of the description information of that pod, the state of the apparatus, and the total system resource occupancy.
As an optional implementation, the apparatus further includes a cleaning module, configured to: when the total system resource occupancy is smaller than a first preset value and the number of pods obtained by cleaning in the apparatus is not larger than a second preset value, clean the function's residual temporary files and the residual in-memory data in the pod in which the function has run.
As an optional implementation, when the total system resource occupancy is greater than the first preset value, or the number of pods obtained by cleaning in the apparatus is greater than the second preset value, the first interface is called to delete the pod in which the function has run.
As an optional implementation, the apparatus further includes a second processing module, configured to: when the survival duration of the target pod is reached, call a first interface to delete the target pod, where the survival duration of the target pod is related to at least one of the description information of the pod in which the function has run, the state of the apparatus, and the total system resource occupancy.
As an alternative implementation manner, the target pod stores therein the program code of the first function, so that the target pod runs the first function.
As an alternative implementation, the target pod maintains access to the network connection with the first pod.
In a third aspect, an embodiment of this application provides a processing method based on a container group (pod), comprising: a first microservice receives a request to acquire a pod from a second microservice, where the request carries function metadata of a first function; according to the function metadata of the first function, the first microservice obtains, from the pods it manages, description information of a target pod, where the description information of the target pod matches the function metadata of the first function and the target pod is a pod in which a function has previously run; the first microservice returns the network address in the description information of the target pod to the second microservice in response to the request.
Here, the first microservice of the third aspect may include the first microservice and the second microservice of the first aspect.
In this embodiment, the first microservice obtains, according to the function metadata of the first function, the description information of a target pod that matches that metadata from the pods it manages, where the target pod is a pod in which a function has previously run. Compared with the prior art, which directly deletes a pod once its function has run, obtaining the target pod from such pods saves the time of regenerating a pod and effectively reduces pod cold-start latency.
In a fourth aspect, an embodiment of the present application provides a processing apparatus based on a container group pod, including a processor and a memory; wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method as provided in any one of the possible embodiments of the first aspect and/or the method as provided in any one of the possible embodiments of the third aspect.
In a fifth aspect, the present application provides a computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method as provided in any of the possible embodiments of the first aspect and/or the method as provided in any of the possible embodiments of the third aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform the method as provided in any of the possible embodiments of the first aspect and/or the method as provided in any of the possible embodiments of the third aspect.
It will be appreciated that the apparatus of the second aspect, the apparatus of the fourth aspect, the computer storage medium of the fifth aspect, and the computer program product of the sixth aspect provided above are all configured to perform the method provided in any possible implementation of the first aspect and/or the method provided in any possible implementation of the third aspect. Therefore, for the beneficial effects they achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1a is a schematic view of a pod according to an embodiment of this application;
FIG. 1b is a schematic view of a pod according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a processing method based on a container group pod according to an embodiment of this application;
FIG. 3 is a schematic diagram of a processing method based on a container group pod according to an embodiment of this application;
FIG. 4 is a schematic diagram of a processing method based on a container group pod according to an embodiment of this application;
FIG. 5 is a schematic diagram of a processing method based on a container group pod according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a pod processing apparatus according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a pod processing apparatus according to an embodiment of this application.
Detailed Description
The embodiments of the present application are described below with reference to the drawings. The terminology used in describing the embodiments is intended only to describe particular embodiments and is not intended to limit the application.
Referring to FIG. 1a, a schematic view of a container group (pod) according to an embodiment of this application is shown. The container cluster management system Kubernetes can build container deployment services. In general, a container is an operating-system-level virtualization technology in which different processes are isolated through operating-system isolation mechanisms. Unlike hardware virtualization, container technology provides no virtual hardware; there is no operating system inside a container, only processes.
A pod is the basic deployment unit of the Kubernetes platform; one pod is composed of a group of containers working on the same node. A pod is a container group that encapsulates storage resources (volumes), an independent network IP, and container management policies.
A pod may include multiple containers and logically identifies an instance of an application. For example, a web application composed of three components, a front end, a back end, and a database, each running in its own container, may correspond to one pod containing those three containers.
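The three-container web application example can be sketched as a minimal pod description, written here as a Python dict mirroring the shape of a Kubernetes pod manifest; the pod and image names are placeholders, not from the application.

```python
# Minimal sketch, assuming placeholder image names: one pod holding the
# front end, back end, and database containers of the web application.
web_app_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-app"},
    "spec": {
        "containers": [
            {"name": "frontend", "image": "example/frontend:latest"},
            {"name": "backend",  "image": "example/backend:latest"},
            {"name": "database", "image": "example/database:latest"},
        ]
    },
}
```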
Referring to FIG. 1b, a pod may include: a network namespace, mounted volumes, a CPU declaration (CPU contracts), a memory declaration (memory contracts), temporary files, in-memory data, and connections with system services.
The network namespace is a key mechanism of network virtualization: it can create multiple isolated network spaces, each with its own independent network-stack information.
A mounted volume is a volume mounted into the pod, where the volume is an attribute of the pod resource.
Connections with system services are, for example, a connection with the function repository microservice.
A pod may further include other information; this embodiment does not specifically limit it.
It should be noted that specializing a pod (its business processing) in the embodiments of this application can be understood as: turning a generic pod into the execution environment of a specific function, including downloading and loading the function, injecting function environment variables, and similar operations. For example, the pod downloads the function package (the function's code/program code), loads it into memory, and injects the function environment variables into the function's execution process, so that the pod serves as the function's execution environment.
Semi-specializing a pod differs from full specialization in that part of the process is skipped. For example, the pod directly reuses the already-downloaded function package instead of downloading it again, loads the local package into memory, and injects the function environment variables into the function's running process, thereby serving as the function's execution environment.
It should be noted that specialization may further include setting labels on the pod; this scheme does not specifically limit it.
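The contrast between full specialization and semi-specialization can be sketched as follows. The function names and the `download`/`load`/`inject` callbacks are illustrative stand-ins for the real steps, not an API from the application.

```python
# Minimal sketch: semi-specialization reuses the stored function package
# and skips the download step of full specialization.
def specialize(pod, function_id, download, load, inject):
    # Full specialization: download the function package, load it into
    # memory, then inject the function environment variables.
    pod["package"] = download(function_id)
    load(pod, pod["package"])
    inject(pod, function_id)

def semi_specialize(pod, function_id, load, inject):
    # Semi-specialization: the package is already stored in the pod,
    # so only load it and inject the environment variables.
    load(pod, pod["package"])
    inject(pod, function_id)
```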
Fig. 2 is a schematic flowchart of a processing method based on a container group pod according to an embodiment of this application. The method comprises the following steps 201-203:
201. A first microservice receives a request to acquire a pod from a second microservice, where the request carries function metadata of a first function.
The first microservice may be a pod-pool management microservice, which is used to manage pods.
A pod managed by the first microservice can be understood as a pod whose description information the first microservice is entitled to obtain; the first microservice can also clean the pod, call an interface to delete it, and so on. The first microservice may manage multiple pods.
The second microservice may be a function instance management microservice (Worker Manager Service), which manages the creation of function instances and the like.
The first function may be any function within FaaS.
The function metadata of the first function refers to data describing information such as the function's attributes and the resource specification needed to run it, for example the CPU size and memory size required by the first function.
202. According to the function metadata of the first function, the first microservice obtains, from the pods it manages, description information of a target pod, where the description information of the target pod matches the function metadata of the first function and the target pod is a pod in which a function has previously run.
The description information of the target pod is, for example, information such as the CPU size and memory size of the target pod.
That the description information of the target pod matches the function metadata of the first function can be understood as: the CPU size of the target pod matches the CPU size required by the first function, and the memory size of the target pod matches the memory size required by the first function. Here, "matches" can be understood as: the CPU size of the target pod is not smaller than the CPU size required by the first function, and the memory size of the target pod is not smaller than the memory size required by the first function.
The above is only an example; other information may also be matched, and this scheme is not limited in this respect.
As an optional implementation, the first microservice manages both generic pods and recycled pods.
A generic pod can be understood as a pod that has not been specialized.
A recycled pod can be understood as a pod obtained by cleaning a pod in which a function has run.
The target pod in this scheme is obtained from the recycled pods.
As an optional implementation, the target pod is obtained by the first micro service cleaning up the residual temporary files of the function and the residual data in memory of a pod that has run a function.
By cleaning up the residual temporary files and the residual in-memory data of a pod that has run a function, the pod can be reused: the next time a pod is needed, a new pod does not have to be generated, the cold-start process is skipped, and the cold-start latency is effectively reduced.
Specifically, after the pod has run the function, the first micro service restarts the runtime container in that pod to clear the data, residual files, and the like in the pod's memory; residual temporary files of the function, such as log files in the mounted file system, are also cleaned up.
The above is only an example; the pod that has run a function may of course be processed in other ways, and the present solution is not limited in this respect.
For example, in addition to cleaning up the residual temporary files and the residual in-memory data, the network connections in the pod may be cleaned up, and the program code of the function stored in the pod may be removed. This approach saves resources.
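A minimal simulation of the cleanup just described, under illustrative assumptions: the runtime-container restart is modeled as dropping in-memory state, residual log files are removed, and the downloaded function package is retained for reuse (the default case in this solution):

```python
# Hypothetical sketch of recycling a pod that has run a function:
# in-memory residual data is dropped (modeling the runtime-container restart),
# residual temporary files such as logs are removed, while the downloaded
# function package and existing network connections are retained.

def recycle(pod: dict) -> dict:
    pod["memory_state"] = {}                      # restart runtime container
    pod["files"] = [f for f in pod["files"]
                    if not f.endswith(".log")]    # clear residual temp files
    # network connections and function packages are intentionally kept
    return pod

used = {"memory_state": {"tmp": 1},
        "files": ["func1.zip", "run.log"],
        "network_connections": ["first-pod"]}
clean = recycle(used)
print(clean["files"])  # ['func1.zip'] -- function package retained
```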
As an optional implementation, the first micro service decides whether to clean up a pod that has run a function according to at least one of: the description information of that pod, the state of the first micro service, and the total resource occupation of the system.
The first micro service of this solution can thus decide to reuse or destroy a pod, regulating the total number of recycled pods so that the total resource occupation of the system stays below a preset value and resource waste is avoided.
The state of the first micro service indicates at least one of the number and types of the pods managed by the first micro service and the total resources occupied by those pods. The total resource occupation of the system indicates at least one of the CPU size and memory size occupied by the function operating system, where the function operating system includes the first micro service and the second micro service.
The description information of the pod that has run a function can be used to decide whether to perform cleanup based on the category of the pod. Specifically, the decision may rely on the pod's CPU threshold, memory threshold, the user (tenant) information to which the function belongs, and so on; other information may of course also be included, and this embodiment does not specifically limit it.
The state of the first micro service may include information such as the number of pods of each type in the first micro service and the total resources they occupy.
The total resource occupation information of the system may be the total resource occupation information of Kubernetes.
The decision may be made according to any one, any two, or all three of the description information of the pod that has run a function, the state of the first micro service, and the total resource occupation of the system; this solution does not specifically limit it.
Of course, the decision may also be made based on other information. For example, based on a preset policy, if the number of pods in the first micro service has reached a preset value, no further recycling is performed.
If a pod that has run a function does not need to be cleaned, it is deleted (released), for example by calling a first interface to delete the pod. The first interface may be a Kubernetes interface or the like.
As an optional implementation, when the total resource occupation of the system is greater than a first preset value, or the number of cleaned pods in the first micro service is greater than a second preset value, the first micro service calls the first interface to delete the pod that has run the function.
As an optional implementation, when the survival duration of the target pod is reached, the first micro service calls the first interface to delete the target pod, where the survival duration of the target pod is related to at least one of the description information of the pod that has run a function, the state of the first micro service, and the total resource occupation of the system.
That is, the survival duration of the target pod is derived from at least one of these items. If the pod remains unused beyond its survival duration, it is deleted, preventing idle pods from wasting resources.
The survival duration may be, for example, 300 ms. Of course, other durations are possible, and this solution is not specifically limited in this respect.
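The survival-duration mechanism can be sketched as follows, under illustrative assumptions: each recycled pod records when it was cleaned, and pods whose survival duration has elapsed without reuse are removed (standing in for the first-interface deletion). The 300 ms figure follows the example in the text:

```python
# Hypothetical sketch of survival-duration expiry for recycled pods.

SURVIVAL_MS = 300  # example survival duration from the text

def expire(lazy_pool, now_ms):
    """Return pods still alive; expired pods would be deleted via the first interface."""
    return [p for p in lazy_pool if now_ms - p["cleaned_at_ms"] < SURVIVAL_MS]

pool = [{"name": "pod-a", "cleaned_at_ms": 0},
        {"name": "pod-b", "cleaned_at_ms": 250}]
alive = expire(pool, now_ms=300)
print([p["name"] for p in alive])  # ['pod-b']
```

In a real deployment the duration would be derived from the pod description, micro service state, or system resource occupation, as the text states.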
As an alternative implementation, the target pod stores the program code of the first function, so that the target pod can run the first function.
The program code of the first function refers to the code that implements the functionality of the first function; that is, any code that can be used to implement the functionality of the first function may be considered program code of the first function, and the present application does not limit its specific content or implementation. For example, it may be the source code of the first function, or the source code of the first function indicated in the form of an interface.
In particular, the container process in the pod executes the program code of the first function.
This can be understood as follows: the pod that previously ran the function stores the downloaded program code of the first function, and when that pod is cleaned, the downloaded program code of the first function is not removed, so it is retained in the target pod obtained after cleaning.
By retaining the downloaded program code of a function when cleaning a pod that has run that function, the next time the cleaned pod is used, the download time of the function package is saved and the service-processing time of the pod is reduced. With this approach, when a function applies for resources, a recycled pod is provided, and only the semi-specialization flow is needed, reusing the resources generated during the pod's first specialization; this shortens the time for the pod to become the function's execution environment.
As an alternative implementation, the first micro service obtains, from the pods it manages, the pods whose description information matches the function metadata of the first function, and among them determines the pod that holds the downloaded program code of the first function, thereby determining the target pod.
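The selection rule just described — match on metadata first, then prefer a pod that already holds the function's downloaded code — can be sketched as follows (field names are illustrative assumptions):

```python
# Hypothetical sketch of target-pod selection: among managed recycled pods
# whose description information matches the function metadata, prefer one
# that already holds the downloaded program code of the first function.

def pick_target(pods, func_meta):
    candidates = [p for p in pods
                  if p["cpu"] >= func_meta["cpu"]
                  and p["memory"] >= func_meta["memory"]]
    with_code = [p for p in candidates if func_meta["name"] in p["packages"]]
    return (with_code or candidates or [None])[0]

pods = [{"name": "pod-a", "cpu": 500, "memory": 256, "packages": []},
        {"name": "pod-b", "cpu": 500, "memory": 256, "packages": ["f1"]}]
target = pick_target(pods, {"name": "f1", "cpu": 250, "memory": 128})
print(target["name"])  # 'pod-b' -- avoids re-downloading the function package
```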
As an alternative implementation, the target pod keeps its network connection with the first pod accessible.
Here, the first pod previously established a network connection with the pod that ran the function. With this measure, when the third micro service in the first pod needs to communicate, it can do so directly without re-establishing the network connection, which effectively improves service-processing efficiency. For example, the third micro service may be a function repository micro service; the program code of a function is downloaded by communicating with the function repository micro service in the first pod.
The above is only an example; other information may also be retained, and this solution is not specifically limited in this respect.
203. The first micro service returns the description information of the target pod to the second micro service in response to the request.
After the first micro service determines the target pod, it returns the description information of the target pod, so that the second micro service can manage the target pod based on that description information and the target pod can perform service processing.
In the embodiment of the application, the first micro service obtains, according to the function metadata of the first function, the description information of a target pod that matches that metadata from the pods it manages, where the target pod is a pod that has previously run a function. With this approach, the target pod is obtained from a pod that has already run a function; compared with the prior art, in which a pod that has run a function is deleted directly, this saves the time of regenerating a pod and effectively reduces the pod's cold-start latency.
On the other hand, this solution can reduce the latency of concurrent function starts and control the total resource occupation of the system. The solution is a general technique implemented on top of an existing resource management platform, avoiding extra development and maintenance costs.
Fig. 3 is a schematic diagram of a processing method based on a container group pod according to an embodiment of the present disclosure. As shown in fig. 3, when a user sends a trigger request for function 1, the pod pool management micro service obtains the description information of a target pod from the native pool; the pod then downloads the function package of function 1 and is specialized into the execution environment corresponding to function 1 based on the function environment variables. After function 1 finishes executing, the pod pool management micro service decides whether to recycle the pod. If the pod is recycled, it is cleaned; by default, the recycled pod is placed in the delayed-release resource pool (lazy pool).
When a trigger request for function 1 is received again, the description information of the target pod is obtained from the lazy pool, because a recycled pod exists. Since the function package of function 1 is retained in the pod, it does not need to be downloaded again; only semi-specialization is performed, and the pod is specialized into the execution environment corresponding to function 1 based on the function environment variables.
After function 1 finishes executing, the pod pool management micro service again decides whether to recycle the pod. If the pod does not need to be recycled, the pod pool management micro service releases it.
The above is only an example; if the function package of function 1 is not stored in the pod obtained from the lazy pool, the pod downloads the function package and then triggers the subsequent service processing.
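The Fig. 3 lifecycle can be sketched end-to-end as a small simulation, under illustrative assumptions (pool contents, step names, and the class `PodPool` are not from the patent): a pod is taken from the native pool on the first trigger and fully specialized, recycled into the lazy pool after execution, and reused with only semi-specialization on the next trigger:

```python
# Hypothetical sketch of the Fig. 3 pod lifecycle: native pool -> specialize
# -> execute -> recycle into lazy pool -> semi-specialize on reuse.

class PodPool:
    def __init__(self):
        self.native = [{"name": "pod-1", "packages": set()}]
        self.lazy = []      # delayed-release resource pool of recycled pods

    def acquire(self, func):
        pod = self.lazy.pop() if self.lazy else self.native.pop()
        if func in pod["packages"]:
            steps = ["semi-specialize"]           # package already present
        else:
            pod["packages"].add(func)             # download is simulated
            steps = ["download-package", "specialize"]
        return pod, steps

    def release(self, pod, recycle=True):
        if recycle:
            self.lazy.append(pod)                 # cleaned, package retained

pool = PodPool()
pod, steps = pool.acquire("func1")
print(steps)   # ['download-package', 'specialize']  (cold path)
pool.release(pod)
pod, steps = pool.acquire("func1")
print(steps)   # ['semi-specialize']                 (warm path)
```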
Fig. 4 is a schematic flow chart of a pod processing method according to an embodiment of the present disclosure. The method includes steps 401-410, as follows:
401. The second micro service sends a request to the first micro service for obtaining a pod, where the request carries the function metadata of the first function;
the second micro service may be a function instance management micro service (Worker Manager Service).
The function metadata of the first function may be the CPU size, memory size, and so on required by the first function.
The second micro service sends the request for obtaining a pod to the first micro service so that the first function can be executed on the pod provided by the first micro service.
The first micro service may be a pod pool management micro service.
402. The first micro service receives the request sent by the second micro service and obtains the description information of a first pod from the pods it manages according to the function metadata of the first function;
the first micro service manages a number of native pods, so after receiving the request it can obtain a pod directly from the pods it manages, saving the cold-start latency.
After receiving the request sent by the second micro service, the first micro service obtains the description information of the first pod, where the description information of the first pod matches the function metadata of the first function.
The description information of the first pod is, for example, the CPU size, memory size, and so on of the pod.
That the description information of the first pod matches the function metadata of the first function means, for example, that the CPU size and memory size of the first pod are both larger than those required by the first function.
When multiple pods match the function metadata of the first function, any one of them may be selected.
Of course, an optimal pod may also be selected; the optimal pod may be one whose CPU size and memory size are equal to those required by the first function. This solution does not specifically limit it.
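One way to realize the "optimal pod" idea above is a best-fit choice: among matching pods, pick the one with the smallest resource surplus over the requirement, so that an exact-size pod wins. This is an illustrative sketch, not a rule fixed by the patent:

```python
# Hypothetical best-fit selection among pods that match the function metadata:
# the optimal pod is the matching pod with the smallest surplus, with an
# exact match (equal CPU and memory) being the ideal case.

def pick_optimal(pods, func_meta):
    fits = [p for p in pods
            if p["cpu"] >= func_meta["cpu"] and p["memory"] >= func_meta["memory"]]
    return min(fits,
               key=lambda p: (p["cpu"] - func_meta["cpu"])
                             + (p["memory"] - func_meta["memory"]),
               default=None)

pods = [{"name": "big", "cpu": 1000, "memory": 512},
        {"name": "snug", "cpu": 250, "memory": 128}]
best = pick_optimal(pods, {"cpu": 250, "memory": 128})
print(best["name"])  # 'snug'
```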
403. The first micro service sends the description information of the first pod to the second micro service;
404. the second micro service receives the description information of the first pod from the first micro service and then manages the first pod so that the pod can perform service processing;
after the first micro service determines the first pod, it returns the description information of the first pod to the second micro service.
The second micro service sends to the first pod the information needed for specialization (specialize), including, for example, function information and function environment variables.
The first pod downloads the function package of the first function based on the received function information; the sidecar container in the first pod then sends the function package and the function environment variables of the first function to the runtime container, where the runtime container is the container in which the function runs. The runtime container loads the function package of the first function into memory and loads the function environment variables, so that the first pod becomes the execution environment of the first function.
The second micro service then modifies the description information of the first pod, for example by tagging it with a label of the first function, to facilitate management of the first pod.
After the second micro service receives the description information of the first pod, the method further includes:
the second micro service sends the network address of the first pod to a third micro service.
The third micro service may be a front-end micro service (Frontend Service). Having determined that no instance of function 1 exists in the system, the third micro service sends a request for obtaining a pod to the second micro service based on the trigger event request of function 1 sent by the user, in order to deploy an instance of function 1. An event is an input value of a function in the FaaS system; the event triggers the function to start executing.
The third micro service then sends a request to execute the function to the first pod based on the network address of the first pod returned by the second micro service.
After the first pod receives the request to execute the function, it starts a process to execute it. After execution finishes, it returns the execution result to the third micro service, and the third micro service returns the execution result to the user.
405. After the function finishes executing, the first micro service determines whether to recycle the first pod;
the first micro service may decide whether to recycle according to a preset control policy.
The preset control policy may be, for example: when the total resource occupation of the system is less than 80% and the number of recycled pods with the same type specification as the first pod is less than 5, recycle the current pod.
Of course, other control policies may be used, or the decision may be based on other information; this embodiment does not specifically limit it.
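The example control policy above can be written down directly; how system usage is measured and how same-specification pods are counted are assumptions for illustration:

```python
# Sketch of the example control policy: recycle the current pod only when
# total system resource occupation is below 80% and fewer than 5 recycled
# pods of the same type specification already exist.

def recycle_decision(system_usage: float, recycled_same_spec: int) -> bool:
    return system_usage < 0.80 and recycled_same_spec < 5

print(recycle_decision(0.60, 3))  # True  -> recycle (clean) the current pod
print(recycle_decision(0.90, 3))  # False -> release it instead
print(recycle_decision(0.60, 5))  # False -> pool for this spec is full
```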
406. If it is decided to recycle the first pod, the first micro service cleans the first pod that has executed the function;
specifically, a cleanup operation may be performed on the first pod that has executed the function, including: restarting the runtime container in the first pod to clear the data and residual files in memory; and clearing the residual temporary files of the function by clearing specified paths, e.g., clearing the log files in the mounted file system.
Further, a survival duration is set for the cleaned pod; for example, it may be 300 s, and if the cleaned pod remains unused for more than 300 s, it is destroyed.
If the first pod is not recycled, the first micro service releases the first pod that has executed the function, e.g., deletes it.
407. The second micro service sends a request to the first micro service for obtaining a pod, where the request carries the function metadata of the first function;
408. the first micro service receives the request sent by the second micro service and obtains the description information of a target pod from the pods it manages according to the function metadata of the first function, where the description information of the target pod matches the function metadata of the first function, and the target pod is a pod that has previously run a function;
after receiving the request sent by the second micro service, the first micro service obtains the target pod from the recycled pods it manages.
409. The first micro service returns the description information of the target pod to the second micro service in response to the request;
410. the second micro service receives the description information of the target pod from the first micro service and manages the target pod based on that description information, so that the target pod can perform semi-specialization processing;
based on the description information, the second micro service sends to the target pod the information needed for semi-specialization (semi-specialize), including, for example, function information and function environment variables. The sidecar container in the target pod sends the function information and the function environment variables to the runtime container, where the runtime container is the container in which the function runs. The runtime container loads the function information into memory and loads the function environment variables.
The second micro service then modifies the description information of the target pod, for example by tagging it with a label of the first function, in order to manage the target pod.
After the second micro service receives the description information of the target pod, the method further includes:
the second micro service sends the network address of the target pod to a third micro service.
The third micro service then sends a request to execute the function to the target pod based on the network address of the target pod returned by the second micro service.
After the target pod receives the request to execute the function, it starts a process to execute it. After execution finishes, it returns the execution result to the third micro service, and the third micro service returns the execution result to the user.
After the function finishes executing, the first micro service determines whether to recycle the target pod.
In the embodiment of the application, the first micro service cleans the pod that has run a function and then, when a pod is needed, obtains from the pods it manages the target pod whose description information matches the function metadata of the first function, where the target pod stores the program code of the first function. With this approach, the target pod is obtained from a pod that has already run a function, saving the time of regenerating a pod and effectively reducing the pod's cold-start latency; moreover, the downloaded program code of the first function is retained in the target pod, which further saves the function-package download time and effectively improves the efficiency of the pod's service processing.
Fig. 5 is a schematic view of a pod processing method according to an embodiment of the present disclosure. The method includes steps 501-508, as follows:
501. The front-end micro service receives a trigger request for a first function sent by a user, where the request carries the function metadata of the first function;
for example, when a user wants to trigger some event, the user sends a request to the front-end micro service.
502. The front-end micro service sends a request for obtaining a pod to the function instance management micro service;
the front-end micro service queries the system, finds that no instance of the first function exists, and requests the function instance management micro service to deploy an instance (worker) of the first function.
503. The function instance management micro service sends a request for obtaining a pod to the pod pool management micro service according to the request;
504. the pod pool management micro service determines a target pod and returns the description information of the target pod to the function instance management micro service;
the pod pool management micro service determines the target pod from the pods obtained by cleaning pods that have executed functions, where the description information of the target pod matches the function metadata of the first function.
505. The function instance management micro service receives the description information of the target pod from the pod pool management micro service and manages the target pod so that the target pod is operated as the execution environment of the first function;
the function instance management micro service further sends the network address of the target pod to the front-end micro service, so that the front-end micro service can send a request to execute the first function to the target pod.
506. After receiving the request to execute the first function sent by the front-end micro service, the specialized pod starts a process to execute the first function.
507. After the function finishes executing, the target pod returns the function execution result to the front-end micro service, and the front-end micro service returns the result to the user;
508. the pod pool management micro service processes the target pod.
The function execution system in this embodiment includes the front-end micro service, the pod pool management micro service, and the function instance management micro service.
With this approach, the target pod is obtained from a pod produced by cleaning a pod that has run a function, saving the time of regenerating a pod and effectively reducing the pod's cold-start latency; and because the downloaded program code of the first function is retained in the target pod, the function-package download time is further saved and the pod's service-processing time is effectively reduced.
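The message flow among the three micro services in steps 501-507 can be sketched as a chain of calls; every function name, address, and return value here is an illustrative assumption standing in for a real RPC hop:

```python
# Hypothetical sketch of the Fig. 5 message flow: frontend -> function
# instance manager -> pod pool manager -> target pod -> result back to user.

def pod_pool_manager(func_meta):
    # step 504: pick a recycled pod whose description matches the metadata
    return {"name": "pod-x", "address": "10.0.0.7", "packages": {func_meta["name"]}}

def worker_manager(func_meta):
    # steps 503-505: obtain the pod, manage it, hand its address to the frontend
    pod = pod_pool_manager(func_meta)
    return pod["address"]

def frontend(request):
    # steps 501-502 and 506-507: trigger deployment, then run the function
    addr = worker_manager(request["meta"])
    return {"executed_at": addr, "result": request["meta"]["name"] + " done"}

reply = frontend({"meta": {"name": "func1", "cpu": 250, "memory": 128}})
print(reply["result"])  # 'func1 done'
```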
Referring to fig. 6, a pod processing apparatus according to an embodiment of the present disclosure is shown. As shown in fig. 6, the apparatus includes: the receiving module 601, the obtaining module 602, and the sending module 603 are specifically as follows:
a receiving module 601, configured to receive a request for obtaining a pod from a second microservice, where the request carries function metadata of a first function;
an obtaining module 602, configured to obtain description information of a target pod from a pod managed by the apparatus according to the function metadata of the first function, where the description information of the target pod matches the function metadata of the first function, and the target pod is a pod in which a function has been run;
a sending module 603, configured to return description information of the target pod to the second microserver in response to the request.
The description information of the target pod refers to static or dynamic information of the pod, such as the pod's resource specification and the manager to which the pod belongs, for example the CPU size and memory size of the target pod; this embodiment does not specifically limit it.
The function metadata of the first function refers to data describing information such as the function's attributes and the resource specification for running the function, for example the CPU size and memory size required by the first function.
That the description information of the target pod matches the function metadata of the first function can be understood as follows: the CPU size of the target pod matches the CPU size required by the first function, and the memory size of the target pod matches the memory size required by the first function.
Here, "matches" can be understood as: the CPU size of the target pod is not smaller than the CPU size required by the first function, and the memory size of the target pod is not smaller than the memory size required by the first function.
As an optional implementation, the target pod is obtained by cleaning up the residual temporary files of the function and the residual data in memory of a pod that has run a function.
By cleaning up the residual temporary files and the residual in-memory data of a pod that has run a function, the pod can be reused: the next time a pod is needed, a new pod does not have to be generated, the cold-start process is skipped, and the cold-start latency is effectively reduced.
As an optional implementation, the apparatus further includes a first processing module, configured to clean or delete the pod that has run a function according to at least one of the description information of that pod, the state of the apparatus, and the total resource occupation of the system.
As an optional implementation, the apparatus further includes a cleaning module, configured to clean up the residual temporary files of the function and the residual in-memory data of the pod that has run a function when the total resource occupation of the system is less than a first preset value and the number of cleaned pods in the apparatus is not greater than a second preset value.
As an optional implementation, when the total resource occupation of the system is greater than the first preset value, or the number of cleaned pods in the apparatus is greater than the second preset value, the first interface is called to delete the pod that has run the function.
As an optional implementation, the apparatus further includes a second processing module, configured to call the first interface to delete the target pod when the survival duration of the target pod is reached, where the survival duration of the target pod is related to at least one of the description information of the pod that has run a function, the state of the apparatus, and the total resource occupation of the system.
As an alternative implementation, the target pod stores the program code of the first function, so that the target pod can run the first function.
With this approach, the function-package download time is saved, the pod's service-processing time is further reduced, and the efficiency of the pod's service processing is effectively improved.
As an alternative implementation, the target pod keeps its network connection with the first pod accessible.
The pod processing may refer to any implementation manner provided by the first aspect.
In the embodiment of the application, the first micro service obtains, according to the function metadata of the first function, the description information of a target pod that matches that metadata from the pods it manages, where the target pod is a pod that has previously run a function. With this approach, the target pod is obtained from a pod that has already run a function; compared with the prior art, in which a pod that has run a function is deleted directly, this saves the time of regenerating a pod and effectively reduces the pod's cold-start latency.
On the other hand, this solution can reduce the latency of concurrent function starts and control the total resource occupation of the system. The solution is a general technique implemented on top of an existing resource management platform, avoiding extra development and maintenance costs.
It should be noted that the receiving module 601, the obtaining module 602, and the sending module 603 shown in fig. 6 are used for executing relevant steps of the processing method of the pod.
For example, the receiving module 601 is used for executing the related content of step 201, the obtaining module 602 is used for executing the related content of step 202, and the sending module 603 is used for executing the related content of step 203.
In this embodiment, the pod processing device is presented in the form of a module. A "module" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that may provide the described functionality.
Further, the above receiving module 601, obtaining module 602, and sending module 603 may be implemented by the processor 702 of the pod processing apparatus shown in fig. 7.
Fig. 7 is a schematic hardware structure diagram of a pod processing apparatus according to an embodiment of the present application. The pod processing apparatus 700 shown in fig. 7 (the apparatus 700 may specifically be a computer device) includes a memory 701, a processor 702, a communication interface 703, and a bus 704. The memory 701, the processor 702, and the communication interface 703 are communicatively connected to each other via a bus 704.
The memory 701 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
The memory 701 may store a program, and when the program stored in the memory 701 is executed by the processor 702, the processor 702 and the communication interface 703 are used to perform the steps of the pod processing method according to the embodiment of the present application.
The processor 702 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement the functions that need to be executed by the units in the pod processing apparatus according to the embodiment of the present disclosure, or to execute the pod processing method according to the embodiment of the present disclosure.
The processor 702 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the pod processing method of the present application may be implemented by integrated logic circuits of hardware or by instructions in the form of software in the processor 702. The processor 702 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 701; the processor 702 reads the information in the memory 701 and, in combination with its hardware, completes the functions that need to be executed by the units included in the pod processing apparatus according to the embodiment of the present application, or executes the processing method based on a container group pod according to the method embodiment of the present application.
The communication interface 703 uses a transceiver apparatus, such as but not limited to a transceiver, to implement communication between the apparatus 700 and other devices or communication networks. For example, data may be acquired through the communication interface 703.
The bus 704 may include a pathway for transferring information between the components of the apparatus 700 (for example, the memory 701, the processor 702, and the communication interface 703).
It should be noted that although the apparatus 700 shown in FIG. 7 shows only a memory, a processor, and a communication interface, in a specific implementation process, those skilled in the art will understand that the apparatus 700 further includes other components necessary for normal operation. Moreover, according to specific needs, those skilled in the art will understand that the apparatus 700 may further include hardware components for performing other additional functions. In addition, those skilled in the art will understand that the apparatus 700 may alternatively include only the components necessary to implement the embodiments of the present application, and need not include all of the components shown in FIG. 7.
Embodiments of the present application further provide a computer-readable storage medium having instructions stored therein; when the instructions are run on a computer or a processor, the computer or the processor is caused to perform one or more steps of any one of the methods described above.
Embodiments of the present application further provide a computer program product containing instructions; when the computer program product runs on a computer or a processor, the computer or the processor is caused to perform one or more steps of any one of the methods described above.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the specific descriptions of the corresponding steps in the foregoing method embodiments, and are not described herein again.
It should be understood that in the description of this application, unless otherwise indicated, "/" indicates an "or" relationship between the associated objects; for example, A/B may represent A or B, where A and B may be singular or plural. Also, in the description of this application, "a plurality of" means two or more, unless otherwise specified. "At least one of the following items" or similar expressions refer to any combination of those items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural. In addition, to describe the technical solutions of the embodiments of this application clearly, terms such as "first" and "second" are used to distinguish between identical or similar items whose functions and effects are essentially the same. Those skilled in the art will understand that the terms "first" and "second" do not limit quantity or execution order, and do not denote any relative importance. Likewise, in the embodiments of this application, words such as "exemplary" or "for example" are used to present examples or illustrations; any embodiment or design described as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, these words are intended to present the relevant concepts concretely for ease of understanding.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the described division into units is merely a division by logical function; in actual implementation there may be other divisions. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented as indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a read-only memory (ROM) or a random access memory (RAM), a magnetic medium such as a floppy disk, a hard disk, a magnetic tape, or a magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid-state drive (SSD).
The above description is only a specific implementation of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the embodiments of the present application shall fall within that protection scope. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A processing method based on a container group (pod), characterized by comprising the following steps:
receiving, by a first microservice, a request for acquiring a pod from a second microservice, wherein the request carries function metadata of a first function;
acquiring, by the first microservice, description information of a target pod from pods managed by the first microservice according to the function metadata of the first function, wherein the description information of the target pod matches the function metadata of the first function, and the target pod is a pod in which a function has previously been run; and
returning, by the first microservice, the description information of the target pod to the second microservice in response to the request.
2. The method of claim 1, wherein the target pod is obtained by the first microservice cleaning up, in a pod in which a function has been run, temporary files left by the function and residual data in the memory.
3. The method of claim 2, further comprising:
the method comprises the steps that a first micro service clears or deletes a pod which has run a function once according to description information of the pod which has run the function once, at least one item of state of the first micro service and total system resource occupation quantity, the state of the first micro service indicates at least one item of quantity and type of the pod which is managed by the first micro service and total resources occupied by the pod which is managed by the first micro service, and the total system resource occupation quantity indicates at least one item of CPU size and memory size occupied by a function running system, wherein the function running system comprises the first micro service and a second micro service.
4. The method of claim 2, further comprising:
cleaning up, by the first microservice, temporary files left by a function and residual data in the memory in a pod in which the function has been run, when the total system resource occupation is less than a first preset value and the number of cleaned pods in the first microservice is not greater than a second preset value, wherein the total system resource occupation indicates at least one of the CPU usage and the memory usage of a function running system, the function running system comprising the first microservice and the second microservice.
5. The method of claim 3, wherein the first microservice deletes the pod in which the function has been run when the total system resource occupation is greater than a first preset value or the number of cleaned pods in the first microservice is greater than a second preset value.
6. The method according to any one of claims 2 to 5, further comprising:
deleting, by the first microservice, the target pod when the survival time of the target pod is reached, wherein the survival time of the target pod is related to at least one of: description information of the pod in which the function has been run, a state of the first microservice, and a total system resource occupation; the state of the first microservice indicates at least one of the number and type of the pods managed by the first microservice and the total resources occupied by the pods managed by the first microservice; and the total system resource occupation indicates at least one of the CPU usage and the memory usage of a function running system, the function running system comprising the first microservice and the second microservice.
7. The method of any one of claims 1 to 6, wherein program code of the first function is stored in the target pod, so that the target pod runs the first function.
8. The method of any one of claims 1 to 7, wherein the target pod maintains a network connection with the first pod.
9. A processing apparatus based on a container group pod, comprising:
a receiving module, configured to receive, from a second microservice, a request for acquiring a pod, wherein the request carries function metadata of a first function;
an obtaining module, configured to obtain description information of a target pod from pods managed by the apparatus according to the function metadata of the first function, wherein the description information of the target pod matches the function metadata of the first function, and the target pod is a pod in which a function has previously been run; and
a sending module, configured to return the description information of the target pod to the second microservice in response to the request.
10. The apparatus of claim 9, wherein the target pod is obtained by cleaning up, in a pod in which a function has been run, temporary files left by the function and residual data in the memory.
11. The apparatus of claim 10, further comprising a first processing module configured to:
clean up or delete a pod in which a function has been run according to at least one of: description information of the pod in which the function has been run, a state of the apparatus, and a total system resource occupation; wherein the state of the apparatus indicates at least one of the number and type of the pods managed by the apparatus and the total resources occupied by those pods, and the total system resource occupation indicates at least one of the CPU usage and the memory usage of a function running system, the function running system comprising a first microservice and the second microservice.
12. The apparatus of claim 10, further comprising a cleaning module to:
clean up, in a pod in which a function has been run, temporary files left by the function and residual data in the memory when the total system resource occupation is less than a first preset value and the number of cleaned pods in the apparatus is not greater than a second preset value, wherein the total system resource occupation indicates at least one of the CPU usage and the memory usage of a function running system, the function running system comprising a first microservice and the second microservice.
13. The apparatus of claim 11, wherein the pod in which the function has been run is deleted when the total system resource occupation is greater than a first preset value or the number of cleaned pods in the apparatus is greater than a second preset value.
14. The apparatus according to any one of claims 10 to 13, further comprising a second processing module configured to:
delete the target pod when the survival time of the target pod is reached, wherein the survival time of the target pod is related to at least one of: description information of the pod in which the function has been run, a state of the apparatus, and a total system resource occupation; the state of the apparatus indicates at least one of the number and type of the pods managed by the apparatus and the total resources occupied by those pods; and the total system resource occupation indicates at least one of the CPU usage and the memory usage of a function running system, the function running system comprising a first microservice and the second microservice.
15. The apparatus of any one of claims 9 to 14, wherein program code of the first function is stored in the target pod, so that the target pod runs the first function.
16. The apparatus of any one of claims 9 to 15, wherein the target pod maintains a network connection with the first pod.
17. A processing apparatus based on a container group (pod), comprising a processor and a memory, wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method of any one of claims 1 to 8.
18. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 8.
19. A computer program product which, when run on a computer, causes the computer to carry out the method of any one of claims 1 to 8.
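For illustration only, the pod reuse and reclamation flow of claims 1 to 6 can be sketched in code. This is a simplified, hypothetical reconstruction, not the patented implementation: the class and member names (`Pod`, `PodManager`, `handle_request`, `reclaim`, `expire`) and the concrete threshold values are invented for the example, and the metadata matching and cleanup policies are reduced to their simplest form.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Pod:
    """Hypothetical record for a pod managed by the first microservice."""
    description: dict               # description information, e.g. {"runtime": "python3"}
    has_run_function: bool = False  # claim 1: only pods that have run a function are reused
    cleaned: bool = False           # claim 2: temp files and residual memory wiped
    age: int = 0                    # seconds since the pod last ran a function


class PodManager:
    """Sketch of the 'first microservice' managing reusable pods (names invented)."""

    def __init__(self, max_system_usage: float = 0.8,
                 max_cleaned_pods: int = 10, survival_time: int = 300):
        self.pods: list[Pod] = []
        self.max_system_usage = max_system_usage  # the "first preset value"
        self.max_cleaned_pods = max_cleaned_pods  # the "second preset value"
        self.survival_time = survival_time        # claim 6: pod survival time

    def handle_request(self, function_metadata: dict) -> Optional[dict]:
        """Claim 1: return description info of a used pod matching the metadata."""
        for pod in self.pods:
            if pod.has_run_function and all(
                pod.description.get(k) == v for k, v in function_metadata.items()
            ):
                return pod.description
        return None  # no match; the caller would cold-start a fresh pod instead

    def reclaim(self, pod: Pod, system_usage: float) -> None:
        """Claims 4-5: clean a used pod for reuse, or delete it under pressure."""
        cleaned_count = sum(1 for p in self.pods if p.cleaned)
        if system_usage < self.max_system_usage and cleaned_count <= self.max_cleaned_pods:
            pod.cleaned = True     # wipe leftover temp files / in-memory residue
        else:
            self.pods.remove(pod)  # over a threshold: delete the pod outright

    def expire(self) -> None:
        """Claim 6: delete cleaned pods whose survival time has elapsed."""
        self.pods = [p for p in self.pods
                     if not (p.cleaned and p.age >= self.survival_time)]
```

In this sketch, `handle_request` plays the role of the first microservice answering the second microservice's pod request, `reclaim` applies the first/second preset-value policy of claims 4 and 5, and `expire` applies the survival-time deletion of claim 6; a real implementation would obtain system usage and pod age from live metrics rather than from parameters.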
CN202110808331.8A 2021-07-16 2021-07-16 Container group pod-based processing method, related system and storage medium Active CN113656142B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110808331.8A CN113656142B (en) 2021-07-16 2021-07-16 Container group pod-based processing method, related system and storage medium
PCT/CN2022/104955 WO2023284688A1 (en) 2021-07-16 2022-07-11 Container group pod-based processing method, and related system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110808331.8A CN113656142B (en) 2021-07-16 2021-07-16 Container group pod-based processing method, related system and storage medium

Publications (2)

Publication Number Publication Date
CN113656142A true CN113656142A (en) 2021-11-16
CN113656142B CN113656142B (en) 2023-10-10

Family

ID=78489561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110808331.8A Active CN113656142B (en) 2021-07-16 2021-07-16 Container group pod-based processing method, related system and storage medium

Country Status (2)

Country Link
CN (1) CN113656142B (en)
WO (1) WO2023284688A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461437A (en) * 2022-04-11 2022-05-10 中航信移动科技有限公司 Data processing method, electronic equipment and storage medium
WO2023284688A1 (en) * 2021-07-16 2023-01-19 华为技术有限公司 Container group pod-based processing method, and related system and storage medium

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN116069264B (en) * 2023-03-13 2023-06-13 南京飓风引擎信息技术有限公司 Application program data information storage control system
CN116980421B (en) * 2023-09-25 2023-12-15 厦门她趣信息技术有限公司 Method, device and equipment for processing tangential flow CPU resource surge under blue-green deployment

Citations (6)

Publication number Priority date Publication date Assignee Title
WO2018233560A1 (en) * 2017-06-20 2018-12-27 华为技术有限公司 Dynamic scheduling method, device, and system
US20190207823A1 (en) * 2018-01-03 2019-07-04 International Business Machines Corporation Dynamic delivery of software functions
US20200218798A1 (en) * 2019-01-03 2020-07-09 NeuVector, Inc. Automatic deployment of application security policy using application manifest and dynamic process analysis in a containerization environment
CN111414233A (en) * 2020-03-20 2020-07-14 京东数字科技控股有限公司 Online model reasoning system
CN112685153A (en) * 2020-12-25 2021-04-20 广州奇盾信息技术有限公司 Micro-service scheduling method and device and electronic equipment
CN112860450A (en) * 2020-12-04 2021-05-28 武汉悦学帮网络技术有限公司 Request processing method and device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10242073B2 (en) * 2016-07-27 2019-03-26 Sap Se Analytics mediation for microservice architectures
CN112306567B (en) * 2019-07-26 2023-07-21 广州虎牙科技有限公司 Cluster management system and container management and control method
CN111475235B (en) * 2020-04-13 2023-09-12 北京字节跳动网络技术有限公司 Acceleration method, device, equipment and storage medium for function calculation cold start
CN113656142B (en) * 2021-07-16 2023-10-10 华为技术有限公司 Container group pod-based processing method, related system and storage medium

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
WO2018233560A1 (en) * 2017-06-20 2018-12-27 华为技术有限公司 Dynamic scheduling method, device, and system
US20190207823A1 (en) * 2018-01-03 2019-07-04 International Business Machines Corporation Dynamic delivery of software functions
US20200218798A1 (en) * 2019-01-03 2020-07-09 NeuVector, Inc. Automatic deployment of application security policy using application manifest and dynamic process analysis in a containerization environment
CN111414233A (en) * 2020-03-20 2020-07-14 京东数字科技控股有限公司 Online model reasoning system
CN112860450A (en) * 2020-12-04 2021-05-28 武汉悦学帮网络技术有限公司 Request processing method and device
CN112685153A (en) * 2020-12-25 2021-04-20 广州奇盾信息技术有限公司 Micro-service scheduling method and device and electronic equipment

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2023284688A1 (en) * 2021-07-16 2023-01-19 华为技术有限公司 Container group pod-based processing method, and related system and storage medium
CN114461437A (en) * 2022-04-11 2022-05-10 中航信移动科技有限公司 Data processing method, electronic equipment and storage medium
CN114461437B (en) * 2022-04-11 2022-06-10 中航信移动科技有限公司 Data processing method, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023284688A1 (en) 2023-01-19
CN113656142B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
CN113656142A (en) Container group pod-based processing method, related system and storage medium
CN106708622B (en) Cluster resource processing method and system and resource processing cluster
WO2018076755A1 (en) Method and apparatus for issuing upgrade package
CN113296792B (en) Storage method, device, equipment, storage medium and system
CN105763602A (en) Data request processing method, server and cloud interactive system
CN108572845B (en) Upgrading method of distributed micro-service cluster and related system
WO2014101475A1 (en) Cloud platform application deployment method and apparatus
CN110659104B (en) Service monitoring method and related equipment
CN115048149A (en) Application cache scalable processing method, device, equipment and medium
CN116680040A (en) Container processing method, device, equipment, storage medium and program product
CN110780889A (en) Cloud mobile phone data cloning method and cloud mobile phone data restoring method
CN110196749B (en) Virtual machine recovery method and device, storage medium and electronic device
CN116820527B (en) Program upgrading method, device, computer equipment and storage medium
CN108121514B (en) Meta information updating method and device, computing equipment and computer storage medium
CN114500352A (en) Plug-in hot updating system and method for medical Internet of things message routing device
CN115268909A (en) Method, system and terminal for establishing and running construction task at web front end
CN109901933B (en) Operation method and device of business system, storage medium and electronic device
CN110704249A (en) Method, device and system for ensuring application consistency
CN113377724A (en) Cache space management method, device and storage medium
CN107220101B (en) Container creation method and device
CN109947704B (en) Lock type switching method and device and cluster file system
CN111491040A (en) IP distribution method and IP distribution device
CN111124428A (en) Application automatic publishing method based on middleware creating and related device
CN113655959B (en) Static persistent disk recycling method and device, storage medium and electronic equipment
CN114911421B (en) Data storage method, system, device and storage medium based on CSI plug-in

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant