CN114528068A - Method for eliminating cold start of a serverless computing container - Google Patents

Method for eliminating cold start of a serverless computing container

Info

Publication number
CN114528068A
CN114528068A (application CN202210030339.0A)
Authority
CN
China
Prior art keywords
function
calculation
container
cold start
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210030339.0A
Other languages
Chinese (zh)
Inventor
邓玉辉
吴朝锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
Original Assignee
Jinan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan University
Priority to CN202210030339.0A
Publication of CN114528068A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F 2009/45579 I/O management, e.g. providing access to device drivers or storage
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for eliminating cold start of serverless computing containers, with two aims: (1) eliminate container cold start by bypassing repeated computation when the serverless platform invokes a function; (2) avoid functions re-requesting external files, reducing call latency during function execution. The method designs a real-time monitoring mechanism in the container runtime and, based on the monitoring information, divides functions into three categories: compute-type functions, I/O-related functions, and environment-related functions. For compute-type functions, execution is bypassed by caching and directly returning the computation result; for I/O-related functions, the latency of accessing the external network is reduced by maintaining the external files the function needs in the local file system. The method eliminates container cold start and reduces the end-to-end latency of function calls. In addition, since the cached result of a compute-type function can be returned directly, bypassing both function execution and container startup, the physical resources required to serve the request are further reduced.

Description

Method for eliminating cold start of a serverless computing container
Technical Field
The invention relates to the technical field of serverless computing architectures, and in particular to a method for eliminating cold start of serverless computing containers.
Background
Serverless computing is a preferred way for data centers to deploy large-scale distributed applications. It primarily uses the Function-as-a-Service (FaaS) model to give developers a convenient deployment path and to improve the scalability of applications. However, many issues remain to be improved on the FaaS platform side, especially cold-start optimization of functions.
When a FaaS platform receives the first call request for a function, it must prepare a container runtime environment for that request. In general, constructing the container runtime environment involves: (1) creating and starting a basic runtime environment; and (2) pulling the function code and installing the function's dependencies. This build phase is referred to as the cold-start phase; the function can execute only after the cold-start phase completes.
Related studies show that container startup time is roughly on the order of one second, while the execution time of a typical function is distributed between milliseconds and seconds. Compared with such short execution times, the cold-start phase can consume a large share of the total time, increasing call latency and degrading the experience of FaaS users. Moreover, when multiple functions are composed into a function chain to deploy an application, the cold-start problem is exacerbated, because the chain can trigger cascading cold starts. Cold start is therefore a problem FaaS providers must overcome, and reducing its impact is a key challenge for FaaS platforms.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art by providing a method for eliminating cold start of serverless computing containers.
The purpose of the invention can be achieved by adopting the following technical scheme:
a method for eliminating cold start of a server-free computing container is applied to a server-free computing system, wherein a function mapper, a behavior monitor and an I/O (input/output) repeater are added to the server-free computing system on the basis of the conventional server-free platform, and the server-free computing system is called a system for short.
After receiving the function calling request, the system carries out preprocessing operation on the meta information corresponding to the called function, and firstly judges whether the calculation result of the function is cached in the function mapper according to the meta information. If the cache misses, the function is executed normally. In the function execution process, the behavior monitor monitors the system call generated by the running function and judges the type of the function. And finally, respectively storing the calculation result required by the function or the requested external file in the function mapper and the local file system according to the function type.
In the method, after receiving a function call request, a serverless platform respectively passes through three layers of processing logics, namely a function mapper, a behavior monitor and an I/O (input/output) forwarder, and the method specifically comprises the following steps:
s1, when the system receives the function call request, the hash operation is carried out to the input parameter of the called function, and a nested key value pair data structure is initialized according to the function name and the hash value of the corresponding parameter, the final value is the output corresponding to the function, and the function name, the hash value of the function input and the function output are stored in the function mapper in the form of nested key value pair mapping;
s2, when the function is executed, the function is divided into three types according to the monitoring information through the system call generated when the behavior monitor monitors the function operation, wherein the three types are respectively a calculation type function, an I/O related function and an environment related function, and the function mapper decides whether to cache the corresponding calculation result according to the type of the function;
s3, setting caching conditions for the calculation type function and the I/O related function respectively, and setting the caching conditions for the calculation type function to eliminate cold start overhead during function calling; for the I/O related function, a buffer condition is set for reducing the end-to-end delay caused by external access to the object storage file when the function is executed.
Further, in step S1, when the system receives a function call request, hashing the meta-information of the called function serves two purposes: it allows the cache state of the function's result to be checked quickly, and it reduces the memory overhead of caching computation results. The process is as follows:
S11, parse the function name and the input parameter param carried by the call, hash the input parameter, and retain the computed hash value Hash(param);
S12, using the hash value Hash(param) of the function input as the key, initialize an empty key-value pair {Hash(param): value}; then, using the function name func as the key and this pair as the value, initialize the nested pair {func: {Hash(param): value}}, where value is the function's computation result, to be filled in;
S13, after the system receives a function request, first check whether a nested key-value pair exists for the function name, and then whether a computation result is mapped from the corresponding input-parameter hash; if the required result exists, return it directly to the caller; if not, proceed to steps S2 and S3 and execute the function normally.
Further, the hash computation links functions, function inputs, and function computation results, speeding up retrieval in the function mapper without incurring container startup overhead. Hashing the input parameters also converts them into fixed-length strings, reducing the memory the function mapper needs to store the mapping information.
Further, in step S2, the behavior monitor observes the system calls issued by the function during execution and classifies the function into one of three types as follows:
S21, for I/O-related functions: in Unix-like operating systems, many different kinds of I/O operations are uniformly identified by file descriptors, which are indirect references to open file instances and are obtained via the open() system call; that is, I/O operations must be supported by open(). The behavior monitor embedded in the function container therefore judges whether a function is I/O-related by whether it issues an open() system call;
S22, for environment-related functions: the system checks whether the outputs of the function call are consistent across a threshold of T executions; if the outputs disagree within T executions, the function is classified as environment-related;
S23, if neither S21 nor S22 applies, the function is classified as a compute-type function, each of whose inputs corresponds to a unique computation result.
Further, serverless functions are divided into compute-type, I/O-related, and environment-related functions, and the function mapper and the I/O forwarder choose an optimization strategy according to the type: compute-type functions are the targets whose computation results the function mapper caches, while the external files requested by I/O-related functions are the objects the I/O forwarder maintains.
Further, the step S3 process is as follows:
s31, when the system receives the calculation function request of the cached calculation result, the system goes directly to the function mapper to obtain the calculation result according to the function name and the input parameter, and the expenses of container starting and function repeated execution are saved;
s32, when the system receives and executes the I/O related function, the file request sent to the external object storage is intercepted by the I/O repeater, then whether the local file system has the requested file is judged; if the local file system does not exist, acquiring a file request to the outside, and storing a copy of the file request into the local file system while transmitting the file request to the function;
when a function requests a file externally, the I/O repeater firstly intercepts a corresponding file request and then judges whether a local file system caches the latest version of the file requested by the function; and if the local cache is not hit, the remote file is requested and stored in the local cache.
The optimization objective after classifying functions according to the monitoring information in steps S31 and S32 is as follows: the execution of each individual serverless function is treated as a classification problem, the goal being to find an optimization mode matched to the function's characteristics and thereby reduce the cold-start and end-to-end delay overhead of the serverless platform.
Further, compute-type and I/O-related functions are both optimized, eliminating container cold start and reducing end-to-end call delay, respectively: end-to-end delay falls by 81.93% on average for compute-type functions and by 67.49% for I/O-related functions.
Further, the function mapper stores the computation result for each input of a compute-type function; when the serverless platform receives a matching request, the result is returned directly by the function mapper, bypassing container initialization and function startup and reducing physical resource overhead by 26.23%.
Compared with the prior art, the invention has the following advantages and effects:
(1) The invention divides serverless functions into three categories (compute-type, I/O-related, and environment-related) according to their characteristics at execution time, and adopts a different optimization scheme for each category.
(2) For compute-type functions, the invention avoids repeated execution by caching the computation result directly, eliminating container cold-start overhead; for I/O-related functions, the local file system serves as a cache holding the latest versions of the external files a function requests, reducing the high end-to-end latency of external file access.
(3) The invention also reduces the physical resource overhead of the serverless platform while eliminating container cold start for compute-type functions and reducing call delay for I/O-related functions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a diagram of an application architecture for a method of eliminating a serverless computing container cold start as disclosed herein;
FIG. 2 is an end-to-end delay comparison graph generated by a single-step function under different strategies;
FIG. 3 is another end-to-end delay comparison graph generated by a single-step function under different strategies;
FIG. 4 is a diagram illustrating call delay of a multi-step function under different policies;
FIG. 5 is a diagram illustrating physical resource utilization during execution of a single-step function;
FIG. 6 is a diagram illustrating physical resource utilization during execution of a multi-step function.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Example 1
Fig. 1 is an application architecture diagram of the method for eliminating serverless computing container cold start. The method applies to serverless computing architectures and targets the high latency caused by container cold start and function execution during calls: for compute-type functions, cold start is bypassed by caching computation results; for I/O-related functions, end-to-end latency is reduced by caching external requests in the local file system. The optimization strategy is evaluated by comparative tests that assess its cold-start optimization effect on the serverless platform from multiple angles.
To clarify the application scenario, the following analysis walks through the application architecture diagram in three main steps:
1) Checking the computation result of a compute-type function. After receiving a user's function request, the function mapper hashes the function's input and then queries whether a corresponding result is cached, using the computed hash value and the function name. Concretely, let the requested function name be func, the input parameter be param, and its hash be Hash(param). The function mapper first looks up whether a key func exists, with value func → V. If so, it looks up whether the key Hash(param) exists in func → V; the function's computation result is then func → V → Hash(param). If any step in this process misses, the requested function's result is not cached in the function mapper.
2) Monitoring function behavior. When the function mapper misses, the invention monitors the system call information the function generates during execution to determine its type, separating functions into compute-type, I/O-related, and environment-related functions. A behavior monitoring module is added to the container runtime to capture the function's system calls. If the function issues an open() system call during execution, it needs to access an external file and is classified as an I/O-related function. In addition, the invention distinguishes compute-type from environment-related functions by comparing the function's computation results over the past T consecutive executions. Let Result_i denote the result of the i-th execution of function func. If Result_i = Result_j for all 1 ≤ i < j ≤ T, the results of the past T consecutive executions agree and the function is judged to be compute-type; otherwise it is environment-related. Based on this classification, the behavior monitor caches the results of compute-type functions in the function mapper as key-value pairs.
3) The I/O interceptor. The I/O interceptor aims to reduce the end-to-end delay of I/O-related functions. It cooperates with the behavior monitor, which observes the I/O system calls generated while the function runs. When the interceptor catches an external file request from an I/O-related function, it checks by file name whether the local file system holds the latest version of the requested file. If the file is cached locally, it is loaded directly from the local file system; if the file is absent or its version is stale, a remote file I/O request is issued, and after fetching the file the interceptor stores a copy in the local cache for subsequent function requests.
In summary, this embodiment provides a method for eliminating cold start in a serverless computing architecture that exploits the characteristics of serverless functions to classify them into three categories. Caching the results of compute-type functions bypasses repeated execution and container startup, further reducing the physical resources the serverless platform needs to start containers; providing a local file system as an I/O cache of remote files for I/O-related functions reduces the end-to-end delay of function calls.
Example 2
This embodiment tests the proposed technique on two kinds of function workloads, single-step functions and multi-step functions, measuring end-to-end function delay and resource usage. A single-step function is an application containing only one function, as shown in Table 1; a multi-step function is an application containing multiple functions, as shown in Table 2. The evaluation uses function datasets published by the academic community. The parameter T, i.e. the number of consecutive executions after which a function is judged to be compute-type or environment-related, is set to 3 in the experimental evaluation.
TABLE 1 Single-step function classes and descriptions
(Table 1 is reproduced as an image in the original publication.)
TABLE 2 Multi-step function description
Function | Description
FINRA | Bank auditing application, comprising 5 single-step functions
Prediction | Picture classification application, comprising 3 single-step functions
Set computation | Set-associative computing application, comprising 3 single-step functions
On a serverless platform, function execution requires a container as its carrier, and when starting a container the platform must allocate physical resources to it, such as CPU cores and memory. The interval between the platform receiving a user request and the user receiving the response is the function's end-to-end call delay. End-to-end delay and physical resource usage are measured by executing each function 10 times. The test platform comprises 2 Intel(R) Xeon(R) Silver 4216 processors (32 processing cores each), 128 GB of memory, and a CentOS 8 system. In the comparisons below, the proposed method is named HashCache and the baseline strategy is FaaSCache.
Figs. 2 and 3 compare the end-to-end delay of single-step functions under the different strategies. In Fig. 2, compute-type functions under the invention improve by 39.05% to 66.78% relative to FaaSCache. In fact, before caching a compute-type function's output, the invention compares its results over T executions to decide whether to cache; the end-to-end delays for compute-type functions in Fig. 2 therefore include data from when HashCache is not yet active. After removing that data, Fig. 3 shows the end-to-end delay once HashCache is active: relative to FaaSCache, HashCache reduces compute-type function delay by up to 94.8% and by 81.94% on average, and reduces I/O-related function delay by 67.4% on average. These results show that HashCache effectively caches the computation results of compute-type functions and caches the remote files that I/O-related functions request, reducing their end-to-end delay.
Fig. 4 shows the call delay of multi-step functions under the different policies: relative to FaaSCache, HashCache improves the three applications FINRA, Prediction, and Set computation by 34.6%, 54.26%, and 53.9%, respectively.
Overall, HashCache performs better on the end-to-end delay of both single-step and multi-step functions. This is because, once the corresponding output data is cached, the FaaS platform can return a function's computation result directly. In addition, HashCache maintains a local file-system cache layer that stores the latest external request for each I/O-related operation.
Figs. 5 and 6 depict physical resource usage during the execution of single-step and multi-step functions. On average, HashCache uses 26.23% less memory than the baseline FaaSCache, and the CPU peaks it generates are far below those under the FaaSCache strategy. These results show that caching the results of compute-type functions effectively bypasses container startup and function execution: bypassing container startup avoids memory allocation, and bypassing function execution avoids consuming CPU resources.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A method for eliminating cold start of serverless computing containers, applied to a serverless computing system (hereinafter, the system), characterized by comprising the following steps:
S1, when the system receives a function call request, hashing the called function's name and the input parameters of the call, and initializing a nested key-value data structure keyed by the function name and the parameter hash, whose final value is the function's output; the function name, function input, and function output are stored in the function mapper as a key-value mapping;
S2, while the function executes, the behavior monitor observes the system calls it generates and, based on this monitoring information, classifies the function into one of three types (compute-type function, I/O-related function, or environment-related function); the function mapper decides whether to cache the computation result according to the type;
S3, setting caching conditions separately for compute-type and I/O-related functions: for compute-type functions, the caching condition eliminates cold-start overhead when the function is called; for I/O-related functions, it reduces the end-to-end delay caused by accessing files in external object storage during execution.
2. The method of claim 1, wherein in step S1 the system receives the function call request and preprocesses the meta-information of the called function as follows:
S11, parsing the function name and input parameters carried by the call, performing a hash calculation on the input parameters, and retaining the computed hash value;
S12, initializing an empty key-value pair with the hash value as its key, and initializing a nested key-value pair with the function name as the outer key and this empty pair as its value;
S13, after the system receives a function request, first checking whether a nested key-value pair keyed by the function name exists, and then checking whether a computation result is mapped from the hash value of the function's input parameters; if the required computation result exists, it is returned directly to the function caller.
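Steps S11-S13 amount to a two-level lookup: first by function name, then by input hash. A minimal sketch follows, assuming JSON-serializable parameters and SHA-256 as the hash function (both illustrative choices, not mandated by the claims):

```python
import hashlib
import json

def param_hash(params):
    # S11: hash the input parameters deterministically.
    return hashlib.sha256(
        json.dumps(params, sort_keys=True).encode("utf-8")
    ).hexdigest()

def lookup_or_init(cache, func_name, params):
    """S12/S13: ensure the nested key-value pair exists, then return the
    cached result for this (name, input-hash) pair, or None on a miss."""
    h = param_hash(params)
    inner = cache.setdefault(func_name, {})  # S12: nested pair, name as outer key
    inner.setdefault(h, None)                # S12: empty pair, hash as inner key
    # S13: a non-None value means the required computation result exists.
    return inner[h]
```

On a hit the stored result is returned to the caller; on a miss the function must actually execute.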
3. The method of claim 1, wherein hash operations are used to associate function names, function inputs and function computation results, so as to speed up key-value retrieval in the function mapper without incurring container start overhead.
4. The method for eliminating cold starts of serverless computing containers of claim 1, wherein in step S2 the function is classified into one of the three types according to the monitoring information as follows:
S21, for I/O-related functions, a behavior monitor embedded in the function container intercepts the system calls of the executing function and judges whether the function is I/O-related by whether it issues an open() system call, an open() system call indicating that the function opens and operates on a file;
S22, for environment-related functions, the system checks whether the computation results of the function call remain consistent over a threshold of T executions; if the results differ within T executions, the function is classified as environment-related;
S23, if neither step S21 nor step S22 matches, the target function is classified as a compute-type function, each input of which corresponds to a unique computation result.
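The decision rule of steps S21-S23 can be condensed into a small classifier. This is an illustrative sketch only: the monitoring inputs are modeled as a list of intercepted system-call names and a list of recent results for identical inputs, and the default threshold T=3 is an assumption:

```python
def classify_function(syscalls, results, t_threshold=3):
    """Classify a function from behavior-monitor data (steps S21-S23).

    syscalls: names of system calls intercepted during execution.
    results:  outputs of the last t_threshold executions with identical inputs.
    """
    # S21: an open() system call means the function opened and operated
    # on a file, so it is treated as I/O-related ("openat" is the modern
    # Linux variant of "open").
    if "open" in syscalls or "openat" in syscalls:
        return "io-related"
    # S22: if outputs differ across T executions of the same input, the
    # function depends on its environment (time, randomness, etc.) and
    # its results must not be cached.
    if len(results) >= t_threshold and len(set(results)) > 1:
        return "environment-related"
    # S23: otherwise each input maps to a unique result: compute-type,
    # eligible for result caching by the function mapper.
    return "compute"
```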
5. The method of claim 1, wherein functions are divided into compute-type functions, I/O-related functions and environment-related functions, and the function mapper and the I/O forwarder decide whether to optimize a function according to its type: the computation results of compute-type functions are the objects cached by the function mapper, while the external files requested by I/O-related functions are the objects maintained by the I/O forwarder, which keeps copies of them in the local file system.
6. The method for eliminating cold starts of serverless computing containers of claim 1, wherein step S3 proceeds as follows:
S31, when the system receives a request for a compute-type function whose result is already cached, it obtains the computation result directly from the function mapper using the function name and the hash value of the input parameters, saving the overhead of container startup and repeated function execution;
S32, when the system receives and executes an I/O-related function, the file requests the function sends to external object storage are intercepted by the I/O forwarder, which then checks whether the requested file exists in the local file system; if not, the file is fetched from the external storage, passed to the function, and another copy is stored in the local file system.
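The forwarding logic of step S32 is essentially a read-through local file cache in front of object storage. A minimal sketch under stated assumptions: `fetch_remote` is a hypothetical stand-in for the external object-storage client, and object keys are assumed to be valid file names:

```python
import os

class IOForwarder:
    """Intercepts object-storage file requests and serves them from a
    local cache when possible (step S32). Illustrative sketch only."""

    def __init__(self, cache_dir, fetch_remote):
        self.cache_dir = cache_dir
        self.fetch_remote = fetch_remote  # callable: key -> bytes
        os.makedirs(cache_dir, exist_ok=True)

    def get_file(self, key):
        local_path = os.path.join(self.cache_dir, key)
        # Serve from the local file system if a copy already exists,
        # avoiding the end-to-end latency of an external access.
        if os.path.exists(local_path):
            with open(local_path, "rb") as f:
                return f.read()
        # Otherwise fetch from external object storage, hand the data to
        # the function, and keep another copy locally for future requests.
        data = self.fetch_remote(key)
        with open(local_path, "wb") as f:
            f.write(data)
        return data
```

Only the first request for a given object pays the external round trip; subsequent requests are served locally.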
7. The method of claim 1, wherein the optimizations of eliminating container cold starts for compute-type functions and reducing end-to-end call latency for I/O-related functions speed up the platform's processing of function requests and reduce the overhead on physical resources.
8. The method of any one of claims 1 to 7, wherein the serverless computing system integrates a function mapper, a behavior monitor and an I/O forwarder on top of an existing serverless platform.
CN202210030339.0A 2022-01-12 2022-01-12 Method for eliminating cold start of server-free computing container Pending CN114528068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210030339.0A CN114528068A (en) 2022-01-12 2022-01-12 Method for eliminating cold start of server-free computing container

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210030339.0A CN114528068A (en) 2022-01-12 2022-01-12 Method for eliminating cold start of server-free computing container

Publications (1)

Publication Number Publication Date
CN114528068A true CN114528068A (en) 2022-05-24

Family

ID=81620385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210030339.0A Pending CN114528068A (en) 2022-01-12 2022-01-12 Method for eliminating cold start of server-free computing container

Country Status (1)

Country Link
CN (1) CN114528068A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115543486A (en) * 2022-11-16 2022-12-30 北京大学 Server-free computing oriented cold start delay optimization method, device and equipment


Similar Documents

Publication Publication Date Title
EP2002343B1 (en) Multi-cache cooperation for response output caching
US6266742B1 (en) Algorithm for cache replacement
US8554790B2 (en) Content based load balancer
US10025718B1 (en) Modifying provisioned throughput capacity for data stores according to cache performance
US20070150881A1 (en) Method and system for run-time cache logging
Li et al. Tetris: Memory-efficient serverless inference through tensor sharing
CN112561197B (en) Power data prefetching and caching method with active defense influence range
CN111339143A (en) Data caching method and device and cloud server
Mertz et al. Understanding application-level caching in web applications: A comprehensive introduction and survey of state-of-the-art approaches
CN113407119B (en) Data prefetching method, data prefetching device and processor
EP4123473A1 (en) Intelligent query plan cache size management
CN114528068A (en) Method for eliminating cold start of server-free computing container
US20090320022A1 (en) File System Object Node Management
US9934147B1 (en) Content-aware storage tiering techniques within a job scheduling system
De et al. Caching vm instances for fast vm provisioning: a comparative evaluation
US9317432B2 (en) Methods and systems for consistently replicating data
CN117370058A (en) Service processing method, device, electronic equipment and computer readable medium
CN112597076A (en) Spark-oriented cache replacement method and system based on data perception
US10997077B2 (en) Increasing the lookahead amount for prefetching
CN114390069B (en) Data access method, system, equipment and storage medium based on distributed cache
Kanrar et al. Dynamic page replacement at the cache memory for the video on demand server
CN113961586A (en) Control method and device for SQL (structured query language) statements
CN113596177A (en) Method and device for analyzing IP address of intelligent household equipment
CN113849119A (en) Storage method, storage device, and computer-readable storage medium
EP4123461A1 (en) Intelligent query plan cache size management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination