WO2023138453A1 - A container loading method and device - Google Patents


Info

Publication number
WO2023138453A1
Authority
WO
WIPO (PCT)
Prior art keywords
function
container
template
worker thread
thread
Prior art date
Application number
PCT/CN2023/071747
Other languages
English (en)
French (fr)
Inventor
柳清源
夏虞斌
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023138453A1

Classifications

    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 21/53 Monitoring users, programs or devices to maintain the integrity of platforms during program execution, by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • G06F 9/445 Program loading or initiating
    • G06F 9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/449 Object-oriented method invocation or resolution
    • G06F 2009/45587 Isolation or security of virtual machine instances
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The invention relates to the field of communication technologies, and in particular to a container loading method and device.
  • Serverless computing is also known as Function-as-a-Service (FaaS).
  • In serverless computing, users decouple application logic at the granularity of functions and submit the generated function code to a cloud service provider, which is responsible for function deployment and computation. After submitting the function code, the user can send a function call request with parameters to the cloud service provider. Because the function code submitted by the user is usually not trusted by the serverless computing device, the device needs to isolate the function's operating environment when executing a function call, so that the function runs in a fully isolated environment.
  • Implementations of an isolated environment include Linux containers, Firecracker, and the like. The operation of a function in the serverless computing device depends on a function instance. When the serverless computing device receives a function call request from a user and has no usable function instance, it must initialize the entire instance environment to serve the request. Such a start is called a "cold start". Because serverless computing has a high frequency of cold starts, startup delay accounts for a high proportion of the total end-to-end delay, so startup latency is critical to the overall performance of serverless computing.
  • the embodiment of the present application provides a container loading method and device.
  • By using a pre-prepared function container as the container isolation environment for a function, the overhead of initializing the container isolation environment is reduced, and the initialization time of the language runtime is optimized by reusing the language runtime state of a function that has already been initialized.
  • In a first aspect, an embodiment of the present application provides a method for loading a multi-threaded container, the method comprising: receiving a function call request sent by a user, the function call request including function information; obtaining the process number of the corresponding function container according to the function information, wherein a function is deployed in the function container; forking the main thread of the language runtime process in the template container according to the process number of the function container to obtain a target subprocess, wherein both the language runtime process in the template container and the target subprocess are language runtime processes corresponding to the function, and the namespace of the target subprocess is located in the function container; switching the control group of the target subprocess to the control group of the function container according to the process number of the target subprocess, so that the target subprocess migrates to the function container; and loading the function in the function container.
  • The multi-threaded container loading method uses the pre-prepared function container as the final running environment of the function, and then copies the language runtime state of the initialized function into the function container by forking, thereby initializing the function's runtime environment.
  • In this way, the critical path of the language runtime initialization corresponding to the function is skipped, significantly reducing the language runtime's initialization time.
  • By using the pre-prepared function container as the final running environment of the function instance, the initialization overhead of the container isolation environment is further reduced.
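The cgroup switch described in this aspect comes down to writing the target subprocess's process number into the function container's cgroup. Below is a minimal sketch under cgroup v2; the helper name and directory layout are illustrative, not taken from the patent:

```python
import os

def move_to_cgroup(pid: int, cgroup_dir: str) -> None:
    """Migrate process `pid` into the cgroup rooted at `cgroup_dir`.

    Under cgroup v2, writing a PID into the cgroup's `cgroup.procs`
    file moves the whole process (all of its threads) into that
    cgroup. On a real system this requires write permission on the
    target cgroup, which typically means root.
    """
    procs_file = os.path.join(cgroup_dir, "cgroup.procs")
    with open(procs_file, "w") as f:
        f.write(str(pid))
```

On a real system `cgroup_dir` would be a path such as /sys/fs/cgroup/ followed by the function container's cgroup name; any directory containing a writable cgroup.procs file exercises the same code path.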
  • Before receiving the function call request sent by the user, the method further includes: receiving the function code and the function dependencies sent by the user; and deploying the function code and the function dependencies into a function container.
  • using the pre-prepared function container as the running environment of the function further reduces the initialization overhead of the container isolation environment.
  • the function information includes: at least one of a function name, a function type, and a function ID.
  • the language runtime of the function to be called can be further determined, and the function container to be called can be deployed.
  • Before forking the main thread of the language runtime process in the template container according to the process number of the function container to obtain the target subprocess, the method further includes: obtaining a template container corresponding to the function; if there is no template container corresponding to the function, creating a template container according to the language runtime corresponding to the function; and saving the created template container.
  • the template container corresponding to the runtime of the functional language is stored in the serverless computing device in advance.
  • In this way, the serverless computing device can directly reuse the template container corresponding to the function call request.
  • the serverless computing device can determine the function language runtime to be called according to the received function call request after receiving the function call request, and then the serverless computing device generates a corresponding template container according to the language runtime and saves it.
  • the template container corresponding to each language runtime only needs to be created once.
  • the created template container is stored in the serverless computing device.
  • In this way, the serverless computing device can copy the initialized language runtime state (the template container), which significantly reduces the initialization time of the language runtime.
  • Before forking the main thread of the language runtime process in the template container according to the process number of the function container to obtain the target subprocess, the method further includes: when the language runtime of the function is multi-threaded, obtaining the state information of each worker thread in at least one worker thread; and when the state information of each worker thread in the at least one worker thread is the first state, closing each worker thread in the at least one worker thread; wherein the first state includes: the worker thread is in a blocked state due to waiting for a task, and the worker thread is in a state that does not need to save its context.
  • If the status information of at least one worker thread among at least two worker threads is not the first state, suspend the worker threads whose status information is the first state until the status information of each worker thread in the at least two worker threads is the first state, and then close each worker thread in the at least two worker threads; or save the context of the worker threads whose status information is not the first state, and then close each worker thread in the at least two worker threads.
  • Forking the main thread of the language runtime process in the template container according to the process number of the function container to obtain the target subprocess includes: first forking the main thread to obtain a first subprocess; switching the namespace of the first subprocess to the function container according to the process number of the function container; and forking the first subprocess to obtain the target subprocess, whose namespace is located in the function container.
  • the namespace of the first child process generated by the first forking can be switched to the function container through the setns and chroot system calls of Linux.
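The fork, namespace switch, and second fork described above can be sketched as follows. Joining another process's namespaces requires root privileges and, in Python, os.setns (available from Python 3.12), so the namespace switch is factored into a separate helper and skipped when no target is given; the double-fork skeleton itself runs unprivileged. All names here are illustrative, not the patent's implementation:

```python
import os

def switch_namespaces(target_pid: int) -> None:
    """Join the namespaces of `target_pid` (root only; Python 3.12+).

    Each entry under /proc/<pid>/ns is a namespace descriptor that
    os.setns() can join; chroot then switches the root directory.
    """
    for ns in ("mnt", "pid", "net", "ipc", "uts"):
        with open(f"/proc/{target_pid}/ns/{ns}") as f:
            os.setns(f.fileno(), 0)
    os.chroot("/")  # in practice: the function container's root filesystem

def fork_into_container(target_pid, entry) -> int:
    """Double fork: the first child joins the container's namespaces;
    the second fork makes them (notably the PID namespace) take effect
    for the grandchild, which runs `entry`. Returns the grandchild PID."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                        # first child
        os.close(r)
        if target_pid is not None:      # skipped in this unprivileged sketch
            switch_namespaces(target_pid)
        grandchild = os.fork()
        if grandchild == 0:             # target subprocess
            os.close(w)
            entry()
            os._exit(0)
        os.write(w, str(grandchild).encode())
        os._exit(0)
    os.close(w)
    os.waitpid(pid, 0)
    data = os.read(r, 32)
    os.close(r)
    return int(data)
```

The second fork matters because joining a PID namespace with setns only affects children created afterwards, which is why the target subprocess is the grandchild rather than the first child.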
  • the method further includes: initializing a data structure for managing worker threads, and creating a worker thread of the target sub-process.
  • The state of the multi-threaded language runtime of the template container is saved and restored separately, so that the main thread of the template container can directly invoke the fork operation to copy the main thread's state and restore the multi-threaded state after the fork, allowing a multi-threaded language runtime process to be created with the fork operation.
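The save-and-restore of multi-threaded runtime state around the fork can be sketched with a task-queue worker pool: a worker blocked on queue.get() matches the "first state" (blocked waiting for a task, no context worth saving), so it can be closed with a sentinel before the fork and recreated afterwards. This is an illustrative sketch, not the patent's implementation:

```python
import queue
import threading

class WorkerPool:
    """Minimal pool whose workers can be closed before a fork and
    recreated afterwards, mimicking the template container's
    save/restore of multi-threaded runtime state."""

    def __init__(self, n: int):
        self.n = n
        self.tasks: queue.Queue = queue.Queue()
        self.threads: list = []
        self._start_workers()

    def _start_workers(self):
        self.threads = []
        for _ in range(self.n):
            t = threading.Thread(target=self._run, daemon=True)
            t.start()
            self.threads.append(t)

    def _run(self):
        while True:
            task = self.tasks.get()   # "first state": blocked, waiting for a task
            if task is None:          # sentinel: close this worker
                return
            task()

    def quiesce(self):
        """Close every worker before forking; workers in the first
        state hold no context, so a sentinel per worker suffices."""
        for _ in self.threads:
            self.tasks.put(None)
        for t in self.threads:
            t.join()

    def restore(self):
        """Reinitialize the thread-management state and recreate the
        workers, as the child does after the fork operation."""
        self._start_workers()
```

A template process would call pool.quiesce(), then os.fork(), then pool.restore() in both parent and child, so that only the single main thread crosses the fork boundary.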
  • the embodiment of the present application provides a serverless computing device, which is used to load a multi-threaded container in a serverless computing process, and the device includes:
  • the receiving module is configured to receive a function call request sent by a user, and the function call request includes function information;
  • a processing module configured to obtain the process number of the corresponding function container according to the function information, wherein the function is deployed in the function container;
  • the processing module is further configured to fork the main thread of the language runtime process in the template container according to the process number of the function container to obtain a target subprocess, wherein both the language runtime process in the template container and the target subprocess are language runtime processes corresponding to the function, and the namespace of the target subprocess is located in the function container; switch the control group of the target subprocess to the control group of the function container according to the process number of the target subprocess, so that the target subprocess is migrated to the function container; and load the function in the function container.
  • the receiving module is also used for:
  • the function information includes: at least one of a function name, a function type, and a function ID.
  • processing module is also used to:
  • a template container is saved in the serverless computing device.
  • before forking the main thread of the template container according to the process number of the function container to obtain the target subprocess, the processing module is also used to: obtain the state information of each worker thread in at least one worker thread; and when the state information of each worker thread in the at least one worker thread is the first state, close each worker thread in the at least one worker thread; wherein the first state includes: the worker thread is in a blocked state due to waiting for a task, the worker thread is in a state that does not need to save its context, and the worker thread is in a state of no logical interaction with the main thread;
  • processing module is also used to:
  • the namespace of the target subprocess is located in the function container.
  • processing module is also used to:
  • the embodiment of the present application provides a serverless computing device, including:
  • At least one memory for storing programs
  • at least one processor, configured to execute the program stored in the memory; when the program stored in the memory is executed, the processor is configured to perform the method provided in the first aspect.
  • the embodiment of the present application provides a serverless computing system, including the serverless computing device provided in the second aspect.
  • the embodiment of the present application provides a computer-readable medium, where instructions are stored in the computer-readable medium, and when the instructions are run on a computer, the computer is caused to execute the method provided in the first aspect.
  • the embodiments of the present application provide a computer program product containing instructions, which, when the instructions are run on a computer, cause the computer to execute the method provided in the first aspect.
  • the embodiment of the present application provides a chip, the chip includes a memory and a processor, the memory is used to store computer instructions, and the processor is used to call and execute the computer instructions from the memory to execute the method provided in the first aspect.
  • Figure 1a is a system architecture diagram of a serverless computing device in the first solution
  • Figure 1b is a schematic flow diagram of a multi-threaded container loading method provided by the first solution
  • Fig. 1c is a schematic diagram of an isolation sandbox startup process provided by the second solution;
  • Figure 2 is a schematic diagram of an application scenario provided by an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a serverless computing device provided in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of another serverless computing device provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a loading process of a multi-threaded container provided by an embodiment of the present application
  • FIG. 6 is a schematic diagram of a thread change state in a multi-threaded fork operation based on a template container provided by an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a method for loading a multi-threaded container provided in an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for multi-threaded replicating a template container provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of yet another multi-threaded container loading method provided by the embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a serverless computing device provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • Language runtime: the runtime environment provided by a high-level language, such as Python.
  • Function instance refers to the entire isolation environment of a function in serverless computing.
  • the function instance refers to the container isolation environment that contains the complete operating environment of the function.
  • Function dependency: the libraries and resources that a function's code depends on in order to run, which the user uploads together with the function code.
  • Instance cold start: the process of recreating a complete function instance.
  • Container isolation technology Serverless computing providers usually choose containers or virtual machines as sandboxes for running serverless functions. Linux containers control and isolate operating system resources (such as file systems, process numbers, networks, etc.) and computing resources (such as cpu, memory, etc.) used by containers through control groups (cgroups) and namespaces (namespaces).
  • the system call of the container process needs to reuse the host operating system kernel.
  • the process ID of the idle process is the process ID of the empty container.
  • the idle process in the empty container can be determined by obtaining the process ID of the empty container, and the namespace and cgroup of the empty container can be determined through the idle process.
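On Linux, the namespace and cgroup of the empty container can be read from /proc using the idle process's process number, as the passage above describes. A sketch with a hypothetical helper, runnable unprivileged against any process visible to the caller:

```python
import os

def container_identity(pid: int) -> dict:
    """Return the namespace identifiers and cgroup membership of `pid`.

    Each entry under /proc/<pid>/ns is a symlink whose target, e.g.
    'mnt:[4026531840]', identifies the namespace the process is in;
    /proc/<pid>/cgroup lists its cgroup membership. Reading these for
    the idle process of an empty container yields the namespace and
    cgroup that a new function process would be placed in.
    """
    ns_dir = f"/proc/{pid}/ns"
    namespaces = {n: os.readlink(os.path.join(ns_dir, n))
                  for n in os.listdir(ns_dir)}
    with open(f"/proc/{pid}/cgroup") as f:
        cgroups = f.read().strip().splitlines()
    return {"namespaces": namespaces, "cgroups": cgroups}
```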
  • Main thread: the thread that executes the main function of the corresponding program.
  • Worker thread: a thread in a process other than the main thread.
  • the POSIX interface includes a fork system call, which is the basic method for generating new processes.
  • The fork operation creates a new address space as the address space of the child process (the new process generated by the fork operation), and the new address space has the same memory content as the parent process (the process that invoked the fork operation).
  • the fork system call adopts the copy-on-write (CoW) method.
  • the copy-on-write mechanism improves the performance of the fork operation.
  • using the fork system call to fork a new instance from an initialized "template" process can greatly improve the startup performance of the serverless instance.
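The benefit of forking from an initialized "template" process rests on copy-on-write: the child sees the parent's fully initialized state immediately, without re-running initialization or eagerly copying memory. A minimal illustration with hypothetical names:

```python
import os

def spawn_from_template(template_state: dict, entry) -> int:
    """Fork a child 'instance' that inherits `template_state` by
    copy-on-write rather than re-initializing it, and return the
    child's exit code."""
    pid = os.fork()
    if pid == 0:
        # Child: the initialized state is already visible here, so the
        # expensive initialization on the critical path is skipped.
        ok = entry(template_state)
        os._exit(0 if ok else 1)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Pages holding template_state are shared between parent and child until one of them writes to them, which is why forking a large initialized runtime is far cheaper than rebuilding it.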
  • the serverless computing device includes: a container engine, a container runtime, and a language runtime.
  • the container engine is responsible for processing the user's function call request. After the container engine receives the user's function call request, it manages and mounts the container image, and calls the container runtime.
  • the container runtime creates a container isolation environment through the Linux namespace and control group (cgroup) interfaces.
  • the execution flow of the first solution is shown in FIG. 1b, including steps S101 to S103.
  • Step S101 The container engine obtains the container image according to the function requested in the user's function call request, mounts the container image and invokes the container runtime.
  • Step S102 Create an isolation environment for function instances according to the namespace and cgroup interfaces of Linux when the container is running.
  • Step S103 After the isolation environment is created, the container process completely initializes the language runtime environment.
  • In this way, the container isolation environment of the function is loaded, and the loaded container isolation environment can be used by establishing a connection with the serverless computing device.
  • the cold start of this solution needs to load a complete function isolation environment and initialize a complete language runtime, and the whole process takes a long time.
  • this scheme does not take full advantage of the similarity between instances of the same function, but repeats the initialization work many times.
  • the second scheme implements the fast startup of the isolation sandbox written in a multi-threaded language through a safe fork of the multi-threaded sandbox.
  • the second solution relies on fork support implemented at the operating system (OS) layer of the virtualization technology, rather than a user-mode approach, so its barrier to adoption is high.
  • the second solution is mainly carried out on the virtualization isolation sandbox, which cannot be directly compatible with the existing container-based isolation method.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • a serverless computing (Serverless Computing) device is deployed on the cloud server to provide serverless computing for user equipment.
  • the user device decouples the application logic at a function granularity, and uploads the function code and function dependencies generated based on the application logic to the cloud server.
  • the user equipment can send a function call request to the cloud server, and the function call request carries parameter information needed for the function call.
  • the serverless computing device deployed on the cloud server deploys and calculates the pre-received function code according to the received function call request.
  • the serverless computing device needs to isolate the user's function execution environment so that the function runs in a complete isolation environment.
  • FIG. 3 is a schematic structural diagram of a serverless computing device provided by an embodiment of the present application.
  • the device includes: a processor 301 , a network interface 302 , and a memory 303 .
  • the processor 301, the network interface 302, and the memory 303 may be connected through a bus or in other ways.
  • the memory 303 is the storage device of the serverless computing device and is used to store programs and data, for example the function code and function dependencies sent by users.
  • the memory 303 provides a storage space, which stores the operating system of the server and the program instructions for implementing the multi-threaded container loading method. The operating system includes but is not limited to Windows, Linux, HarmonyOS (Hongmeng), etc., which is not limited here.
  • the processor 301 (or central processing unit (CPU)) is the computing core and control core of the serverless computing device.
  • the processor 301 reads the program instructions and data stored in the memory 303, so as to execute the multi-threaded container loading method.
  • the processor 301 stores the received function codes and function dependencies after reading the program instructions stored in the memory 303 .
  • the network interface 302 may include standard wired interfaces and wireless interfaces (such as Wi-Fi, mobile communication interfaces, etc.).
  • the network interface 302 is controlled by the processor 301 for sending and receiving data. For example, receive function code and function dependencies sent by users.
  • FIG. 4 is a schematic structural diagram of another serverless computing device provided by an embodiment of the present application.
  • the structural diagram of the serverless computing device provided in FIG. 4 is a schematic structural diagram of a certain state in the process of the serverless computing device receiving the function call request sent by the user, generating a corresponding function instance according to the function calling request, and migrating the function instance to the corresponding isolation environment (function container).
  • the device includes: a template container 401 , a function container 402 , and a control thread 403 .
  • the template container 401 contains the language runtime process of the corresponding function.
  • the language runtime process in template container 401 acts as the "parent" instance for all child instances.
  • the function container 402 is the isolation environment where the function finally runs.
  • the control thread 403 is used to manage the process of generating new instances through fork.
  • the serverless computing device provides complete function request calling logic to the outside world.
  • the user uploads the pre-written function code and function dependencies to the serverless computing device.
  • the serverless computing device deploys the received function code and function dependencies into the function container 402 .
  • the serverless computing device creates a template container 401 according to the language runtime of the received function.
  • the serverless computing device receives the function calling request sent by the user, the serverless computing device forwards the received function calling request to the template container 401 through the control thread 403 .
  • the serverless computing device also needs to send the process ID (pid) of the function container 402 to the template container 401 through the control process.
  • the template container 401 receives the function call request sent by the user and the process number of the function container 402, it starts to create a function instance based on fork.
  • After the serverless developer completes the development of the function code, the function code and function dependencies are uploaded to the serverless computing device. After receiving the function code and function dependencies uploaded by the user, the serverless computing device deploys them into the corresponding function container.
  • the function container in the serverless computing device may be an empty container pre-existing in the serverless computing device, or it may be a newly created function container after the serverless computing device receives the function code and function dependencies uploaded by the user.
  • an empty container refers to a group of namespace and cgroup existing in the serverless computing device, and the namespace and cgroup include an idle process.
  • The serverless computing device determines whether an empty container exists in the device. When an empty container exists, the serverless computing device uses it as the function container and deploys the received function code and function dependencies into it. When no empty container exists, the serverless computing device creates a new empty container as the function container and then deploys the received function code and function dependencies into it.
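The empty-container reuse logic above amounts to a get-or-create pool. A sketch with hypothetical names, where create_empty stands in for whatever mechanism the device uses to build a fresh namespace/cgroup pair:

```python
class ContainerPool:
    """Hand out an existing empty container when one is available,
    otherwise create a new one; names are illustrative only."""

    def __init__(self, create_empty):
        self._free = []             # pre-created empty containers
        self._create = create_empty

    def release(self, container):
        """Return an empty container to the pool for reuse."""
        self._free.append(container)

    def acquire_function_container(self, code, deps):
        """Reuse an empty container if possible, else create one,
        then deploy the function code and dependencies into it."""
        container = self._free.pop() if self._free else self._create()
        container["code"], container["deps"] = code, deps
        return container
```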
  • the control process receives the function call request sent by the user and obtains the process number of the function container.
  • the serverless computing device receives the function call request sent by the user, it also needs to obtain the process number of the function container through the control process in the serverless computing device.
  • the function to be called by the function call request is deployed in the function container.
  • the function call request sent by the user includes function information.
  • the information of the function includes: at least one of a function name, a function type, and a function ID.
  • the control process determines the function container according to the function information, and then obtains the process number of the function container.
  • the function name refers to the naming of the function when the user uploads the written function to the serverless computing device.
  • the function ID refers to an identifier generated by the serverless computing device after the user uploads the function to the serverless computing device. Specifically, the serverless computing device calculates the hash value of the function, and generates a corresponding identifier for the function according to the calculated hash value.
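The patent does not fix a particular hash algorithm or ID format; a minimal sketch of such a hash-derived function ID, assuming SHA-256 and a hypothetical `fn-` prefix, might look like:

```python
import hashlib

def make_function_id(function_code: bytes) -> str:
    # Hash the uploaded function body; a digest prefix serves as the ID.
    # The algorithm and "fn-" prefix are illustrative assumptions.
    digest = hashlib.sha256(function_code).hexdigest()
    return "fn-" + digest[:16]

print(make_function_id(b"def handler(event):\n    return event"))
```

Because the ID is derived from the code's hash, re-uploading identical code yields the same identifier, while any change to the code produces a new one.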
  • the function container actively sends the process ID of the function container to the control process.
  • the control process forwards the process number of the function container to the corresponding template container according to the received function call request.
  • After receiving the function call request, the control process determines the template container corresponding to the function according to the language runtime of the function to be called by the function call request, where the template container contains the language runtime process of the function. Then, the control process sends the received function call request and the process number of the function container to the template container.
  • the control process triggers the main thread of the template container to call a fork operation to obtain the target child process.
  • Before calling the fork operation on the template container, it is also necessary to confirm whether the language runtime of the template container is multi-threaded. When the language runtime of the template container is not multi-threaded, the template container directly calls the fork operation.
  • When the language runtime of the template container is multi-threaded, after the template container receives the function call request sent by the user, it performs a multi-threaded fork on the template container based on POSIX fork to obtain a target child process.
  • the template container obtains the state information of each worker thread.
  • when the state information of each worker thread is the first state, the template container closes the worker threads of the template container.
  • the first state includes: a state in which the worker thread is blocked due to waiting for a task, a state in which the worker thread does not need to save the context, and a state in which the worker thread has no logical interaction with the main thread.
  • the template container suspends the worker thread whose state information is the first state among the at least two worker threads, until the state information of each worker thread in the at least two worker threads is the first state.
  • the template container then closes the template container's worker threads. Alternatively, the template container saves the context of each worker thread whose state information is not the first state.
  • the template container then closes the template container's worker thread.
  • the template container "closes" the worker threads, although no explicit "close" action actually takes place. Instead, during the fork of the template container's main thread, for a multi-threaded parent process, fork only copies the thread that actually called fork (in the embodiment of this application, the main thread of the language runtime process in the template container), while the other threads do not exist in the child process generated after the fork.
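This fork-only-copies-the-caller semantics can be observed directly. The following illustrative Python sketch (not part of the patented implementation; fork semantics are the same as in C) starts a worker thread, forks, and counts threads on both sides:

```python
import os
import threading
import time

started = threading.Event()

def worker():
    started.set()
    time.sleep(30)   # keep the worker alive in the parent

threading.Thread(target=worker, daemon=True).start()
started.wait()

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # In the child, only the thread that called fork() exists.
    os.write(w, str(threading.active_count()).encode())
    os._exit(0)
os.close(w)
child_thread_count = int(os.read(r, 16))
os.waitpid(pid, 0)
print("threads in parent:", threading.active_count())   # 2
print("threads in child :", child_thread_count)         # 1
```

The parent keeps its main thread plus the worker, while the child starts with only the forking thread, which is exactly why the worker threads must be recreated after the fork.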
  • the template container migrates the namespace of the generated process to the function container according to the process ID of the function container.
  • the template container needs to migrate the container isolation environment of the newly generated process. Specifically, the template container needs to perform two multi-threaded forks to switch the namespace of the newly generated process to the namespace of the function container.
  • the template container performs the first multi-threaded fork operation to obtain the first child process. Then, the template container makes a system call for the first child process and switches the namespace of the first child process to the function container. Although the namespace of the first child process is switched to the function container, the switch does not take effect in the first child process itself; a second multi-threaded fork operation must be performed on the first child process to obtain the target child process. At this point, the namespace of the target child process has been switched to the function container.
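The two-fork structure itself can be sketched as follows. This is an illustrative Python sketch: the real implementation would invoke setns() and chroot() in the first child (privileged operations, shown here only as comments), and the grandchild would become the function instance rather than simply reporting its pid:

```python
import os

def double_fork() -> int:
    """Fork twice and return the grandchild's pid to the caller.

    In the described scheme, the first child would switch namespaces here
    (privileged, so only commented), and the second fork makes the switch
    take effect in the grandchild (the target child process).
    """
    r, w = os.pipe()
    pid1 = os.fork()
    if pid1 == 0:                       # first child
        # setns(container_ns_fd, 0); chroot(container_root)  # privileged
        pid2 = os.fork()
        if pid2 == 0:                   # grandchild = target child process
            os.write(w, str(os.getpid()).encode())
            os._exit(0)
        os.waitpid(pid2, 0)
        os._exit(0)
    os.close(w)
    os.waitpid(pid1, 0)
    target_pid = int(os.read(r, 32))
    os.close(r)
    return target_pid

print("target child pid:", double_fork())
```

The pipe stands in for the channel over which the template container reports the target child's process number back to the control process.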
  • the template container forwards the obtained process number of the target child process to the control process.
  • the control process switches the cgroup of the target sub-process to the cgroup of the function container according to the received process number of the target sub-process.
  • the control process switches the cgroup of the target subprocess to the cgroup of the function container according to the pid of the target subprocess, so that the target subprocess migrates to the function container.
  • After the target child process is migrated to the function container, the target child process initializes the data structures used to manage the worker threads in memory, and reuses the initialization procedure of the language runtime process to recreate the various worker threads. At this point, the target child process resumes the multi-threaded running state.
  • the template container executes a multi-thread fork operation to generate a new process.
  • In the process of executing the fork operation on the template container to generate the target child process, it is also necessary to perform steps such as multi-thread suspension and multi-thread state recovery for the multiple threads of the template container.
  • FIG. 6 is a schematic diagram of thread change states in a template container-based multi-thread fork operation provided by an embodiment of the present application. Next, the multi-threaded fork involved in this solution is introduced in detail in combination with FIG. 6 .
  • the fork operation is mainly performed on the multi-threading of the language runtime in user mode, because the POSIX fork operation cannot be directly applied to multi-threaded processes: for a multi-threaded parent process, fork only copies the thread that actually called fork, while the other threads disappear in the child process after the fork.
  • existing language runtimes, such as NodeJS, are multi-threaded.
  • a corresponding template container is pre-created for each language runtime (that is, the template container includes a language runtime process). Therefore, it is necessary to process multiple threads of the template container in the user state, so as to avoid the mutual exclusion lock problem that occurs after the fork operation is performed on the template container.
  • After the template container receives the user's function call request, the template container obtains the status information of the multiple worker threads of the template container. Then, the template container processes the plurality of worker threads according to the acquired working status information.
  • the template container obtains the working state information of each worker thread, and when every worker thread is in the first state, the control thread triggers the main thread of the template container to call a fork operation, and closes the worker threads of the template container during the fork operation.
  • the first state includes: the working thread is in a blocked state due to waiting for a task, the working thread is in a state that does not need to save the context, and the working thread is in a state of no logical interaction with the main thread.
  • the template container obtains the working state information of each worker thread, and when there is at least one worker thread in the template container whose status information is not in the first state, the template container synchronizes the status of the working threads of the template container so that the status information of all the working threads in the template container is in the first state. Then, the control thread triggers the main thread of the template container to call a fork operation, and closes the worker thread of the template container during the fork operation.
  • the template container obtains the working status information of each worker thread, and when there is at least one worker thread in the template container whose status information is not the first status, the template container saves the context of the worker thread whose status information is not the first status. Then, the control thread triggers the main thread of the template container to call a fork operation, and closes the worker thread of the template container during the fork operation.
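A minimal sketch of this wait-until-idle-then-fork control logic, assuming workers that signal the "first state" just before blocking on a task queue. The event-based handshake is illustrative (a production runtime would need a stricter synchronization protocol, since a worker signals idleness momentarily before it actually blocks):

```python
import os
import queue
import threading

task_q: "queue.Queue[str]" = queue.Queue()
idle = [threading.Event() for _ in range(2)]

def worker(i: int) -> None:
    while True:
        idle[i].set()            # about to block waiting for a task: "first state"
        task = task_q.get()      # blocked, no context worth saving
        idle[i].clear()
        if task == "stop":
            return

workers = [threading.Thread(target=worker, args=(i,), daemon=True) for i in range(2)]
for t in workers:
    t.start()

# Control logic: trigger the fork only once every worker reports the first state.
for ev in idle:
    ev.wait()
pid = os.fork()
if pid == 0:
    # Child: only the forking (main) thread exists; the workers are gone.
    os._exit(0)
os.waitpid(pid, 0)
print("forked while all workers were idle")
```

Forking only while every worker is parked on the queue avoids copying a child whose (now nonexistent) workers held locks or half-finished state.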
  • the control thread controls the main thread of the template container to call POSIX fork to generate the target child process.
  • Because the template container migrates the container's isolation environment during the fork process, it needs to call POSIX fork twice.
  • the control thread triggers the main thread of the template container to perform the first fork operation to generate the first child process.
  • the template container invokes the setns and chroot system calls of Linux according to the received pid of the function container, and switches the namespace of the first child process to the namespace of the function container.
  • the namespace of the generated first child process is switched to the namespace of the function container.
  • the setting of the namespace of the first child process in the first fork process will not take effect immediately. Therefore, it is also necessary to perform a second fork operation on the generated first child process.
  • the control thread regards the first child process as a parent process, triggers the first child process to perform a second fork operation, and generates a target child process. At this point, the namespace of the target child process has been switched to the function container.
  • the namespace environment of the newly generated target subprocess is already located in the function container. At this time, it is also necessary to restore the working thread of the target child process.
  • the target subprocess initializes the data structure used to manage the worker threads in memory, and reuses the process when the language runtime process is initialized to recreate various worker threads. At this point, the newly generated target child process resumes the multi-threaded running state.
  • when the target child process creates the worker threads, it is also necessary to determine whether the context of the worker threads of the template container is saved in the serverless computing device. If the context of the worker threads of the template container is stored in the serverless computing device, then when the target child process creates a worker thread, it needs to restore the worker thread that was closed during the first fork operation according to the context saved in the serverless computing device.
  • After performing two fork operations on the main thread of the template container to obtain the target child process, the template container also needs to return the process ID of the target child process to the control process.
  • After the control process receives the process ID of the target child process, the control process migrates the control group (cgroup) of the target child process to the function container according to that process ID.
  • the finally generated target child process instance is completely located in the isolation environment of the function container.
  • the multithreading of the language runtime is processed in the user state, so that the multithreaded language runtime can create a target subprocess through POSIX fork.
  • the POSIX interface includes a fork system call, which can generate a child process identical to the parent process through the fork system call.
  • the Fork operation will create a new address space as the address space of the child process, and the new address space has the same memory segment as the parent process.
  • the fork system call uses the copy-on-write (COW) method.
  • the new address space is not copied immediately; instead, both address spaces first point to the same physical memory, the copy-on-write flag is set, and a physical memory page is copied only when a write operation occurs.
  • the copy-on-write mechanism improves the performance of the fork operation.
  • using the fork system call to fork a new instance from an initialized "template" process (template container) can greatly improve the startup performance of the serverless instance.
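The copy-on-write behavior underlying this claim can be demonstrated with a small illustrative sketch: after fork, a write in the child triggers a private page copy, leaving the parent's memory untouched:

```python
import os

data = bytearray(b"template state")

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child writes to the shared-looking memory; the kernel copies the
    # affected page on write, so the parent never sees the change.
    data[0:8] = b"instance"
    os.write(w, bytes(data))
    os._exit(0)
os.close(w)
child_view = os.read(r, 64)
os.waitpid(pid, 0)
print("child  :", child_view)   # b'instance state'
print("parent :", bytes(data))  # b'template state'
```

Until such a write occurs, the template and all forked instances keep sharing the same physical pages, which is what makes fork-based instance creation both fast and memory-cheap.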
  • the embodiment of the present application also provides a schematic flowchart of a multi-threaded container loading method.
  • the method is executed by a serverless computing device, and the method includes: step S701-step S705.
  • Step S701 receiving the function code and function dependency sent by the user.
  • the function code and function dependencies of the function are packaged and uploaded to the serverless computing device.
  • here, a function dependency refers to the libraries and packages that the function code depends on at runtime.
  • a serverless function developer can use Python or NodeJS to write function code. It should be noted that Python's language runtime has only one main thread. Therefore, when serverless function developers use Python to write function code, the template container corresponding to the language runtime of the function has only one main thread and no worker threads.
  • Step S702 deploying the received function code and function dependencies into a function container.
  • After the serverless computing device receives the function code and function dependencies sent by the user, it stores them in a function container. Specifically, the serverless computing device determines whether there is an empty function container in the serverless computing device. When there is one, the serverless computing device uses the empty function container as the function container, and deploys the received function code and function dependencies into it. When there is none, the serverless computing device creates a new empty function container as the function container, and then deploys the received function dependencies and function code into the function container.
  • the process ID of the idle process is the process ID of the function container.
  • the idle process in the function container can be determined by obtaining the process ID of the function container, and the namespace and cgroup of the function container can be determined through the idle process.
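On Linux, the namespaces reachable from a process ID can be inspected under /proc/<pid>/ns, where each entry is a symlink naming the namespace instance the process belongs to. A small sketch (assuming a Linux /proc filesystem):

```python
import os

def container_namespaces(pid: int) -> dict:
    # Each entry in /proc/<pid>/ns is a symlink such as 'pid:[4026531836]';
    # two processes in the same namespace resolve to the same target.
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

ns = container_namespaces(os.getpid())
print(ns.get("pid"))   # e.g. 'pid:[4026531836]'
```

Reading these links for the function container's idle process is one way the control plane can identify the namespace set that a new function instance must be migrated into.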
  • Step S703 receiving a function call request sent by the user.
  • the control process sends the function call request to the template container corresponding to the language runtime of the function to be called by the function call request.
  • After the serverless computing device receives the function call request sent by the user, it determines whether there is a template container in the serverless computing device corresponding to the language runtime of the function to be called by the function call request. If such a template container exists, the control process directly forwards the function call request to it. If no such template container exists, the serverless computing device creates a corresponding template container according to the language runtime of the function to be called, that is, the newly generated template container contains the language runtime process of the function. Then, the control process forwards the user's function call request to the newly created template container.
  • the template container corresponding to each language runtime only needs to be created once.
  • the newly created template container will be saved to the serverless computing device for subsequent use.
  • Step S704 acquiring the process number of the template container and the function container corresponding to the function to be called by the function call request.
  • the template container is the language runtime process of the function that contains the function-as-a-service logic.
  • When the control process forwards the user's function call request to the template container, it also needs to send the pid of the function container corresponding to the function to be called by the function call request to the template container.
  • after receiving the function call request from the user, the control process obtains the process ID of the function container where the function is deployed according to the function to be called by the function call request.
  • the serverless computing device deploys the received function code and function dependencies into a function container, and then sends the pid of the function container to the control process.
  • the control process sends the pid of the function container to the corresponding template container.
  • Step S705 forking the template container, and migrating the process generated by the forking to the function container.
  • After the template container receives the function call request forwarded by the control process and the process number of the target container where the function to be called is deployed, it starts to fork.
  • Before the template container invokes the fork operation, it is also necessary to confirm whether the language runtime of the template container is multi-threaded. When the language runtime of the template container is not multi-threaded, the template container directly calls the fork operation.
  • steps S7051 to S7056 are included.
  • Step S7051 acquiring multiple threads of the template container.
  • the multiple threads of the acquired template container usually include a main thread and multiple worker threads.
  • Step S7052 acquiring the working status information of each of the multiple threads of the template container, and processing the multiple threads according to the acquired working status information to obtain the main thread among the multiple threads.
  • the template container obtains the working state information of each worker thread, and when all of them are in the first state, the control thread triggers the main thread of the template container to call a fork operation, and closes the worker threads of the template container during the fork operation.
  • the first state includes: a state in which the worker thread is blocked waiting for a task, a state in which the worker thread does not need to save its context, and a state in which the worker thread has no logical interaction with the main thread.
  • the template container obtains the working state information of each worker thread, and when there is at least one worker thread in the template container whose status information is not in the first state, the template container synchronizes the status of the working threads of the template container so that the status information of all the working threads in the template container is in the first state. Then the control thread triggers the main thread of the template container to call the fork operation, and closes the worker thread of the template container during the fork operation.
  • the template container obtains the working status information of each worker thread, and when there is at least one worker thread in the template container whose status information is not the first status, the template container saves the context of the worker thread whose status information is not the first status. Then, the control thread triggers the main thread of the template container to call a fork operation, and closes the worker thread of the template container during the fork operation.
  • Step S7053 performing the first multi-thread reproduction of the main thread to generate a first sub-process.
  • Step S7054 according to the process number of the function container, make a system call to the first sub-process, and switch the namespace of the first sub-process to the namespace of the function container.
  • After the template container receives the function call request sent by the user, it performs the first fork operation on the main thread to obtain the first child process. The template container then determines the namespace corresponding to the function container according to the received pid of the function container, and calls the setns and chroot system calls of the Linux system to switch the namespace of the first child process to the function container. Specifically, the template container switches the root directory of the first child process to the root directory of the function container by calling chroot (change root), and then calls setns to add the first child process to the namespace of the function container.
  • Step S7055 performing a second multi-thread reproduction of the first sub-process to generate a target sub-process.
  • a second fork operation is performed on the first child process to obtain a target child process.
  • the namespace environment of the target child process is already located in the function container.
  • After completing the second fork operation and obtaining the target child process, it is also necessary to restore the worker threads closed in step S7052.
  • the target child process needs to initialize the data structures used to manage worker threads in memory, and then reuse the process-initialization procedure of the NodeJS runtime to recreate the various worker threads, so that the newly generated target child process resumes the multi-threaded running state.
  • when the target child process creates the worker threads, it is also necessary to determine whether the context of the worker threads of the template container is stored in the serverless computing device. If so, when the target child process creates a worker thread, it needs to restore the worker thread that was closed during the first fork operation according to the context saved in the serverless computing device.
  • Step S7056 switching the control group of the target sub-process to the control group in the function container according to the process number of the target sub-process.
  • After completing the second fork operation, the template container also needs to return the pid of the target child process to the control process. After the control process receives the pid of the target child process, the control process is responsible for completing the cgroup migration of the target child process, so that the generated target child process is completely located in the isolation environment of the function container.
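In the cgroup v2 interface, migrating a process between control groups amounts to writing its pid to the destination cgroup's cgroup.procs file. The following sketch shows that step, demonstrated against a stand-in directory rather than a real /sys/fs/cgroup mount (which would require root):

```python
import os
import tempfile

def move_to_cgroup(pid: int, cgroup_dir: str) -> None:
    # cgroup v2: writing a pid to the destination's cgroup.procs file
    # migrates that process into the destination control group.
    with open(os.path.join(cgroup_dir, "cgroup.procs"), "w") as f:
        f.write(str(pid))

# Demonstrate against a stand-in directory; the real path would be
# something like /sys/fs/cgroup/<function-container>/.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "cgroup.procs"), "w").close()
move_to_cgroup(os.getpid(), demo)
print(open(os.path.join(demo, "cgroup.procs")).read())
```

This is the operation the control process performs with the target child's pid to finish placing the new instance inside the function container's resource limits.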
  • Step S706 loading the function in the function container.
  • before forking the template container, it is also necessary to determine whether the language runtime corresponding to the function is multi-threaded.
  • When the language runtime of the function is multi-threaded, it is necessary to process the worker threads of the template container corresponding to the language runtime of the function. Then, with the main thread of the template container as the parent process, the new process is obtained by calling POSIX fork twice, and the namespace and control group of the new process are switched to the function container.
  • the newly generated process also needs to initialize the data structure used to manage the worker threads in the memory, and reuse the process initialization process of the current language to recreate various worker threads, so that the newly generated process can restore the multi-threaded running state.
  • the state of the multi-threads in the multi-threaded language runtime is saved, so that the main thread of the template container can directly call POSIX fork to copy the state of the main thread. Then restore the multi-threaded state after the fork operation, and realize the use of fork to create a multi-threaded language runtime process.
  • the initialized language runtime process state can be reused, thereby skipping most of the initialization process and quickly creating a new multi-threaded language runtime process.
  • the containers of the same function can share part of their state, reducing the physical memory actually attributed to each container (the proportional set size, PSS) once shared libraries are mapped.
  • PSS refers to Proportional Set Size.
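On kernels that expose /proc/<pid>/smaps_rollup (a standard Linux interface), a process's PSS can be read directly; a small illustrative sketch:

```python
def read_pss_kb(pid="self") -> int:
    # PSS charges each shared page 1/n to each of the n processes mapping
    # it, so pages shared with the template are billed only fractionally
    # to each forked instance.
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            if line.startswith("Pss:"):
                return int(line.split()[1])   # value in kB
    raise RuntimeError("Pss field not found")

print("PSS of this process (kB):", read_pss_kb())
```

Comparing this value across instances forked from the same template would show the memory-sharing effect claimed in Table 1.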
  • FIG. 9 is a schematic flowchart of another method for loading a multi-threaded container provided in an embodiment of the present application. Referring to FIG. 9, the method is executed by a serverless computing device, and the method includes: step S901-step S905.
  • Step S901 receiving the function code and function dependency sent by the user.
  • Step S902 deploying the received function code and function dependencies into corresponding function containers.
  • The implementation of steps S901-S902 is the same as that of steps S701-S702, and will not be repeated here.
  • Step S903 obtaining the template container corresponding to the function and the process number of the function container corresponding to the function, wherein the template container contains the language runtime process of the function.
  • Step S904 forking the main thread of the language runtime process contained in the template container, and migrating the process generated by the forking to the function container.
  • The process of performing the multi-threaded fork on the template container is the same as steps S7051-S7056, and will not be repeated here.
  • Step S905 receiving a function call request sent by the user, and loading the function into the function container.
  • after the serverless computing device receives the function code and function dependencies sent by the user, the serverless computing device starts to generate function instances of the function. That is, in the embodiment of the present application, the function instance is generated in the serverless computing device before the serverless computing device receives the function call request sent by the user, which reduces the waiting time when the user calls the function.
  • FIG. 10 is a schematic structural diagram of a serverless computing device provided by an embodiment of the present application.
  • the serverless computing device shown in FIG. 10 is used to load the multi-thread container in the serverless computing process.
  • the serverless computing device shown in FIG. 10 includes: a receiving module 1001 , a processing module 1002 , and a storage module 1003 .
  • the receiving module 1001 is used for receiving function codes and function dependencies sent by users. Then the receiving module 1001 deploys the received function code and function dependencies into the function container.
  • the receiving module 1001 is also configured to receive a function calling request sent by a user. Then the receiving module 1001 forwards the function call request together with the process number of the function container corresponding to the function to be called in the function call request to the template container corresponding to the function.
  • the processing module 1002 is used to perform the first fork operation on the template container to obtain the first child process. Then, the processing module 1002 makes a system call to the first sub-process according to the process ID of the function container, and switches the namespace of the first sub-process to the namespace of the function container. After the processing module 1002 switches the namespace of the first sub-process to the namespace of the function container, the processing module 1002 performs a second multi-threaded fork on the first sub-process to obtain the target sub-process. Then, the processing module 1002 switches the control group of the target sub-process to the control group of the function container according to the process number of the target sub-process.
  • the storage module 1003 is used for storing the function codes and function dependencies received by the receiving module 1001 .
  • After the serverless computing device receives the function call request sent by the user, the serverless computing device generates a new function instance according to the function call request, and migrates the function instance to the target isolation environment (the function container).
  • For the process of migrating the function instance to the target isolation environment, please refer to the descriptions of the related embodiments of FIG. 5, FIG. 6, FIG. 7, FIG. 8, and FIG. 9 above; the description will not be repeated here.
  • For the operations that the serverless computing device shown in FIG. 10 can perform, please refer to the descriptions in the foregoing method embodiments.
  • the device embodiment depicted in Figure 10 is merely illustrative.
  • the division of modules is only a logical function division, and there may be another division method in actual implementation.
  • various modules or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
  • Each processing module in each embodiment of the present application may be integrated into one processing module, each module may exist separately physically, or two or more modules may be integrated into one module.
  • each module in FIG. 10 can be implemented in the form of hardware or in the form of software function modules.
  • the above-mentioned processing module 1002 may be implemented by a software function module generated by at least one processor 301 in FIG. 3 after reading the program code stored in the memory 303 .
  • the above-mentioned modules in FIG. 10 can also be implemented by different hardware in the serverless computing device.
  • the processing module 1002 is implemented by a part of processing resources (such as a core in the multi-core processor) in at least one processor 301 in FIG. 3
  • the receiving module 1001 is realized by the network interface 302 in FIG. 3 .
  • the above functional modules can also be implemented by combining software and hardware.
  • the receiving module 1001 is implemented by a hardware programmable device
  • the processing module 1002 is a software functional module generated by the CPU after reading the program code stored in the memory.
  • the embodiment of the present application also provides a serverless computing system, including at least one serverless computing device (as shown in FIG. 3 and FIG. 10 ).
  • Table 1 Startup cost and memory cost of starting a container instance
  • Table 1 shows that starting a container instance through a traditional container engine method requires a startup delay of 85.5ms.
  • a complete container instance needs to be created. Among them, steps such as the creation of the namespace and cgroup isolation environment of the container and the initialization of the language runtime are included.
  • the memory overhead of PSS does not change with the number of instances.
  • the PSS overhead per container is 14.66MB.
  • the initialized language runtime process state can be reused through multi-threaded fork, thereby skipping most of the initialization process and quickly creating a new multi-threaded language runtime process. Therefore, when starting a container instance based on the fork method in this solution, it only takes 8.4ms to start a new instance.
  • containers of the same function can share part of the state, reducing the PSS overhead.
  • the PSS overhead of each container is 7.25MB.
  • a computer program product refers to computer readable program code stored on a computer readable medium.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • Computer readable storage media include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing.
  • the computer readable storage medium is random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM) or portable read only memory (CD-ROM).

Abstract

本发明提供一种容器加载方法及装置。该方法可以包括:接收用户发送的函数调用请求,该函数调用请求中包括函数的信息;根据函数的信息确定与函数对应的模板容器以及部署有该函数的函数容器的进程号;根据函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,将目标子进程迁移到所述函数容器中;在所述函数容器中加载所述函数。通过复用预先初始化完成的语言运行时状态,优化了语言运行时的初始化时间。其次,通过使用预先生成的函数容器作为函数实例的最终运行环境,缩小了容器隔离环境的初始化开销。

Description

一种容器加载方法及装置
本申请要求在2022年1月19日提交中国国家知识产权局、申请号为202210062563.8、发明名称为“一种容器加载方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及通信技术领域,尤其涉及一种容器加载方法及装置。
背景技术
无服务器计算(Serverless Computing)也被称为函数即服务(Function-as-a-Service,FaaS),是云计算领域的新的编程范式。在Serverless范式下,用户将应用逻辑以函数粒度进行解耦,并将生成的函数代码提交给云服务商,由云服务商来负责函数的部署和计算。在用户将函数代码提交给云服务商之后,用户便可以向云服务商发送附带有参数的函数调用请求。由于用户提交的函数代码通常是不被无服务器计算装置信任的,因此,无服务器计算装置在执行函数调用时,需要将该函数的运行环境进行隔离,使得该函数运行在一个完整的隔离环境内。
在现有技术中,隔离环境的实现方式包括:Linux容器、Firecracker等。由于无服务器计算装置内的函数的运行依赖函数实例,当无服务器计算装置接收到用户发送的函数调用请求时,如果无服务器计算装置不存在可以使用的函数实例,那么无服务器计算装置需要对整个实例环境进行初始化,用来支持这次的函数调用请求,这样的启动叫做“冷启动”。由于Serverless计算具有冷启动频率高、总启动时延在总的端到端时延中占比高的特点,因此,启动时延对Serverless计算的整体性能至关重要。
发明内容
本申请实施例提供了一种容器加载方法及装置,通过使用预先准备好的函数容器作为函数的容器隔离环境,缩小了容器隔离环境初始化的开销,以及通过复刻的方法复用已经初始化完成的函数对应的语言运行时状态,优化了语言运行时的初始化时间。
第一方面,本申请实施例提供了一种多线程容器加载方法,该方法包括:接收用户发送的函数调用请求,该函数调用请求中包括函数的信息;根据该函数的信息得到对应的函数容器的进程号,其中该函数容器中部署有函数;根据函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,其中,该模板容器中的语言运行时进程和目标子进程均为该函数对应的语言运行时进程,目标子进程的命名空间位于函数容器中;根据目标子进程的进程号将目标子进程的控制组切换为函数容器的控制组,以使目标子进程迁移到函数容器中,在所述函数容器中加载所述函数。
本申请实施例提供的多线程容器加载方法,将预先准备好的函数容器作为函数的最终运行环境。然后通过复刻的方法将已经初始化完成的函数的语言运行时状态复制到函数容器中,从而实现对函数的运行环境的初始化。在本申请实施例中通过复用预先初始化完成的语言运行时状态,跳过了函数对应的语言运行时初始化过程中的关键路径,从而显著优化了语言运行时的初始化时间。其次,通过使用预先准备好的函数容器作为函数实例的最终运行环境,进一步缩小了容器隔离环境的初始化开销。
在一个可能的实现方式中,在接收用户发送的函数调用之前,该方法还包括:接收用户发送的函数代码和函数依赖;将函数代码和函数的函数依赖部署到函数容器中。
也就是说,将预先准备好的函数容器作为函数的运行环境,进一步缩小了容器隔离环境的初始化开销。
在一个可能的实现方式中,函数信息包括:函数名、函数类型、函数ID中的至少一种。
也就是说,通过函数调用请求中携带的函数调用信息,可以进一步的确定需要调用的函数的语言运行时,以及部署有需要调用的函数容器。
在一个可能的实现方式中,根据函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程之前,该方法还包括:获取该函数对应的模板容器;在不存在与该函数对应的模板容器的情况下,根据该函数对应的语言运行时,创建一个与该函数对应的模板容器;保存该模板容器。
也就是说,预先在无服务器计算装置中保存与函数语言运行时对应的模板容器。当无服务器计算装置接收到函数调用请求以后,可以直接复刻与该函数调用请求相对应的模板容器。当无服务器计算装置中没有保存与函数调用请求需要调用的函数对应的模板容器时,无服务器计算装置可以在接收到函数调用请求以后,根据接收的函数调用请求确定需要调用的函数语言运行时,然后无服务器计算装置根据该语言运行时生成对应的模板容器,并保存。在本申请实施例中,每个语言运行时对应的模板容器仅需要创建一次。创建模板容器保存在无服务器计算装置中,无服务器计算装置在接收到函数调用请求以后,能够复制已经初始化完成的语言运行时状态(模板容器),显著提高了语言运行时的初始化时间。
在一个可能的实现方式中,根据函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程之前,该方法还包括:在函数的语言运行时为多线程的情况下,获取至少一个工作线程中每一个工作线程的状态信息;当至少一个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭至少一个工作线程中的每一个工作线程;其中,第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程处于与主线程无逻辑上的交互状态;当至少两个工作线程中存在至少一个工作线程的状态信息不为第一状态时,暂停至少两个工作线程中状态信息为第一状态的工作线程,直到至少两个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭至少两个工作线程中的每一个工作线程;或者,保存状态信息不为第一状态的工作线程的上下文,关闭至少两个工作线程中的每一个工作线程。
也就是说,在对模板容器进行复刻操作之前还需要对模板容器的多个工作线程的状态进行判断。然后,根据每个工作线程的状态信息选择直接关闭该工作线程或者保存该工作线程的上下文以后关闭该工作线程。关闭工作线程之后,模板容器就只存在一个主线程,因此可以对主线程调用复刻操作,生成新的进程,显著优化了函数语言运行时的初始化时间。
在一个可能的实现方式中,根据函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,包括:对主线程进行第一次复刻,得到第一子进程;根据函数容器的进程号,将第一子进程的命名空间切换到函数容器中;对第一子进程进行复刻得到目标子进程,目标子进程的命名空间位于函数容器中。
也就是说,在对模板容器的主线程进行第一次复刻以后,还需要根据函数容器的进程号将第一次复刻生成的第一子进程的命名空间切换到函数容器中。具体地,可以通过Linux的setns和chroot系统调用,将第一子进程的命名空间切换到函数容器中。
在一个可能的实现方式中,根据目标子进程的进程号将目标子进程的控制组切换为函数容器以后,该方法还包括:对用于管理工作线程的数据结构进行初始化,创建所述目标子进程的工作线程。
也就是说,在对模板容器进行复刻操作的前后,分别对模板容器多线程语言运行时的多线程进行状态保存和状态恢复,使得模板容器的主线程可以直接调用复刻操作复制主线程状态,并且在复刻操作后恢复多线程状态,实现使用复刻操作创建多线程语言运行时进程。
第二方面,本申请实施例提供了一种无服务器计算装置,该无服务器计算装置用于对无服务器计算过程中的多线程容器进行加载,该装置包括:
接收模块,用于接收用户发送的函数调用请求,该函数调用请求中包括函数的信息;
处理模块,用于根据函数信息得到对应的函数容器的进程号,其中,函数容器中部署有所述函数;
所述处理模块,还用于根据函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,其中,该模板容器中的语言运行时进程和目标子进程均为该函数对应的语言运行时进程,目标子进程的命名空间位于函数容器中;根据目标子进程的进程号将目标子进程的控制组切换为函数容器的控制组,以使目标子进程迁移到函数容器中;在所述函数容器中加载所述函数。
在一个可能的实现方式中,接收模块还用于:
接收用户发送的函数代码和函数依赖;
将函数代码和所述函数依赖部署到函数容器中。
在一个可能的实现方式中,函数信息包括:函数名、函数类型、函数ID中的至少一种。
在一个可能的实现方式中,处理模块还用于:
获取该函数对应的模板容器;
在无服务器计算装置中不存在与该函数对应的模板容器的情况下,根据函数对应的语言运行时,创建一个与该函数对应的模板容器;
将模板容器保存在所述无服务器计算装置中。
在一个可能的实现方式中,在根据函数容器的进程号对模板容器的主线程进行复刻得到目标子进程之前,处理模块还用于:
在函数的语言运行时为多线程的情况下,获取至少一个工作线程中每一个工作线程的状态信息;
当至少一个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭至少一个工作线程中的每一个工作线程;其中,第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程处于与主线程无逻辑上的交互状态;
当至少两个工作线程中存在至少一个工作线程的状态信息不为第一状态时,暂停至少两个工作线程中状态信息为第一状态的工作线程,直到至少两个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭至少两个工作线程中的每一个工作线程;或者,保存状态信息不为第一状态的工作线程的上下文,关闭至少两个工作线程中的每一个工作线程。
在一个可能的实现方式中,处理模块还用于:
对主线程进行第一次复刻,得到第一子进程;
根据函数容器的进程号,将第一子进程的命名空间切换到函数容器中;
对所述第一子进程进行复刻得到目标子进程,所述目标子进程的命名空间位于函数容器中。
在一个可能的实现方式中,处理模块还用于:
对用于管理工作线程的数据结构进行初始化,创建目标子进程的工作线程。
第三方面,本申请实施例提供了一种无服务器计算装置,包括:
至少一个存储器,用于存储程序;
至少一个处理器,用于执行存储器存储的程序,当存储器存储的程序被执行时,处理器用于执行第一方面所提供的方法。
第四方面,本申请实施例提供了一种无服务器计算系统,包括第二方面所提供的无服务器计算装置。
第五方面,本申请实施例提供了一种计算机可读介质,该计算机可读介质中存储有指令,当指令在计算机上运行时,使得计算机执行第一方面所提供的方法。
第六方面,本申请实施例提供了一种包含指令的计算机程序产品,当指令在计算机上运行时,使得计算机执行第一方面所提供的方法。
第七方面,本申请实施例提供了一种芯片,该芯片包括存储器和处理器,存储器用于存储计算机指令,处理器用于从存储器中调用并运行该计算机指令,以执行第一方面所提供的方法。
附图说明
图1a为第一种方案中的无服务器计算装置的系统架构图;
图1b为第一种方案提供的一种多线程容器加载方法的流程示意图;
图1c为第二种方案提供的一种隔离沙箱启动过程示意图;
图2为本申请实施例提供的一种应用场景示意图;
图3为本申请实施例中提供的一种无服务器计算装置的结构示意图;
图4为本申请实施例提供的另一种无服务器计算装置的结构示意图;
图5为本申请实施例提供的一种多线程容器的加载过程示意图;
图6为本申请实施例提供的一种基于模板容器的多线程fork操作中的线程变化状态示意图;
图7为本申请实施例提供的一种多线程容器加载方法的流程示意图;
图8为本申请实施例提供的一种对模板容器进行多线程复刻的方法的流程示意图;
图9为本申请实施例提供的又一种多线程容器加载方法的流程示意图;
图10为本申请实施例提供的一种无服务器计算装置的结构示意图。
具体实施方式
为了使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图,对本申请实施例中的技术方案进行描述。
在本申请实施例的描述中,术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如A和/或B,可以表示:单独存在A,单独存在B,同时存在A和B这三种情况。另外,除非另有说明,术语“多个”的含义是指两个或两个以上。例如,多个系统是指两个或两个以上的系统,多个屏幕终端是指两个或两个以上的屏幕终端。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
在介绍本申请的方案之前,先对本申请实施例中需要用到的关键术语进行解释。
语言运行时:高级语言提供的运行时环境,如Python等。
函数实例:指在Serverless计算中函数的整个隔离环境。当使用容器作为函数的隔离方式时,函数实例是指包含了函数完整运行环境的容器隔离环境。
函数依赖:是指在数据库中,各种不同属性(或者属性集合)关系间的一种“约束”,也可以看作属性之间的一种映射。
实例冷启动:在Serverless计算中,实例冷启动意为重新创建完整的函数实例的过程。
容器隔离技术:Serverless Computing提供商通常选择容器或虚拟机作为运行Serverless函数的沙箱。Linux容器通过控制组(cgroup)和命名空间(namespace)对容器使用的操作系统资源(如文件系统、进程号、网络等)和计算资源(如cpu、内存等)进行控制和隔离。除此之外,容器进程的系统调用需要复用宿主操作系统内核来进行。当系统需要启动一个空容器时,可以启动一个空闲的进程,并且为该进程设置一个新的namespace和cgroup组。此时,该空闲进程的进程号即为该空容器的进程号。进一步地,通过获取该空容器的进程号即可确定该空容器中的空闲进程,通过该空闲进程可以确定该空容器的namespace和cgroup组。
主线程:用于实现对应程序主要功能的线程。
工作线程:进程中除主线程以外的线程。
POSIX Fork技术:POSIX接口包括了一个fork系统调用,是用于产生新进程的基本方法。Fork操作会创建一个新的地址空间作为子进程(fork操作生成的新进程)的地址空间,新的地址空间拥有和父进程(在fork操作中用于被fork的进程)相同的内存段。为了减少fork时的内存拷贝开销,fork系统调用采用写时复制(Copy-on-Write)的方法,在fork时在新的地址空间内先不进行拷贝,而是首先让两个地址空间指向相同的物理内存段,并且设置写时复制标记,当发生写操作时再复制物理内存页。写时复制机制提升了fork操作的性能。在Serverless函数启动的场景下,由于同一个函数的不同实例具有类似的启动过程,因此利用fork系统调用,从一个已初始化好的“模板”进程fork出新的实例,可以大大提高Serverless实例的启动性能。
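上述“从已初始化好的模板进程fork出新实例”的思路可以用如下Python片段示意(仅为便于理解的简化示例,假设运行在POSIX环境;真实系统中模板进程是完整的语言运行时进程,并非本申请的实现):

```python
import os
import time

def init_runtime():
    # 模拟耗时的语言运行时初始化,只在“模板”进程中执行一次
    time.sleep(0.05)
    return {"state": "initialized"}

TEMPLATE_STATE = init_runtime()   # 模板进程启动时完成初始化

def handle_call(handler, result_path):
    """处理一次函数调用:fork 模板进程得到子进程,子进程借助
    写时复制(Copy-on-Write)直接复用已初始化的 TEMPLATE_STATE,
    从而跳过 init_runtime 的初始化路径。"""
    pid = os.fork()
    if pid == 0:                      # 子进程:新的函数实例
        with open(result_path, "w") as f:
            f.write(handler(TEMPLATE_STATE))
        os._exit(0)
    os.waitpid(pid, 0)                # 示例中父进程等待子进程结束
    return pid
```

子进程通过写时复制直接看到父进程已初始化的状态,无需重复执行init_runtime,这正是fork方式相对冷启动的时延优势来源。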
目前有几种方案能够对无服务器计算装置内的函数实例环境进行初始化,用以支持函数调用请求。第一种方案,如图1a所示,无服务器计算装置包括:容器引擎、容器运行时、语言运行时。其中,容器引擎负责处理用户的函数调用请求。当容器引擎接收到用户的函数调用请求以后,对容器镜像进行管理和挂载,并调用容器运行时。容器运行时通过Linux的命名空间(namespace)和控制组件(cgroup)接口创建容器隔离环境。第一种方案的执行流程如图1b所示,包括步骤S101到步骤S103。
步骤S101:容器引擎根据用户的函数调用请求中请求的函数获取容器镜像,挂载该容器镜像并调用容器运行时。
步骤S102:容器运行时按照Linux的namespace和cgroup接口,创建函数实例的隔离环境。
步骤S103:隔离环境创建好后,容器进程完整初始化语言运行时环境。
第一种方案虽然能够很好地加载函数的容器隔离环境,并且加载完成的容器隔离环境能够与无服务器计算装置建立连接使用。但是该方案的冷启动需要加载完整的函数隔离环境,并且初始化完整的语言运行时,整个过程耗时相当长。其次,该方案没有充分利用相同函数实例之间的相似性,而是重复进行了多次初始化工作。
如图1c所示,第二种方案通过多线程沙箱安全复刻(fork)实现基于多线程语言编写的隔离沙箱的快速启动。
第二种方案基于虚拟化技术,需要在操作系统(OS)层实现fork,不是基于用户态的方法,使用门槛高。其次,第二种方案主要在虚拟化隔离沙箱上进行,无法直接兼容现有的基于容器隔离的方法。
图2为本申请实施例提供的一种应用场景示意图。如图2所示,在本申请实施例中,云服务器上部署有无服务器计算(Serverless Computing)装置,用于为用户设备提供无服务器计算。在Serverless模式下,用户设备将应用逻辑以函数粒度进行解耦,并将基于该应用逻辑生成的函数代码和函数依赖上传到云服务器上。然后,用户设备便可以向云服务器发送函数调用请求,该函数调用请求中携带了该函数调用需要用到的参数信息。云服务器接收到用户设备发送的函数调用请求以后,由云服务器上部署的无服务器计算装置根据接收到的函数调用请求对预先接收的函数代码进行部署和计算。进一步地,由于用户提交的函数代码是不被无服务器计算装置信任的,因此,无服务器计算装置需要将用户的函数运行环境进行隔离,使得该函数运行在一个完整的隔离环境中。
图3为本申请实施例提供的一种无服务器计算装置的结构示意图。如图3所示,该装置包括:处理器301、网络接口302、存储器303。其中,处理器301、网络接口302、存储器303可以通过总线或者其他方式连接。
存储器303是无服务器计算装置的记忆设备,用于存放程序和数据。例如,存放用户发送的函数代码和函数依赖。存储器303提供存储空间,该存储空间存储了服务器的操作系统和用以实现多线程容器加载方法的程序指令。操作系统包括但不限于:Windows系统(一种操作系统),Linux系统(一种操作系统),鸿蒙系统(一种操作系统)等等,在此不做限定。
在本方案中,处理器301(或称为中央处理器(central processing unit,CPU))是无服务器计算装置的计算核心及控制核心。处理器301读取存储器303中保存的程序指令和数据,从而执行多线程容器加载方法。处理器301读取存储器303中保存的程序指令后对接收的函数代码和函数依赖进行存储。
网络接口302可以包括标准的有线接口、无线接口(如Wi-Fi,移动通信接口等)。网络接口302受处理器301的控制用于收发数据。例如,接收用户发送的函数代码和函数依赖。
图4为本申请实施例提供的另一种无服务器计算装置的结构示意图。需要说明的是,图4所提供的无服务器计算装置的结构示意图为无服务器计算装置在接收到用户发送的函数调用请求以后,无服务器计算装置根据该函数调用请求生成对应的函数实例,并将该函数实例迁移到对应的隔离环境(函数容器)的过程中的某一个状态的结构示意图。如图4所示,该装置包括:模板容器401、函数容器402、控制线程403。其中,模板容器401中包含了对应函数的语言运行时进程。模板容器401中的语言运行时进程作为所有子实例的“父亲”实例。当模板容器401语言运行时进程作为父进程时,通过多线程fork方法产生新的实例。函数容器402为函数最终运行的隔离环境。控制线程403用来管理通过fork生成新实例的过程。无服务器计算装置对外提供完整的函数请求调用逻辑。
在本申请实施例中,用户将预先编写好的函数代码和函数依赖上传到无服务器计算装置。无服务器计算装置将接收的函数代码和函数依赖部署到函数容器402中。然后,无服务器计算装置根据接收的函数的语言运行时创建一个模板容器401。当无服务器计算装置接收到用户发送的函数调用请求以后,无服务器计算装置通过控制线程403将接收的函数调用请求转发给模板容器401。同时无服务器计算装置还需要通过控制进程将函数容器402的进程号(pid)发送给模板容器401。模板容器401接收到用户发送的函数调用请求和函数容器402的进程号以后,开始进行基于fork的函数实例创建。
以上即是对本方案中涉及的无服务器计算装置的介绍,接下来基于图4所示的无服务器计算装置和图5所示的无服务器计算装置中的多线程容器加载过程,对本方案中涉及的基于用户态的多线程容器加载方案进行详细介绍。详见下文描述。
(1)用户将函数代码和函数依赖上传到无服务器计算装置。
Serverless开发人员完成函数代码的开发以后,将该函数代码和函数依赖上传到无服务器计算装置中。无服务器计算装置接收到用户上传的函数代码和函数依赖以后,将该函数代码和函数依赖部署到对应的函数容器中。
需要说明的是,无服务器计算装置中的函数容器可以是预先存在于无服务器计算装置中的空容器,也可以是无服务器计算装置在接收到用户上传的函数代码和函数依赖以后再新创建的函数容器。其中,空容器是指在无服务器计算装置中存在的一组namespace和cgroup组,该namespace和cgroup组中包含了一个空闲进程。在一个可能的示例中,无服务器计算装置在接收到用户上传的函数代码和函数依赖以后,确定无服务器计算装置中是否存在空容器。当无服务器计算装置中存在空容器时,无服务器计算装置将该空容器作为函数容器,并将接收的函数代码和函数依赖部署到该函数容器中。当无服务器计算装置中不存在空容器时,无服务器计算装置重新生成一个空容器作为函数容器。然后无服务器计算装置将接收的函数依赖和函数代码部署到该函数容器中。
(2)控制进程接收用户发送的函数调用请求并获取函数容器的进程号。在本方案中,无服务器计算装置接收到用户发送的函数调用请求以后,还需要通过无服务器计算装置中的控制进程获取函数容器的进程号。其中,该函数容器中部署有该函数调用请求需要调用的函数。
需要说明的是,控制进程接收用户发送的函数调用请求和获取函数容器的进程号这两个操作之间是没有先后顺序的。
在一个可能的示例中,用户发送的函数调用请求中包括函数信息。函数的信息包括:函数名、函数类型、函数ID中的至少一种。控制进程根据函数信息确定函数容器,进而获取函数容器的进程号。需要说明的是,函数名指用户将编写完成的函数上传到无服务器计算装置中时对该函数的命名。函数ID是指用户将该函数上传到无服务器计算装置以后,由该无服务器计算装置为该函数生成的标识。具体地,无服务器计算装置计算该函数的哈希值,根据计算得到的哈希值为该函数生成相应的标识。
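正文提到函数ID由无服务器计算装置根据函数代码的哈希值生成。下面用Python给出一个示意(哈希算法选用SHA-256以及标识长度均为本示例的假设,原文未作限定):

```python
import hashlib

def make_function_id(code: bytes) -> str:
    """根据函数代码的哈希值为函数生成标识:对代码内容计算
    SHA-256,并截取其十六进制摘要的前 16 位作为函数 ID。"""
    return hashlib.sha256(code).hexdigest()[:16]
```

同一份函数代码总是得到同一个ID,不同代码则以极高概率得到不同ID,便于控制进程据此查找对应的函数容器。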
在另一个可能的示例中,无服务器计算装置将用户发送的函数代码和函数依赖部署到函数容器以后,函数容器主动将该函数容器的进程号发送给控制进程。
(3)控制进程根据接收的函数调用请求将函数容器的进程号转发给对应的模板容器。
控制进程接收到函数调用请求以后,根据该函数调用请求需要调用的函数的语言运行时,确定与该函数对应的模板容器。其中,模板容器中包含了该函数的语言运行时进程。然后,控制进程将接收到的函数调用请求和函数容器的进程号发送给模板容器。
(4)控制进程触发模板容器的主线程调用fork操作,得到目标子进程。
在对模板容器调用fork操作之前,还需要确认模板容器的语言运行时是否为多线程。当模板容器的语言运行时不为多线程时,模板容器直接调用fork操作。
当模板容器的语言运行时为多线程时,模板容器接收到用户发送的函数调用请求以后,基于POSIX Fork技术对模板容器进行多线程fork,得到一个目标子进程。
在一个示例中,在对模板容器调用fork操作之前,还需要对模板容器的工作线程的状态进行判断,确定是否可以直接关闭该模板容器的工作线程。具体地,模板容器获取每一个工作线程的状态信息。当每一个工作线程的状态信息都为第一状态时,模板容器关闭该模板容器的工作线程。其中,第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程与主线程无逻辑上的交互状态。当模板容器的工作线程存在至少一个工作线程的状态信息不为第一状态时,模板容器暂停至少两个工作线程中状态信息为第一状态的工作线程,直到至少两个工作线程中的每一个工作线程的状态信息都为第一状态。然后,模板容器关闭该模板容器的工作线程。或者,保存状态信息不为第一状态的工作线程的上下文。然后,模板容器关闭该模板容器的工作线程。
需要说明的是,在本申请实施例中,模板容器“关闭”工作线程,“关闭”这个动作并不是真的存在。而是在模板容器的主线程进行fork的过程中,对于多线程的父进程而言,fork只会对实际调用fork的线程(在本申请实施例中,指的是模板容器中的语言运行时进程的主线程)进行复制,而其他线程会在fork之后生成的子进程中消失。
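步骤(4)中对工作线程状态的判断与处理策略,可以概括为如下Python示意(其中状态取值为示例假设,仅用于说明判断流程,并非本申请的实现):

```python
# 第一状态:因等待任务而阻塞、无需保存上下文、与主线程无逻辑交互(假设的枚举值)
FIRST_STATE = "first"
RUNNING = "running"        # 非第一状态(假设值)

def prepare_workers(workers, save_context=False):
    """fork 前处理模板容器的工作线程。workers: {线程名: 状态}。
    全部处于第一状态时可直接关闭;否则按策略等待其进入第一状态,
    或先保存其上下文再关闭(对应正文的两种处理方式)。"""
    busy = [name for name, state in workers.items() if state != FIRST_STATE]
    if not busy:
        return {"close": list(workers), "saved": []}
    if save_context:
        # 策略二:保存非第一状态线程的上下文后,关闭全部工作线程
        return {"close": list(workers), "saved": busy}
    # 策略一:暂停第一状态的线程,等待其余线程进入第一状态(此处仅示意)
    return {"wait_for": busy}
```

判断完成后模板容器只剩主线程,主线程即可安全地调用fork。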
(5)模板容器根据函数容器的进程号将生成的进程的namespace迁移到函数容器中。
模板容器在进行多线程fork过程中需要对新生成的进程的容器隔离环境进行迁移。具体地,模板容器需要进行两次多线程fork以实现将新生成的进程的namespace切换为函数容器的namespace。模板容器进行第一次多线程fork操作,得到第一子进程。然后,模板容器对第一子进程进行系统调用,将第一子进程的namespace切换到函数容器中。虽然将第一子进程的namespace切换到了函数容器中,但是该切换操作在第一子进程中并不会立即生效。还需要对第一子进程进行第二次多线程fork操作,得到目标子进程。此时,得到的目标子进程的namespace已经切换到了函数容器中。
(6)模板容器将得到的目标子进程的进程号转发给控制进程。
(7)控制进程根据接收到的目标子进程的进程号,将该目标子进程的cgroup切换为函数容器的cgroup。
在得到目标子进程以后,还需要将目标子进程的pid发送给控制进程。控制进程根据目标子进程的pid将目标子进程的cgroup切换为函数容器的cgroup,以使目标子进程迁移到函数容器中。
在将目标子进程迁移到函数容器中以后,目标子进程对内存中用于管理工作线程的数据结构进行初始化,并复用语言运行时进程初始化时的流程,重新创建各种工作线程。此时,目标子进程恢复多线程的运行状态。
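步骤(7)中控制进程切换目标子进程cgroup的操作,在Linux的cgroup v2接口下大致相当于把进程号写入目标cgroup目录下的cgroup.procs文件。以下为简化的Python示意(真实操作需要相应权限,示例路径为假设值):

```python
import os

def migrate_to_cgroup(pid, cgroup_dir):
    """把进程 pid 迁入 cgroup_dir 对应的控制组:向该目录下的
    cgroup.procs 文件写入进程号(cgroup v2 的标准迁移方式)。"""
    with open(os.path.join(cgroup_dir, "cgroup.procs"), "w") as f:
        f.write(str(pid))

# 真实调用形如:migrate_to_cgroup(target_pid, "/sys/fs/cgroup/func-container")
# 其中 "/sys/fs/cgroup/func-container" 为假设的函数容器 cgroup 路径
```

写入完成后,目标子进程的资源控制即归属函数容器的cgroup,与其namespace一起构成完整的隔离环境。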
在一个可能的实施例中,模板容器接收到函数调用请求后执行多线程fork操作产生新的进程。在模板容器执行fork操作产生目标子进程的过程中,还需要对模板容器的多个线程进行多线程暂停、多线程状态恢复等流程。图6为本申请实施例提供的一种基于模板容器的多线程fork操作中的线程变化状态示意图。接下来,结合图6对本方案中涉及到的多线程fork进行详细介绍。
(1)合并
在本方案中,主要在用户态对语言运行时的多线程进行fork操作。POSIX的fork操作无法直接应用在多线程的进程上:对于多线程的父进程而言,fork只会对实际调用fork的线程进行复制,而其他线程会在fork之后的子进程中消失。而现有的语言运行时(如NodeJS)通常是多线程的,这就使得fork无法直接应用在Serverless场景中。在本方案中,在无服务器计算装置中,预先为每个语言运行时创建了对应的模板容器(即模板容器中包含了语言运行时进程)。因此,需要在用户态对模板容器的多个线程进行处理,以避免在对模板容器进行fork操作以后出现的互斥锁问题。
模板容器接收到用户的函数调用请求以后,模板容器获取该模板容器的多个工作线程的状态信息。然后,模板容器根据获取的多个工作线程的工作状态信息对多个工作线程进行处理。
在一个可能的示例中,模板容器获取每一个工作线程的工作状态信息,当每一个工作线程的状态信息都为第一状态时,控制线程触发模板容器的主线程调用fork操作,并在执行fork操作的过程中关闭该模板容器的工作线程。其中,第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程处于与主线程无逻辑上的交互状态。
在一个可能的示例中,模板容器获取每一个工作线程的工作状态信息,当模板容器的工作线程存在至少一个工作线程的状态信息不为第一状态时,模板容器同步该模板容器的工作线程的状态,以使模板容器的所有工作线程的状态信息都为第一状态。然后,控制线程触发模板容器的主线程调用fork操作,并在执行fork操作的过程中关闭该模板容器的工作线程。
在一个可能的示例中,模板容器获取每一个工作线程的工作状态信息,当模板容器的工作线程存在至少一个工作线程的状态信息不为第一状态时,模板容器保存状态信息不为第一状态的工作线程的上下文。然后,控制线程触发模板容器的主线程调用fork操作,并在执行fork操作的过程中关闭该模板容器的工作线程。
(2)fork
在对模板容器的工作线程进行处理以后,由控制线程控制模板容器的主线程调用POSIX fork生成目标子进程。其中,模板容器在fork过程中对容器的隔离环境进行迁移时,需要调用两次POSIX fork。
具体地,在用户态关闭语言运行时的工作线程以后,控制线程触发模板容器的主线程进行第一次fork操作,产生第一子进程。然后,模板容器根据接收到的函数容器的pid,调用Linux的setns和chroot系统调用,将第一子进程的命名空间切换为函数容器的命名空间。虽然第一次fork过程中将生成的第一子进程的命名空间切换为了函数容器的命名空间,但是第一次fork过程中对第一子进程的命名空间的设置并不会立即生效。因此,还需要对生成的第一子进程进行第二次fork操作。
控制线程将第一子进程作为父进程,触发第一子进程进行第二次fork操作,生成目标子进程。此时,目标子进程的命名空间已经切换到了函数容器中。
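两次fork的控制流程可以用下面的Python骨架示意(简化示例:命名空间切换用可调用参数ns_switch占位,真实实现中对应setns/chroot系统调用;为便于演示,第一子进程等待目标子进程结束后才把pid写回,真实系统中二者并发):

```python
import os

def double_fork(ns_switch, payload):
    """第一次 fork 产生第一子进程并切换命名空间,第二次 fork 产生
    目标子进程使切换生效;通过管道把目标子进程的 pid 传回控制进程。"""
    r, w = os.pipe()
    pid1 = os.fork()
    if pid1 == 0:                      # 第一子进程
        os.close(r)
        ns_switch()                    # 真实实现:setns/chroot 切换到函数容器
        pid2 = os.fork()
        if pid2 == 0:                  # 目标子进程:命名空间设置已生效
            os.close(w)
            payload()                  # 执行函数逻辑
            os._exit(0)
        os.waitpid(pid2, 0)            # 演示用:等待目标子进程结束
        os.write(w, str(pid2).encode())
        os._exit(0)
    os.close(w)
    target_pid = int(os.read(r, 32))   # 控制进程收到目标子进程的 pid
    os.close(r)
    os.waitpid(pid1, 0)                # 回收第一子进程
    return target_pid
```

控制进程拿到target_pid后,即可按步骤(7)把目标子进程的cgroup切换到函数容器中。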
(3)扩展
在对模板容器的主线程进行两次fork操作以后,新生成的目标子进程的namespace环境已经位于函数容器中。此时,还需要对目标子进程的工作线程进行恢复。
在一个可能的示例中,目标子进程对内存中用于管理工作线程的数据结构进行初始化,并复用语言运行时进程初始化时的流程,重新创建各种工作线程。此时,新生成的目标子进程恢复多线程的运行状态。
在另一个可能的示例中,在目标子进程创建工作线程时,还需要确定无服务器计算装置中是否保存有模板容器的工作线程的上下文。基于无服务器计算装置中保存有模板容器的工作线程的上下文,目标子进程在创建工作线程时,需要根据无服务器计算装置中保存的上下文恢复在第一次fork操作时关闭的工作线程。
(4)迁移
在对模板容器的主线程进行两次fork操作,得到目标子进程以后,模板容器还需要将目标子进程的进程号返回给控制进程。控制进程接收到目标子进程的进程号以后,根据该进程号将目标子进程的控制组(cgroup)迁移到函数容器中,使得最终生成的目标子进程实例完全位于函数容器的隔离环境中。
在本申请实施例中,在用户态对语言运行时的多线程进行处理,使得多线程语言运行时能够通过POSIX fork创建目标子进程。其中,在通过POSIX fork创建目标子进程的过程中,POSIX接口包括了一个fork系统调用,通过fork系统调用可以产生一个与父进程完全相同的子进程。具体地,Fork操作会创建一个新的地址空间作为子进程的地址空间,新的地址空间拥有和父进程相同的内存段。而为了减少fork时的内存拷贝开销,fork系统调用采用写时复制(Copy-on-Write)的方法。在fork时在新的地址空间先不进行拷贝,而是首先让两个地址空间指向相同的物理内存段,并且设置写时复制标记,当发生写操作时再复制物理内存页。写时复制机制提升了fork操作的性能。在Serverless函数启动的场景下,由于同一个函数的不同实例具有类似的启动过程,因此利用fork系统调用,从一个已初始化好的“模板”进程(模板容器)中fork出新的实例,可以大大提高Serverless实例的启动性能。
基于上文描述的多线程容器加载方案,本申请实施例还提供的一种多线程容器加载方法的流程示意图。参见图7,该方法由无服务器计算装置执行,该方法包括:步骤S701-步骤S705。
步骤S701,接收用户发送的函数代码和函数依赖。
Serverless函数开发人员完成函数代码开发以后,将该函数的函数代码和函数依赖打包并上传至无服务器计算装置。其中,函数依赖是指在数据库中,各种不同属性(或者属性集合)关系间的一种“约束”,也可以看作属性之间的一种映射。
在一个可能的示例中,Serverless函数开发人员可以使用Python或者Nodejs来进行函数代码的编写。需要说明的是,Python的语言运行时只有一个主线程。因此,当Serverless函数开发人员使用Python来进行函数代码的编写时,该函数的语言运行时对应的模板容器只有一个主线程,而没有工作线程。
步骤S702,将接收的函数代码和函数依赖部署到函数容器中。
无服务器计算装置接收到用户发送的函数代码和函数依赖以后,将该函数代码和函数依赖存储到函数容器中。具体地,当无服务器计算装置接收到用户发送的函数代码和函数依赖以后,确定无服务器计算装置中是否存在空函数容器。当无服务器计算装置中存在空函数容器时,无服务器计算装置将该空函数容器作为函数容器,并将接收的函数代码和函数依赖部署到该函数容器中。当无服务器计算装置中不存在空函数容器时,无服务器计算装置重新生成一个空的函数容器作为函数容器。然后无服务器计算装置将接收的函数依赖和函数代码部署到该函数容器中。
在生成空的函数容器时,需要启动一个空闲的进程,并且为该进程设置一个新的namespace和cgroup组。此时,该空闲进程的进程号即为该函数容器的进程号。进一步地,通过获取函数容器的进程号即可确定该函数容器中的空闲进程,通过该空闲进程可以确定该函数容器的namespace和cgroup组。
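上文提到,通过函数容器的进程号即可确定其namespace和cgroup组。在Linux上这一点可以通过/proc文件系统直接观察,以下为Python简化示意(假设运行在Linux上,仅用于说明,并非本申请的实现):

```python
import os

def inspect_container(pid):
    """给定容器内空闲进程的进程号,通过 /proc 读取该进程所属的
    namespace 标识与 cgroup 归属,即由进程号定位容器的隔离环境。"""
    ns_dir = "/proc/%d/ns" % pid
    namespaces = {name: os.readlink(os.path.join(ns_dir, name))
                  for name in os.listdir(ns_dir)}
    with open("/proc/%d/cgroup" % pid) as f:
        cgroups = f.read().splitlines()
    return namespaces, cgroups
```

例如,对当前进程调用 inspect_container(os.getpid()),可以看到形如 pid:[4026531836] 的namespace标识,以及该进程所在的cgroup路径。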
步骤S703,接收用户发送的函数调用请求。
无服务器计算装置接收到用户发送的函数调用请求以后,控制进程将该函数调用请求发送到与该函数调用请求需要调用的函数的语言运行时相对应的模板容器中。
需要说明的是,在无服务器计算装置接收到用户发送的函数调用请求以后,确定无服务器计算装置中是否存在有与函数调用请求中需要调用的函数的语言运行时相对应的模板容器。基于无服务器计算装置中存在需要调用的函数的语言运行时对应的模板容器,控制进程直接将该函数调用请求转发到该模板容器中。基于无服务器计算装置不存在与需要调用的函数的语言运行时相对应的模板容器,无服务器计算装置根据需要调用的函数的语言运行时创建一个对应的模板容器,即新生成的模板容器中包含该函数的语言运行时进程。然后,控制进程将用户的函数调用请求转发到新创建的模板容器中。
需要说明的是,在本方案中,每个语言运行时对应的模板容器只需要创建一次。新创建的模板容器会被保存到无服务器计算装置中,以方便后续的使用。
步骤S704,获取函数调用请求需要调用的函数对应的模板容器和函数容器的进程号。
在本方案中,模板容器为包含函数即服务逻辑的函数的语言运行时进程。控制进程在将用户的函数请求转发给模板容器时,还需要将部署有函数调用请求中需要调用的函数对应的函数容器的pid发送给模板容器。
在一个可能的示例中,控制进程在接收到用户的函数调用请求以后,根据该函数调用请求中需要调用的函数,获取部署有该函数的函数容器的进程号。
在另一个可能的示例中,无服务器计算装置将接收到的函数代码和函数依赖部署到函数容器中,然后将该函数容器的pid发送给控制进程。控制进程将该函数容器的pid发送给对应的模板容器。
步骤S705,对模板容器进行复刻,并将复刻生成的进程迁移到函数容器中。
模板容器接收到控制进程转发的函数调用请求和部署有该函数调用请求需要调用的函数的函数容器的进程号后,开始进行fork。
在模板容器调用fork操作之前,还需要确认模板容器的语言运行时是否为多线程。当模板容器的语言运行时不为多线程时,模板容器直接调用fork操作。
当模板容器对应的语言运行时为多线程时,还需要对模板容器的工作线程的状态进行判断。具体地,在对模板容器进行多线程复刻时,如图8所示,包括步骤S7051-步骤S7056。
步骤S7051,获取模板容器的多个线程。
获取的模板容器的多个线程通常包括一个主线程和多个工作线程。
步骤S7052,获取模板容器的多个线程中的每一个线程的工作状态信息,并根据获取的工作状态信息对多个线程进行处理,得到多个线程中的主线程。
由于现有的语言运行时通常是多线程的,对于多线程的父进程而言,fork只会对实际调用fork的线程进行复制,而其他线程会在fork之后的子进程中消失。因此,需要先对模板容器的工作线程进行处理,以避免在对模板容器进行fork操作以后,因为消失的线程而出现互斥锁问题。
在一个可能的示例中,模板容器获取每一个工作线程的工作状态信息,当每一个工作线程的状态信息都为第一状态时,控制线程触发模板容器的主线程调用fork操作,并在执行fork操作的过程中关闭该模板容器的工作线程。其中,第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程处于与主线程无逻辑上的交互状态。
在一个可能的示例中,模板容器获取每一个工作线程的工作状态信息,当模板容器的工作线程存在至少一个工作线程的状态信息不为第一状态时,模板容器同步该模板容器的工作线程的状态,以使模板容器的所有工作线程的状态信息都为第一状态。然后控制线程触发模板容器的主线程调用fork操作,并在执行fork操作的过程中关闭该模板容器的工作线程。
在一个可能的示例中,模板容器获取每一个工作线程的工作状态信息,当模板容器的工作线程存在至少一个工作线程的状态信息不为第一状态时,模板容器保存状态信息不为第一状态的工作线程的上下文。然后,控制线程触发模板容器的主线程调用fork操作,并在执行fork操作的过程中关闭该模板容器的工作线程。
步骤S7053,对主线程进行第一次多线程复刻,生成第一子进程。
步骤S7054,根据函数容器的进程号,对第一子进程进行系统调用,将第一子进程的命名空间切换为函数容器的命名空间。
模板容器接收到用户发送的函数调用请求以后,即对主线程进行第一次fork操作,得到第一子进程。然后,模板容器根据接收到的函数容器的pid确定该函数容器对应的namespace,并调用Linux系统的setns和chroot系统调用,将第一子进程的namespace切换到函数容器中。具体地,模板容器通过调用chroot(change root)把第一子进程的根目录切换为函数容器的根目录,然后调用setns将第一子进程加入到函数容器的namespace中。
步骤S7055,对第一子进程进行第二次多线程复刻,生成目标子进程。
虽然在对主线程的第一次fork操作中,将生成的第一子进程的命名空间切换为了函数容器的命名空间。但是,该切换操作并不会立刻生效。因此,还需要对第一子进程进行第二次多线程fork。
对第一子进程进行第二次fork操作,得到目标子进程。此时,目标子进程的namespace环境已经位于函数容器中。
在完成第二次fork操作,得到目标子进程之后,还需要对在步骤S7052中关闭的工作线程进行恢复。在一个示例中,第二次fork操作完成之后,目标子进程需要对内存中用于管理工作线程的数据结构进行初始化,然后复用NodeJS运行时的进程初始化流程,重新创建各种工作线程,使得新生成的目标子进程可以恢复多线程的运行状态。
在一个示例中,在目标子进程创建工作线程时,还需要确定无服务器计算装置中是否保存有模板容器的工作线程的上下文。基于无服务器计算装置中保存有模板容器的工作线程的上下文,目标子进程在创建工作线程时,需要根据无服务器计算装置中保存的上下文恢复在第一次fork操作时关闭的工作线程。
步骤S7056,根据目标子进程的进程号将目标子进程的控制组切换为函数容器中的控制组。
在完成第二次fork操作以后,模板容器还需要将目标子进程的pid返回给控制进程。控制进程接收到目标子进程的pid以后,由控制进程负责完成目标子进程cgroup的迁移,以使得生成的目标子进程完全位于函数容器的隔离环境中。
步骤S706,在函数容器中加载该函数。
在本申请实施例中,在对模板容器进行fork之前,还需要确定函数对应的语言运行时是否为多线程。当函数的语言运行时为多线程时,需要对函数的语言运行时对应的模板容器的工作线程进行处理。然后将模板容器的主线程作为父进程,通过两次调用POSIX fork,得到新的进程并且将新的进程的命名空间和控制组切换到了函数容器中。进一步地,通过两次调用POSIX fork得到目标子进程以后,新生成的进程还需要对内存中用于管理工作线程的数据结构进行初始化,以及复用当前语言运行时进程初始化的流程,重新创建各种工作线程,以使得新生成的进程恢复多线程的运行状态。在本申请实施例中,通过在fork操作前,对多线程语言运行时的多线程进行状态保存,使得模板容器的主线程可以直接调用POSIX fork复制主线程状态。然后在fork操作之后恢复多线程状态,实现使用fork创建多线程语言运行时进程。进一步地,在本申请实施例中,通过多线程的fork操作,能够复用已经初始化好的语言运行时进程状态,从而跳过大部分的初始化过程,快速创建出新的多线程语言运行时进程。在通过fork操作创建目标子进程的过程中,通过fork的Copy-on-Write机制,使得相同函数的容器之间可以共享部分状态,降低了按比例分摊共享库后实际使用的物理内存(Proportional Set Size,PSS)开销。
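上文提到的PSS(按比例分摊共享内存后的实际物理内存占用)在Linux上可以从/proc接口直接读取。以下为Python简化示意(假设内核提供smaps_rollup接口,即4.14及以上版本):

```python
def read_pss_kb(pid):
    """读取进程的 PSS(单位 KB):/proc/<pid>/smaps_rollup 中的
    Pss 行是该进程所有映射按共享比例分摊后的物理内存之和。"""
    with open("/proc/%d/smaps_rollup" % pid) as f:
        for line in f:
            if line.startswith("Pss:"):
                return int(line.split()[1])
    return 0
```

对同一模板fork出的多个实例分别读取PSS,可以观察到写时复制共享页带来的内存节省,即表1中PSS开销下降的来源。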
图9为本申请实施例提供的另一种多线程容器加载方法的流程示意图。参见图9,该方法由无服务器计算装置执行,该方法包括:步骤S901-步骤S905。
步骤S901,接收用户发送的函数代码和函数依赖。
步骤S902,将接收的函数代码和函数依赖部署到对应的函数容器中。
步骤S901-步骤S902的具体实现方式与步骤S701-步骤S702的实现方式相同,在此不在赘述。
步骤S903,获取该函数对应的模板容器以及该函数对应的函数容器的进程号,其中,模板容器中包含了该函数的语言运行时进程。
步骤S904,对模板容器中包含的语言运行时进程的主线程进行复刻,并将复刻生成的进程迁移到函数容器中。
对模板容器进行多线程复刻的过程与步骤S7051-步骤S7056相同。在此不再赘述。
步骤S905,接收用户发送的函数调用请求,在函数容器中加载该函数。
在本申请实施例中,无服务器计算装置接收到用户发送的函数代码和函数依赖以后,无服务器计算装置开始生成函数的函数实例。即在本申请实施例中,在无服务器计算装置接收到用户发送的函数调用请求之前就在无服务器计算装置中生成了函数的函数实例,减少了用户进行函数调用时的等待时间。
图10是本申请实施例提供的一种无服务器计算装置的结构示意图。图10所示的无服务器计算装置用于对无服务器计算过程中的多线程容器进行加载。
图10所示的无服务器计算装置包括:接收模块1001、处理模块1002、存储模块1003。
接收模块1001用于接收用户发送的函数代码和函数依赖。然后接收模块1001将接收的函数代码和函数依赖部署到函数容器中。
接收模块1001还用于接收用户发送的函数调用请求。然后接收模块1001将该函数调用请求和将部署有该函数调用请求需要调用的函数对应的函数容器的进程号一起转发到与该函数对应的模板容器中。
处理模块1002用于对模板容器进行第一次fork操作,得到第一子进程。然后,处理模块1002根据函数容器的进程号对第一子进程进行系统调用,将第一子进程的命名空间切换为函数容器的命名空间。处理模块1002在将第一子进程的命名空间切换为函数容器的命名空间以后,处理模块1002对第一子进程进行第二次多线程fork,得到目标子进程。然后,处理模块1002根据目标子进程的进程号将目标子进程的控制组切换为函数容器的控制组。
存储模块1003用于存储接收模块1001接收的函数代码和函数依赖。
可选地,无服务器计算装置接收到用户发送的函数调用请求以后,无服务器计算装置根据该函数调用请求生成新的函数实例,并将该函数实例迁移到目标隔离环境(函数容器)中的过程,请参照上文图5、图6、图7、图8、图9相关实施例的描述,在这里不再重复描述。
图10所示的无服务器计算装置能够执行的其他功能请参照前面各个方法实施例中的描述。
图10所描述的装置实施例仅仅是示意性的。例如模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。例如,多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。在本申请各个实施例中的各个功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。
例如,图10中的各个模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。例如,采用软件实现时,上述处理模块1002可以是由附图3中的至少一个处理器301读取存储器303中存储的程序代码后,生成的软件功能模块来实现。图10中的上述各个模块也可以是由无服务器计算装置中的不同硬件分别实现,例如处理模块1002由附图3中至少一个处理器301中的一部分处理资源(例如多核处理器中的一个核)实现,而接收模块1001由附图3中的网络接口302和至少一个处理器301中的其余部分处理资源(例如多核处理器中的其他核)实现,或者采用FPGA或协处理器等可编程器件来完成。显然上述功能模块也可以采用软件硬件相结合的方式来实现,例如接收模块1001由硬件可编程器件实现,而处理模块1002是由CPU读取存储器中存储的程序代码后,生成的软件功能模块。
本申请实施例还提供了一种无服务器计算系统,包括至少一个无服务器计算装置(如附图3和附图10所示)。
基于图1b所示的基于容器引擎的容器实例启动方法和图7所示的多线程容器加载方法,本申请实施例中,对分别使用图1b和图7所示的方法启动一个容器实例的启动时延和内存开销进行了比较。比较结果如表1所示。
表1:启动一个容器实例的启动开销和内存开销
启动方式 通过容器引擎启动 通过fork启动
启动时延 85.5ms 8.4ms
8并发容器的PSS内存开销 14.66MB 7.25MB
基于表1可知,通过容器引擎直接启动一个容器的启动时延为85.5ms。而通过本方案中的fork操作来启动一个容器的启动时延为8.4ms(启动时延降低至10ms内)。
进一步地,表1表明通过传统的容器引擎的方法启动一个容器实例,需要耗费的启动时延为85.5ms。在通过容器引擎来启动一个容器实例时,需要创建完整的容器实例。其中,包括对容器的namespace和cgroup隔离环境的创建以及语言运行时的初始化等步骤。此外,通过传统方法启动容器实例时,PSS的内存开销不会随着实例数的变化而变化。当启动8个相同的容器实例时,每个容器的PSS开销为14.66MB。
而基于本方案中的fork方法启动容器实例时,通过多线程fork,能够复用已初始化好的语言运行时进程状态,从而跳过大部分的初始化过程,快速创建出新的多线程语言运行时进程。因此,基于本方案中的fork方法启动容器实例时,仅需要8.4ms就可以启动一个新实例。同时,由于fork的写时复制(Copy-on-Write,COW)机制,使得相同的函数的容器之间可以共享部分状态,降低PSS开销。在本申请实施例中,当启动8个相同的容器实例时,每个容器的PSS开销为7.25MB。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本领域普通技术人员将会理解,本申请的各个方面、或各个方面的可能实现方式可以被具体实施为计算机程序产品。计算机程序产品是指存储在计算机可读介质中的计算机可读程序代码。
计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质包括但不限于电子、磁性、光学、电磁、红外或半导体系统、设备或者装置,或者前述的任意适当组合。如计算机可读存储介质为随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)或便携式只读存储器(CD-ROM)。
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的范围。这样,倘若本申请的这些修改和变型属于本发明权利要求的范围之内,则本发明也意图包括这些改动和变型在内。

Claims (19)

  1. 一种容器加载方法,其特征在于,所述方法包括:
    接收用户发送的函数调用请求,所述函数调用请求中包括所述函数的信息;
    根据所述函数的信息得到对应的函数容器的进程号,所述函数容器中部署有所述函数;
    根据所述函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,所述模板容器中的语言运行时进程和所述目标子进程均为所述函数对应的语言运行时进程,所述目标子进程的命名空间位于所述函数容器中;
    根据所述目标子进程的进程号将所述目标子进程的控制组切换为所述函数容器的控制组,以将所述目标子进程迁移到所述函数容器中;
    在所述函数容器中加载所述函数。
  2. 根据权利要求1所述的方法,其特征在于,所述在接收用户发送的函数调用之前,所述方法还包括:
    接收用户发送的函数代码和函数依赖;
    将所述函数代码和所述函数的函数依赖部署到所述函数容器中。
  3. 根据权利要求1所述的方法,其特征在于,所述函数的信息包括:函数名、函数类型、函数ID中的至少一种。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述函数容器的进程号对所述模板容器中的语言运行时进程的主线程进行复刻得到目标子进程之前,所述方法还包括:
    获取所述函数对应的模板容器;
    在不存在与所述函数对应的模板容器的情况下,根据所述函数对应的语言运行时,创建一个与所述函数对应的模板容器;
    保存所述模板容器。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述根据所述函数容器的进程号对所述模板容器中的语言运行时进程的主线程进行复刻得到目标子进程之前,所述方法还包括:
    在所述函数的语言运行时为多线程的情况下,获取模板容器的至少一个工作线程中每一个工作线程的状态信息;
    基于所述至少一个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭所述至少一个工作线程中的每一个工作线程;所述第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程处于与主线程无逻辑上的交互状态;
    当所述至少两个工作线程中存在至少一个工作线程的状态信息不为第一状态时,暂停所述至少两个工作线程中状态信息为第一状态的工作线程,直到所述至少两个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭所述至少两个工作线程中的每一个工作线程;或者,保存状态信息不为第一状态的工作线程的上下文,关闭所述至少两个工作线程中的每一个工作线程。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述根据所述函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,包括:
    对所述主线程进行第一次复刻,得到第一子进程;
    根据所述函数容器的进程号,将所述第一子进程的命名空间切换到所述函数容器中;
    对所述第一子进程进行复刻得到目标子进程,所述目标子进程的命名空间位于所述函数容器中。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述根据目标子进程的进程号将所述目标子进程的控制组切换为函数容器以后,所述方法还包括:
    对用于管理工作线程的数据结构进行初始化,创建所述目标子进程的工作线程。
  8. 一种无服务器计算装置,其特征在于,所述装置包括:
    接收模块,用于接收用户发送的函数调用请求,所述函数调用请求中包括函数的信息;
    处理模块,用于根据所述函数信息得到对应的函数容器的进程号,所述函数容器中部署有所述函数;
    所述处理模块,还用于根据所述函数容器的进程号对模板容器中的语言运行时进程的主线程进行复刻得到目标子进程,所述模板容器中的语言运行时进程和所述目标子进程均为所述函数对应的语言运行时进程,所述目标子进程的命名空间位于所述函数容器中;根据所述目标子进程的进程号将所述目标子进程的控制组切换为所述函数容器的控制组,以使所述目标子进程迁移到所述函数容器中;在所述函数容器中加载所述函数。
  9. 根据权利要求8所述的装置,其特征在于,所述接收模块还用于:
    接收用户发送的函数代码和函数依赖;
    将所述函数代码和所述函数的函数依赖部署到所述函数容器中。
  10. 根据权利要求8所述的装置,其特征在于,所述函数信息包括:函数名、函数类型、函数ID中的至少一种。
  11. 根据权利要求8所述的装置,其特征在于,所述处理模块还用于:
    获取所述函数对应的模板容器;
    在所述无服务器计算装置中不存在与所述函数对应的模板容器的情况下,根据所述函数对应的语言运行时,创建一个与所述函数对应的模板容器;将所述模板容器保存在所述无服务器计算装置中。
  12. 根据权利要求8-11任一项所述的装置,其特征在于,所述根据所述函数容器的进程号对所述模板容器中的语言运行时进程的主线程进行复刻得到目标子进程之前,所述处理模块还用于:
    在所述函数的语言运行时为多线程的情况下,获取所述模板容器的至少一个工作线程中每一个工作线程的状态信息;
    当所述至少一个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭所述至少一个工作线程中的每一个工作线程;所述第一状态包括:工作线程因等待任务而处于阻塞的状态、工作线程处于不需要保存上下文的状态、工作线程处于与主线程无逻辑上的交互状态;
    当所述至少两个工作线程中存在至少一个工作线程的状态信息不为第一状态时,暂停所述至少两个工作线程中状态信息为第一状态的工作线程,直到所述至少两个工作线程中的每一个工作线程的状态信息都为第一状态时,关闭所述至少两个工作线程中的每一个工作线程;或者,保存状态信息不为第一状态的工作线程的上下文,关闭所述至少两个工作线程中的每一个工作线程。
  13. 根据权利要求8-12任一项所述的装置,其特征在于,所述处理模块还用于:
    对所述主线程进行第一次复刻,得到第一子进程;
    根据所述函数容器的进程号,将所述第一子进程的命名空间切换到所述函数容器中;
    对所述第一子进程进行复刻得到目标子进程,所述目标子进程的命名空间位于所述函数容器中。
  14. 根据权利要求8-13任一项所述的装置,其特征在于,所述处理模块还用于:
    对用于管理工作线程的数据结构进行初始化,创建所述目标子进程的工作线程。
  15. 一种无服务器计算装置,其特征在于,包括:
    至少一个存储器,用于存储程序;
    至少一个处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行如权利要求1-7任一项所述的方法。
  16. 一种无服务器计算系统,其特征在于,包括如权利要求8-14任一项所述的无服务器计算装置。
  17. 一种计算机可读介质,所述计算机可读介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行如权利要求1-7任一所述的方法。
  18. 一种包含指令的计算机程序产品,当所述指令在计算机上运行时,使得所述计算机执行如权利要求1-7任一所述的方法。
  19. 一种芯片,所述芯片包括存储器和处理器,所述存储器用于存储计算机指令,所述处理器用于从所述存储器中调用并运行该计算机指令,以执行如权利要求1-7任一所述的方法。
PCT/CN2023/071747 2022-01-19 2023-01-10 一种容器加载方法及装置 WO2023138453A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210062563.8A CN116501438A (zh) 2022-01-19 2022-01-19 一种容器加载方法及装置
CN202210062563.8 2022-01-19

Publications (1)

Publication Number Publication Date
WO2023138453A1 true WO2023138453A1 (zh) 2023-07-27

Family

ID=87318911

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071747 WO2023138453A1 (zh) 2022-01-19 2023-01-10 一种容器加载方法及装置

Country Status (2)

Country Link
CN (1) CN116501438A (zh)
WO (1) WO2023138453A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140244716A1 (en) * 2013-02-26 2014-08-28 Red Hat, Inc. Shared Application Store for a Platform-as-a-Service (PaaS) System
CN110427248A (zh) * 2019-07-12 2019-11-08 中国人民解放军国防科技大学 一种基于容器的轻量级用户环境构建方法、系统及介质
CN111475235A (zh) * 2020-04-13 2020-07-31 北京字节跳动网络技术有限公司 函数计算冷启动的加速方法、装置、设备及存储介质
CN113672343A (zh) * 2021-08-04 2021-11-19 浪潮云信息技术股份公司 一种基于轻量安全容器的函数计算冷启动加速的方法
CN113703867A (zh) * 2021-08-26 2021-11-26 哈尔滨工业大学 一种无服务计算中加速启动方法及系统
CN113934435A (zh) * 2021-09-29 2022-01-14 光大科技有限公司 一种函数冷启动方法及装置

Also Published As

Publication number Publication date
CN116501438A (zh) 2023-07-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23742768

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023742768

Country of ref document: EP

Effective date: 20240723

NENP Non-entry into the national phase

Ref country code: DE