WO2022120577A1 - Serverless computing method for preprocessing functions and system thereof - Google Patents

Serverless computing method for preprocessing functions and system thereof Download PDF

Info

Publication number
WO2022120577A1
Authority
WO
WIPO (PCT)
Prior art keywords
function
request
container
user
information
Prior art date
Application number
PCT/CN2020/134560
Other languages
English (en)
French (fr)
Inventor
叶可江
张永贺
须成忠
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Priority to PCT/CN2020/134560 priority Critical patent/WO2022120577A1/zh
Publication of WO2022120577A1 publication Critical patent/WO2022120577A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Definitions

  • The present application belongs to the technical field of cloud computing, and in particular relates to a serverless computing method and system for preprocessing functions.
  • Serverless computing is a new cloud computing model. It splits a traditional monolithic application at fine granularity into individual functions, each of which carries part of the application's functionality, turning the application into a combination of functions.
  • The characteristics of serverless computing include pay-per-use billing, shielding users from server configuration, rapid scaling up and down, and statelessness. Its low cost and high elasticity make serverless computing very popular with users.
  • Serverless computing splits an application at function granularity, and the execution of a function is triggered by user-defined rules or by requests.
  • In serverless computing, resources are occupied to invoke a service only when a request arrives or a rule fires; if there is no request and no rule fires, no resources are occupied, and the user pays by the number of invocations and their duration.
  • Serverless computing thus greatly reduces cost for users and frees them entirely from server configuration, simplifying development and improving development efficiency.
  • Serverless computing is also highly elastic and can meet resource demands under different levels of concurrency.
  • In current systems, a container is used as the function executor. After the user submits a function, the system stores the source code without any processing, and only injects the source code into the container for interpretation or compilation at execution time.
  • This startup approach is very inefficient: first, the container serving as function executor must load the source code and then compile or interpret it on every cold start, and this repeated work consumes extra resources; second, more complex functions may have long compilation or interpretation times, which greatly degrades the user experience.
  • The main technical problem addressed by this application is to provide a serverless computing method and system for preprocessing functions, which, after a user submits a function request, preprocesses the function to generate an executable target file or an intermediate file and saves it.
  • When the container serving as function executor is cold-started, the data volume of the function's directory is mounted, so the container does not need to interpret or compile from source code after startup, which reduces the cold-start delay.
  • A serverless computing method for preprocessing functions comprises the following steps:
  • Step S1: receive a request from a user;
  • Step S2: examine the request information submitted by the user; if it is a function-submission request, analyze the source code or extract the parameters of the request to determine the programming language used by the user; if it is a function-execution request, skip to step S4;
  • Step S3: perform the code processing corresponding to the programming language determined in step S2, generate a target file, and save it;
  • Step S4: when a function-execution request is received, mount the target file from step S3 into a container serving as function executor and directly execute the function in the target file.
  • In step S3, the code processing includes compilation and interpretation.
  • The target file includes an executable file and/or an intermediate-language file.
  • In step S3, if the programming language is a static language, compilation is performed to generate an executable file; if it is a dynamic language, interpretation is performed to generate an intermediate-language file.
  • In step S4, it is judged whether the request is a function-execution request; if not, it is discarded; if so, the information of the function executor to be used is determined from the user's request.
  • If no cold start is needed, a running container is selected to execute the function of the target file within it.
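As a concrete illustration of steps S2 and S3, the sketch below shows one way such preprocessing could work. The function names, the extension-based language detection, and the compiler commands are illustrative assumptions, not details from the patent.

```python
import subprocess
from pathlib import Path

# Illustrative split: static languages are compiled into executables,
# dynamic languages are translated into intermediate (bytecode) files.
STATIC_LANGUAGES = {"c", "go"}
DYNAMIC_LANGUAGES = {"python"}

def detect_language(source_path: str) -> str:
    """Stand-in for the patent's source analysis / parameter extraction:
    infer the language from the file extension."""
    return {".c": "c", ".go": "go", ".py": "python"}.get(
        Path(source_path).suffix, "unknown")

def preprocess(source_path: str, out_dir: str) -> Path:
    """Step S3 (sketch): generate and save a target file for a submitted function."""
    lang = detect_language(source_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    if lang in STATIC_LANGUAGES:
        target = out / "handler"            # executable target file
        compiler = ["cc"] if lang == "c" else ["go", "build"]
        subprocess.run(compiler + ["-o", str(target), source_path], check=True)
        return target
    if lang in DYNAMIC_LANGUAGES:
        import py_compile                   # stdlib: Python source -> .pyc bytecode
        target = out / "handler.pyc"        # intermediate-language file
        py_compile.compile(source_path, cfile=str(target), doraise=True)
        return target
    raise ValueError(f"unsupported language: {source_path}")
```

At execution time, only the saved target file needs to be mounted into the executor container; no compilation or interpretation of source happens on the cold-start path.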
  • A serverless computing system for preprocessing functions includes:
  • a controller module, responsible for determining the category of a user request and processing it;
  • a preprocessing module, configured to perform, for the programming language the controller module has determined the user to be using, the corresponding code processing to generate a target file;
  • a storage module, for storing the target file generated by the preprocessing module together with user information and function information;
  • a function executor, used as the container that executes functions;
  • a container scheduler, configured to schedule the function executor to execute the function in the target file according to the user request;
  • a service discovery module, for managing the information of all function executors and sending the required function-executor information to the controller module;
  • a message queue module, configured to buffer requests sent by the controller module and forward them to the container scheduler.
  • When the function executor needs a cold start, the storage module provides the container scheduler with the function information and related file information for the cold start.
  • The beneficial effect of the embodiments of the present application, compared with the prior art, is that the function is preprocessed after the user submits a function request so as to generate and save an executable target file or an intermediate file; when the container serving as function executor is cold-started, the data volume of the function's directory is mounted, so the container does not need to interpret or compile from source code after startup, which reduces the cold-start delay.
  • FIG. 1 is a block diagram of the steps of a serverless computing method for preprocessing functions according to an embodiment of the present application;
  • FIG. 2 is a flowchart of an embodiment of the serverless computing method for preprocessing functions according to an embodiment of the present application;
  • FIG. 3 is a structural block diagram of a serverless computing system for preprocessing functions according to an embodiment of the present application.
  • Serverless computing does not run persistently on a server the way traditional applications do; server resources are used only when a user's execution request arrives. Reducing the startup delay of serverless computing is therefore very important.
  • To reduce container cold-start delay, Alexandru Agache and colleagues at Amazon proposed Firecracker, a lightweight container dedicated to serverless computing, in the article "Firecracker: Lightweight Virtualization for Serverless Applications", to cut container startup overhead. Firecracker combines the security and isolation provided by hardware virtualization with the speed and flexibility of containers; it uses the Linux kernel virtual machine to create and run micro virtual machines, removing unnecessary devices and guest-facing features to shrink each micro-VM's memory footprint, which improves hardware utilization and shortens startup time. Manco F. et al., in the article "My VM is Lighter (and Safer) than your Container", proposed continuously trimming the virtual machine to remove unnecessary overhead and reduce the resources occupied at startup, thereby lowering startup delay. These methods all reduce the complexity of the container, and hence its initialization overhead, by redesigning the virtualization technology the container uses.
  • The present application provides a serverless computing method for preprocessing functions, comprising the following steps:
  • Step S1: receive a request from a user;
  • Step S2: examine the request information submitted by the user; if it is a function-submission request, analyze the source code or extract the parameters of the request to determine the programming language used by the user; if it is a function-execution request, skip to step S4;
  • Step S3: perform the code processing corresponding to the programming language determined in step S2, generate a target file, and save it;
  • Step S4: when a function-execution request is received, mount the target file from step S3 into a container serving as function executor and directly execute the function in the target file.
  • In step S3, the code processing includes compilation and interpretation.
  • The target file includes an executable file and/or an intermediate-language file: if the programming language is a static language, compilation is performed to generate an executable file; if it is a dynamic language, interpretation is performed to generate an intermediate-language file.
  • In step S4, it is determined whether the request is a function-execution request; if not, it is discarded; if so, the information of the function executor to be used is determined from the user's request. It is then determined whether a container cold start is required: if so, the required target file is mounted, according to the user's request, into the container serving as function executor, and the function in the target file is executed directly; if not, a running container is selected according to the user's request to execute the function of the target file within it.
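The cold-start branch of step S4 can be pictured as launching the executor container with the directory of the saved target file mounted as a data volume. The sketch below only builds the launch command; the image name, mount point, and entrypoint are assumptions for illustration, not details from the patent.

```python
import shlex

def cold_start_command(target_dir: str,
                       image: str = "function-executor:latest",
                       entrypoint: str = "/function/handler") -> str:
    """Build a container launch command that bind-mounts the directory
    containing the preprocessed target file, so the container can run the
    function directly instead of compiling or interpreting source code."""
    volume = f"{target_dir}:/function:ro"   # mount the function's directory read-only
    return f"docker run --rm -v {shlex.quote(volume)} {image} {entrypoint}"
```

For example, `cold_start_command("/var/functions/fn42")` produces a `docker run` line whose `-v` flag maps the function's directory to `/function` inside the container.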
  • The present application provides an embodiment whose steps are as follows:
  • Step 1: first receive the user's request;
  • Step 2: determine the request type; if it is a function-submission request, go to step 3, otherwise go to step 6;
  • Step 3: analyze the source code or extract the request-parameter information to obtain the programming language used by the function;
  • Step 4: process the code according to the language type. Processing generally means compilation or interpretation: static languages are generally compiled to produce executable files, dynamic languages are generally interpreted to produce intermediate-language files, and some languages require mixed compilation and interpretation to produce intermediate-language files or executables.
  • The processing method is chosen according to the characteristics of the language; the environment needed to process the function source code can be the locally configured environment, or container technology can be used to process the code inside a container and output the target file.
  • Step 5: save the file obtained from the processing in step 4, and at the same time save the metadata of the corresponding function and user;
  • Step 6: determine whether the request is a function-execution request; if so, go to step 8, otherwise go to step 7;
  • Step 7: the request is invalid and is discarded;
  • Step 8: determine the information of the function executor to be used from the user's request;
  • Step 9: based on the information determined in step 8, determine whether a container cold start is required; if so, perform step 11, otherwise perform step 10;
  • Step 10: a container is already running; select the corresponding container to execute the function according to the function-executor information determined in step 8;
  • Step 11: no container is running, so a container must be cold-started.
  • During the cold start, the corresponding file is mounted according to the information determined in step 8, and the function is executed after the container starts.
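Steps 8 through 11 above amount to a warm-versus-cold scheduling decision. A minimal sketch of that decision follows; the class name, field names, and returned action strings are all assumed for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExecutorInfo:
    """Information returned by service discovery for a function (assumed shape)."""
    function_id: str
    running_containers: List[str] = field(default_factory=list)  # warm executors

def schedule(info: ExecutorInfo, target_dir: str) -> str:
    """Return the scheduling action: reuse a warm container (step 10) or
    cold-start one with the target file mounted (step 11)."""
    if info.running_containers:
        return f"exec {info.function_id} in {info.running_containers[0]}"
    return f"cold-start {info.function_id} with mount {target_dir}"
```

The key point of the method survives in the cold branch: what gets mounted is the preprocessed target file, not source code to be compiled or interpreted.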
  • The present application provides a serverless computing system for preprocessing functions, including:
  • a controller module, responsible for determining the category of a user request and processing it;
  • a service discovery module, for managing the information of all function executors and sending the required function-executor information to the controller module;
  • a preprocessing module, configured to perform, for the programming language the controller module has determined the user to be using, the corresponding code processing to generate a target file;
  • a storage module, for storing the target file generated by the preprocessing module together with user information and function information;
  • a function executor, used as the container that executes functions;
  • a message queue module, configured to cache requests sent by the controller module and forward them to the container scheduler;
  • a container scheduler, configured to schedule the function executor to execute the function in the target file according to the user request.
  • When the function executor needs a cold start, the storage module provides the container scheduler with the function information and related file information for the cold start.
  • In other words, the controller module analyzes user requests and function source-code information and determines the container information needed to execute a function; the preprocessing module performs the corresponding processing on the function source code; the service discovery module finds the container information of the function executors and sends it to the controller module; the message queue module caches request information; the container scheduler retrieves the corresponding information from the storage module according to the requests in the message queue and starts a container or selects an existing one to execute the function; and the function executor serves as the container that executes functions.
  • Controller module: the controller module is the entry point of the system and is responsible for determining the category of a user request and handling it accordingly. If the request submits a function, the controller module analyzes the function source code or extracts the request parameters to obtain the language used by the source code, and passes the language category, function metadata, and user information to the preprocessing module. If the request is a function-execution request, the controller module obtains from the service discovery module the container information needed to execute the corresponding function, and then sends that container information together with the request information to the message queue module.
  • Service discovery module: the service discovery module manages the information of the function executors running in the whole system. When it receives a request from the controller module, it searches all running function executors; if a suitable function executor is found, its container information is returned to the controller module, and if none is found, the controller module is notified to prepare a cold start of a function executor.
  • Preprocessing module: after the preprocessing module receives the source code, the corresponding user information, and the programming-language information from the controller module, it selects the processing method according to the language type, and saves the generated file, user information, and function information to the storage module. The environment the preprocessing module uses to process the function source code can be the locally configured environment, or container technology can be used to process the source code in a container holding the corresponding environment and output the target file.
  • Storage module: the storage module stores the files, user information, and function information obtained from the preprocessing module; when the function executor needs a cold start, it provides the container scheduler with the function information and related file information for the cold start.
  • Message queue module: the message queue module caches the requests sent by the controller module and forwards them to the container scheduler.
  • When requests arrive faster than they can be processed, the message queue module temporarily buffers the request information, avoiding lost requests.
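The buffering behavior described above can be sketched with a plain in-process queue standing in for a real message broker; the class and method names are illustrative assumptions.

```python
import queue

class RequestBuffer:
    """Buffers controller requests until the container scheduler drains them,
    so a burst of requests is not dropped."""

    def __init__(self, maxsize: int = 0):   # maxsize=0 means unbounded buffering
        self._q = queue.Queue(maxsize)

    def submit(self, request: dict) -> None:
        """Called by the controller module for each incoming request."""
        self._q.put(request)

    def drain(self):
        """Called by the container scheduler; yields requests in FIFO order."""
        while not self._q.empty():
            yield self._q.get()
```

A production system would use a durable broker instead, but the contract is the same: the producer (controller) and consumer (scheduler) are decoupled, so slow scheduling does not lose requests.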
  • Container scheduler module: the container scheduler module obtains the corresponding information from the message queue module; this information includes user information, function information, and the function-executor information obtained by the service discovery module. If the received information indicates that a function executor must be cold-started, the container scheduler module uses the function information and user information to retrieve the corresponding file information from the storage module, and mounts the file into the container while the container serving as function executor is being cold-started. If no cold start is required, the scheduler module selects the corresponding function executor to execute the function according to the obtained information.
  • Function executor module: serves as the container that executes functions and is scheduled by the container scheduler module.
  • The existing techniques start from the container itself, reducing the resources the container occupies in order to speed up its startup, and overlook the key factor in container cold-start delay:
  • a large part of a container's cold-start time is spent compiling or interpreting the function source code.
  • By processing the function source code in advance to generate a target file and mounting it directly into the container at cold start, the time for compiling or interpreting the source code is avoided, the startup efficiency of the container is greatly improved, and the server resources spent on interpreting or compiling function source code during a cold start are also reduced.
  • The function source code submitted by the user is preprocessed to generate an executable file or an intermediate file, which is mounted directly into the container at startup to accelerate the container's start.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

A serverless computing method for preprocessing functions and a system thereof, belonging to the technical field of cloud computing. The method first receives a user's request (S1); it then examines the request information submitted by the user and, if the request is a function-submission request, analyzes the source code or extracts the parameters of the request to determine the programming language used by the user (S2); it performs the code processing corresponding to the determined programming language, generates a target file, and saves it (S3); when a function-execution request is received, the target file is mounted into the container serving as function executor, and the function in the target file is executed directly (S4). By preprocessing the function after the user submits a function request to generate and save an executable target file or an intermediate file, and by mounting the data volume of the function's directory when the container serving as function executor is cold-started, the method allows the container to avoid interpreting or compiling from source code after startup, reducing the cold-start delay.

Description

Serverless computing method for preprocessing functions and system thereof
Technical field
The present application belongs to the technical field of cloud computing, and in particular relates to a serverless computing method for preprocessing functions and a system thereof.
Background art
Serverless computing is a new cloud computing model. It splits a traditional monolithic application at fine granularity into individual functions, each of which carries part of the application's functionality, turning an application into a combination of functions. Serverless computing is characterized by pay-per-use billing, shielding users from server configuration, rapid scaling up and down, and statelessness; its low cost and high elasticity make it very popular with users.
With the rapid development of cloud computing technology, serverless computing has gradually become an inevitable trend in the evolution of cloud computing. Serverless computing splits an application at function granularity, and the execution of a function is triggered by user-defined rules or by requests.
In serverless computing, resources are occupied to invoke a service only when a request arrives or a rule fires; if there is no request and no rule fires, no resources are occupied, and the user pays by the number of invocations and their duration. Compared with traditional cloud computing architectures, serverless computing greatly reduces cost for users and frees them entirely from server configuration, simplifying development and improving development efficiency; it is highly elastic and can meet resource demands under different levels of concurrency.
In current serverless computing systems, containers are used as function executors. After a user submits a function, the system stores the source code without any processing and injects it into a container for interpretation or compilation only at execution time. This startup approach is very inefficient: first, the container serving as function executor must load the source code and then compile or interpret it on every cold start, and this repeated work consumes extra resources; second, more complex functions may have long compilation or interpretation times, which greatly degrades the user experience.
Summary of the invention
The main technical problem addressed by the present application is to provide a serverless computing method for preprocessing functions and a system thereof. After a user submits a function request, the function is preprocessed to generate and save an executable target file or an intermediate file; when the container serving as function executor is cold-started, the data volume of the function's directory is mounted, so the container does not need to interpret or compile from source code after startup, reducing the cold-start delay.
To solve the above problem, the present application provides the following technical solution:
A serverless computing method for preprocessing functions, comprising the following steps:
Step S1: receive a request from a user;
Step S2: examine the request information submitted by the user; if it is a function-submission request, analyze the source code or extract the parameters of the request to determine the programming language used by the user; if it is a function-execution request, skip to step S4;
Step S3: perform the code processing corresponding to the programming language determined in step S2, generate a target file, and save it;
Step S4: when a function-execution request is received, mount the target file from step S3 into the container serving as function executor and directly execute the function in the target file.
The technical solution adopted by embodiments of the present application further includes:
in step S3, the code processing includes compilation and interpretation.
The technical solution adopted by embodiments of the present application further includes:
in step S3, the target file includes an executable file and/or an intermediate-language file.
The technical solution adopted by embodiments of the present application further includes:
in step S3, if the programming language is a static language, compilation is performed to generate an executable file; if the programming language is a dynamic language, interpretation is performed to generate an intermediate-language file.
The technical solution adopted by embodiments of the present application further includes:
in step S4, it is judged whether the request is a function-execution request; if not, it is discarded; if so, the information of the function executor to be used is determined from the information of the user's request.
The technical solution adopted by embodiments of the present application further includes:
it is judged whether a container cold start is required; if so, the required target file is mounted, according to the information of the user's request, into the container serving as function executor, and the function in the target file is executed directly; if not, a running container is selected according to the information of the user's request to execute the function of the target file within it.
Another technical solution adopted by embodiments of the present application is a serverless computing system for preprocessing functions, comprising:
a controller module, for determining the category of a user request and processing it;
a preprocessing module, for performing, on the programming language the controller module has determined the user to be using, the corresponding code processing to generate a target file;
a storage module, for saving the target file generated by the preprocessing module together with the user information and function information;
a function executor, serving as the container that executes functions;
a container scheduler, for scheduling the function executor according to the user request to execute the function in the target file.
The technical solution adopted by embodiments of the present application further includes:
a service discovery module, for managing the information of all function executors and sending the required function-executor information to the controller module.
The technical solution adopted by embodiments of the present application further includes:
a message queue module, for caching requests sent by the controller module and forwarding them to the container scheduler.
The technical solution adopted by embodiments of the present application further includes:
when the function executor needs a cold start, the storage module provides the container scheduler with function information and related file information for use in the function executor's cold start.
Compared with the prior art, the beneficial effect of the embodiments of the present application is as follows: by preprocessing the function after the user submits a function request to generate and save an executable target file or an intermediate file, and by mounting the data volume of the function's directory when the container serving as function executor is cold-started, the container does not need to interpret or compile from source code after startup, which reduces the cold-start delay.
Brief description of the drawings
FIG. 1 is a block diagram of the steps of a serverless computing method for preprocessing functions according to an embodiment of the present application;
FIG. 2 is a flowchart of an embodiment of the serverless computing method for preprocessing functions according to an embodiment of the present application;
FIG. 3 is a structural block diagram of a serverless computing system for preprocessing functions according to an embodiment of the present application.
Detailed description
To make the objectives, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
Serverless computing does not run persistently on a server the way traditional applications do; server resources are used only when a user's execution request arrives, so reducing the startup delay of serverless computing is very important.
In today's serverless computing systems, the main factor in startup delay is the cold-start delay of the container serving as function executor.
To reduce container cold-start delay, Alexandru Agache and colleagues at Amazon proposed Firecracker, a lightweight container dedicated to serverless computing, in the article "Firecracker: Lightweight Virtualization for Serverless Applications", to cut container startup overhead. Firecracker combines the security and isolation provided by hardware virtualization with the speed and flexibility of containers; it uses the Linux kernel virtual machine to create and run micro virtual machines, removing unnecessary devices and guest-facing features to shrink each micro-VM's memory footprint, which improves hardware utilization and shortens startup time. Manco F. et al., in the article "My VM is Lighter (and Safer) than your Container", proposed continuously trimming the virtual machine to remove unnecessary overhead and reduce the resources occupied at startup, thereby lowering startup delay. These methods all reduce the complexity of the container, and hence its initialization overhead, by redesigning the virtualization technology the container uses.
Most existing approaches to the container cold-start delay of serverless computing start from the container itself, replacing the original heavyweight container with a lightweight one to save startup overhead. These methods share a common limitation: they ignore the delay consumed by compiling or interpreting code during a container cold start, which is an important factor in cold-start efficiency. Every cold start must recompile or reinterpret the function source code; this repeated work wastes system resources, and the recompilation or reinterpretation itself adds to the startup delay.
As shown in FIG. 1, the present application provides a serverless computing method for preprocessing functions, comprising the following steps:
Step S1: receive a request from a user;
Step S2: examine the request information submitted by the user; if it is a function-submission request, analyze the source code or extract the parameters of the request to determine the programming language used by the user; if it is a function-execution request, skip to step S4;
Step S3: perform the code processing corresponding to the programming language determined in step S2, generate a target file, and save it;
Step S4: when a function-execution request is received, mount the target file from step S3 into the container serving as function executor and directly execute the function in the target file.
In step S3, the code processing includes compilation and interpretation.
Further, in step S3, the target file includes an executable file and/or an intermediate-language file; specifically, if the programming language is a static language, compilation is performed to generate an executable file, and if it is a dynamic language, interpretation is performed to generate an intermediate-language file.
In step S4, it is judged whether the request is a function-execution request; if not, it is discarded; if so, the information of the function executor to be used is determined from the user's request. It is then judged whether a container cold start is required: if so, the required target file is mounted, according to the user's request, into the container serving as function executor, and the function in the target file is executed directly; if not, a running container is selected according to the user's request to execute the function of the target file within it.
As shown in FIG. 2, the present application provides an embodiment whose steps are as follows:
Step 1: first receive the user's request;
Step 2: determine the request type; if it is a function-submission request, go to step 3, otherwise go to step 6;
Step 3: analyze the source code or extract the request-parameter information to obtain the programming language used by the function;
Step 4: process the code according to the language type. Processing generally means compilation or interpretation: static languages are generally compiled to produce executable files, dynamic languages are generally interpreted to produce intermediate-language files, and some languages require mixed compilation and interpretation to produce intermediate-language files or executables. The processing method is chosen according to the characteristics of the language; the environment needed to process the function source code can be the locally configured environment, or container technology can be used to process the code inside a container and output the target file;
Step 5: save the file obtained from the processing in step 4, and at the same time save the metadata of the corresponding function and user;
Step 6: determine whether the request is a function-execution request; if so, go to step 8, otherwise go to step 7;
Step 7: the request is invalid; discard it;
Step 8: determine the information of the function executor to be used from the user's request;
Step 9: based on the information determined in step 8, judge whether a container cold start is required; if so, perform step 11, otherwise perform step 10;
Step 10: a container is already running; select the corresponding container to execute the function according to the function-executor information determined in step 8;
Step 11: no container is running, so a container must be cold-started; during the cold start, the corresponding file is mounted according to the information determined in step 8, and the function is executed after the container starts.
In other words: 1. based on the source code and request information submitted by the user, the source code is analyzed or the request parameters are extracted to determine the programming language used by the user; 2. the corresponding code processing is performed according to the determined language; different language types may be processed differently (static languages generally yield executable files, dynamic languages generally yield intermediate-language files), and the generated file is saved; 3. when the user's function-execution request is received, the generated file is mounted into the container serving as function executor during its cold start, allowing the container to skip interpreting or compiling the function source code and reducing the container's cold-start time.
As shown in FIG. 3, the present application provides a serverless computing system for preprocessing functions, comprising:
a controller module, for determining the category of a user request and processing it;
a service discovery module, for managing the information of all function executors and sending the required function-executor information to the controller module;
a preprocessing module, for performing, on the programming language the controller module has determined the user to be using, the corresponding code processing to generate a target file;
a storage module, for saving the target file generated by the preprocessing module together with the user information and function information;
a function executor, serving as the container that executes functions;
a message queue module, for caching requests sent by the controller module and forwarding them to the container scheduler;
a container scheduler, for scheduling the function executor according to the user request to execute the function in the target file.
When the function executor needs a cold start, the storage module provides the container scheduler with the function information and related file information for use in the function executor's cold start.
In other words, the controller module analyzes user requests and function source-code information and determines the container information needed to execute a function; the preprocessing module performs the corresponding processing on the function source code; the service discovery module finds the container information of the function executors and sends it to the controller module; the message queue module caches request information; the container scheduler retrieves the corresponding information from the storage module according to the requests in the message queue and starts a container or selects an existing one to execute the function; and the function executor serves as the container that executes functions.
Specifically:
Controller module: the controller module is the entry point of the system and is responsible for determining the category of a user request and handling it accordingly. If the request submits a function, the controller module analyzes the function source code or extracts the request parameters to obtain the language used by the source code, and passes the language category, function metadata, and user information to the preprocessing module. If the request is a function-execution request, the controller module obtains from the service discovery module the container information needed to execute the corresponding function, and then sends that container information together with the request information to the message queue module.
Service discovery module: the service discovery module manages the information of the function executors running in the whole system. When it receives a request from the controller module, it searches all running function executors; if a suitable function executor is found, its container information is returned to the controller module, and if none is found, the controller module is notified to prepare a cold start of a function executor.
Preprocessing module: after receiving the source code together with the corresponding user information and programming-language information from the controller module, the preprocessing module selects the processing method according to the language type, and after processing saves the generated file, user information, and function information to the storage module. The environment the preprocessing module uses to process the function source code can be the locally configured environment, or container technology can be used to process the source code in a container holding the corresponding environment and output the target file.
Storage module: the storage module stores the files, user information, and function information obtained from the preprocessing module; when the function executor needs a cold start, it provides the container scheduler with the function information and related file information for the cold start.
Message queue module: the message queue module caches the requests sent by the controller module and forwards them to the container scheduler; when requests arrive faster than they can be processed, it temporarily buffers the request information to avoid losing requests.
Container scheduler module: the container scheduler module obtains the corresponding information from the message queue module; this information includes user information, function information, and the function-executor information obtained by the service discovery module. If the received information indicates that a function executor must be cold-started, the container scheduler module uses the function information and user information to retrieve the corresponding file information from the storage module, and mounts the file into the container while the container serving as function executor is being cold-started. If no cold start is required, the scheduler module selects the corresponding function executor to execute the function according to the obtained information.
Function executor module: serves as the container that executes functions and is scheduled by the container scheduler module.
In the present application, after the user submits the function source code, the code is processed accordingly and the processed file is saved; when the container serving as function executor is cold-started, the corresponding file is mounted into the container to speed up its startup.
For the problem of overly long cold-start delay of the container serving as function executor, existing techniques all start from the container itself, shrinking the resources the container occupies to speed up its startup, and overlook the key factor in cold-start delay: a large part of a container's cold-start time is spent compiling or interpreting the function source code. The present application instead processes the function source code in advance to generate a target file and mounts it directly into the container at cold start, avoiding the time for compiling or interpreting the source code, greatly improving the container's startup efficiency, and also reducing the server resources spent on interpreting or compiling function source code during a cold start.
The present application has the following advantages:
1. It addresses the problem of overly long function-executor cold starts in serverless computing systems: after the user submits a function, the function is preprocessed to generate and save an executable target file or an intermediate file, and the data volume of the function's directory is mounted when the container serving as function executor is cold-started, so the container does not need to interpret or compile from source code after startup, reducing the cold-start delay.
2. It addresses the problem of repeated processing of function source code consuming system resources at every function-executor cold start: the function source code submitted by the user is processed in advance, and the processed result is mounted directly into the cold-started container, avoiding interpretation or compilation of the function source code.
3. Through the preprocessing module of the serverless computing system, the function source code submitted by the user is preprocessed into an executable file or an intermediate file, which is mounted directly into the container at startup to accelerate the container's start.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. A serverless computing method for preprocessing functions, characterized by comprising the following steps:
    Step S1: receiving a request from a user;
    Step S2: examining the request information submitted by the user; if it is a function-submission request, analyzing the source code or extracting the parameters of the request to determine the programming language used by the user; if it is a function-execution request, skipping to step S4;
    Step S3: performing the code processing corresponding to the programming language determined in step S2, generating a target file, and saving it;
    Step S4: when a function-execution request is received, mounting the target file from step S3 into a container serving as function executor and directly executing the function in the target file.
  2. The serverless computing method for preprocessing functions according to claim 1, characterized in that, in step S3, the code processing includes compilation and interpretation.
  3. The serverless computing method for preprocessing functions according to claim 2, characterized in that, in step S3, the target file includes an executable file and/or an intermediate-language file.
  4. The serverless computing method for preprocessing functions according to claim 3, characterized in that, in step S3, if the programming language is a static language, compilation is performed to generate an executable file; if the programming language is a dynamic language, interpretation is performed to generate an intermediate-language file.
  5. The serverless computing method for preprocessing functions according to claim 1, characterized in that, in step S4, it is judged whether the request is a function-execution request; if not, the request is discarded; if so, the information of the function executor to be used is determined from the information of the user's request.
  6. The serverless computing method for preprocessing functions according to claim 5, characterized in that it is judged whether a container cold start is required; if so, the required target file is mounted, according to the information of the user's request, into the container serving as function executor, and the function in the target file is executed directly; if not, a running container is selected according to the information of the user's request to execute the function of the target file within it.
  7. A serverless computing system for preprocessing functions, characterized by comprising:
    a controller module, for determining the category of a user request and processing it;
    a preprocessing module, for performing, on the programming language the controller module has determined the user to be using, the corresponding code processing to generate a target file;
    a storage module, for saving the target file generated by the preprocessing module together with the user information and function information;
    a function executor, serving as the container that executes functions;
    a container scheduler, for scheduling the function executor according to the user request to execute the function in the target file.
  8. The serverless computing system for preprocessing functions according to claim 7, characterized by further comprising:
    a service discovery module, for managing the information of all function executors and sending the required function-executor information to the controller module.
  9. The serverless computing system for preprocessing functions according to claim 8, characterized by further comprising:
    a message queue module, for caching requests sent by the controller module and forwarding them to the container scheduler.
  10. The serverless computing system for preprocessing functions according to claim 9, characterized in that, when the function executor needs a cold start, the storage module provides the container scheduler with function information and related file information for use in the function executor's cold start.
PCT/CN2020/134560 2020-12-08 2020-12-08 Serverless computing method for preprocessing functions and system thereof WO2022120577A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/134560 WO2022120577A1 (zh) 2020-12-08 2020-12-08 Serverless computing method for preprocessing functions and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/134560 WO2022120577A1 (zh) 2020-12-08 2020-12-08 Serverless computing method for preprocessing functions and system thereof

Publications (1)

Publication Number Publication Date
WO2022120577A1 true WO2022120577A1 (zh) 2022-06-16

Family

ID=81973956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134560 WO2022120577A1 (zh) 2020-12-08 2020-12-08 Serverless computing method for preprocessing functions and system thereof

Country Status (1)

Country Link
WO (1) WO2022120577A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115543486A (zh) * 2022-11-16 2022-12-30 北京大学 Cold-start delay optimization method, apparatus, and device for serverless computing
CN116257306A (zh) * 2023-04-20 2023-06-13 天津大学 Numerical computation method based on serverless technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200081745A1 (en) * 2018-09-10 2020-03-12 Nuweba Labs Ltd. System and method for reducing cold start latency of serverless functions
CN111158855A (zh) * 2019-12-19 2020-05-15 中国科学院计算技术研究所 Lightweight virtualization trimming method based on micro-containers and cloud functions
US20200213279A1 (en) * 2018-12-21 2020-07-02 Futurewei Technologies, Inc. Mechanism to reduce serverless function startup latency
CN111475235A (zh) * 2020-04-13 2020-07-31 北京字节跳动网络技术有限公司 Acceleration method, apparatus, device, and storage medium for cold start of function compute

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200081745A1 (en) * 2018-09-10 2020-03-12 Nuweba Labs Ltd. System and method for reducing cold start latency of serverless functions
US20200213279A1 (en) * 2018-12-21 2020-07-02 Futurewei Technologies, Inc. Mechanism to reduce serverless function startup latency
CN111158855A (zh) * 2019-12-19 2020-05-15 中国科学院计算技术研究所 Lightweight virtualization trimming method based on micro-containers and cloud functions
CN111475235A (zh) * 2020-04-13 2020-07-31 北京字节跳动网络技术有限公司 Acceleration method, apparatus, device, and storage medium for cold start of function compute

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115543486A (zh) * 2022-11-16 2022-12-30 北京大学 面向无服务器计算的冷启动延迟优化方法、装置和设备
CN116257306A (zh) * 2023-04-20 2023-06-13 天津大学 一种基于Serverless技术的数值计算方法

Similar Documents

Publication Publication Date Title
Chen et al. GFlink: An in-memory computing architecture on heterogeneous CPU-GPU clusters for big data
US8578377B2 (en) Accelerator and its method for realizing supporting virtual machine migration
Wang et al. Laperm: Locality aware scheduler for dynamic parallelism on gpus
Basaran et al. Supporting preemptive task executions and memory copies in GPGPUs
CN112445550B (zh) Serverless computing method for preprocessing functions and system thereof
US9996401B2 (en) Task processing method and virtual machine
CN107943577B (zh) 用于调度任务的方法和装置
KR100898315B1 (ko) 인핸스드 런타임 호스팅
CN106156278B (zh) 一种数据库数据读写方法和装置
EP3895010A1 (en) Performance-based hardware emulation in an on-demand network code execution system
WO2022120577A1 (zh) Serverless computing method for preprocessing functions and system thereof
US20130117753A1 (en) Many-core Process Scheduling to Maximize Cache Usage
US10402223B1 (en) Scheduling hardware resources for offloading functions in a heterogeneous computing system
CN107077390B (zh) 一种任务处理方法以及网卡
CN111158855B (zh) Lightweight virtualization trimming method based on micro-containers and cloud functions
JPH05204656A (ja) スレッド固有データ保持方法
Lee et al. Granular computing
CN106027617A (zh) 一种私有云环境下任务及资源动态调度的实现方法
US20110219373A1 (en) Virtual machine management apparatus and virtualization method for virtualization-supporting terminal platform
CN112491426B (zh) 面向多核dsp的服务组件通信架构及任务调度、数据交互方法
WO2023124543A1 (zh) Data processing method and data processing apparatus for big data
WO2024119988A1 (zh) Process scheduling method and apparatus in a multi-CPU environment, electronic device, and medium
US10579419B2 (en) Data analysis in storage system
Wu et al. Irina: Accelerating DNN inference with efficient online scheduling
CN104714839A (zh) 一种控制进程生命期的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964522

Country of ref document: EP

Kind code of ref document: A1