WO2023109700A1 - Branch prediction method and device for serverless computing based on process parasitism - Google Patents

Branch prediction method and device for serverless computing based on process parasitism

Info

Publication number
WO2023109700A1
WO2023109700A1 (PCT/CN2022/138141)
Authority
WO
WIPO (PCT)
Prior art keywords
function
parasitic
branch prediction
target
container
Prior art date
Application number
PCT/CN2022/138141
Other languages
English (en)
French (fr)
Inventor
叶可江
林彦颖
须成忠
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Priority to AU2022416127A (granted as AU2022416127B2)
Priority to CA3212167A (published as CA3212167A1)
Publication of WO2023109700A1
Priority to US18/459,397 (granted as US11915003B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3836 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F 9/3842 Speculative instruction execution
    • G06F 9/3844 Speculative instruction execution using dynamic branch prediction, e.g. using branch history tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3802 Instruction prefetching
    • G06F 9/3804 Instruction prefetching for branches, e.g. hedging, branch folding
    • G06F 9/3806 Instruction prefetching for branches, e.g. hedging, branch folding using address prediction, e.g. return stack, branch history buffer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 5/046 Forward inferencing; Production systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F 2119/02 Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present invention relates to the technical field of serverless computing, and in particular to a branch prediction method and device, an electronic device, and a readable storage medium for serverless computing based on process parasitism.
  • Serverless computing refers to building and running applications without managing infrastructure such as servers. It describes a finer-grained deployment model in which an application is broken down into one or more fine-grained functions that are uploaded to a platform and then executed, scaled, and billed according to current demand.
  • Serverless computing does not mean that servers are no longer used to host and run code, nor that operations engineers are no longer needed; rather, consumers of serverless computing no longer need to handle server configuration, maintenance, updates, scaling, and capacity planning. These tasks are all handled by the serverless platform and completely abstracted away from developers and IT/operations teams. As a result, developers can focus on writing the business logic of their applications, and operations engineers can shift their attention to more critical business tasks.
  • In computer architecture, a branch predictor is a digital circuit that guesses which way a branch will go before the branch instruction finishes executing, in order to improve the performance of the processor's instruction pipeline. The purpose of using a branch predictor is to improve the flow of the instruction pipeline.
  • A branch predictor needs a certain amount of training to reach a stable, high prediction accuracy. Therefore, when functions in serverless computing are scheduled onto a server, branch prediction accuracy is usually very low at the very beginning. Since the running time of functions in serverless computing is usually at the millisecond level, a high branch misprediction rate usually causes considerable performance overhead, reducing the execution performance of functions in serverless computing.
  • Current solutions usually redesign the branch predictor and its algorithm, improving overall prediction accuracy by enlarging the predictor's perception range and fully exploiting temporal locality.
  • However, a branch predictor is a hardware device, and redesigning it requires modifications at the hardware level, which reduces the generality of branch prediction.
  • The purpose of the present invention is to provide a branch prediction method and device, an electronic device, and a readable storage medium for serverless computing based on process parasitism, which improve branch prediction accuracy and the execution performance of functions in serverless computing without changing the branch predictor hardware.
  • The present invention provides a branch prediction method for serverless computing based on process parasitism, comprising the following steps: receiving a user's call request for a target function, and, when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
  • the parasitic process is triggered when the container is initialized on the new server, and is used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
  • the branch predictor on the new server is then trained using the execution data of the N copied target template functions as training data.
  • If there is an instance in the current computing environment that is not executing a function task, the target function is dispatched to that instance, and the instance executes the computing task of the target function.
  • The branch prediction method for serverless computing based on process parasitism further includes:
  • if no such instance is running and no capacity expansion is needed, an instance is generated in the current computing environment, and that instance executes the computing task of the target function.
  • Judging whether the current computing environment needs capacity expansion includes: judging whether the CPU usage of all instances in the current computing environment exceeds a preset value.
  • The branch prediction method for serverless computing based on process parasitism further includes:
  • after the container is initialized on the new server, generating an instance that performs the computing task of the target function.
  • The type of the target function is inferred using a Python deep learning algorithm.
  • The target template function is designed around the programming language, if-else logic structure, for-loop position features, and function features.
  • The present invention also provides a branch prediction device for serverless computing based on process parasitism, including:
  • a receiving module configured to receive a call request from a user for a target function;
  • a scheduling module configured to schedule, when capacity expansion is required, the container executing the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
  • a calling module configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
  • a training module configured to use the execution data of the N copied target template functions as training data to train the branch predictor on the new server.
  • The present invention also provides an electronic device, including a processor and a memory, where a computer program is stored in the memory; when the computer program is executed by the processor, the steps of any of the above branch prediction methods for serverless computing based on process parasitism are implemented.
  • The present invention also provides a readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the above branch prediction method for serverless computing based on process parasitism are implemented.
  • Compared with redesigning the branch predictor, the present invention has universal applicability.
  • By pre-executing template functions, the present invention can improve the branch prediction accuracy of all types of servers and the execution performance of functions in serverless computing, and it applies to all architectures (including ARM, RISC-V, etc.).
  • Compared with the temporal locality of branch prediction algorithms, the present invention executes the template function in advance, making full use of the temporal locality of the branch predictor.
  • Figure 1 is an overall design architecture diagram of the branch prediction method for serverless computing based on process parasitism provided by an embodiment of the present invention;
  • Figure 2 is a flowchart of the branch prediction method for serverless computing based on process parasitism provided by an embodiment of the present invention;
  • Figure 3 is a flowchart of the branch prediction method for serverless computing based on process parasitism in a specific example of the present invention;
  • Figure 4 is a structural diagram of the branch prediction device for serverless computing based on process parasitism according to an embodiment of the present invention.
  • The present invention provides a branch prediction method, device, electronic device, and readable storage medium for serverless computing based on process parasitism.
  • Serverless computing is a method of providing back-end services on demand. Serverless providers allow users to write and deploy code without worrying about the underlying infrastructure. Users who obtain backend services from serverless providers will be charged based on the amount of computation and resource usage, and since this service is automatically scalable, there is no need to reserve and pay for a fixed amount of bandwidth or servers.
  • the container contains the application and all the elements required for the application to function properly, including system libraries, system settings, and other dependencies. Any type of application can run in a container, and no matter where the containerized application is hosted, it will function the same way. By the same token, containers can also carry serverless computing applications (that is, functions) and run on any server on the cloud platform.
  • An instance refers to the runtime environment in which an application is running.
  • For example, a container A running a certain service can be considered an instance of that service.
  • In principle, the function instances of a serverless computing platform can be scaled down to zero, and because serverless computing scales automatically, a large number of serverless function instances can be pulled up within a short time.
  • By investigating mainstream serverless function workloads, the present invention designs template functions centered on the programming language, if-else logic structure, for-loop position features, and function features.
  • The code size of a template function is usually 20-30% of that of a normal function; it generates no network requests or disk operations, and its execution time is usually 5-10 ms.
  • For example, if multiple functions all use Python to perform deep-learning inference, they are of the same function type and correspond to a single template function, because their execution processes are basically identical: load the libraries, load the algorithm model, read the parameters, run inference, and return the result.
  • By redesigning the base container image, the present invention adds a pre-execution process to the base image.
  • The pre-execution process starts running at the very beginning of container startup and invokes the system call in advance to trigger the kernel's copying of the template function process.
  • The invention achieves fast copying of a specified template function by adding a system call to the system kernel.
  • The system call is told through a parameter which template function needs to be copied, for example the Python deep-learning one; available template functions include, for example, the web template, bigdata template, ML template, and Stream template.
  • Referring to Figure 2, the branch prediction method for serverless computing based on process parasitism provided by the present invention includes the following steps:
  • Step S100: receiving a user's call request for a target function;
  • Step S200: when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
  • Step S300: triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
  • Step S400: using the execution data of the N copied target template functions as training data to train the branch predictor on the new server.
  • In step S100, the user initiates a call request for the target function through a client; the client can make the request via a web interface, a command-line tool, or a RESTful API.
  • Before step S200, it is first judged whether any instance not executing a function task is running in the current computing environment; if so, the target function is scheduled onto such an instance, which executes the computing task of the target function. Understandably, if function instances are already running in the environment, the function is in a warmed-up state, so scheduling target function tasks onto those machines improves branch prediction accuracy. If not, one then considers how to use the present invention to improve performance.
  • If no instance not executing a function task is running in the current computing environment, it is judged whether the current computing environment needs capacity expansion; if no expansion is needed, an instance is generated in the current computing environment to execute the computing task of the target function. Specifically, whether expansion is needed is judged by whether the CPU usage of all instances in the current computing environment exceeds a preset value; for example, when the CPU usage of all instances exceeds the preset value, the load is considered heavy and expansion is required. If no expansion is needed, an instance can be generated directly in the current computing environment to execute the computing task of the target function.
  • If expansion is needed, step S200 is executed: the container that executes the target function is scheduled to a new server that has not executed the target function recently (that is, the container is scheduled to the new server).
  • In step S300, since a parasitic process was pre-added to the base image of the container, the process embedded in the container image (that is, the parasitic process) is executed first when the container is initialized on the new server; the parasitic process then initiates a system call, triggering the system kernel to select a target template function according to the type of the target function and copy it N times.
  • The type of the target function is inferred using a Python deep learning algorithm. Since each function type corresponds to one template function, the corresponding target template function can be selected once the target function type is determined.
  • In step S400, the N copied target template functions execute automatically, and their execution data can be used as training data to train the branch predictor on the new server.
  • Understandably, when the container is scheduled to a new server, the branch predictor (a hardware design) is unfamiliar with this type of function and therefore mispredicts more often. In the present invention, the template function is executed in advance so that the branch predictor becomes familiar with this kind of function, achieving a warm-up effect. Branch prediction generally only occurs where the code takes logical routes such as if-else; therefore, as long as the template function shares this design, the branch predictor can become familiar with the logic structure in advance. After functions of the same type have executed many times, the branch predictor automatically becomes familiar with this function pattern and makes accurate predictions. The specific training process of the branch predictor belongs to the field of branch predictor algorithm design and is not repeated here.
  • After the container is initialized successfully on the new server, an instance is generated, and that instance executes the computing task of the target function. Triggering the parasitic process, initiating the system call, and copying the N template processes all happen during container initialization, whereas the instance that executes the target function's computing task is generated after container initialization succeeds.
  • By the time the target function's computing task is executed, the branch predictor has already been trained on the execution data of the N target template functions; that is, it is already warmed up for the target function. Branch prediction accuracy is therefore improved, which in turn improves the execution performance of functions in serverless computing.
  • In summary, the present invention designs template functions based on function features.
  • During container initialization, a parasitic process issues the system call, the system call quickly forks the template processes, and the template processes are then used to raise branch prediction accuracy and improve the execution performance of functions in serverless computing.
  • Extensive experiments show that the present invention improves branch prediction accuracy by 49% and overall throughput by 38%, demonstrating that the design is feasible.
  • Based on the same inventive concept, the present invention also provides a branch prediction device for serverless computing based on process parasitism, as shown in Figure 4, including:
  • a receiving module 100 configured to receive a call request from a user for a target function;
  • a scheduling module 200 configured to schedule, when capacity expansion is required, the container executing the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
  • a calling module 300 configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
  • a training module 400 configured to use the execution data of the N copied target template functions as training data to train the branch predictor on the new server.
  • Optionally, the branch prediction device for serverless computing based on process parasitism further includes:
  • a first judging module, configured to judge, after the receiving module 100 receives the user's call request for the target function, whether any instance not executing a function task is running in the current computing environment, and if so, to trigger the first execution module;
  • the first execution module, configured to schedule the target function onto an instance in the current computing environment that is not executing a function task, that instance executing the computing task of the target function.
  • Optionally, the branch prediction device for serverless computing based on process parasitism further includes:
  • a second judging module, configured to judge, if no instance not executing a function task is running in the current computing environment, whether the current computing environment needs capacity expansion, and if no expansion is needed, to trigger the second execution module;
  • the second execution module, further configured to generate an instance in the current computing environment, that instance executing the computing task of the target function.
  • The second judging module judges whether the current computing environment needs capacity expansion specifically by judging whether the CPU usage of all instances in the current computing environment exceeds a preset value; if so, it determines that expansion is needed.
  • Optionally, the branch prediction device for serverless computing based on process parasitism further includes:
  • a third execution module, configured to generate an instance after the container is initialized on the new server, that instance executing the computing task of the target function.
  • Optionally, the type of the target function is inferred using a Python deep learning algorithm.
  • Optionally, the target template function is designed around the programming language, if-else logic structure, for-loop position features, and function features.
  • As the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
  • The present invention also provides an electronic device, including a processor and a memory, where a computer program is stored in the memory; when the processor executes the computer program, the steps of the branch prediction method for serverless computing based on process parasitism described above are implemented.
  • In some embodiments, the processor may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
  • The processor is typically used to control the overall operation of the electronic device.
  • The processor is configured to run program code stored in the memory or to process data, for example the program code of the branch prediction method for serverless computing based on process parasitism.
  • The memory includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and so on.
  • The memory may be an internal storage unit of the electronic device, such as the hard disk or main memory of the electronic device.
  • The memory may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device.
  • The memory may also include both an internal storage unit of the electronic device and an external storage device.
  • The memory is usually used to store the operating system and various application software installed on the electronic device, such as the program code of the branch prediction method for serverless computing based on process parasitism.
  • The memory can also be used to temporarily store various data that have been output or will be output.
  • The present invention also provides a readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the branch prediction method for serverless computing based on process parasitism described above are implemented.
  • In summary, the branch prediction method, device, electronic device, and readable storage medium for serverless computing based on process parasitism provided by the present invention have the following advantages and positive effects:
  • Compared with redesigning the branch predictor, the present invention has universal applicability.
  • By pre-executing template functions, the present invention can improve the branch prediction accuracy of all types of servers and the execution performance of functions in serverless computing, and it applies to all architectures (including ARM, RISC-V, etc.).
  • Compared with the temporal locality of branch prediction algorithms, the present invention executes the template function in advance, making full use of the temporal locality of the branch predictor.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products. Accordingly, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Stored Programmes (AREA)

Abstract

The present invention provides a branch prediction method and device for serverless computing based on process parasitism. The method comprises the following steps: receiving a user's call request for a target function; when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container; triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times; and training the branch predictor on the new server using the execution data of the N copied target template functions as training data. The present invention can improve branch prediction accuracy and the execution performance of functions in serverless computing without changing the branch predictor hardware.

Description

Branch prediction method and device for serverless computing based on process parasitism
Technical Field
The present invention relates to the technical field of serverless computing, and in particular to a branch prediction method and device for serverless computing based on process parasitism, an electronic device, and a readable storage medium.
Background
Serverless computing refers to building and running applications without managing infrastructure such as servers. It describes a finer-grained deployment model in which an application is broken down into one or more fine-grained functions that are uploaded to a platform and then executed, scaled, and billed according to current demand.
Serverless computing does not mean that servers are no longer used to host and run code, nor that operations engineers are no longer needed; rather, consumers of serverless computing no longer need to handle server configuration, maintenance, updates, scaling, and capacity planning. These tasks are all handled by the serverless platform and completely abstracted away from developers and IT/operations teams. As a result, developers can focus on writing the business logic of their applications, and operations engineers can shift their attention to more critical business tasks.
In computer architecture, a branch predictor is a digital circuit that guesses which way a branch will go before the branch instruction finishes executing, in order to improve the performance of the processor's instruction pipeline. The purpose of using a branch predictor is to improve the flow of the instruction pipeline.
A branch predictor needs a certain amount of training to reach a stable, high prediction accuracy. Therefore, when a function in serverless computing is scheduled onto a server, branch prediction accuracy is usually very low at the very beginning. Since the running time of functions in serverless computing is usually at the millisecond level, a high branch misprediction rate usually causes considerable performance overhead, reducing the execution performance of functions in serverless computing.
Current solutions usually redesign the branch predictor and its algorithm, improving overall prediction accuracy by enlarging the predictor's perception range and fully exploiting the principle of temporal locality. However, the branch predictor is a hardware device; redesigning it requires modifications at the hardware level, which reduces the generality of branch prediction.
Technical Problem
The purpose of the present invention is to provide a branch prediction method and device, an electronic device, and a readable storage medium for serverless computing based on process parasitism, which improve branch prediction accuracy and the execution performance of functions in serverless computing without changing the branch predictor hardware.
Technical Solution
To achieve the above purpose, the present invention provides a branch prediction method for serverless computing based on process parasitism, comprising the following steps:
receiving a user's call request for a target function;
when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
training the branch predictor on the new server using the execution data of the N copied target template functions as training data.
Further, after receiving the user's call request for the target function, the method further includes:
judging whether any instance not executing a function task is running in the current computing environment;
if so, scheduling the target function onto an instance in the current computing environment that is not executing a function task, that instance executing the computing task of the target function.
Further, the branch prediction method for serverless computing based on process parasitism further includes:
if no instance not executing a function task is running in the current computing environment, judging whether the current computing environment needs capacity expansion;
if no expansion is needed, generating an instance in the current computing environment, that instance executing the computing task of the target function.
Further, judging whether the current computing environment needs capacity expansion includes:
judging whether the CPU usage of all instances in the current computing environment exceeds a preset value, and if so, determining that the current computing environment needs capacity expansion.
Further, the branch prediction method for serverless computing based on process parasitism further includes:
after the container is initialized on the new server, generating an instance, that instance executing the computing task of the target function.
Further, the type of the target function is inferred using a Python deep learning algorithm.
Further, the target template function is designed around the programming language, if-else logic structure, for-loop position features, and function features.
To achieve the above purpose, the present invention also provides a branch prediction device for serverless computing based on process parasitism, comprising:
a receiving module, configured to receive a user's call request for a target function;
a scheduling module, configured to schedule, when capacity expansion is required, the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
a calling module, configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
a training module, configured to train the branch predictor on the new server using the execution data of the N copied target template functions as training data.
To achieve the above purpose, the present invention also provides an electronic device, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of any of the above branch prediction methods for serverless computing based on process parasitism.
To achieve the above purpose, the present invention also provides a readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above branch prediction methods for serverless computing based on process parasitism.
Beneficial Effects
1. Compared with redesigning the branch predictor, the present invention is universally applicable. By pre-executing template functions, it can improve the branch prediction accuracy of all types of servers and the execution performance of functions in serverless computing, and it applies to all architectures (including ARM, RISC-V, etc.).
2. Compared with the temporal locality of branch prediction algorithms, the present invention executes template functions in advance, making full use of the temporal locality of the branch predictor.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in the written description, claims, and drawings.
Brief Description of the Drawings
To explain the technical solution of the present invention more clearly, the drawings needed in the description are briefly introduced below. Obviously, the drawings described below depict one embodiment of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is an overall design architecture diagram of the branch prediction method for serverless computing based on process parasitism provided by an embodiment of the present invention;
Figure 2 is a flowchart of the branch prediction method for serverless computing based on process parasitism provided by an embodiment of the present invention;
Figure 3 is a flowchart of the branch prediction method for serverless computing based on process parasitism in a specific example of the present invention;
Figure 4 is a structural diagram of the branch prediction device for serverless computing based on process parasitism provided by an embodiment of the present invention.
Embodiments of the Present Invention
Exemplary embodiments of the present invention are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the present invention, it should be understood that the present invention can be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided for a more thorough understanding of the present invention and to convey its scope fully to those skilled in the art.
To solve the problems in the prior art, the present invention provides a branch prediction method, device, electronic device, and readable storage medium for serverless computing based on process parasitism.
Some concepts in serverless computing are explained as follows:
① Serverless computing: serverless computing is a method of providing back-end services on demand. A serverless provider allows users to write and deploy code without worrying about the underlying infrastructure. Users who obtain back-end services from a serverless provider are charged according to computation and resource usage, and since the service scales automatically, there is no need to reserve and pay for a fixed amount of bandwidth or servers.
② Container: a container contains an application and all the elements the application needs to run properly, including system libraries, system settings, and other dependencies. Any type of application can run in a container, and a containerized application runs the same way no matter where it is hosted. By the same token, a container can also carry a serverless computing application (that is, a function) and run on any server of a cloud platform.
③ Instance: an instance is the runtime environment in which an application is running; for example, a container A running a certain service can be considered an instance of that service. In principle, the function instances of a serverless computing platform can be scaled down to zero, and because serverless computing scales automatically, a large number of serverless function instances can be pulled up within a short time.
The basic idea of the present invention is as follows:
(1) Building serverless computing template functions
By investigating mainstream serverless function workloads, the present invention designs template functions centered on the programming language, if-else logic structure, for-loop position features, and function features. The code size of a template function is usually 20-30% of that of a normal function; it generates no network requests or disk operations, and its execution time is usually 5-10 ms. For example, if multiple functions all use Python to perform deep-learning inference, they are functions of the same type and correspond to a single template function, because their execution processes are basically identical: load the libraries, load the algorithm model, read the parameters, run inference, and return the result.
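For illustration only (this sketch is not part of the original disclosure), a template function for the Python deep-learning type might look as follows; every name and constant is hypothetical, and the loop sizes are chosen merely so that the function is branch-heavy, performs no network or disk I/O, and runs for a few milliseconds:

```python
import random
import time

def ml_template(n_params: int = 1000, n_steps: int = 50) -> float:
    """Hypothetical ML template function: mimics the load-model /
    read-params / infer / return-result control flow of a real
    deep-learning function, with no network or disk I/O."""
    # "Load the model": populate an in-memory parameter list.
    params = [random.random() for _ in range(n_params)]
    result = 0.0
    # Branch-heavy, inference-like loop, so the hardware branch predictor
    # sees the same if-else / for-loop patterns a real function would.
    for _ in range(n_steps):
        for p in params:
            if p > 0.5:
                result += p * 0.1
            elif p > 0.25:
                result -= p * 0.05
            else:
                result += 0.01
    return result

if __name__ == "__main__":
    start = time.perf_counter()
    ml_template()
    print(f"template ran in {(time.perf_counter() - start) * 1e3:.1f} ms")
```

The point of such a function is not its result but its control flow: it exercises the same if-else and for-loop patterns as the real functions it stands in for.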
(2) Designing the pre-run process of the parasitic container
By redesigning the base container image, the present invention adds a pre-execution process to the base image. The pre-execution process starts running at the very beginning of container startup and invokes the system call in advance to trigger the kernel process that copies the template function.
(3) Developing the system call that forks template functions
By adding a system call to the system kernel, the present invention achieves fast copying of a specified template function. The system call is told through a parameter which template function needs to be copied, for example the Python deep-learning one; available template functions include, for example, the web template, bigdata template, ML template, and Stream template. The overall design architecture of the above process is shown in Figure 1.
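A rough sketch of how the parasitic pre-run process might invoke such a kernel interface follows, assuming a patched kernel; the syscall number, the template IDs, and the fork_template wrapper are all hypothetical, since the publication does not disclose the actual interface:

```python
import ctypes

# Hypothetical values: a syscall number assigned when the kernel was
# patched, and integer IDs for the built-in template functions. Neither
# exists in a stock Linux kernel.
SYS_FORK_TEMPLATE = 451
TEMPLATE_IDS = {"web": 0, "bigdata": 1, "ml": 2, "stream": 3}

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def fork_template(template: str, n_copies: int) -> int:
    """Ask the (patched) kernel to copy the given template function N times."""
    ret = libc.syscall(SYS_FORK_TEMPLATE, TEMPLATE_IDS[template], n_copies)
    if ret < 0:
        raise OSError(ctypes.get_errno(), "fork_template syscall failed")
    return ret

if __name__ == "__main__":
    # Runs at the very start of container initialization, before the target
    # function itself is loaded, so that the copied template processes warm
    # the branch predictor on the new server.
    fork_template("ml", n_copies=8)
```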
Referring to Figure 2, the branch prediction method for serverless computing based on process parasitism provided by the present invention includes the following steps:
Step S100: receiving a user's call request for a target function;
Step S200: when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
Step S300: triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
Step S400: using the execution data of the N copied target template functions as training data to train the branch predictor on the new server.
The above steps of the present invention are described in detail below with reference to Figure 3.
In step S100, the user initiates a call request for the target function through a client; the client can make the request via a web interface, a command-line tool, a RESTful API, or the like, as in the sketch below.
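Illustration only: the endpoint URL, function name, and payload below are hypothetical, since the publication does not define a concrete client API.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical platform endpoint and function name; a real deployment
# would substitute its own gateway URL, authentication, and payload.
resp = requests.post(
    "https://serverless.example.com/v1/functions/ml-inference/invoke",
    json={"input": "https://example.com/cat.jpg"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```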
Before step S200 is executed, it is first judged whether any instance not executing a function task is running in the current computing environment; if so, the target function is scheduled onto such an instance, and that instance executes the computing task of the target function. Understandably, if function instances are already running in the environment, the function is in a warmed-up state, so scheduling target function tasks onto those machines improves branch prediction accuracy. If not, one then considers how to use the present invention to improve performance.
If no instance not executing a function task is running in the current computing environment, it is judged whether the current computing environment needs capacity expansion; if no expansion is needed, an instance is generated in the current computing environment, and that instance executes the computing task of the target function. Specifically, whether expansion is needed is judged by whether the CPU usage of all instances in the current computing environment exceeds a preset value; for example, when the CPU usage of all instances exceeds the preset value, the load is considered heavy and expansion is required. If no expansion is needed, an instance can be generated directly in the current computing environment to execute the computing task of the target function.
If expansion is needed, step S200 is executed: the container that executes the target function is scheduled to a new server that has not executed the target function recently (that is, the container is scheduled to the new server). This decision flow can be sketched as follows.
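The Instance type, helper names, and threshold value in this sketch are assumptions made for illustration; the publication only fixes the decision order, not the data structures:

```python
from dataclasses import dataclass

CPU_THRESHOLD = 0.8  # assumed preset value; the publication leaves it configurable

@dataclass
class Instance:
    busy: bool        # currently executing a function task?
    cpu_usage: float  # 0.0 - 1.0

def schedule(instances, new_servers):
    # 1) Prefer an idle, already-warm instance: its server's branch
    #    predictor has recently seen this kind of workload.
    for inst in instances:
        if not inst.busy:
            return ("run_on_existing_instance", inst)
    # 2) Everything is busy: expansion is needed only when the CPU usage
    #    of all instances exceeds the preset value.
    if instances and all(i.cpu_usage > CPU_THRESHOLD for i in instances):
        # 3) Scale out (step S200): schedule the container onto a new
        #    server that has not executed the target function recently;
        #    steps S300-S400 then warm its branch predictor.
        return ("scale_out_to_new_server", new_servers[0])
    # Otherwise generate one more instance in the current environment.
    return ("generate_local_instance", None)
```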
In step S300, since a parasitic process was pre-added to the base image of the container, the process embedded in the container image (that is, the parasitic process) is executed first when the container is initialized on the new server; the parasitic process then initiates a system call, triggering the system kernel to select a target template function according to the type of the target function and copy it N times. The type of the target function is inferred using a Python deep learning algorithm. Since each function type corresponds to one template function, the corresponding target template function can be selected once the target function type is determined.
In step S400, the N copied target template functions execute automatically, and their execution data can be used as training data to train the branch predictor on the new server.
Understandably, when the container is scheduled to a new server, the branch predictor (a hardware design) is unfamiliar with this type of function and therefore mispredicts more often. In the present invention, the template function is executed in advance so that the branch predictor becomes familiar with this kind of function, achieving a warm-up effect. Branch prediction generally only occurs where the code takes logical routes such as if-else; therefore, as long as the template function shares this design, the branch predictor can become familiar with the logic structure in advance. After functions of the same type have executed many times, the branch predictor automatically becomes familiar with this function pattern and makes accurate predictions. The specific training process of the branch predictor belongs to the field of branch predictor algorithm design and is not repeated here.
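To make the warm-up idea concrete, the snippet below shows the kind of branch-heavy if-else loop whose outcome history a hardware branch predictor learns; it is illustrative only, as at the Python level interpreter overhead dominates and the predictor's state cannot be observed directly:

```python
import random

def branchy(data):
    """The if-else routing pattern the template functions share: after many
    executions over similar data, the hardware branch predictor has seen
    this branch's outcome history and mispredicts less often."""
    total = 0.0
    for x in data:
        if x > 0.5:   # the conditional branch whose history is learned
            total += x
        else:
            total -= x
    return total

data = [random.random() for _ in range(100_000)]
for _ in range(8):  # analogous to executing N copied template functions
    branchy(data)
```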
Further, after the container is successfully initialized on the new server in step S300, an instance is generated, and that instance executes the computing task of the target function. Triggering the parasitic process, initiating the system call, and copying the N template processes all happen during container initialization, whereas the instance that executes the target function's computing task is generated after container initialization succeeds. By the time the target function's computing task is executed, the branch predictor has already been trained on the execution data of the N target template functions; that is, the branch predictor is already warmed up for the target function when its computing task runs. Branch prediction accuracy is therefore improved, which in turn improves the execution performance of functions in serverless computing.
In summary, the present invention designs template functions based on function features; during container initialization, a parasitic process issues the system call, the system call quickly forks the template processes, and the template processes then raise branch prediction accuracy and improve the execution performance of functions in serverless computing. Extensive experiments show that the present invention improves branch prediction accuracy by 49% and overall throughput by 38%, demonstrating that the design is feasible.
Based on the same inventive concept, the present invention also provides a branch prediction device for serverless computing based on process parasitism, as shown in Figure 4, comprising:
a receiving module 100, configured to receive a user's call request for a target function;
a scheduling module 200, configured to schedule, when capacity expansion is required, the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
a calling module 300, configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
a training module 400, configured to train the branch predictor on the new server using the execution data of the N copied target template functions as training data.
Optionally, the branch prediction device for serverless computing based on process parasitism further comprises:
a first judging module, configured to judge, after the receiving module 100 receives the user's call request for the target function, whether any instance not executing a function task is running in the current computing environment, and if so, to trigger a first execution module;
the first execution module, configured to schedule the target function onto an instance in the current computing environment that is not executing a function task, that instance executing the computing task of the target function.
Optionally, the branch prediction device for serverless computing based on process parasitism further comprises:
a second judging module, configured to judge, if no instance not executing a function task is running in the current computing environment, whether the current computing environment needs capacity expansion, and if no expansion is needed, to trigger a second execution module;
the second execution module, further configured to generate an instance in the current computing environment, that instance executing the computing task of the target function.
Optionally, the second judging module judges whether the current computing environment needs capacity expansion specifically by:
judging whether the CPU usage of all instances in the current computing environment exceeds a preset value, and if so, determining that the current computing environment needs capacity expansion.
Optionally, the branch prediction device for serverless computing based on process parasitism further comprises:
a third execution module, configured to generate an instance after the container is initialized on the new server, that instance executing the computing task of the target function.
Optionally, the type of the target function is inferred using a Python deep learning algorithm.
Optionally, the target template function is designed around the programming language, if-else logic structure, for-loop position features, and function features.
As the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
Based on the same inventive concept, the present invention also provides an electronic device, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the branch prediction method for serverless computing based on process parasitism described above.
In some embodiments, the processor may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor is usually used to control the overall operation of the electronic device. In this embodiment, the processor is configured to run program code stored in the memory or to process data, for example the program code of the branch prediction method for serverless computing based on process parasitism.
The memory includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and so on. In some embodiments, the memory may be an internal storage unit of the electronic device, such as its hard disk or main memory. In other embodiments, the memory may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device. Of course, the memory may also include both an internal storage unit of the electronic device and an external storage device. In this embodiment, the memory is usually used to store the operating system and various application software installed on the electronic device, such as the program code of the branch prediction method for serverless computing based on process parasitism. In addition, the memory can also be used to temporarily store various data that have been output or will be output.
Based on the same inventive concept, the present invention also provides a readable storage medium storing a computer program which, when executed by a processor, implements the steps of the branch prediction method for serverless computing based on process parasitism described above.
In summary, the branch prediction method, device, electronic device, and readable storage medium for serverless computing based on process parasitism provided by the present invention have the following advantages and positive effects:
1. Compared with redesigning the branch predictor, the present invention is universally applicable. By pre-executing template functions, it can improve the branch prediction accuracy of all types of servers and the execution performance of functions in serverless computing, and it applies to all architectures (including ARM, RISC-V, etc.).
2. Compared with the temporal locality of branch prediction algorithms, the present invention executes template functions in advance, making full use of the temporal locality of the branch predictor.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (10)

  1. A branch prediction method for serverless computing based on process parasitism, characterized by comprising the following steps:
    receiving a user's call request for a target function;
    when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
    triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
    training the branch predictor on the new server using the execution data of the N copied target template functions as training data.
  2. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that, after receiving the user's call request for the target function, the method further comprises:
    judging whether any instance not executing a function task is running in the current computing environment;
    if so, scheduling the target function onto an instance in the current computing environment that is not executing a function task, that instance executing the computing task of the target function.
  3. The branch prediction method for serverless computing based on process parasitism according to claim 2, characterized in that the method further comprises:
    if no instance not executing a function task is running in the current computing environment, judging whether the current computing environment needs capacity expansion;
    if no expansion is needed, generating an instance in the current computing environment, that instance executing the computing task of the target function.
  4. The branch prediction method for serverless computing based on process parasitism according to claim 3, characterized in that judging whether the current computing environment needs capacity expansion comprises:
    judging whether the CPU usage of all instances in the current computing environment exceeds a preset value, and if so, determining that the current computing environment needs capacity expansion.
  5. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that the method further comprises:
    after the container is initialized on the new server, generating an instance, that instance executing the computing task of the target function.
  6. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that the type of the target function is inferred using a Python deep learning algorithm.
  7. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that the target template function is designed around the programming language, if-else logic structure, for-loop position features, and function features.
  8. A branch prediction device for serverless computing based on process parasitism, characterized by comprising:
    a receiving module, configured to receive a user's call request for a target function;
    a scheduling module, configured to schedule, when capacity expansion is required, the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
    a calling module, configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
    a training module, configured to train the branch predictor on the new server using the execution data of the N copied target template functions as training data.
  9. An electronic device, characterized by comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the branch prediction method for serverless computing based on process parasitism according to any one of claims 1 to 7.
  10. A readable storage medium, characterized in that the readable storage medium stores a computer program which, when executed by a processor, implements the steps of the branch prediction method for serverless computing based on process parasitism according to any one of claims 1 to 7.
PCT/CN2022/138141 2021-12-18 2022-12-09 Branch prediction method and device for serverless computing based on process parasitism WO2023109700A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2022416127A AU2022416127B2 (en) 2021-12-18 2022-12-09 Process parasitism-based branch prediction method and device for serverless computing
CA3212167A CA3212167A1 (en) 2021-12-18 2022-12-09 Process parasitism-based branch prediction method and device for serverless computing, electronic device, and readable storage medium
US18/459,397 US11915003B2 (en) 2021-12-18 2023-08-31 Process parasitism-based branch prediction method and device for serverless computing, electronic device, and non-transitory readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111560316.2 2021-12-18
CN202111560316.2A CN116266242A (zh) Branch prediction method and device for serverless computing based on process parasitism

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/459,397 Continuation US11915003B2 (en) 2021-12-18 2023-08-31 Process parasitism-based branch prediction method and device for serverless computing, electronic device, and non-transitory readable storage medium

Publications (1)

Publication Number Publication Date
WO2023109700A1 (zh)

Family

ID=86743986

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138141 WO2023109700A1 (zh) Branch prediction method and device for serverless computing based on process parasitism

Country Status (5)

Country Link
US (1) US11915003B2 (zh)
CN (1) CN116266242A (zh)
AU (1) AU2022416127B2 (zh)
CA (1) CA3212167A1 (zh)
WO (1) WO2023109700A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837408A (zh) * 2019-09-16 2020-02-25 中国科学院软件研究所 High-performance serverless computing method and system based on resource caching
US20200081745A1 (en) * 2018-09-10 2020-03-12 Nuweba Labs Ltd. System and method for reducing cold start latency of serverless functions
CN112860450A (zh) * 2020-12-04 2021-05-28 武汉悦学帮网络技术有限公司 Request processing method and device
US20210184941A1 (en) * 2019-12-13 2021-06-17 Hewlett Packard Enterprise Development Lp Proactively accommodating predicted future serverless workloads using a machine learning prediction model and a feedback control system
CN113656179A (zh) * 2021-08-19 2021-11-16 北京百度网讯科技有限公司 Cloud computing resource scheduling method and device, electronic device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817916B2 (en) * 2013-09-16 2020-10-27 Amazon Technologies, Inc. Client-selectable power source options for network-accessible service units
US10891153B1 (en) * 2017-02-22 2021-01-12 Virtuozzo International Gmbh System and method for switching file systems underneath working processes

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200081745A1 (en) * 2018-09-10 2020-03-12 Nuweba Labs Ltd. System and method for reducing cold start latency of serverless functions
CN110837408A (zh) * 2019-09-16 2020-02-25 中国科学院软件研究所 High-performance serverless computing method and system based on resource caching
US20210184941A1 (en) * 2019-12-13 2021-06-17 Hewlett Packard Enterprise Development Lp Proactively accommodating predicted future serverless workloads using a machine learning prediction model and a feedback control system
CN112860450A (zh) * 2020-12-04 2021-05-28 武汉悦学帮网络技术有限公司 Request processing method and device
CN113656179A (zh) * 2021-08-19 2021-11-16 北京百度网讯科技有限公司 Cloud computing resource scheduling method and device, electronic device and storage medium

Also Published As

Publication number Publication date
AU2022416127B2 (en) 2024-03-07
CN116266242A (zh) 2023-06-20
US11915003B2 (en) 2024-02-27
AU2022416127A1 (en) 2023-09-28
US20230409330A1 (en) 2023-12-21
CA3212167A1 (en) 2023-06-22

Similar Documents

Publication Publication Date Title
RU2658190C2 Runtime access control to application programming interfaces
US20190361753A1 (en) Methods, systems and apparatus to dynamically facilitate boundaryless, high availability system management
KR100898315B1 Enhanced runtime hosting
JP2018533795A Stream-based accelerator processing of computational graphs
CN114780225B Distributed model training system, method and device
US20090106730A1 (en) Predictive cost based scheduling in a distributed software build
US9904574B2 (en) Parallel computing without requiring antecedent code deployment
KR20140054948A Composition and method of OpenCL application software development support tools for embedded systems
US8458710B2 (en) Scheduling jobs for execution on a computer system
US11294729B2 (en) Resource provisioning for multiple invocations to an electronic design automation application
CN111625317A Container cloud construction method for a service system and related device
Harichane et al. KubeSC‐RTP: Smart scheduler for Kubernetes platform on CPU‐GPU heterogeneous systems
CN113391921B Resource quota verification method for application instances
CN111597035A Multithreading-based simulation engine time advancement method and system
CN110381150A Data processing method and device on a blockchain, electronic device, and storage medium
CN111782335 Extended application mechanism through an in-process operating system
WO2021098257A1 Service processing method based on a heterogeneous computing platform
Li et al. Easyscale: Accuracy-consistent elastic training for deep learning
US10552135B1 (en) Reducing a size of an application package
WO2023109700A1 Branch prediction method and device for serverless computing based on process parasitism
JP5542643B2 Simulation device and simulation program
JP2019526091A Method for optimizing an application of a computing system having multiple distinct memory locations interconnected by one or more communication channels, non-transitory computer-readable storage medium containing a computer-readable program, and system
JP3777092B2 Method and system for executing distributed applications
US20050086667A1 (en) Symmetric Scheduling for parallel execution
CN112783729 Exception handling method and device for gray release

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22906447

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022416127

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 3212167

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2022416127

Country of ref document: AU

Date of ref document: 20221209

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE