WO2023109700A1 - Branch prediction method and device for serverless computing based on process parasitism - Google Patents
Branch prediction method and device for serverless computing based on process parasitism
- Publication number
- WO2023109700A1 (PCT/CN2022/138141)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- function
- parasitic
- branch prediction
- target
- container
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
- G06F9/3842—Speculative instruction execution
- G06F9/3844—Speculative instruction execution using dynamic branch prediction, e.g. using branch history tables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3802—Instruction prefetching
- G06F9/3804—Instruction prefetching for branches, e.g. hedging, branch folding
- G06F9/3806—Instruction prefetching for branches, e.g. hedging, branch folding using address prediction, e.g. return stack, branch history buffer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/046—Forward inferencing; Production systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/02—Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/10—Interfaces, programming languages or software development kits, e.g. for simulating neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- The present invention relates to the technical field of serverless computing, and in particular to a branch prediction method and device for serverless computing based on process parasitism, an electronic device, and a readable storage medium.
- Serverless computing refers to building and running applications without managing infrastructure such as servers. It describes a more fine-grained deployment model in which an application is broken down into one or more fine-grained functions that are uploaded to a platform and then executed, scaled and billed based on current needs.
- Serverless computing does not mean that servers are no longer used to host and run code, nor that operations engineers are no longer needed. Rather, consumers of serverless computing no longer need to handle configuration, maintenance, updates, scaling, or capacity planning; these tasks are all handled by the serverless platform and completely abstracted away from developers and IT/operations teams. As a result, developers can focus on writing the application's business logic, and operations engineers can raise their focus to more critical business tasks.
- In computer architecture, a branch predictor is a digital circuit that guesses the outcome of a branch before the branch instruction finishes executing, in order to improve the flow of the processor's instruction pipeline.
- A branch predictor needs a certain amount of training before it reaches a relatively stable, high prediction accuracy. Therefore, when functions in serverless computing are scheduled onto servers, branch prediction accuracy is usually very low at the very beginning. However, the running time of functions in serverless computing is usually at the millisecond level, and a high branch misprediction rate leads to substantial performance overhead, thereby reducing the execution performance of functions in serverless computing.
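To make the training requirement concrete, the following is a minimal sketch (not taken from the patent) of the classic 2-bit saturating-counter predictor. Real hardware predictors are far more elaborate, but the warm-up behavior (poor accuracy on first encounter, good accuracy once trained) is the same:

```python
# Minimal sketch of a 2-bit saturating-counter branch predictor.
# Counter values 0-1 predict "not taken", 2-3 predict "taken";
# each observed outcome nudges the counter toward that direction.

class TwoBitPredictor:
    def __init__(self, table_size=1024):
        self.table = [1] * table_size   # start weakly not-taken

    def predict(self, pc):
        return self.table[pc % len(self.table)] >= 2

    def update(self, pc, taken):
        i = pc % len(self.table)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# A loop branch taken 9 times then exiting: the untrained predictor
# misses at first, then locks on -- the warm-up the patent exploits.
p = TwoBitPredictor()
hits = 0
for taken in [True] * 9 + [False]:
    hits += p.predict(0x400) == taken
    p.update(0x400, taken)
print(f"{hits}/10 correct on first encounter")
```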
- The current solution is usually to redesign the branch predictor and its prediction algorithm.
- However, a branch predictor is a hardware device, and redesigning it requires modifications at the hardware level, which reduces the generality of branch prediction.
- The purpose of the present invention is to provide a branch prediction method and device for serverless computing based on process parasitism, an electronic device, and a readable storage medium that improve the accuracy of branch prediction, and thereby the execution performance of serverless functions, without changing the branch predictor hardware.
- The present invention provides a branch prediction method for serverless computing based on process parasitism, comprising the following steps:
- receiving a user's call request for a target function;
- when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
- triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
- training the branch predictor on the new server using the execution data of the N copied target template functions as training data.
- Optionally, after receiving the user's call request for the target function, the method further includes: judging whether an instance that is not executing a function task is running in the current computing environment; if so, dispatching the target function to that instance, which executes the computing task of the target function.
- Optionally, the branch prediction method for serverless computing based on process parasitism further includes: if no such idle instance is running, judging whether the current computing environment needs to be expanded; if no expansion is required, generating an instance in the current computing environment, which executes the computing task of the target function.
- Optionally, judging whether the current computing environment needs to be expanded includes: judging whether the CPU usage of all instances in the current computing environment exceeds a preset value, and if so, determining that expansion is required.
- Optionally, the branch prediction method for serverless computing based on process parasitism further includes: after the container is initialized on the new server, generating an instance that performs the computing task of the target function.
- Optionally, the type of the target function is inferred using a Python deep learning algorithm.
- Optionally, the target template function takes the programming language, if-else logic structure, for-loop position features, and function features as its core.
- The present invention also provides a branch prediction device for serverless computing based on process parasitism, including:
- a receiving module configured to receive a user's call request for a target function;
- a scheduling module configured to, when capacity expansion is required, schedule the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
- a calling module configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
- a training module configured to train the branch predictor on the new server using the execution data of the N copied target template functions as training data.
- The present invention also provides an electronic device, including a processor and a memory, wherein a computer program is stored in the memory; when the computer program is executed by the processor, the steps of any of the above branch prediction methods for serverless computing based on process parasitism are realized.
- The present invention also provides a readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the above branch prediction method for serverless computing based on process parasitism are realized.
- the present invention has universal applicability.
- the present invention can improve the branch prediction accuracy of all types of servers by pre-executing template functions, improve the execution performance of functions in serverless computing, and is applicable to all architectures (including ARM, RISC-V, etc.).
- the present invention executes the template function in advance, making full use of the temporal locality of the branch predictor.
- FIG. 1 is an overall design architecture diagram of a branch prediction method for serverless computing based on process parasitism provided by an embodiment of the present invention;
- FIG. 2 is a flowchart of a branch prediction method for serverless computing based on process parasitism provided by an embodiment of the present invention;
- FIG. 3 is a flowchart of a branch prediction method for serverless computing based on process parasitism in a specific example of the present invention;
- FIG. 4 is a structural diagram of a branch prediction device for serverless computing based on process parasitism according to an embodiment of the present invention.
- The present invention provides a branch prediction method and device for serverless computing based on process parasitism, an electronic device, and a readable storage medium.
- Serverless computing is a method of providing back-end services on demand. Serverless providers allow users to write and deploy code without worrying about the underlying infrastructure. Users who obtain backend services from serverless providers will be charged based on the amount of computation and resource usage, and since this service is automatically scalable, there is no need to reserve and pay for a fixed amount of bandwidth or servers.
- the container contains the application and all the elements required for the application to function properly, including system libraries, system settings, and other dependencies. Any type of application can run in a container, and no matter where the containerized application is hosted, it will function the same way. By the same token, containers can also carry serverless computing applications (that is, functions) and run on any server on the cloud platform.
- An instance refers to the runtime environment in which an application is running.
- a container A running a certain service can be considered as an instance of the service.
- The functions on a serverless computing platform can be scaled down to zero; due to the automatic scaling of serverless computing, a large number of serverless function instances can then be pulled up in a short period of time.
- By investigating mainstream serverless function workloads, the present invention designs template functions that take the programming language, if-else logic structure, for-loop position features, and function features as their core.
- The code size of a template function is usually 20-30% of that of the normal function; it generates no network requests or disk operations, and its execution time is usually 5-10 ms.
- For example, if multiple functions all use Python to perform deep learning inference, they correspond to a single template function, because their execution process is basically the same: load the library, load the algorithm model, read the parameters, perform inference, and return the result.
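As an illustration, the following is a hypothetical sketch of what an ML-type template function could look like under the description above. The function name, weights, and thresholds are assumptions; only the control-flow shape (if-else routing plus a for loop, with no network or disk I/O) reflects the patent's stated design:

```python
# Hypothetical ML-type template function (illustrative, not the patent's code).
# It mirrors the control flow of deep-learning inference -- load, branch on
# input, loop over features, return -- without touching network or disk.

def ml_template(params):
    # stand-in for "load the algorithm model": constants, no disk access
    weights = [0.5, -0.25, 0.125, 1.0]

    # if-else logic structure matching the real function's routing
    if not params:
        return None
    if len(params) > len(weights):
        params = params[:len(weights)]

    # for-loop position feature: same loop shape as real inference
    score = 0.0
    for i, x in enumerate(params):
        if x >= 0:                 # data-dependent branch to warm up
            score += weights[i] * x
        else:
            score -= weights[i] * x

    # "return the result"
    return score > 0.5

print(ml_template([1.0, -2.0, 0.5]))
```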
- The present invention redesigns the base container image to add a pre-execution process to it.
- The pre-execution process starts executing at the beginning of container startup, issuing the system call in advance to trigger the kernel's copying of the template function.
- The invention realizes fast duplication of specified template functions by adding a new system call to the system kernel.
- The system call takes a parameter specifying which template function is to be copied (for example, the ML template for Python deep learning functions); the templates include a web template, a bigdata template, an ML template, and a stream template.
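A hedged user-space sketch of how the parasitic process might invoke such a system call follows. The patent does not publish the syscall's number or exact signature, so the NR_COPY_TEMPLATE number, the template-ID mapping, and the (template_id, n_copies) argument list are all illustrative assumptions; on a stock kernel this call simply fails with ENOSYS:

```python
# Illustrative only: invoking a hypothetical custom syscall via ctypes.
import ctypes

libc = ctypes.CDLL(None, use_errno=True)

NR_COPY_TEMPLATE = 451                      # hypothetical syscall number
TEMPLATE_IDS = {"web": 0, "bigdata": 1, "ml": 2, "stream": 3}

def copy_template(kind: str, n_copies: int) -> int:
    """Ask the (patched) kernel to fork N copies of a template function."""
    ret = libc.syscall(NR_COPY_TEMPLATE, TEMPLATE_IDS[kind], n_copies)
    if ret < 0:
        raise OSError(ctypes.get_errno(), "copy_template syscall failed")
    return ret

# e.g. the parasitic process for an ML-type target function:
# copy_template("ml", 8)
```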
- The branch prediction method for serverless computing based on process parasitism provided by the present invention includes the following steps:
- Step S100: receiving a user's call request for a target function;
- Step S200: when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
- Step S300: triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
- Step S400: training the branch predictor on the new server using the execution data of the N copied target template functions as training data.
- In step S100, the user initiates a call request for the target function through a client, which can take the form of a web interface, a command-line tool, or a RESTful API.
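For illustration only, a call request issued through a RESTful API could look like the following; the endpoint, function name, and payload are hypothetical and not specified by the patent:

```python
# Hypothetical invocation of a serverless function over a REST API.
import json
import urllib.request

req = urllib.request.Request(
    "https://faas.example.com/v1/functions/resize-image/invoke",  # made-up endpoint
    data=json.dumps({"width": 256, "height": 256}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```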
- Before step S200, it is first judged whether there is an instance in the current computing environment that is not executing a function task; if so, the target function is scheduled to that instance, which executes its computing task. Understandably, if such function instances are running in the environment, the function is already in a warmed-up state on those machines, so scheduling the target function tasks to them improves branch prediction accuracy. If not, the present invention is applied as follows to improve performance.
- If no idle instance is running in the current computing environment, it is judged whether the current computing environment needs to be expanded; if no expansion is required, an instance is generated in the current computing environment, and that instance executes the computing task of the target function. Specifically, whether expansion is required is judged according to whether the CPU usage of all instances in the current computing environment exceeds a preset value: when the CPU usage of every instance exceeds the preset value, the load is considered heavy and capacity expansion is required. If expansion is not required, an instance can be generated directly in the current computing environment to execute the computing task of the target function, as sketched below.
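A minimal sketch of this placement-and-scaling decision, assuming per-instance CPU usage is already collected, follows; the 80% threshold stands in for the unspecified preset value:

```python
# Illustrative scheduling decision (assumed data shapes, hypothetical threshold).
CPU_THRESHOLD = 0.80   # stand-in for the patent's "preset value"

def needs_expansion(instances):
    """Expand only when every running instance is above the CPU threshold."""
    return bool(instances) and all(
        inst["cpu_usage"] > CPU_THRESHOLD for inst in instances
    )

def place_request(instances):
    idle = [i for i in instances if not i["busy"]]
    if idle:
        return ("reuse_warm_instance", idle[0])   # predictor already trained
    if needs_expansion(instances):
        return ("schedule_to_new_server", None)   # steps S200-S400 apply
    return ("new_instance_in_current_env", None)

print(place_request([{"busy": True, "cpu_usage": 0.92},
                     {"busy": True, "cpu_usage": 0.88}]))
```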
- If expansion is required, step S200 is performed to schedule the container executing the target function to a new server that has not executed the target function recently (that is, the container is scheduled to the new server).
- In step S300, since a parasitic process is pre-added to the base image of the container, the process embedded in the container image (that is, the parasitic process) is executed first when the container is initialized on the new server; the parasitic process initiates a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times.
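A hedged sketch of such a container entrypoint is given below. The environment-variable names are assumptions, copy_template() stands in for the hypothetical syscall wrapper sketched earlier (stubbed here so the example is self-contained), and start_function_runtime() is a placeholder for the platform's normal runtime bootstrap:

```python
# Illustrative container entrypoint with the parasitic process baked in.
import os

def copy_template(kind, n):
    pass  # stub; see the earlier ctypes sketch for the assumed syscall

def start_function_runtime():
    pass  # placeholder for the platform's normal function runtime

def parasitic_preexec():
    kind = os.environ.get("TARGET_FUNCTION_TYPE", "ml")   # assumed env var
    n = int(os.environ.get("TEMPLATE_COPIES", "8"))       # assumed env var
    copy_template(kind, n)   # kernel forks N template processes

def main():
    parasitic_preexec()       # runs first: warms the branch predictor
    start_function_runtime()  # then the real serverless function starts

if __name__ == "__main__":
    main()
```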
- The type of the target function is inferred using a Python deep learning algorithm. Since each function type corresponds to one template function, the corresponding target template function can be selected once the type of the target function is determined.
- In step S400, the N copied target template functions are executed automatically, and their execution data can be used as training data to train the branch predictor on the new server.
- When a new server first runs a function, the branch predictor (a hardware design) is not yet familiar with this type of function, so it mispredicts more often. Therefore, in the present invention, the template function is executed in advance so that the branch predictor becomes familiar with the function, achieving a warm-up effect. Branch prediction generally only matters at code-logic routing points such as if-else; as long as the template function is designed with the same structure, the branch predictor can become familiar with this logic structure in advance. After the same type of function has been executed many times, the branch predictor automatically becomes familiar with this function pattern and makes accurate predictions. The specific training process of the branch predictor belongs to branch predictor algorithm design and is not repeated here.
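Reusing the TwoBitPredictor class from the earlier sketch, the toy measurement below illustrates the claimed warm-up effect on a synthetic branch trace; the trace shape and the resulting numbers are illustrative only:

```python
# Toy warm-up measurement; requires the TwoBitPredictor class defined above.

def run_trace(pred, trace):
    hits = 0
    for pc, taken in trace:
        hits += pred.predict(pc) == taken
        pred.update(pc, taken)
    return hits / len(trace)

# four branch sites, each taken ~90% of the time, loosely mimicking
# the if-else/for structure shared by a function and its template
trace = [(0x10 + (i % 4) * 8, (i % 10) != 9) for i in range(200)]

cold = TwoBitPredictor()
print("cold accuracy:  ", run_trace(cold, trace))

warm = TwoBitPredictor()
for _ in range(5):                    # N pre-executions of the template
    run_trace(warm, trace)
print("warmed accuracy:", run_trace(warm, trace))
```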
- After the container is initialized on the new server, an instance is generated, and the instance executes the computing task of the target function. Since the triggering of the parasitic process, the initiation of the system call, and the copying of the N template processes are all performed during container initialization, the instance that executes the target function's computing task is generated only after the container has been initialized successfully.
- At this point, the branch predictor has already been trained on the execution data of the N target template functions; that is, the branch predictor is warmed up for the target function by the time its computing task is executed. Therefore, branch prediction accuracy can be improved, which in turn improves the execution performance of functions in serverless computing.
- the present invention designs a template function based on function features.
- A parasitic process is used to issue the system call, the system call fast-forks the template processes, and the template processes are then used to improve branch prediction accuracy and thereby the execution performance of functions in serverless computing.
- Sufficient experiments have been carried out on the present invention, and the results show that it improves branch prediction accuracy by 49% and overall throughput by 38%, which shows that the design scheme of the present invention is feasible.
- The present invention also provides a branch prediction device for serverless computing based on process parasitism, as shown in FIG. 4, including:
- a receiving module 100 configured to receive a user's call request for a target function;
- a scheduling module 200 configured to, when capacity expansion is required, schedule the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container;
- a calling module 300 configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times;
- a training module 400 configured to train the branch predictor on the new server using the execution data of the N copied target template functions as training data.
- Further, the branch prediction device for serverless computing based on process parasitism includes:
- a first judging module configured to judge, after the receiving module 100 receives the user's call request for the target function, whether an instance that is not executing a function task is running in the current computing environment, and if so, to trigger the first execution module;
- a first execution module configured to schedule the target function to an instance in the current computing environment that is not executing a function task, which then executes the computing task of the target function.
- Further, the branch prediction device for serverless computing based on process parasitism includes:
- a second judging module configured to judge, if no instance that is not executing a function task is running in the current computing environment, whether the current computing environment needs to be expanded, and if no expansion is required, to trigger the second execution module;
- a second execution module further configured to generate an instance in the current computing environment, which executes the computing task of the target function.
- The second judging module judges whether the current computing environment needs to be expanded specifically by judging whether the CPU usage of all instances in the current computing environment exceeds a preset value, and if so, determining that expansion is required.
- Further, the branch prediction device for serverless computing based on process parasitism includes:
- a third execution module configured to generate an instance after the container is initialized on the new server, which executes the computing task of the target function.
- The type of the target function is inferred using a Python deep learning algorithm.
- The target template function takes the programming language, if-else logic structure, for-loop position features, and function features as its core.
- Since the device embodiment corresponds to the method embodiment, its description is relatively brief; for related parts, refer to the description of the method embodiment.
- The present invention also provides an electronic device, including a processor and a memory, where a computer program is stored in the memory; when the processor executes the computer program, the steps of the above branch prediction method for serverless computing based on process parasitism are realized.
- In some embodiments, the processor may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
- the processor is typically used to control the overall operation of the electronic device.
- The processor is configured to run program code stored in the memory or to process data, for example, to run the program code of the branch prediction method for serverless computing based on process parasitism.
- The memory includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and the like.
- The memory may be an internal storage unit of the electronic device, such as the hard disk or internal memory of the electronic device.
- The memory may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device.
- The memory may also include both an internal storage unit of the electronic device and an external storage device.
- The memory is usually used to store the operating system and the various application software installed in the electronic device, such as the program code of the branch prediction method for serverless computing based on process parasitism.
- The memory may also be used to temporarily store various types of data that have been output or are to be output.
- The present invention also provides a readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the branch prediction method for serverless computing based on process parasitism described above are realized.
- In summary, the present invention provides a branch prediction method and device for serverless computing based on process parasitism, an electronic device, and a readable storage medium, which have the following advantages and positive effects:
- the present invention has universal applicability.
- the present invention can improve the branch prediction accuracy of all types of servers by pre-executing template functions, improve the execution performance of functions in serverless computing, and is applicable to all architectures (including ARM, RISC-V, etc.).
- the present invention executes the template function in advance, making full use of the temporal locality of the branch predictor.
- the embodiments of the present invention may be provided as methods, systems, or computer program products. Accordingly, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) having computer-usable program code embodied therein.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that realize the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Hardware Design (AREA)
- Geometry (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Stored Programmes (AREA)
Abstract
Description
Claims (10)
- 1. A branch prediction method for serverless computing based on process parasitism, characterized in that it comprises the following steps: receiving a user's call request for a target function; when capacity expansion is required, scheduling the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container; triggering the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times; and training the branch predictor on the new server using the execution data of the N copied target template functions as training data.
- 2. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that, after receiving the user's call request for the target function, the method further comprises: judging whether an instance that is not executing a function task is running in the current computing environment; and if so, scheduling the target function to that instance, which executes the computing task of the target function.
- 3. The branch prediction method for serverless computing based on process parasitism according to claim 2, characterized in that the method further comprises: if no instance that is not executing a function task is running in the current computing environment, judging whether the current computing environment needs to be expanded; and if no expansion is required, generating an instance in the current computing environment, which executes the computing task of the target function.
- 4. The branch prediction method for serverless computing based on process parasitism according to claim 3, characterized in that judging whether the current computing environment needs to be expanded comprises: judging whether the CPU usage of all instances in the current computing environment exceeds a preset value, and if so, determining that the current computing environment needs to be expanded.
- 5. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that the method further comprises: after the container is initialized on the new server, generating an instance which executes the computing task of the target function.
- 6. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that the type of the target function is inferred using a Python deep learning algorithm.
- 7. The branch prediction method for serverless computing based on process parasitism according to claim 1, characterized in that the target template function takes the programming language, if-else logic structure, for-loop position features, and function features as its core.
- 8. A branch prediction device for serverless computing based on process parasitism, characterized in that it comprises: a receiving module configured to receive a user's call request for a target function; a scheduling module configured to, when capacity expansion is required, schedule the container that executes the target function to a new server that has not executed the target function recently, wherein a parasitic process is pre-added to the base image of the container; a calling module configured to trigger the parasitic process when the container is initialized on the new server, the parasitic process being used to initiate a system call that triggers the system kernel to select a target template function according to the type of the target function and copy it N times; and a training module configured to train the branch predictor on the new server using the execution data of the N copied target template functions as training data.
- 9. An electronic device, characterized in that it comprises a processor and a memory, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the steps of the branch prediction method for serverless computing based on process parasitism according to any one of claims 1 to 7 are realized.
- 10. A readable storage medium, characterized in that a computer program is stored in the readable storage medium, and when the computer program is executed by a processor, the steps of the branch prediction method for serverless computing based on process parasitism according to any one of claims 1 to 7 are realized.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2022416127A AU2022416127B2 (en) | 2021-12-18 | 2022-12-09 | Process parasitism-based branch prediction method and device for serverless computing |
CA3212167A CA3212167A1 (en) | 2021-12-18 | 2022-12-09 | Process parasitism-based branch prediction method and device for serverless computing, electronic device, and readable storage medium |
US18/459,397 US11915003B2 (en) | 2021-12-18 | 2023-08-31 | Process parasitism-based branch prediction method and device for serverless computing, electronic device, and non-transitory readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111560316.2 | 2021-12-18 | ||
CN202111560316.2A CN116266242A (zh) | 2021-12-18 | 2021-12-18 | Branch prediction method and device for serverless computing based on process parasitism
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/459,397 Continuation US11915003B2 (en) | 2021-12-18 | 2023-08-31 | Process parasitism-based branch prediction method and device for serverless computing, electronic device, and non-transitory readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023109700A1 true WO2023109700A1 (zh) | 2023-06-22 |
Family
ID=86743986
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/138141 WO2023109700A1 (zh) | 2021-12-18 | 2022-12-09 | Branch prediction method and device for serverless computing based on process parasitism
Country Status (5)
Country | Link |
---|---|
US (1) | US11915003B2 (zh) |
CN (1) | CN116266242A (zh) |
AU (1) | AU2022416127B2 (zh) |
CA (1) | CA3212167A1 (zh) |
WO (1) | WO2023109700A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110837408A (zh) * | 2019-09-16 | 2020-02-25 | Institute of Software, Chinese Academy of Sciences | High-performance serverless computing method and system based on resource caching
US20200081745A1 (en) * | 2018-09-10 | 2020-03-12 | Nuweba Labs Ltd. | System and method for reducing cold start latency of serverless functions |
CN112860450A (zh) * | 2020-12-04 | 2021-05-28 | Wuhan Yuexuebang Network Technology Co., Ltd. | Request processing method and device
US20210184941A1 (en) * | 2019-12-13 | 2021-06-17 | Hewlett Packard Enterprise Development Lp | Proactively accomodating predicted future serverless workloads using a machine learning prediction model and a feedback control system |
CN113656179A (zh) * | 2021-08-19 | 2021-11-16 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Cloud computing resource scheduling method and device, electronic device, and storage medium
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10817916B2 (en) * | 2013-09-16 | 2020-10-27 | Amazon Technologies, Inc. | Client-selectable power source options for network-accessible service units |
US10891153B1 (en) * | 2017-02-22 | 2021-01-12 | Virtuozzo International Gmbh | System and method for switching file systems underneath working processes |
-
2021
- 2021-12-18 CN CN202111560316.2A patent/CN116266242A/zh active Pending
-
2022
- 2022-12-09 CA CA3212167A patent/CA3212167A1/en active Pending
- 2022-12-09 AU AU2022416127A patent/AU2022416127B2/en active Active
- 2022-12-09 WO PCT/CN2022/138141 patent/WO2023109700A1/zh active Application Filing
-
2023
- 2023-08-31 US US18/459,397 patent/US11915003B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200081745A1 (en) * | 2018-09-10 | 2020-03-12 | Nuweba Labs Ltd. | System and method for reducing cold start latency of serverless functions |
CN110837408A (zh) * | 2019-09-16 | 2020-02-25 | Institute of Software, Chinese Academy of Sciences | High-performance serverless computing method and system based on resource caching
US20210184941A1 (en) * | 2019-12-13 | 2021-06-17 | Hewlett Packard Enterprise Development Lp | Proactively accomodating predicted future serverless workloads using a machine learning prediction model and a feedback control system |
CN112860450A (zh) * | 2020-12-04 | 2021-05-28 | Wuhan Yuexuebang Network Technology Co., Ltd. | Request processing method and device
CN113656179A (zh) * | 2021-08-19 | 2021-11-16 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Cloud computing resource scheduling method and device, electronic device, and storage medium
Also Published As
Publication number | Publication date |
---|---|
AU2022416127B2 (en) | 2024-03-07 |
CN116266242A (zh) | 2023-06-20 |
US11915003B2 (en) | 2024-02-27 |
AU2022416127A1 (en) | 2023-09-28 |
US20230409330A1 (en) | 2023-12-21 |
CA3212167A1 (en) | 2023-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- RU2658190C2 (ru) | Runtime access control to application programming interfaces | |
US20190361753A1 (en) | Methods, systems and apparatus to dynamically facilitate boundaryless, high availability system management | |
- KR100898315B1 (ko) | Enhanced runtime hosting | |
- JP2018533795A (ja) | Stream-based accelerator processing of computational graphs | |
- CN114780225B (zh) | Distributed model training system, method, and device | |
US20090106730A1 (en) | Predictive cost based scheduling in a distributed software build | |
US9904574B2 (en) | Parallel computing without requiring antecedent code deployment | |
- KR20140054948A (ko) | Configuration and method of an OpenCL application software development support tool for embedded systems | |
US8458710B2 (en) | Scheduling jobs for execution on a computer system | |
US11294729B2 (en) | Resource provisioning for multiple invocations to an electronic design automation application | |
- CN111625317A (zh) | Container cloud construction method for a business system and related device | |
Harichane et al. | KubeSC‐RTP: Smart scheduler for Kubernetes platform on CPU‐GPU heterogeneous systems | |
- CN113391921B (zh) | Resource quota verification method for application instances | |
- CN111597035A (zh) | Multithreading-based simulation engine time advancement method and system | |
- CN110381150A (zh) | Data processing method and device on a blockchain, electronic device, and storage medium | |
- CN111782335A (zh) | Application extension mechanism through an in-process operating system | |
- WO2021098257A1 (zh) | Service processing method based on a heterogeneous computing platform | |
Li et al. | Easyscale: Accuracy-consistent elastic training for deep learning | |
US10552135B1 (en) | Reducing a size of an application package | |
- WO2023109700A1 (zh) | Branch prediction method and device for serverless computing based on process parasitism | |
- JP5542643B2 (ja) | Simulation device and simulation program | |
- JP2019526091A (ja) | Method for optimizing applications of a computing system having multiple distinct memory locations interconnected by one or more communication channels, non-transitory computer-readable storage medium including a computer-readable program, and system | |
- JP3777092B2 (ja) | Method and system for executing distributed applications | |
US20050086667A1 (en) | Symmetric Scheduling for parallel execution | |
- CN112783729A (zh) | Exception handling method and exception handling device for gray release | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22906447 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022416127 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 3212167 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2022416127 Country of ref document: AU Date of ref document: 20221209 Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |