WO2021000570A1 - Model loading method and system, control node and execution node - Google Patents

Model loading method and system, control node and execution node

Info

Publication number
WO2021000570A1
Authority
WO
WIPO (PCT)
Prior art keywords
execution
node
several
loading
model
Prior art date
Application number
PCT/CN2020/071406
Other languages
English (en)
French (fr)
Inventor
王跃铭
李继良
Original Assignee
创新先进技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 创新先进技术有限公司
Priority to US16/802,655 priority Critical patent/US11003501B2/en
Priority to US16/939,740 priority patent/US10929191B2/en
Publication of WO2021000570A1

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 - Network analysis or design
    • H04L41/145 - Network analysis or design involving simulating, designing, planning or modelling of a network
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services

Definitions

  • the present invention relates to the field of computer technology, in particular to a model loading method and system, control node and execution node.
  • with the rapid development of machine learning, model prediction has become an important online service.
  • before a model prediction service can be provided, the corresponding model needs to be loaded onto cluster nodes.
  • embodiments of the present invention provide a model loading method and system, control node and execution node, which can improve the usability of the system.
  • an embodiment of the present invention provides a model loading method, including:
  • loading requests are sent to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the execution processes start several model service frameworks, and each model service framework loads several models;
  • the loading request includes the loading task corresponding to the execution node, and the execution processes correspond one-to-one to the model service frameworks.
  • an embodiment of the present invention provides a model loading method, including:
  • the loading request includes a loading task corresponding to an execution node; the loading task is determined by the control node according to a preset execution script and resource information of several execution nodes; different execution nodes are deployed on different cluster nodes;
  • several execution processes are started according to the loading request, so that the execution processes start several model service frameworks and each model service framework loads several models; the execution processes correspond one-to-one to the model service frameworks.
  • an embodiment of the present invention provides a control node, including:
  • the determining unit is configured to determine the loading task corresponding to each execution node according to the preset execution script and resource information of several execution nodes; wherein, different execution nodes are deployed on different cluster nodes;
  • the sending unit is configured to send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the execution processes start several model service frameworks, and each model service framework loads several models; the loading request includes the loading task corresponding to the execution node, and the execution processes correspond one-to-one to the model service frameworks.
  • an execution node including:
  • the receiving unit is configured to receive a loading request sent by a control node; the loading request includes the loading task corresponding to the execution node, which is determined by the control node according to a preset execution script and resource information of several execution nodes; different execution nodes are deployed on different cluster nodes;
  • the starting unit is configured to start several execution processes according to the loading request, so that the execution processes start several model service frameworks and each model service framework loads several models; the execution processes correspond one-to-one to the model service frameworks.
  • an embodiment of the present invention provides a model loading system, including: the control node described in any of the foregoing embodiments and the execution node described in any of the foregoing embodiments.
  • the above-mentioned at least one technical solution adopted in the embodiment of the present invention can achieve the following beneficial effects:
  • the method starts several model service frameworks on each cluster node through execution nodes deployed on different cluster nodes, and loads several models through each model service framework.
  • This method can deploy several model service frameworks in a cluster node. When a model service framework is abnormal, the cluster node does not need to be restarted, and other model service frameworks in the cluster node can still work normally, which can improve the availability of the system.
  • Figure 1 is a flowchart of a model loading method provided by an embodiment of the present invention
  • FIG. 2 is a flowchart of a model loading method provided by another embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a control node provided by an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an execution node provided by an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an execution node provided by another embodiment of the present invention.
  • Fig. 6 is a schematic structural diagram of a model loading system provided by an embodiment of the present invention.
  • FIG. 7 is a flowchart of a model loading method provided by another embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a cluster provided by an embodiment of the present invention.
  • Fig. 9 is a schematic structural diagram of a Ray-based cluster provided by an embodiment of the present invention.
  • in the existing model loading method, only one model service framework is started on a cluster node, and several models are loaded through that framework.
  • for different models, the deployment approach is to load them all on the same model service framework.
  • when the model service framework fails, the cluster node hosting it must be restarted; only after the node restarts successfully can the framework be restarted and the models already deployed on that node be reloaded, and restarting a cluster node is costly.
  • this deployment approach is partitioned at machine granularity, i.e. only one model service framework can be started on a cluster node, which easily wastes cluster node resources.
  • loading multiple compute-intensive models on the same model service framework causes resource preemption and degrades service performance.
  • the embodiment of the present invention provides a model loading method, which is applied to a control node. As shown in FIG. 1, the method may include the following steps:
  • Step 101 Determine the loading task corresponding to each execution node according to the preset execution script and resource information of several execution nodes; wherein, different execution nodes are deployed on different cluster nodes.
  • a cluster node is a unit in a cluster and has a relatively independent operating environment, which can be any one or more of physical machines, virtual machines, and containers.
  • an execution node is an independent process responsible for task scheduling on its own node.
  • the control node is an independent process responsible for globally coordinating task scheduling across the different execution nodes.
  • an execution process is a user-level process responsible for starting a model service framework and managing its life cycle.
  • the model service framework may include the HTTP framework, the Tensorflow framework and the like, where the Tensorflow framework is an open-source machine learning framework.
  • the following embodiments take the Tensorflow framework as an example.
  • a model service framework is started by an execution process and is responsible for executing specific prediction requests, for example receiving the feature data in a request, computing a prediction score, and returning it.
  • the resource information of the execution node includes: the number of CPU cores of the cluster node where the execution node is located, and/or the remaining memory capacity of the cluster node where the execution node is located.
  • the control node is deployed by running a deployment script, and the execution nodes are deployed on each cluster node and connected to the control node.
  • each execution node reports its resource information to the control node, completing service deployment.
  • determining the loading task corresponding to each execution node includes:
  • the load task corresponding to the execution node includes: the number of models corresponding to the execution node.
  • the load task corresponding to the execution node may also include the statement for the model and the model service framework in the execution script.
  • the resource information corresponding to the model refers to the memory capacity corresponding to the model, that is, the memory capacity required to load the model.
  • the model service frameworks declared in the execution script can be of a single type or of different types.
  • the models declared in the execution script can also be of different types.
  • the number of models corresponding to each execution node is determined according to the total number of each type of model declared in the execution script, the resource information corresponding to each model, and the resource information of the several execution nodes.
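The task determination described above can be sketched as a simple greedy allocation. The function name and the splitting policy are illustrative assumptions, since the embodiments state only the inputs (total model count, per-model memory, per-node resources) and not a concrete algorithm:

```python
def assign_model_counts(total_models, model_memory, node_free_memory):
    """Distribute `total_models` identical models across execution nodes.

    total_models:     total number of models declared in the execution script
    model_memory:     memory capacity required to load one model
    node_free_memory: {node_id: remaining memory of the node's cluster node}

    Greedily assigns each model to the node with the most free memory left,
    and raises if the cluster cannot hold all declared models.
    """
    free = dict(node_free_memory)
    counts = {node: 0 for node in free}
    for _ in range(total_models):
        node = max(free, key=free.get)  # node with the most headroom
        if free[node] < model_memory:
            raise RuntimeError("insufficient cluster memory for all models")
        free[node] -= model_memory
        counts[node] += 1
    return counts
```

For instance, distributing 18 models across three equally sized nodes (as in the worked example later in this document) yields six models per node.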
  • Step 102: Send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the execution processes start several model service frameworks, and each model service framework loads several models; the loading request includes the loading task corresponding to the execution node, and the execution processes correspond one-to-one to the model service frameworks.
  • the method starts several model service frameworks in each cluster node through execution nodes deployed in different cluster nodes, and loads several models through each model service framework.
  • This method can deploy at least two model service frameworks in one cluster node. When one model service framework is abnormal, the cluster node does not need to be restarted, and other model service frameworks in the cluster node can still work normally, which can improve the availability of the system.
  • each model service framework loads a model.
  • an embodiment of the present invention provides a model loading method, which is applied to an execution node, and includes:
  • Step 201: Receive the loading request sent by the control node; the loading request includes the loading task corresponding to the execution node, which is determined by the control node according to the preset execution script and the resource information of several execution nodes; different execution nodes are deployed on different cluster nodes.
  • Step 202 Start several execution processes according to the loading request, so that the several execution processes start several model service frameworks, and each model service framework loads several models; wherein the execution processes correspond to the model service frameworks one-to-one.
  • the loading request further includes: a statement for the execution process in the execution script;
  • starting several execution processes according to the loading request includes: starting the several execution processes according to the declaration of the execution process in the execution script.
  • the loading request further includes: a statement in the execution script for the model service framework;
  • the method further includes: rebuilding the target execution process when it is detected that the target execution process is disconnected in the several execution processes.
  • an embodiment of the present invention provides a control node, including:
  • the determining unit 301 is configured to determine the loading task corresponding to each execution node according to a preset execution script and resource information of several execution nodes; wherein, different execution nodes are deployed on different cluster nodes;
  • the sending unit 302 is configured to send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the execution processes start several model service frameworks, and each model service framework loads several models;
  • the loading request includes the loading task corresponding to the execution node, and the execution processes correspond one-to-one to the model service frameworks.
  • the determining unit 301 is configured to determine the number of models corresponding to each execution node according to the total number of models declared in the execution script, the resource information corresponding to each model, and the resource information of several execution nodes.
  • the cluster node includes any one or more of physical machines, virtual machines, and containers.
  • the resource information of the execution node includes: the number of CPU cores of the cluster node where the execution node is located, and/or the remaining memory capacity of the cluster node where the execution node is located.
  • an embodiment of the present invention provides an execution node, including:
  • the receiving unit 401 is configured to receive a loading request sent by a control node; the loading request includes: a loading task corresponding to the execution node; the loading task corresponding to the execution node is determined by the control node according to a preset execution script and resource information of several execution nodes; Among them, different execution nodes are deployed on different cluster nodes;
  • the starting unit 402 is configured to start several execution processes according to the loading request, so that the several execution processes start several model service frameworks, and each model service framework loads several models; wherein the execution process corresponds to the model service framework one-to-one.
  • the execution node further includes: a monitoring unit 403;
  • the monitoring unit 403 is configured to rebuild the target execution process when it is detected that the target execution process is disconnected in several execution processes.
  • the loading request further includes: a statement for the execution process in the execution script;
  • the starting unit 402 is configured to start several execution processes according to the statement in the execution script for the execution process.
  • the loading request further includes: a statement in the execution script for the model service framework;
  • the starting unit 402 is configured to enable a number of execution processes to start a number of model service frameworks according to the statement of the execution script for the model service framework.
  • the cluster node includes any one or more of physical machines, virtual machines, and containers.
  • the resource information of the execution node includes: the number of CPU cores of the cluster node where the execution node is located, and/or the remaining memory capacity of the cluster node where the execution node is located.
  • an embodiment of the present invention provides a model loading system, including: a control node 601 of any of the foregoing embodiments and an execution node 602 of any of the foregoing embodiments.
  • the number of execution nodes in the model loading system can be set according to actual requirements.
  • the model loading system includes a control node 601 and two execution nodes 602.
  • the embodiment of the present invention takes the cluster shown in FIG. 8 as an example to describe in detail the model loading method, which includes:
  • Step 701 The control node determines the loading task corresponding to each execution node according to the preset execution script and the resource information of the three execution nodes; wherein, different execution nodes are deployed on different physical machines.
  • the cluster shown in FIG. 8 includes physical machine 1, physical machine 2, physical machine 3, and physical machine 4.
  • the control node is deployed in physical machine 1
  • the three execution nodes are deployed on physical machine 2, physical machine 3, and physical machine 4 respectively.
  • the execution script declares the model service framework, the number of model service frameworks, the execution process, the total number of models, and the resource information corresponding to each model.
  • the model service frameworks are all Tensorflow frameworks, the number of Tensorflow frameworks is 9, and each Tensorflow framework loads 2 identical models, and the total number of models is 18.
  • the number of models corresponding to each execution node is determined according to the remaining memory capacity of the cluster node where each execution node is located, the memory capacity corresponding to each model, and the total number of models.
  • the loading task corresponding to the execution node includes: the number of models corresponding to the execution node.
  • Step 702 The control node sends a load request to the three execution nodes respectively.
  • the loading request includes: the declaration of the model corresponding to the execution node, the loading task corresponding to the execution node, the declaration of the execution process in the execution script, the declaration of the Tensorflow framework corresponding to the execution node, and the number of Tensorflow frameworks corresponding to the execution node.
  • the declaration of the model corresponding to the execution node, the declaration of the execution process in the execution script, the declaration of the Tensorflow framework corresponding to the execution node, and the number of Tensorflow frameworks corresponding to the execution node can also be included in the loading task corresponding to the execution node.
  • the content of the loading request received by each execution node is the same, so only the loading request of one execution node is described as an example.
  • the loading request includes: the declaration of the Tensorflow framework corresponding to the execution node, the number of Tensorflow frameworks corresponding to the execution node (3), the declaration of the execution process in the execution script, the declaration of the model corresponding to the execution node, and the number of models corresponding to the execution node (6).
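As a concrete illustration of the request described in this example, the loading request could be represented as a plain mapping. The field names below are hypothetical, since the embodiments describe the contents of the request but not its format:

```python
# Hypothetical shape of the loading request sent to one execution node in
# the worked example (field names are assumptions, not from the patent).
load_request = {
    "framework_declaration": "tensorflow",       # declaration of the Tensorflow framework
    "framework_count": 3,                        # Tensorflow frameworks on this node
    "process_declaration": "execution_process",  # declaration of the execution process
    "model_declaration": "model",                # declaration of the model
    "model_count": 6,                            # models assigned to this node (the loading task)
}

# Consistency implied by the example: execution processes map one-to-one to
# frameworks, and the node's models are split evenly across its frameworks.
process_count = load_request["framework_count"]
models_per_framework = load_request["model_count"] // load_request["framework_count"]
```

With these values, each of the three execution processes starts one Tensorflow framework and each framework loads two models, matching the steps that follow.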
  • Step 703 The execution node receives the loading request sent by the control node.
  • Step 704 The execution node starts several execution processes according to the statement in the execution script for the execution process and the number of Tensorflow frameworks corresponding to the execution node.
  • the number of Tensorflow frameworks corresponding to the execution node is equal to the number of execution processes
  • Step 705 The several execution processes start several Tensorflow frameworks according to the declaration of the Tensorflow framework corresponding to the execution node, and the execution processes correspond to the Tensorflow framework one-to-one.
  • the execution process starts the Tensorflow framework according to the declaration of the Tensorflow framework corresponding to the execution node.
  • Step 706 Each Tensorflow framework loads several models according to the statement of the model corresponding to the execution node and the load task corresponding to the execution node.
  • the Tensorflow framework loads several models according to the model corresponding to the execution node and the load task corresponding to the execution node (that is, the number of models corresponding to the execution node).
  • each Tensorflow framework loads 2 models.
  • Step 707 The execution node rebuilds the target execution process when it detects that the target execution process is disconnected in several execution processes.
  • Each execution node can monitor the running status of the execution process.
  • the target execution process can be rebuilt in time, which can reduce the impact of the target execution process loss on the model prediction process.
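The supervision behaviour above can be sketched as a small routine in which `is_alive` and `rebuild` stand in for whatever liveness check and restart mechanism the execution node actually uses (both names are assumptions, not APIs from the embodiments):

```python
def supervise(processes, is_alive, rebuild):
    """One pass of execution-process supervision on an execution node.

    processes: list of execution-process handles
    is_alive:  callable(handle) -> bool, liveness check for one process
    rebuild:   callable(handle) -> new handle, recreates a lost process
               (the rebuilt process relaunches its model service framework)

    Returns the process list with disconnected processes replaced by
    rebuilt ones, leaving healthy processes untouched.
    """
    return [p if is_alive(p) else rebuild(p) for p in processes]
```

A real execution node would run such a pass periodically, so that a lost execution process (and the model service framework it started) is recreated without restarting the cluster node.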
  • the execution script declares the various model service frameworks, the number of each model service framework, the execution process corresponding to each model service framework, the various models, the total number of models, and the resource information corresponding to each model.
  • the execution script can also declare information such as the number of each model.
  • the loading request received by the execution node includes: the declaration of the model service framework corresponding to the execution node, the number of each model service framework corresponding to the execution node, the declaration of the execution process corresponding to the execution node, the declaration of the model corresponding to the execution node, and the number of each model corresponding to the execution node.
  • This method provides a lightweight resource isolation through the execution process, which can ensure resource exclusivity and avoid resource preemption problems caused by all models on a cluster node being loaded onto a model service framework.
  • the method can monitor the execution processes through the execution node and rebuild an execution process when it is disconnected, which automatically restarts a failed model service framework in a lightweight manner without restarting the cluster node where the framework is located.
  • the methods and devices provided in the above embodiments can be implemented based on Ray, an open source distributed execution engine.
  • the service is the Ray service
  • the execution script is the Driver script
  • the control node is the Ray head node.
  • the node is the Ray node
  • the execution process is the ray-actor.
  • the cluster shown in FIG. 9 corresponds to the cluster shown in FIG. 8 and is a Ray-based cluster.
  • the Ray head node is the head node of the Ray service
  • the ray-actor is the resource package defined by the Ray service.
  • the Driver script is a user-defined execution script based on the Ray API and the Tensorflow API. The Driver script declares, via the Ray API, the total number of models and the resource information corresponding to each model; it can also declare the ray-actor and the model service framework via the Ray API.
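Based on the mapping above, a Driver script might look roughly like the following sketch. This is an assumption-heavy illustration rather than the patent's actual script: it requires a running Ray installation, and `load_model` is a placeholder for whatever Tensorflow loading routine the service would use.

```python
import ray

# Connect to the existing Ray service; the Ray head node acts as the control node.
ray.init(address="auto")

def load_model(path):
    # Placeholder for a Tensorflow model loader (an assumption, not the
    # patent's API); returns a callable that scores feature data.
    return lambda features: 0.0

@ray.remote(num_cpus=1)
class ExecutionActor:
    """A ray-actor playing the role of an execution process: it starts one
    model service framework and loads the models assigned to it."""

    def __init__(self, model_paths):
        self.models = [load_model(path) for path in model_paths]

    def predict(self, features):
        # Execute a prediction request against the loaded models.
        return [model(features) for model in self.models]

# Mirroring the worked example: three frameworks per node, two models each.
actors = [ExecutionActor.remote(["model_a", "model_b"]) for _ in range(3)]
```

Because each ray-actor is an isolated process with its own resource reservation, a failed framework can be rebuilt by recreating its actor, without touching the other actors on the same Ray node.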
  • a programmable logic device (PLD), for example a field programmable gate array (FPGA), can be programmed using a hardware description language (HDL). Examples of HDLs include ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL, Lava, Lola, MyHDL, PALASM, RHDL, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog.
  • the controller can be implemented in any suitable manner.
  • the controller may take the form of, for example, a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory control logic.
  • in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for implementing various functions can be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or Any combination of these devices.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPU), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory in computer readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • This application can also be practiced in distributed computing environments. In these distributed computing environments, remote processing devices connected through a communication network perform tasks.
  • program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Stored Programmes (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention provides a model loading method and system, a control node and an execution node. The model loading method includes: determining, according to a preset execution script and resource information of several execution nodes, the loading task corresponding to each execution node, where different execution nodes are deployed on different cluster nodes; and sending loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the execution processes start several model service frameworks, and each model service framework loads several models, where the loading request includes the loading task corresponding to the execution node and the execution processes correspond one-to-one to the model service frameworks.

Description

Model Loading Method and System, Control Node and Execution Node
Technical Field
The present invention relates to the field of computer technology, and in particular to a model loading method and system, a control node and an execution node.
Background
With the rapid development of machine learning technology, model prediction has become an important online service. Before a model prediction service can be provided, the corresponding model must first be loaded onto cluster nodes.
At present, only one model service framework is started on a cluster node, and several models are loaded through that framework. However, when the framework fails, the cluster node hosting it must be restarted; only after the node restarts successfully can the framework be restarted and the models already deployed on that node be reloaded.
The existing model loading method therefore leaves the system with low availability.
发明内容
鉴于此,本发明实施例提供了一种模型加载方法及系统、控制节点及执行节点,能够提高系统的可用性。
第一方面,本发明实施例提供了一种模型加载方法,包括:
根据预设的执行脚本和若干执行节点的资源信息,确定各个所述执行节点对应的加载任务;其中,不同的执行节点部署在不同的集群节点上;
分别向所述若干执行节点发送加载请求,以使所述执行节点根据对应的加载请求启动若干执行进程,并使所述若干执行进程启动若干模型服务框架、每个所述模型服务框架加载若干模型;其中,所述加载请求中包括:所述执行节点对应的加载任务;所述执行进程与所述模型服务框架一一对应。
In a second aspect, an embodiment of the present invention provides a model loading method, including:

receiving a loading request sent by a control node, where the loading request includes the loading task corresponding to an execution node, the loading task is determined by the control node according to a preset execution script and resource information of several execution nodes, and different execution nodes are deployed on different cluster nodes;

starting several execution processes according to the loading request, so that the several execution processes start several model service frameworks and each model service framework loads several models, where the execution processes correspond one-to-one with the model service frameworks.
In a third aspect, an embodiment of the present invention provides a control node, including:

a determining unit configured to determine, according to a preset execution script and resource information of several execution nodes, the loading task corresponding to each execution node, where different execution nodes are deployed on different cluster nodes;

a sending unit configured to send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the several execution processes start several model service frameworks, and each model service framework loads several models, where the loading request includes the loading task corresponding to the execution node, and the execution processes correspond one-to-one with the model service frameworks.
In a fourth aspect, an embodiment of the present invention provides an execution node, including:

a receiving unit configured to receive a loading request sent by a control node, where the loading request includes the loading task corresponding to the execution node, the loading task is determined by the control node according to a preset execution script and resource information of several execution nodes, and different execution nodes are deployed on different cluster nodes;

a starting unit configured to start several execution processes according to the loading request, so that the several execution processes start several model service frameworks and each model service framework loads several models, where the execution processes correspond one-to-one with the model service frameworks.
In a fifth aspect, an embodiment of the present invention provides a model loading system, including the control node of any of the above embodiments and the execution node of any of the above embodiments.

At least one of the above technical solutions adopted in the embodiments of the present invention can achieve the following beneficial effects: the method starts several model service frameworks on each cluster node through execution nodes deployed on different cluster nodes, and loads several models through each framework. Because several model service frameworks can be deployed on a single cluster node, when one framework fails the cluster node does not need to be restarted and the other frameworks on the node continue to work normally, which improves system availability.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the accompanying drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a flowchart of a model loading method provided by an embodiment of the present invention;

FIG. 2 is a flowchart of a model loading method provided by another embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a control node provided by an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of an execution node provided by an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of an execution node provided by another embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a model loading system provided by an embodiment of the present invention;

FIG. 7 is a flowchart of a model loading method provided by yet another embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a cluster provided by an embodiment of the present invention;

FIG. 9 is a schematic structural diagram of a Ray-based cluster provided by an embodiment of the present invention.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the scope of protection of the present invention.

In traditional model loading methods, only one model service framework is started on each cluster node, and several models are loaded through that framework. Different models are deployed by loading them onto that same framework.

However, when the framework fails, the cluster node hosting it must be restarted; only after the node restarts can the framework be restarted and the previously deployed models reloaded, and restarting a cluster node is costly. Moreover, because deployment is partitioned at machine granularity — only one model service framework per cluster node — cluster resources are easily wasted. In addition, loading multiple compute-intensive models onto the same framework causes resource contention and degrades service performance.
In view of this, an embodiment of the present invention provides a model loading method applied to a control node. As shown in FIG. 1, the method may include the following steps:

Step 101: determine, according to a preset execution script and resource information of several execution nodes, the loading task corresponding to each execution node, where different execution nodes are deployed on different cluster nodes.

A cluster node is a unit of the cluster with a relatively independent runtime environment; it may be any one or more of a physical machine, a virtual machine, or a container.

An execution node is an independent process responsible for task scheduling on its own node.

A control node is an independent process responsible for globally coordinating task scheduling across the execution nodes.

An execution process is a user-level process responsible for starting a model service framework and managing its life cycle.

Model service frameworks may include HTTP frameworks and the Tensorflow framework, among others; Tensorflow is an open-source machine learning framework. The following embodiments use the Tensorflow framework as an example.

A model service framework is started by an execution process and executes the actual prediction requests, for example receiving the feature data in a request, computing a prediction score, and returning it.

The resource information of an execution node includes the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of that cluster node.

Services can be deployed on the cluster nodes by running a deployment script on each node. The deployment script deploys the control node, deploys an execution node on each cluster node and registers it with the control node, and each execution node reports its resource information to the control node; service deployment is then complete.
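The deployment handshake just described — each execution node registering with the control node and reporting its resources, with deployment complete once every node has reported — can be sketched as below. This is a minimal illustration, not the patented implementation; the class and field names are assumptions.

```python
# Sketch: execution nodes registering their resources with the control node.
# Class and field names are illustrative assumptions.

class ControlNode:
    def __init__(self):
        self.resources = {}          # execution node name -> resource info

    def register(self, node_name, cpu_cores, free_mem_mb):
        # Called by each execution node after the deployment script starts it.
        self.resources[node_name] = {"cpu": cpu_cores, "mem_mb": free_mem_mb}

    def ready(self, expected_nodes):
        # Service deployment is complete once every node has reported.
        return len(self.resources) == expected_nodes

ctl = ControlNode()
for name in ("node2", "node3", "node4"):
    ctl.register(name, cpu_cores=8, free_mem_mb=4096)
print(ctl.ready(3))  # True
```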
Determining the loading task corresponding to each execution node according to the preset execution script and the resource information of the several execution nodes includes:

determining the number of models corresponding to each execution node according to the total number of models declared in the execution script, the resource information of each model, and the resource information of the several execution nodes. That is, the loading task of an execution node includes the number of models corresponding to that node; the loading task may also include the declarations of the models and the model service frameworks from the execution script.

The resource information of a model refers to the memory capacity the model requires, i.e., the memory needed to load it.

It should be noted that the execution script may declare only one type of model service framework or several different types; similarly, the models declared in the execution script may also be of different types.

For example, when the declared model service frameworks are of different types, the models corresponding to each execution node and their numbers are determined according to the total numbers of the different types of models declared in the execution script, the resource information of each model, and the resource information of the several execution nodes.
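The allocation step above — dividing the declared models across execution nodes according to each model's memory need and each node's remaining memory — can be sketched as follows. The greedy strategy, node names, and memory figures are illustrative assumptions; the patent does not fix an allocation algorithm.

```python
# Sketch: assign declared models to execution nodes by remaining memory.
# The greedy strategy and data shapes are illustrative assumptions.

def assign_models(total_models, model_mem_mb, nodes):
    """nodes: dict of node name -> remaining memory in MB.
    Returns dict of node name -> number of models to load."""
    remaining = dict(nodes)
    counts = {name: 0 for name in nodes}
    for _ in range(total_models):
        # Pick the node with the most memory left.
        name = max(remaining, key=remaining.get)
        if remaining[name] < model_mem_mb:
            raise RuntimeError("cluster lacks memory for all declared models")
        remaining[name] -= model_mem_mb
        counts[name] += 1
    return counts

# 18 models of 512 MB spread over three equally sized nodes -> 6 each,
# matching the worked example later in this description.
counts = assign_models(18, 512, {"node2": 4096, "node3": 4096, "node4": 4096})
print(counts)
```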
Step 102: send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the several execution processes start several model service frameworks, and each model service framework loads several models; the loading request includes the loading task corresponding to the execution node, and the execution processes correspond one-to-one with the model service frameworks.

The method starts several model service frameworks on each cluster node through execution nodes deployed on different cluster nodes, and loads several models through each framework. Because at least two model service frameworks can be deployed on a single cluster node, when one framework fails the cluster node does not need to be restarted and the other frameworks on the node continue to work normally, which improves system availability.

In an embodiment of the present invention, to reduce resource consumption, each model service framework loads exactly one model.
As shown in FIG. 2, an embodiment of the present invention provides a model loading method applied to an execution node, including:

Step 201: receive a loading request sent by a control node, where the loading request includes the loading task corresponding to the execution node, the loading task is determined by the control node according to a preset execution script and resource information of several execution nodes, and different execution nodes are deployed on different cluster nodes.

Step 202: start several execution processes according to the loading request, so that the several execution processes start several model service frameworks and each model service framework loads several models, where the execution processes correspond one-to-one with the model service frameworks.

In an embodiment of the present invention, the loading request further includes the declaration of the execution processes in the execution script; starting the several execution processes according to the loading request then includes starting them according to that declaration.

In an embodiment of the present invention, the loading request further includes the declaration of the model service frameworks in the execution script; the several execution processes then start the several model service frameworks according to that declaration.

In an embodiment of the present invention, to further improve system availability, the method further includes: rebuilding a target execution process among the several execution processes when it is detected that the target execution process has lost contact.
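The rebuild step can be pictured as a small supervision loop inside the execution node: periodically probe each execution process and respawn any that has lost contact, without restarting the node. A minimal stdlib sketch in which threads stand in for the user-level execution processes; the probe interval and worker body are assumptions.

```python
# Sketch: an execution node supervising its execution processes. Threads stand
# in for the user-level processes here; the probe interval is an assumption.
import threading
import time

def serve_models(stop_event):
    # Placeholder for "start one model service framework and keep serving".
    stop_event.wait()

def supervise(num_procs, rounds=3, interval=0.05):
    stop = threading.Event()
    workers = {}
    rebuilt = 0
    for i in range(num_procs):
        t = threading.Thread(target=serve_models, args=(stop,))
        t.start()
        workers[i] = t
    for _ in range(rounds):
        time.sleep(interval)
        for fid, t in workers.items():
            if not t.is_alive():          # target worker lost contact
                t = threading.Thread(target=serve_models, args=(stop,))
                t.start()                 # rebuild it; the node keeps running
                workers[fid] = t
                rebuilt += 1
    stop.set()
    for t in workers.values():
        t.join()
    return len(workers), rebuilt

print(supervise(3))  # (3, 0) when no worker fails during the probe window
```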
As shown in FIG. 3, an embodiment of the present invention provides a control node, including:

a determining unit 301 configured to determine, according to a preset execution script and resource information of several execution nodes, the loading task corresponding to each execution node, where different execution nodes are deployed on different cluster nodes;

a sending unit 302 configured to send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the several execution processes start several model service frameworks, and each model service framework loads several models, where the loading request includes the loading task corresponding to the execution node and the execution processes correspond one-to-one with the model service frameworks.

In an embodiment of the present invention, the determining unit 301 is configured to determine the number of models corresponding to each execution node according to the total number of models declared in the execution script, the resource information of each model, and the resource information of the several execution nodes.

In an embodiment of the present invention, the cluster nodes include any one or more of physical machines, virtual machines, and containers.

In an embodiment of the present invention, the resource information of an execution node includes the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of that cluster node.
As shown in FIG. 4, an embodiment of the present invention provides an execution node, including:

a receiving unit 401 configured to receive a loading request sent by a control node, where the loading request includes the loading task corresponding to the execution node, the loading task is determined by the control node according to a preset execution script and resource information of several execution nodes, and different execution nodes are deployed on different cluster nodes;

a starting unit 402 configured to start several execution processes according to the loading request, so that the several execution processes start several model service frameworks and each model service framework loads several models, where the execution processes correspond one-to-one with the model service frameworks.

In an embodiment of the present invention, as shown in FIG. 5, the execution node further includes a monitoring unit 403, configured to rebuild a target execution process among the several execution processes when it is detected that the target execution process has lost contact.

In an embodiment of the present invention, the loading request further includes the declaration of the execution processes in the execution script, and the starting unit 402 is configured to start the several execution processes according to that declaration.

In an embodiment of the present invention, the loading request further includes the declaration of the model service frameworks in the execution script, and the starting unit 402 is configured to cause the several execution processes to start the several model service frameworks according to that declaration.

In an embodiment of the present invention, the cluster nodes include any one or more of physical machines, virtual machines, and containers.

In an embodiment of the present invention, the resource information of an execution node includes the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of that cluster node.
As shown in FIG. 6, an embodiment of the present invention provides a model loading system, including the control node 601 of any of the above embodiments and the execution node 602 of any of the above embodiments.

The number of execution nodes in the model loading system can be set according to actual needs; for example, the system may include one control node 601 and two execution nodes 602.
As shown in FIG. 7, an embodiment of the present invention describes the model loading method in detail, taking the cluster shown in FIG. 8 as an example. The method includes:

Step 701: the control node determines, according to a preset execution script and the resource information of three execution nodes, the loading task corresponding to each execution node, where different execution nodes are deployed on different physical machines.

The cluster shown in FIG. 8 includes physical machine 1, physical machine 2, physical machine 3, and physical machine 4. The control node is deployed on physical machine 1, and the three execution nodes are deployed on physical machines 2, 3, and 4 respectively.

The execution script declares the model service framework, the number of model service frameworks, the execution processes, the models, the total number of models, and the resource information of each model.

In this embodiment, all model service frameworks are Tensorflow frameworks; there are 9 Tensorflow frameworks, each loading 2 identical models, for a total of 18 models.

The number of models corresponding to each execution node is determined according to the remaining memory capacity of the cluster node where each execution node resides, the memory capacity of each model, and the total number of models. In this embodiment, the loading task of an execution node includes the number of models corresponding to that node.

Assume the loading task is distributed across the three execution nodes, with 6 models per execution node.

Step 702: the control node sends a loading request to each of the three execution nodes. The loading request includes: the declaration of the models corresponding to the execution node, the loading task corresponding to the execution node, the declaration of the execution processes in the execution script, the declaration of the Tensorflow frameworks corresponding to the execution node, and the number of Tensorflow frameworks corresponding to the execution node.

In a practical application scenario, the declaration of the models, the declaration of the execution processes, the declaration of the Tensorflow frameworks, and the number of Tensorflow frameworks corresponding to the execution node may also be included in the loading task corresponding to the execution node.

In this embodiment, the loading requests received by the execution nodes have identical content, so only one is described as an example. The loading request includes: the declaration of the Tensorflow frameworks corresponding to the execution node, the number of Tensorflow frameworks corresponding to the execution node (3), the declaration of the execution processes in the execution script, the declaration of the models corresponding to the execution node, and the number of models corresponding to the execution node (6).
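The contents of one loading request, as enumerated above, can be captured in a small data structure. The field names are illustrative assumptions; the patent does not prescribe a wire format.

```python
# Sketch: one execution node's loading request as a plain data structure.
# Field names are illustrative; the source does not fix a wire format.
from dataclasses import dataclass

@dataclass
class LoadingRequest:
    framework_decl: str      # declaration of the Tensorflow frameworks
    framework_count: int     # frameworks (= execution processes) on this node
    process_decl: str        # declaration of the execution processes
    model_decl: str          # declaration of the models
    model_count: int         # models this node must load in total

req = LoadingRequest(
    framework_decl="tensorflow", framework_count=3,
    process_decl="one process per framework",
    model_decl="scorer-v1", model_count=6,
)
# Each framework loads model_count / framework_count models.
print(req.model_count // req.framework_count)  # 2
```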
Step 703: the execution node receives the loading request sent by the control node.

Step 704: the execution node starts several execution processes according to the declaration of the execution processes in the execution script and the number of Tensorflow frameworks corresponding to the execution node.

The number of Tensorflow frameworks corresponding to the execution node equals the number of execution processes.

Step 705: the several execution processes start several Tensorflow frameworks according to the declaration of the Tensorflow frameworks corresponding to the execution node; the execution processes correspond one-to-one with the Tensorflow frameworks.

Each execution process starts one Tensorflow framework according to that declaration.

Step 706: each Tensorflow framework loads several models according to the declaration of the models corresponding to the execution node and the loading task corresponding to the execution node.

A Tensorflow framework loads models according to the models corresponding to the execution node and the loading task (i.e., the number of models corresponding to the execution node).

In this embodiment, each Tensorflow framework loads 2 models.

Step 707: when the execution node detects that a target execution process among the several execution processes has lost contact, it rebuilds the target execution process.

Each execution node can monitor the running state of its execution processes; rebuilding a target execution process promptly when it loses contact reduces the impact of the failure on the model prediction process.
When there are multiple types of model service frameworks and multiple types of models to load, the execution script declares the types of model service frameworks, the number of each type, the execution processes corresponding to each type, the types of models, the total number of models, and the resource information of each model. The execution script may also declare information such as the number of each type of model.

Correspondingly, the loading request received by an execution node includes: the declaration of the model service frameworks corresponding to the execution node, the number of each type of model service framework corresponding to the execution node, the declaration of the execution processes corresponding to the execution node, the declaration of the models corresponding to the execution node, and the number of each type of model corresponding to the execution node.

In practical application scenarios, engineers can scale model service frameworks up or down and bring them online or offline by changing the execution script, dynamically adjusting the frameworks. Because each execution process provides lightweight resource isolation, resource exclusivity is guaranteed and the resource contention caused by loading all of a node's models onto a single framework is avoided. The execution node monitors the execution processes and rebuilds one when it loses contact, so a failed model service framework can be restarted automatically and cheaply, without restarting the cluster node hosting it.

In practical application scenarios, the method and apparatus provided by the above embodiments can be implemented on Ray, an open-source distributed execution engine. In that case, the service is the Ray service, the execution script is a Driver script, the control node is the Ray head node, the execution nodes are Ray nodes, and the execution processes are ray-actors. The cluster shown in FIG. 9 corresponds to the cluster shown in FIG. 8 and is a Ray-based cluster.

The Ray head node is the head node of the Ray service, a ray-actor is the resource encapsulation defined by the Ray service, and the Driver script is a user-defined execution script based on the Ray API and the Tensorflow API. The Driver script declares, per the Ray API, the total number of models and the resource information of each model; it may also declare ray-actors, model service frameworks, and so on per the Ray API.
In the 1990s, an improvement to a technology could clearly be distinguished as a hardware improvement (for example, an improvement to circuit structures such as diodes, transistors, or switches) or a software improvement (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logical function is determined by the user's programming of the device. Designers "integrate" a digital system onto a single PLD by programming it themselves, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is now mostly done with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by performing a little logic programming of the method flow in one of the above hardware description languages and programming it into an integrated circuit.
A controller may be implemented in any suitable manner. For example, a controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, an ASIC, a programmable logic controller, an embedded microcontroller, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or, the devices for realizing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.
The system, apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

For convenience of description, the above apparatus is described in terms of functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent storage in computer-readable media, in forms such as random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", and any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform specific tasks or implement specific abstract data types. The present application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are basically similar to the method embodiments, their description is relatively brief, and the relevant parts may refer to the description of the method embodiments.

The above descriptions are merely embodiments of the present application and are not intended to limit the present application. Those skilled in the art may make various modifications and variations to the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (15)

  1. A model loading method, comprising:
    determining, according to a preset execution script and resource information of several execution nodes, the loading task corresponding to each execution node, wherein different execution nodes are deployed on different cluster nodes;
    sending loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the several execution processes start several model service frameworks, and each model service framework loads several models, wherein the loading request comprises the loading task corresponding to the execution node, and the execution processes correspond one-to-one with the model service frameworks.
  2. The model loading method of claim 1, wherein
    determining, according to the preset execution script and the resource information of the several execution nodes, the loading task corresponding to each execution node comprises:
    determining the number of models corresponding to each execution node according to the total number of the models declared in the execution script, the resource information corresponding to each model, and the resource information of the several execution nodes.
  3. The model loading method of claim 1 or 2, wherein
    the cluster nodes comprise any one or more of physical machines, virtual machines, and containers;
    and/or,
    the resource information of an execution node comprises the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of the cluster node where the execution node resides.
  4. A model loading method, comprising:
    receiving a loading request sent by a control node, wherein the loading request comprises the loading task corresponding to an execution node, the loading task corresponding to the execution node is determined by the control node according to a preset execution script and resource information of several execution nodes, and different execution nodes are deployed on different cluster nodes;
    starting several execution processes according to the loading request, so that the several execution processes start several model service frameworks and each model service framework loads several models, wherein the execution processes correspond one-to-one with the model service frameworks.
  5. The model loading method of claim 4, further comprising:
    rebuilding a target execution process among the several execution processes when it is detected that the target execution process has lost contact.
  6. The model loading method of claim 4, wherein
    the loading request further comprises a declaration of the execution processes in the execution script;
    starting the several execution processes according to the loading request comprises:
    starting the several execution processes according to the declaration of the execution processes in the execution script;
    and/or,
    the loading request further comprises a declaration of the model service frameworks in the execution script;
    the several execution processes starting the several model service frameworks comprises:
    the several execution processes starting the several model service frameworks according to the declaration of the model service frameworks in the execution script.
  7. The model loading method of any one of claims 4 to 6, wherein
    the cluster nodes comprise any one or more of physical machines, virtual machines, and containers;
    and/or,
    the resource information of an execution node comprises the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of the cluster node where the execution node resides.
  8. A control node, comprising:
    a determining unit configured to determine, according to a preset execution script and resource information of several execution nodes, the loading task corresponding to each execution node, wherein different execution nodes are deployed on different cluster nodes;
    a sending unit configured to send loading requests to the several execution nodes respectively, so that each execution node starts several execution processes according to its corresponding loading request, the several execution processes start several model service frameworks, and each model service framework loads several models, wherein the loading request comprises the loading task corresponding to the execution node, and the execution processes correspond one-to-one with the model service frameworks.
  9. The control node of claim 8, wherein
    the determining unit is configured to determine the number of models corresponding to each execution node according to the total number of the models declared in the execution script, the resource information corresponding to each model, and the resource information of the several execution nodes.
  10. The control node of claim 8 or 9, wherein
    the cluster nodes comprise any one or more of physical machines, virtual machines, and containers;
    and/or,
    the resource information of an execution node comprises the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of the cluster node where the execution node resides.
  11. An execution node, comprising:
    a receiving unit configured to receive a loading request sent by a control node, wherein the loading request comprises the loading task corresponding to the execution node, the loading task corresponding to the execution node is determined by the control node according to a preset execution script and resource information of several execution nodes, and different execution nodes are deployed on different cluster nodes;
    a starting unit configured to start several execution processes according to the loading request, so that the several execution processes start several model service frameworks and each model service framework loads several models, wherein the execution processes correspond one-to-one with the model service frameworks.
  12. The execution node of claim 11, further comprising: a monitoring unit;
    the monitoring unit is configured to rebuild a target execution process among the several execution processes when it is detected that the target execution process has lost contact.
  13. The execution node of claim 11, wherein
    the loading request further comprises a declaration of the execution processes in the execution script;
    the starting unit is configured to start the several execution processes according to the declaration of the execution processes in the execution script;
    and/or,
    the loading request further comprises a declaration of the model service frameworks in the execution script;
    the starting unit is configured to cause the several execution processes to start the several model service frameworks according to the declaration of the model service frameworks in the execution script.
  14. The execution node of any one of claims 11 to 13, wherein
    the cluster nodes comprise any one or more of physical machines, virtual machines, and containers;
    and/or,
    the resource information of an execution node comprises the number of CPU cores of the cluster node where the execution node resides, and/or the remaining memory capacity of the cluster node where the execution node resides.
  15. A model loading system, comprising: the control node of any one of claims 8 to 10 and the execution node of any one of claims 11 to 14.
PCT/CN2020/071406 WO2021000570A1 (zh) 2019-07-03 2020-01-10 Model loading method and system, control node and execution node

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/802,655 US11003501B2 (en) 2019-07-03 2020-02-27 Loading models on nodes having multiple model service frameworks
US16/939,740 US10929191B2 (en) 2019-07-03 2020-07-27 Loading models on nodes having multiple model service frameworks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910596970.5 2019-07-03
CN201910596970.5A CN110401700B (zh) 2019-07-03 2019-07-03 Model loading method and system, control node and execution node

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/802,655 Continuation US11003501B2 (en) 2019-07-03 2020-02-27 Loading models on nodes having multiple model service frameworks

Publications (1)

Publication Number Publication Date
WO2021000570A1 true WO2021000570A1 (zh) 2021-01-07

Family

ID=68323954

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071406 WO2021000570A1 (zh) 2019-07-03 2020-01-10 模型加载方法及系统、控制节点及执行节点

Country Status (2)

Country Link
CN (1) CN110401700B (zh)
WO (1) WO2021000570A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799782A (zh) * 2021-01-20 2021-05-14 北京迈格威科技有限公司 Model generation system and method, electronic device, and storage medium
CN116501474A (zh) * 2023-06-08 2023-07-28 之江实验室 System, method, and apparatus for processing batches of homogeneous tasks

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11003501B2 (en) 2019-07-03 2021-05-11 Advanced New Technologies Co., Ltd. Loading models on nodes having multiple model service frameworks
CN110401700B (zh) 2019-07-03 2020-10-16 阿里巴巴集团控股有限公司 Model loading method and system, control node and execution node
CN111027713B (zh) * 2019-12-10 2022-09-02 支付宝(杭州)信息技术有限公司 Shared machine learning system and method
CN111885105A (zh) * 2020-06-16 2020-11-03 广州三七互娱科技有限公司 Task execution method, apparatus, system, computer device, and storage medium
CN113254438A (zh) * 2020-11-20 2021-08-13 云智慧(北京)科技有限公司 Tree-structure-based log parsing method and system
CN117555697B (zh) * 2024-01-11 2024-04-05 之江实验室 Cache loading system, method, apparatus, and device for distributed training

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150082271A1 (en) * 2013-09-19 2015-03-19 Oracle International Corporation System and method for providing an editor for use with a business process design environment
CN107885762A (zh) * 2017-09-19 2018-04-06 北京百度网讯科技有限公司 Intelligent big data system, and method and device for providing intelligent big data services
CN107943794A (zh) * 2016-10-12 2018-04-20 阿里巴巴集团控股有限公司 Translation method and system
CN110401700A (zh) * 2019-07-03 2019-11-01 阿里巴巴集团控股有限公司 Model loading method and system, control node and execution node

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107733977B (zh) * 2017-08-31 2020-11-03 北京百度网讯科技有限公司 Docker-based cluster management method and apparatus
CN109857475B (zh) * 2018-12-27 2020-06-16 深圳云天励飞技术有限公司 Framework management method and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150082271A1 (en) * 2013-09-19 2015-03-19 Oracle International Corporation System and method for providing an editor for use with a business process design environment
CN107943794A (zh) * 2016-10-12 2018-04-20 阿里巴巴集团控股有限公司 Translation method and system
CN107885762A (zh) * 2017-09-19 2018-04-06 北京百度网讯科技有限公司 Intelligent big data system, and method and device for providing intelligent big data services
CN110401700A (zh) * 2019-07-03 2019-11-01 阿里巴巴集团控股有限公司 Model loading method and system, control node and execution node

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799782A (zh) * 2021-01-20 2021-05-14 北京迈格威科技有限公司 Model generation system and method, electronic device, and storage medium
CN112799782B (zh) * 2021-01-20 2024-04-12 北京迈格威科技有限公司 Model generation system and method, electronic device, and storage medium
CN116501474A (zh) * 2023-06-08 2023-07-28 之江实验室 System, method, and apparatus for processing batches of homogeneous tasks
CN116501474B (zh) * 2023-06-08 2023-09-22 之江实验室 System, method, and apparatus for processing batches of homogeneous tasks

Also Published As

Publication number Publication date
CN110401700B (zh) 2020-10-16
CN110401700A (zh) 2019-11-01

Similar Documents

Publication Publication Date Title
WO2021000570A1 (zh) Model loading method and system, control node and execution node
TWI696083B (zh) Blockchain-based consensus method and apparatus
JP6921206B2 (ja) Database state determination method and device, and consistency verification method and device
KR102140414B1 (ko) Blockchain consensus method and device
TWI680656B (zh) Blockchain-based data processing method and device
WO2018177235A1 (zh) Blockchain consensus method and apparatus
AU2018240159A1 (en) Method and device for sending transaction information and for consensus verification
WO2020199709A1 (zh) Method, system, and device for refreshing cascaded caches
US10540284B2 (en) Cache-coherent multiprocessor system and a method for detecting failures in a cache-coherent multiprocessor system
US10467106B2 (en) Data processing method, data processing system, and non-transitory computer program product for controlling a workload delay time
WO2020143410A1 (zh) Data storage method and apparatus, electronic device, and storage medium
CN110609749B (zh) Distributed task running method, system, and device
WO2020168901A1 (zh) Data computing method and engine
CN117075930B (zh) Computing framework management system
CN116151363B (zh) Distributed reinforcement learning system
US10929191B2 (en) Loading models on nodes having multiple model service frameworks
US10430245B2 (en) Systems and methods for dynamic low latency optimization
WO2021164368A1 (zh) Container application starting method, system, and apparatus, and electronic device
WO2021184901A1 (zh) Data writing method, apparatus, and device
TWI698137B (zh) Scan start/stop method for a wireless device, and wireless device
CN115981751A (zh) Near-memory computing system, and near-memory computing method, apparatus, medium, and device
JP6622926B2 (ja) Method and apparatus for checking the integrity of distributed service processing
US11392406B1 (en) Alternative interrupt reporting channels for microcontroller access devices
CN114546672A (zh) Unmanned driving communication method, apparatus, device, and storage medium
CN116743550B (zh) Method for handling a failed storage node of a distributed storage cluster

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20834273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20834273

Country of ref document: EP

Kind code of ref document: A1