WO2022166715A1 - Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus - Google Patents

Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus

Info

Publication number
WO2022166715A1
Authority
WO
WIPO (PCT)
Prior art keywords
intelligent
unit
pipeline
environment information
intelligent pipeline
Prior art date
Application number
PCT/CN2022/074034
Other languages
English (en)
French (fr)
Inventor
Niu Xiaobing (牛小兵)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation
Publication of WO2022166715A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network

Definitions

  • Embodiments of the present disclosure relate to the field of communications, and in particular, to an intelligent pipeline processing method, device, storage medium, and electronic device.
  • ASON: Automatically Switched Optical Network
  • a centralized controller, such as a Software Defined Network (SDN) controller, is based on a logically centralized control architecture and performs centralized connection scheduling for control components such as the routing controller (RC). Since it holds global network resource information and can process all connection requests collaboratively, resource allocation can be optimized as a whole.
  • Figure 1 is a schematic diagram of the components of an SDN controller in the related art. As shown in Figure 1, these include the Network Call Controller (NCC), the Link Resource Manager (LRM), the Connection Controller (CC), the Routing Controller (RC), the Notification Component, and the Termination and Adaptation Performer (TAP), as well as the forwarding-plane network elements connected to the TAP.
  • NCC: Network Call Controller
  • LRM: Link Resources Manager
  • TAP: Termination and Adaptation Performer
  • the ASON control plane also has these functional components.
  • MCS: Management Control System
  • MC components: management control components
  • the management control components perform management control functions (hereinafter, for brevity, referred to simply as MC components).
  • the Client Context represents a component in the server-layer controller that serves the interaction of management and control services between the client and server controllers;
  • the Server Context represents a component in the client-layer controller that serves the interaction of management and control services between the client and server controllers.
  • AI: Artificial Intelligence
  • ML: Machine Learning
  • Figure 2 is an overall architecture diagram of machine learning in the related art; as shown in Figure 2, it includes the management subsystem, the ML sandbox subsystem, the ML pipeline subsystem, and the ML underlay networks.
  • the Machine Learning Function Orchestrator (MLFO) is responsible for configuring the ML pipeline in the ML sandbox subsystem and the simulated ML underlay network, and for model training, testing, and verification based on them.
  • the ML pipeline and simulated ML underlay network in the sandbox system are generated by MLFO.
  • a Machine Learning Pipeline (MLP) includes logical nodes such as a source (SRC), a collector (C), a preprocessor (PP), a model (M), a policy (P), a distributor (D), and a sink (SINK).
  • the source node provides data input to the MLP.
  • the collector node (Collector C) is responsible for collecting data from one or more source nodes.
  • the Preprocessor Node (Preprocessor PP) is responsible for preprocessing the data to enable the data to be used or consumed by the machine learning model.
  • Model nodes represent data processing rules and logic.
  • the policy node (policy P) will generate the concrete policy and apply the policy to the output of the model node.
  • the distributor node (distributor D) is responsible for determining the sink node (sink SINK) and distributing the model output to the sink node.
  • the sink node is responsible for executing the model output.
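By way of illustration only (not part of the disclosure), the logical-node chain described above can be sketched as a toy Python pipeline; every class, method, and the threshold rule below are assumptions made for the sketch, not the patent's implementation.

```python
# Minimal sketch of the MLP logical-node chain: SRC -> C -> PP -> M -> P -> D -> SINK.
# All names and the toy "model" are illustrative assumptions.

class Source:
    """SRC: provides data input to the MLP."""
    def __init__(self, records):
        self.records = records
    def read(self):
        return list(self.records)

class Collector:
    """C: collects data from one or more source nodes."""
    def collect(self, sources):
        data = []
        for src in sources:
            data.extend(src.read())
        return data

class Preprocessor:
    """PP: makes data consumable by the model (here: drop non-numeric items)."""
    def run(self, data):
        return [d for d in data if isinstance(d, (int, float))]

class Model:
    """M: data processing rules and logic (here: a trivial threshold)."""
    def predict(self, data):
        return [d > 10 for d in data]

class Policy:
    """P: generates a concrete policy from the model output."""
    def apply(self, predictions):
        return ["reroute" if p else "keep" for p in predictions]

class Sink:
    """SINK: executes the model output."""
    def __init__(self):
        self.executed = []
    def execute(self, action):
        self.executed.append(action)

class Distributor:
    """D: determines the sink node and distributes output to it."""
    def distribute(self, actions, sink):
        for a in actions:
            sink.execute(a)

src = Source([3, 25, "bad", 7.5])
sink = Sink()
actions = Policy().apply(Model().predict(Preprocessor().run(Collector().collect([src]))))
Distributor().distribute(actions, sink)
print(sink.executed)  # ['keep', 'reroute', 'keep']
```

The point of the sketch is the data flow between the seven node roles; each node could of course be a distributed service rather than an in-process object.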
  • Embodiments of the present disclosure provide an intelligent pipeline processing method, device, storage medium, and electronic device, so as to at least solve the problem in the related art of how to introduce MLFO in the management and control system MCS to deploy the machine learning function.
  • an intelligent pipeline processing method is provided, applied to the second unit, and the method includes:
  • the intelligent model and the intelligent pipeline are deployed.
  • an intelligent pipeline processing method which is applied to the first unit, and the method includes:
  • the operating environment information is used to instruct the second unit to construct the intelligent pipeline and the simulation network, simulate and train the intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
  • an intelligent pipeline processing apparatus which is applied to the second unit, and the apparatus includes:
  • a receiving module configured to receive the operating environment information of the intelligent pipeline sent by the first unit
  • a building module configured to construct the intelligent pipeline according to the operating environment information, and simulate and train the intelligent model in the intelligent pipeline
  • a deployment module configured to deploy the intelligent model and the intelligent pipeline.
  • an intelligent pipeline processing device which is applied to the first unit, and the device includes:
  • the sending module is configured to send the operating environment information of the intelligent pipeline to the second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and the simulation network, simulate and train the intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
  • a computer-readable storage medium is also provided, where a computer program is stored in the storage medium, wherein the computer program is configured to perform the steps in any one of the above method embodiments when run.
  • an electronic device is also provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
  • the operating environment information of the intelligent pipeline sent by the first unit is received, the intelligent pipeline is constructed according to the operating environment information, and the intelligent model in the intelligent pipeline is simulated and trained;
  • through the above intelligent pipeline, communication is established with the first unit, the operating environment information is received, and deployment is performed according to it; MLFO is thereby introduced into the management and control system MCS (including the SDN controller and the ASON control plane) to deploy machine learning functions, which solves the corresponding problem in the related art.
  • MCS: management and control system
  • FIG. 1 is a schematic diagram of the related components of an SDN controller in the related art
  • Figure 2 is the overall architecture diagram of machine learning in related technologies
  • FIG. 3 is a block diagram of a hardware structure of a mobile terminal of an intelligent pipeline processing method according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart 1 of an intelligent pipeline processing method according to an embodiment of the present disclosure
  • FIG. 5 is a second flowchart of an intelligent pipeline processing method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of the relationship between MLFO and SDN controller according to the present embodiment.
  • FIG. 7 is a flowchart of implementing a machine learning pipeline according to the present embodiment.
  • FIG. 8 is a schematic diagram of the connection relationship between the source and sink nodes and the SDN controller in the machine learning pipeline according to the present embodiment
  • FIG. 9 is a schematic diagram of a machine learning pipeline deployment according to the present embodiment.
  • FIG. 10 is a schematic diagram of the connection relationship between the machine learning pipeline according to the present embodiment and the SDN controller through the agent;
  • FIG. 11 is a schematic diagram of the connection relationship between the machine learning pipeline according to the present embodiment and the SDN controller through the client context;
  • FIG. 12 is a first block diagram of the intelligent pipeline processing apparatus according to the present embodiment;
  • FIG. 13 is a second block diagram of the intelligent pipeline processing apparatus according to the present embodiment.
  • FIG. 3 is a block diagram of the hardware structure of the mobile terminal of the intelligent pipeline processing method according to the embodiment of the present disclosure.
  • the mobile terminal may include one or more processors 102 (only one is shown in FIG. 3), where the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU), a programmable logic device (FPGA), or a graphics processor (GPU), and a memory 104 for storing data.
  • the above-mentioned mobile terminal may also include a transmission device 106 and an input/output device 108 for communication functions.
  • the structure shown in FIG. 3 is only for illustration, and does not limit the structure of the above-mentioned mobile terminal.
  • the mobile terminal may further include more or less components than those shown in FIG. 3 , or have a different configuration than that shown in FIG. 3 .
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the intelligent pipeline processing method in the embodiments of the present disclosure.
  • the processor 102 runs the computer programs stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described methods.
  • Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include memory located remotely from the processor 102, and these remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a bearer network, a mobile communication network, and combinations thereof.
  • Transmission means 106 are used to receive or transmit data via a network.
  • the specific example of the above-mentioned network may include a wireless network provided by a communication provider of the mobile terminal.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station or an optical fiber so as to communicate with other transmission devices.
  • the transmission device 106 may be an optical module used to communicate with other transmission devices through optical fibers.
  • FIG. 4 is a first flowchart of the intelligent pipeline processing method according to an embodiment of the present disclosure. As shown in FIG. 4, the method is applied to the second unit, and the process includes the following steps:
  • Step S402: receiving the operating environment information of the intelligent pipeline sent by the first unit;
  • Step S404: constructing the intelligent pipeline according to the operating environment information, and simulating and training the intelligent model in the intelligent pipeline;
  • Step S406: deploying the intelligent model and the intelligent pipeline.
  • step S402 may specifically include:
  • the operating environment information sent by the first unit is received through the client context.
  • step S404 may specifically include:
  • the intelligent application requirements are determined based on the operating environment information; further, the intelligent application requirements may be determined according to data sources, data characteristics of the data sources, execution units, configuration commands of the execution units, resources, and operation and maintenance policies, wherein the operating environment information includes the data sources, data characteristics, execution units, configuration commands, resources, and operation and maintenance strategies;
  • the intelligent pipeline is constructed according to the requirements of the intelligent application.
  • the method further includes: constructing a simulated network according to the operating environment information, wherein the simulated network is used to run the intelligent model.
  • the above-mentioned step S406 may specifically include: running the intelligent model in the simulated network, and obtaining an operation result; and deploying the intelligent model and the intelligent pipeline when the operation result is normal.
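The "run in the simulated network, deploy only when the result is normal" gate of step S406 can be sketched as follows; the function names and the stand-in "normal" check are assumptions for illustration, not the patent's definition of a normal result.

```python
# Hedged sketch of the simulate-then-deploy gate from step S406.
# "Normal" is approximated here as: the model ran without error and
# produced a defined output for every simulated input.

def run_in_simulated_network(model, simulated_inputs):
    """Run the intelligent model against the simulated network;
    return (result, is_normal)."""
    try:
        result = [model(x) for x in simulated_inputs]
    except Exception:
        return None, False
    return result, all(r is not None for r in result)

def deploy_if_normal(model, simulated_inputs, deploy):
    """Deploy the model only when the simulated run is normal."""
    result, ok = run_in_simulated_network(model, simulated_inputs)
    if ok:
        deploy(model)
    return ok

deployed = []
ok = deploy_if_normal(lambda x: x * 2, [1, 2, 3], deployed.append)
print(ok, len(deployed))  # True 1
```

A faulty model (one that raises, or returns None) would fail the gate and never reach the deploy step.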
  • step S406 may further include: deploying the intelligent model and the intelligent pipeline on computing resources, storage resources and network resources controlled by the first unit.
  • a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and a binding relationship between the SINK and the execution unit of the first unit, are established.
  • the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit, may be established through an agent; or these binding relationships may be established through the client context.
  • the binding relationship between the SRC and the data source may also be established by establishing an access interface between the SRC and the data source, and the binding relationship between the SINK and the execution unit by establishing an access interface between the SINK and the execution unit.
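As a sketch under stated assumptions (all class and method names below are invented for illustration), "establishing an access interface" can be read as handing the SRC a callable for querying the data source and handing the SINK a callable for configuring the execution unit:

```python
# Illustrative direct binding: SRC <-> data source, SINK <-> execution unit.

class DataSource:
    """Stand-in for a data source such as the controller's database."""
    def query(self):
        return ["topology-record"]

class ExecutionUnit:
    """Stand-in for an execution unit such as the NCC."""
    def __init__(self):
        self.commands = []
    def configure(self, cmd):
        self.commands.append(cmd)

class SRC:
    def bind(self, data_source):
        # The access interface is simply the data source's query method.
        self._access = data_source.query
    def pull(self):
        return self._access()

class SINK:
    def bind(self, execution_unit):
        self._access = execution_unit.configure
    def push(self, cmd):
        self._access(cmd)

src, sink = SRC(), SINK()
ds, eu = DataSource(), ExecutionUnit()
src.bind(ds)
sink.bind(eu)
sink.push("initiate-connection-creation")
print(src.pull(), eu.commands)
```

The indirect variants (agent or client context) would interpose another object between the node and the bound component, leaving the node-side interface unchanged.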
  • the intelligent pipeline includes an artificial intelligence pipeline and a machine learning pipeline
  • the intelligent model includes an artificial intelligence model and a machine learning model
  • FIG. 5 is a second flowchart of the intelligent pipeline processing method according to an embodiment of the present disclosure. As shown in FIG. 5, the method is applied to the first unit, and the process includes the following steps:
  • Step S502: sending the operating environment information of the intelligent pipeline to the second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and the simulation network, simulate and train the intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
  • step S502 may specifically include: receiving a query request, sent by the second unit, for querying the operating environment information; creating a client context according to the query request; and sending the operating environment information to the second unit through the client context.
  • the present embodiment is described in detail below by taking the first unit as an SDN controller and the second unit as an MLFO as an example.
  • FIG. 6 is a schematic diagram of the relationship between MLFO and a management and control system MCS (which may be an SDN controller or an ASON control plane) according to this embodiment.
  • MCS: management and control system
  • a network client requests MLFO to dynamically monitor the virtual network (VN);
  • after MLFO receives the network client's request, it queries the management and control system MCS for ML operating environment information;
  • in this embodiment, the management and control system MCS is the SDN controller.
  • the SDN controller creates a client context ClientContext, and communicates with MLFO through the ClientContext.
  • ClientContext is the context provided by the SDN controller for serving clients (here, MLFO).
  • FIG. 7 is a flowchart of implementing a machine learning pipeline according to the present embodiment, as shown in FIG. 7 , including:
  • Step S702: the SDN controller provides the machine learning pipeline (MLP) operating environment information to the MLFO;
  • the Machine Learning Pipeline Runtime Environment information describes the configuration information for simulating the MLP and network, and running the MLP.
  • the operating environment information includes:
  • Data sources and data characteristics: data sources include components in the SDN controller that provide data for analysis, such as databases, as well as transmission resources controlled by the SDN controller that can provide data for analysis; data characteristics include the data type (such as topology, link, connection, alarm, performance, etc.), the transmission method, and the bandwidth.
  • for example, the database provides data types such as the historical and current topology, link, and connection configuration of the VN; historical data can be transmitted in batches, current data can be transmitted upon changes, and bandwidth can be ignored when it does not affect data transmission.
  • Execution units and configuration commands: the execution units include the components in the SDN controller that can execute the strategy or configuration output by the MLP, as well as other SDN controllers or transmission resources that are controlled by and configurable from the SDN controller; configuration commands include the command type, interface parameters, etc.
  • for example, the execution unit includes the NCC in the SDN controller, which is configured to receive dynamic connection scheduling requests; the corresponding configuration command includes initiating connection creation, and the interface parameters include the new bandwidth.
  • other SDN controllers can be used as execution units when they can receive and execute the configuration output by the MLP.
  • the transmission resources controlled by the SDN controller, such as transmission network element equipment, can also receive and execute the configuration output by the MLP (such as link resource status or time-slot cross-connects), and can likewise serve as execution units.
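The execution-unit/configuration-command pairing described above can be sketched with two small records; the field names, the `can_execute` check, and the `"10G"` value are illustrative assumptions, not interface definitions from the patent.

```python
# Sketch of an execution-unit configuration command: the NCC accepts an
# "initiate connection creation" command whose interface parameter is the
# new bandwidth. All names/values are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ConfigCommand:
    command_type: str        # e.g. "initiate-connection-creation"
    interface_params: dict   # e.g. {"new_bandwidth": "10G"}

@dataclass
class ExecutionUnit:
    name: str                # e.g. "NCC"
    supported: tuple         # command types this unit can execute

    def can_execute(self, cmd: ConfigCommand) -> bool:
        return cmd.command_type in self.supported

ncc = ExecutionUnit(name="NCC", supported=("initiate-connection-creation",))
cmd = ConfigCommand("initiate-connection-creation", {"new_bandwidth": "10G"})
print(ncc.can_execute(cmd))  # True
```

A transmission network element acting as an execution unit would simply advertise a different `supported` set (e.g. link-resource-status or time-slot commands).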
  • Resources: resources include computing resources, storage resources, and network resources that can be used or controlled by the SDN controller; the MLP can be deployed to these resources.
  • the SDN controller can be deployed in a cloud environment, and using computing resources, storage resources, and network resources in the cloud, the MLP and the SDN controller can be deployed in the same cloud environment.
  • computing resources and storage resources can be allocated to the SDN controller at the same time, and these resources can be used to deploy MLP.
  • for example, if the operation and maintenance preference requires using the resources of the SDN controller, then when the MLP is subsequently deployed, it cannot be deployed outside these resources.
  • the SDN controller can describe and transfer the operating environment information through a Representational State Transfer (REST) interface, by remotely calling an RPC message interface, or through configuration based on the Network Configuration (NETCONF) protocol.
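For illustration, the operating environment information carried over such a REST interface might look like the JSON document below; the schema, key names, and values are assumptions made for this sketch, not a format defined by the patent or by any standard.

```python
# Hypothetical JSON encoding of MLP operating environment information
# (data sources, execution units, resources, O&M policy) as the SDN
# controller might serve it over REST. Schema is an assumption.

import json

operating_environment = {
    "data_sources": [
        {"name": "db",
         "data_types": ["topology", "link", "connection", "alarm", "performance"],
         "transmission": {"history": "batch", "current": "on-change"}}
    ],
    "execution_units": [
        {"name": "NCC",
         "commands": [{"type": "initiate-connection-creation",
                       "params": ["new_bandwidth"]}]}
    ],
    "resources": {"compute": "cloud", "storage": "cloud", "network": "cloud"},
    "om_policy": {"deploy_outside_controller": False},
}

# The controller would serialize this as the REST response body...
body = json.dumps(operating_environment)
# ...and an MLFO client would parse it back:
parsed = json.loads(body)
print(parsed["execution_units"][0]["name"])  # NCC
```

The same information could equally be modeled in YANG and delivered via NETCONF, as the bullet above notes.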
  • Step S704: MLFO builds the machine learning pipeline and a simulated lower-layer network, and trains the ML model in simulation;
  • one method for MLFO to build a machine learning pipeline is to decompose the machine learning intent input by the customer into constraint information for building the pipeline.
  • the constraint information includes: specifying the source node (SRC) and a specific data format; configuring a data collection command for the collector node (collector C) so that it can collect data from one or more source nodes, such as collecting the network topology; specifying the preprocessing algorithm of the preprocessor node (preprocessor PP), such as abnormal-data filtering, so that the preprocessed data can be used or consumed by the machine learning model; specifying the machine learning model, such as supervised learning with a specific algorithm and parameters, for example a Graph Neural Network (GNN), which represents the data processing rules and logic; and specifying that the policy node (policy P) generates a concrete application policy and applies it to the output of the model node.
  • the distributor node is responsible for determining the sink node (sink SINK) and distributing the model output to it.
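The intent-to-constraint decomposition above can be sketched as a simple mapping; the keys and values below are assumptions chosen to mirror the listed constraint items, not a format the patent specifies.

```python
# Illustrative decomposition of a customer's ML intent into per-node
# constraint information for building the MLP. All keys/values are
# assumptions for this sketch.

def decompose_intent(intent: str) -> dict:
    constraints = {
        "src": {"data_format": "network-topology-json"},
        "collector": {"collect_command": "collect-network-topology"},
        "preprocessor": {"algorithm": "abnormal-data-filtering"},
        "model": {"learning": "supervised", "algorithm": "GNN"},
        "policy": {"apply_to": "model-output"},
        "distributor": {"deliver_to": "SINK"},
        "intent": intent,
    }
    return constraints

c = decompose_intent("dynamically monitor the virtual network VN")
print(sorted(c))
```

A real MLFO would derive these values from the intent (e.g. via templates or an intent grammar) rather than hard-coding them as done here.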
  • a method for MLFO to construct a simulated lower-layer network, such as a multi-node Optical Transport Network (OTN), requires specifying the nodes and the inter-node links (ports, bandwidth, link cost, etc.); it also needs to be made clear which data sources in this lower-layer network can provide data, such as the databases in the nodes, and which nodes and interfaces execute the configuration commands of the machine learning model, such as the interface for adjusting the link cost between two nodes.
  • OTN: Optical Transport Network
  • Methods for training ML models include supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, etc.
  • for supervised learning, annotated sample data needs to be provided to the model; based on these data, supervised learning can generate a mapping relationship between samples and targets. The sample data can come from existing operating networks or from simulated networks.
  • based on the operating environment information provided by the SDN controller, MLFO determines the appropriate data sources and data characteristics, execution units and configuration commands, resources, operation and maintenance preferences, and ML model to meet the needs of the ML application, and builds the machine learning pipeline and the simulated underlying network;
  • the simulated MLP and the simulated lower-layer network are deployed on the computing, storage and network resources controlled by the SDN controller;
  • the following information is determined to meet the ML application requirements, and a machine learning pipeline and a simulated lower-layer network are constructed:
  • the determined execution unit is the NCC in the SDN controller, and the configuration command is to initiate a connection-routing adjustment; the computing and storage resources used for building the machine learning pipeline and simulating the lower-layer network are determined, such as a computer with 8 GB of video memory and 4 GPUs (virtual machines can also be used); if the operation and maintenance preference allows the collected data and trained models to be deployed outside the SDN controller's computing and storage resources, they can be deployed to computing and storage resources other than the SDN controller's.
  • otherwise, the machine learning pipeline can only be built, and the underlying network simulated, within the computing and storage resources provided by the SDN controller. The machine learning model is determined, such as using a graph neural network (specifically, a Graph Spatial-Temporal Network) for model training, to predict from the given data which connections need to be rerouted.
  • Step S706: MLFO deploys the trained ML model and the MLP.
  • the trained ML model and MLP are deployed to a given resource according to the operating environment information.
  • the deployment process includes integrating the ML model into the computing resources determined in the operating environment information, and integrating the logical nodes of the MLP into those computing resources, wherein the binding relationship between the source node SRC in the MLP and the data source is established, as well as the binding relationship between the SINK node and the execution unit.
  • the binding relationship between the source node SRC and the data source, and the binding relationship between the SINK and the execution unit, can be established directly, or indirectly through an agent.
  • an interface for accessing the data source through the SRC is provided, and an interface for delivering configuration commands to the execution unit through the SINK can be provided.
  • in this embodiment, the data source is the database in the SDN controller, and the execution unit is the NCC;
  • the binding relationship between the SRC and the database in the MLP and the binding relationship between the SINK and the NCC are established.
  • Figure 8 is a schematic diagram of the connection relationship between the source and sink nodes in the machine learning pipeline and the SDN controller according to the present embodiment. As shown in Figure 8, an access interface is established between the SRC and the database, and an access interface is established between the SINK and the NCC; the binding can also be implemented indirectly through an agent.
  • Figure 9 is a schematic diagram of the deployment of the machine learning pipeline according to this embodiment. As shown in Figure 9, the SRC and SINK establish access interfaces with the database and the NCC, respectively, through a common agent.
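The Figure-9 style indirect binding can be sketched as one agent object mediating both directions; the class and method names are assumptions made for this illustration.

```python
# Sketch of indirect binding: SRC and SINK reach the database and the NCC
# through one common agent. Names are illustrative, not from the patent.

class CommonAgent:
    def __init__(self, database, ncc_log):
        self._db = database
        self._ncc = ncc_log
    def fetch(self):            # access interface used by the SRC
        return self._db["records"]
    def execute(self, cmd):     # access interface used by the SINK
        self._ncc.append(cmd)

db = {"records": ["link-state"]}
ncc_log = []
agent = CommonAgent(db, ncc_log)

# SRC pulls through the agent; SINK pushes through the same agent.
records = agent.fetch()
agent.execute("initiate-connection-routing-adjustment")
print(records, ncc_log)
```

Compared with direct binding, only the agent needs to know the concrete database and NCC interfaces; the MLP nodes see a stable facade.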
  • the binding relationship between the SRC and the data source, and the binding relationship between the SINK and the execution unit are established through the client context ClientContext.
  • Figure 10 is a schematic diagram of the connection relationship between the machine learning pipeline and the SDN controller through the client context according to the present embodiment. As shown in Figure 10, the SRC and SINK in the MLP establish access interfaces, through the ClientContext, with the data source (in this embodiment, the database) and the execution unit (in this embodiment, the NCC), respectively.
  • in step S706, when MLFO performs the deployment, the trained ML model and the MLP are deployed to the given resources according to the operating environment information.
  • FIG. 11 is a schematic diagram of the connection relationship between the machine learning pipeline according to the present embodiment and the SDN controller through the client context, as shown by the dotted line in FIG. 11 , indicating the scope of the SDN controller.
  • the machine learning pipeline, the simulated network, the MCS, and the resources are all deployed within the scope of the SDN controller.
  • FIG. 12 is a first block diagram of the intelligent pipeline processing apparatus according to this embodiment; as shown in FIG. 12, the apparatus includes:
  • the receiving module 122 is configured to receive the operating environment information of the intelligent pipeline sent by the first unit;
  • the building module 124 is configured to construct the intelligent pipeline according to the operating environment information, and simulate and train the intelligent model in the intelligent pipeline;
  • the deployment module 126 is configured to deploy the intelligent model and the intelligent pipeline.
  • the above receiving module 122 includes:
  • a sending submodule configured to send a query request for querying the operating environment information to the first unit, wherein the query request is used to instruct the first unit to create a client context
  • the receiving sub-module is configured to receive the operating environment information sent by the first unit through the client context.
  • the building module 124 includes:
  • a determining submodule configured to determine the intelligent application requirements based on the operating environment information;
  • a construction sub-module is configured to construct the intelligent pipeline according to the requirements of the intelligent application.
  • the determining sub-module is further configured to determine the intelligent application requirements according to data sources, data characteristics of the data sources, execution units, configuration commands of the execution units, resources, and operation and maintenance policies, wherein the operating environment information includes the data sources, data characteristics, execution units, configuration commands, resources, and operation and maintenance strategies.
  • the building module 124 is further configured to construct a simulated network according to the operating environment information, wherein the simulated network is used for running the intelligent model.
  • the deployment module 126 is further configured to run the intelligent model in the simulated network to obtain an operation result, and to deploy the intelligent model and the intelligent pipeline when the operation result is normal.
  • the deployment module 126 is further configured to deploy the intelligent model and the intelligent pipeline on computing resources, storage resources, and network resources controlled by the first unit.
  • the apparatus further includes:
  • an establishing module configured to, in the process of deploying the intelligent model and the intelligent pipeline, establish the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit.
  • the establishing module includes:
  • the first establishment submodule is configured to establish the binding relationship between the SRC and the data source of the first unit in the intelligent pipeline, and the binding relationship between the SINK and the execution unit of the first unit through an agent; or
  • the second establishment sub-module is configured to establish, through the client context, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit.
  • the establishing module includes:
  • the third establishing sub-module is configured to establish the binding relationship between the SRC and the data source by establishing an access interface between the SRC and the data source;
  • the fourth establishing sub-module is configured to establish the binding relationship between the SINK and the execution unit by establishing an access interface between the SINK and the NCC.
  • the intelligent pipeline includes an artificial intelligence pipeline and a machine learning pipeline, and the intelligent model includes an artificial intelligence model and a machine learning model.
  • FIG. 13 is a second block diagram of the intelligent pipeline processing apparatus according to this embodiment. As shown in FIG. 13, the apparatus includes:
  • the sending module 132 is configured to send the operating environment information of the intelligent pipeline to the second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and the simulated network, simulate and train the intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
  • the above-mentioned sending module 132 includes:
  • a receiving sub-module, configured to receive a query request, sent by the second unit, for querying the operating environment information, and to create a client context for communication according to the query request;
  • the sending sub-module is configured to send the operating environment information to the second unit through the client context.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
  • the above-mentioned computer-readable storage medium may include, but is not limited to, various media that can store computer programs, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or a CD-ROM.
  • An embodiment of the present disclosure also provides an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above modules or steps of the present disclosure can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed across a network composed of multiple computing devices; they can be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described can be performed in a different order than here, or they can be respectively made into individual integrated-circuit modules, or a plurality of the modules or steps among them can be made into a single integrated-circuit module.
  • the present disclosure is not limited to any particular combination of hardware and software.

Abstract

Provided are an intelligent pipeline processing method and apparatus, a storage medium, and an electronic apparatus. The method includes: receiving operating environment information of an intelligent pipeline sent by a first unit (S402); constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline; and deploying the intelligent model and the intelligent pipeline (S406). This can solve the problem in the related art of how to introduce an MLFO into a management control system (MCS) to deploy machine learning functions: communication with the first unit is established, the operating environment information is received, and deployment is performed according to the operating environment information, thereby introducing the MLFO into the MCS to deploy machine learning functions.

Description

Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus
Cross-reference to related applications
The present disclosure is based on, and claims priority to, Chinese patent application CN202110169290.2, entitled "Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus" and filed on February 7, 2021, the entire disclosure of which is incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of communications, and in particular to an intelligent pipeline processing method and apparatus, a storage medium, and an electronic apparatus.
Background
In a bearer network, service connections can be created through control, providing connection services. Common control approaches include distributed control and centralized control.
A distributed controller, such as an Automatically Switched Optical Network (ASON), completes distributed connection scheduling through interaction between control components, based on mechanisms such as signaling, routing, and auto-discovery. It offers the advantages of distributed processing and flexible dynamic network control. Relative to the management plane and the transport plane, the ASON is referred to as the control plane.
A centralized controller, such as a Software Defined Network (SDN) controller, is based on a logically centralized control architecture and completes centralized connection scheduling through control components such as a Connection Controller (CC) and a Routing Controller (RC). Because it holds global network-resource information and can jointly handle all connection requests, it can optimize resource allocation as a whole.
FIG. 1 is a schematic diagram of SDN-controller-related components in the related art. As shown in FIG. 1, these include a Network Call Controller (NCC), a Link Resources Manager (LRM), a connection controller CC, a routing controller RC, a Notification Component, a Termination and Adaptation Performer (TAP) component, and forwarding-plane network elements connected to the TAP. The ASON control plane also has these functional components.
Considering that network management functions and control functions are essentially the same, the above distributed controllers and centralized controllers, together with network management systems, are collectively referred to as a Management Control System (MCS). An MCS includes multiple management control components (MC Components), which perform management control functions (hereafter, for brevity, management control components are referred to simply as management-control components).
In the SDN controller architecture, a Client Context represents a component in a server-layer controller that handles the interaction for management-control services between client and server controllers; a Server Context represents a component in a client-layer controller that handles the interaction for management-control services between client and server controllers.
Artificial Intelligence (AI) and Machine Learning (ML) are driving the intelligentization of software. As AI/ML develops, introducing AI, and especially ML, into a distributed control plane or a centralized controller will greatly raise the intelligence level of the control network and thereby improve the efficiency of service scheduling and maintenance. Introducing AI/ML into a distributed control platform or a centralized controller will affect the interaction relationships and interfaces of existing management-control system components.
FIG. 2 is an overall machine learning architecture diagram in the related art. As shown in FIG. 2, it includes a management subsystem, an ML sandbox subsystem, an ML pipeline subsystem, and ML underlay networks.
In the management subsystem, a Machine Learning Function Orchestrator (MLFO) is responsible for configuring the ML pipeline and the simulated ML underlay network in the ML sandbox subsystem, and for performing model training, testing, and verification on that basis. The MLFO generates the ML pipeline and the simulated ML underlay network in the sandbox subsystem.
In general, a Machine Learning Pipeline (MLP) includes logical nodes such as a source SRC, a collector C, a pre-processor PP, a model M, a policy P, a distributor D, and a sink SINK. The source node (SRC) provides data input to the MLP. The collector node (C) collects data from one or more source nodes. The pre-processor node (PP) pre-processes the data so that it can be used or consumed by the machine learning model. The model node expresses the data-processing rules and logic. The policy node (P) generates concrete policies and applies them to the output of the model node. The distributor node (D) determines the sink node (SINK) and distributes the model output to it. The sink node executes the model output.
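The chain of logical nodes described above can be sketched as a minimal, illustrative pipeline. Every function name and data value below is an assumption for illustration, not something defined by the specification:

```python
# Hypothetical sketch of the MLP logical-node chain:
# SRC -> Collector -> Pre-processor -> Model -> Policy -> Distributor -> SINK.

def src():
    """Source node: provides data input to the pipeline."""
    return [{"link": "A-B", "load": 0.92}, {"link": "B-C", "load": 0.40}]

def collector(sources):
    """Collector node: gathers data from one or more source nodes."""
    return [item for source in sources for item in source()]

def preprocessor(records):
    """Pre-processor node: filters abnormal data so the model can consume it."""
    return [r for r in records if 0.0 <= r["load"] <= 1.0]

def model(records, threshold=0.8):
    """Model node: applies the data-processing rule (here, a threshold check)."""
    return [r["link"] for r in records if r["load"] > threshold]

def policy(overloaded):
    """Policy node: turns model output into concrete configuration actions."""
    return [{"action": "reroute", "link": link} for link in overloaded]

def distributor(actions, sinks):
    """Distributor node: decides the sink and forwards each action to it."""
    for action in actions:
        sinks[action["action"]](action)

executed = []
sinks = {"reroute": executed.append}   # SINK node: executes the model output

distributor(policy(model(preprocessor(collector([src])))), sinks)
print(executed)  # actions produced for links whose load exceeds the threshold
```

Each stage maps one-to-one onto a node of the MLP; in a real deployment the SRC and SINK would be bound to a controller's data source and execution unit rather than to in-memory lists.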
In the existing machine learning architecture, how to introduce an MLFO into the management control system MCS to deploy machine learning functions, and which constraints need to be considered, are not clearly described.
Summary
Embodiments of the present disclosure provide an intelligent pipeline processing method and apparatus, a storage medium, and an electronic apparatus, to at least solve the problem in the related art of how to introduce an MLFO into a management control system MCS to deploy machine learning functions.
According to one embodiment of the present disclosure, an intelligent pipeline processing method applied to a second unit is provided, the method including:
receiving operating environment information of an intelligent pipeline sent by a first unit;
constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline; and
deploying the intelligent model and the intelligent pipeline.
According to another embodiment of the present disclosure, an intelligent pipeline processing method applied to a first unit is also provided, the method including:
sending operating environment information of an intelligent pipeline to a second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulated network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
According to another embodiment of the present disclosure, an intelligent pipeline processing apparatus applied to a second unit is also provided, the apparatus including:
a receiving module, configured to receive operating environment information of an intelligent pipeline sent by a first unit;
a construction module, configured to construct the intelligent pipeline according to the operating environment information, and to simulate and train an intelligent model in the intelligent pipeline; and
a deployment module, configured to deploy the intelligent model and the intelligent pipeline.
According to another embodiment of the present disclosure, an intelligent pipeline processing apparatus applied to a first unit is also provided, the apparatus including:
a sending module, configured to send operating environment information of an intelligent pipeline to a second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulated network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
According to yet another embodiment of the present disclosure, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any one of the above method embodiments.
According to yet another embodiment of the present disclosure, an electronic apparatus is also provided, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
In the embodiments of the present disclosure, operating environment information of an intelligent pipeline sent by a first unit is received; the intelligent pipeline is constructed according to the operating environment information, and an intelligent model in the intelligent pipeline is simulated and trained; and the intelligent model and the intelligent pipeline are deployed. This can solve the problem in the related art of how to introduce an MLFO into a management control system MCS (including an SDN controller or an ASON control plane) to deploy machine learning functions: communication with the first unit is established, the operating environment information is received, and deployment is performed according to the operating environment information, thereby introducing the MLFO into the MCS to deploy machine learning functions.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of SDN-controller-related components in the related art;
FIG. 2 is an overall machine learning architecture diagram in the related art;
FIG. 3 is a hardware block diagram of a mobile terminal for an intelligent pipeline processing method according to an embodiment of the present disclosure;
FIG. 4 is a first flowchart of an intelligent pipeline processing method according to an embodiment of the present disclosure;
FIG. 5 is a second flowchart of an intelligent pipeline processing method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the relationship between the MLFO and the SDN controller according to this embodiment;
FIG. 7 is a flowchart of implementing a machine learning pipeline according to this embodiment;
FIG. 8 is a schematic diagram of the connection between the source/sink nodes of the machine learning pipeline and the SDN controller according to this embodiment;
FIG. 9 is a schematic diagram of machine learning pipeline deployment according to this embodiment;
FIG. 10 is a schematic diagram of the connection between the machine learning pipeline and the SDN controller through an agent according to this embodiment;
FIG. 11 is a schematic diagram of the connection between the machine learning pipeline and the SDN controller through a client context according to this embodiment;
FIG. 12 is a first block diagram of an intelligent pipeline processing apparatus according to this embodiment;
FIG. 13 is a second block diagram of an intelligent pipeline processing apparatus according to this embodiment.
Detailed Description
Embodiments of the present disclosure will be described in detail below with reference to the drawings and in combination with the embodiments.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence.
The method embodiments provided in the embodiments of this application may be executed in a mobile terminal, a computer terminal, a network device, or a similar computing apparatus. Taking execution on a mobile terminal as an example, FIG. 3 is a hardware block diagram of a mobile terminal for an intelligent pipeline processing method according to an embodiment of the present disclosure. As shown in FIG. 3, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 3; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU, a programmable logic device FPGA, or a graphics processor GPU) and a memory 104 for storing data. The mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. Those of ordinary skill in the art will understand that the structure shown in FIG. 3 is merely illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in FIG. 3, or have a configuration different from that shown in FIG. 3.
The memory 104 may be used to store computer programs, for example software programs and modules of application software, such as the computer program corresponding to the intelligent pipeline processing method in the embodiments of the present disclosure. By running the computer program stored in the memory 104, the processor 102 performs various functional applications and data processing, i.e., implements the above method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, and such remote memory may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, bearer networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network Interface Controller (NIC), which can be connected to other network devices through a base station or optical fiber so as to communicate with other transmission devices. In one example, the transmission device 106 may be an optical module used to communicate with other transmission devices through optical fiber.
This embodiment provides an intelligent pipeline processing method running on the above mobile terminal or network architecture. FIG. 4 is a first flowchart of an intelligent pipeline processing method according to an embodiment of the present disclosure. As shown in FIG. 4, applied to a second unit, the flow includes the following steps:
Step S402: receiving operating environment information of an intelligent pipeline sent by a first unit;
Step S404: constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline;
Step S406: deploying the intelligent model and the intelligent pipeline.
Through the above steps S402 to S406, the problem in the related art of how to introduce an MLFO into the management control system MCS to deploy machine learning functions can be solved: communication with the first unit is established, the operating environment information is received, and deployment is performed according to the operating environment information, thereby introducing the MLFO into the MCS to deploy machine learning functions.
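As a rough, non-normative sketch, the three steps S402 to S406 performed by the second unit could be arranged as follows; all class and function names are hypothetical, not defined by the specification:

```python
# Minimal sketch of steps S402-S406 as seen from the second unit (e.g. an MLFO).
# The stand-in FirstUnit and all field names are illustrative assumptions.

class FirstUnit:
    """Stand-in for the first unit (e.g. an SDN controller)."""
    def query_environment_info(self):
        return {"data_sources": ["db"], "execution_units": ["NCC"],
                "resources": ["vm-1"], "om_policy": "use-controller-resources"}

def process_intelligent_pipeline(first_unit):
    env_info = first_unit.query_environment_info()          # S402: receive env info
    pipeline = {"nodes": ["SRC", "C", "PP", "M", "P", "D", "SINK"],
                "env": env_info}                            # S404: build pipeline
    model = {"trained": True}                               #       and train model (stubbed)
    return {"pipeline": pipeline, "model": model,
            "deployed_on": env_info["resources"]}           # S406: deploy both

deployment = process_intelligent_pipeline(FirstUnit())
print(deployment["deployed_on"])  # -> ['vm-1']
```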
In this embodiment, the above step S402 may specifically include:
sending, to the first unit, a query request for querying the operating environment information, wherein the query request is used to instruct the first unit to create a client context; and
receiving, through the client context, the operating environment information sent by the first unit.
In this embodiment, the above step S404 may specifically include:
determining intelligent application requirements based on the operating environment information; further, the intelligent application requirements may be determined according to the data sources, the data characteristics of the data sources, the execution units, the configuration commands of the execution units, the resources, and the operation and maintenance policies, wherein the operating environment information includes the data sources, data characteristics, execution units, configuration commands, resources, and operation and maintenance policies; and
constructing the intelligent pipeline according to the intelligent application requirements.
In an optional embodiment, the method further includes: constructing a simulated network according to the operating environment information, wherein the simulated network is used for running the intelligent model. Correspondingly, the above step S406 may specifically include: running the intelligent model in the simulated network and obtaining a running result; and deploying the intelligent model and the intelligent pipeline when the running result is normal.
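The "run in the simulated network, then deploy only if the result is normal" logic of this optional embodiment can be sketched as follows; the function names, and the choice of treating any exception as an abnormal result, are assumptions for illustration:

```python
def run_in_simulated_network(model, simulated_network):
    """Run the intelligent model against the simulated network, report status."""
    try:
        result = model(simulated_network)
        return {"status": "normal", "result": result}
    except Exception as exc:          # any failure marks the run abnormal
        return {"status": "abnormal", "error": str(exc)}

def deploy_if_normal(model, simulated_network, deploy):
    """Deploy the model only when the simulated run produced a normal result."""
    outcome = run_in_simulated_network(model, simulated_network)
    if outcome["status"] == "normal":
        deploy(model)
        return True
    return False

deployed = []
ok = deploy_if_normal(lambda net: len(net["links"]),
                      {"links": ["A-B", "B-C"]}, deployed.append)
print(ok, len(deployed))  # -> True 1
```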
In another optional embodiment, the above step S406 may further include: deploying the intelligent model and the intelligent pipeline on computing resources, storage resources, and network resources controlled by the first unit.
In an exemplary embodiment, in the process of deploying the intelligent model and the intelligent pipeline, a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and a binding relationship between the SINK and the execution unit of the first unit, are established.
In an optional embodiment, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit, may be established through an agent; alternatively, these binding relationships may be established through a client context.
In another optional embodiment, the binding relationship between the SRC and the data source may also be established by establishing an access interface between the SRC and the data source, and the binding relationship between the SINK and the execution unit may be established by establishing an access interface between the SINK and the NCC.
In this embodiment, the intelligent pipeline includes an artificial intelligence pipeline and a machine learning pipeline, and the intelligent model includes an artificial intelligence model and a machine learning model.
According to another aspect of this embodiment, an intelligent pipeline processing method applied to a first unit is also provided. FIG. 5 is a second flowchart of an intelligent pipeline processing method according to an embodiment of the present disclosure. As shown in FIG. 5, the flow includes the following step:
Step S502: sending operating environment information of an intelligent pipeline to a second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulated network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
In an exemplary embodiment, the above step S502 may specifically include:
receiving a query request, sent by the second unit, for querying the operating environment information, and creating a client context for communication according to the query request; and sending the operating environment information to the second unit through the client context.
This embodiment is described in detail below, taking the first unit being an SDN controller and the second unit being an MLFO as an example.
FIG. 6 is a schematic diagram of the relationship between the MLFO and the management control system MCS (which may be an SDN controller or an ASON control plane) according to this embodiment. As shown in FIG. 6, in this embodiment a network client asks the MLFO to dynamically monitor the characteristics of the traffic carried in a virtual network VN and, once it is predicted that the deployment of that traffic will push the capacity of certain links in the transport resources beyond a specified threshold, to have the MLP trigger dynamic adjustment of service connections, thereby preparing the transport resources for the predicted traffic.
After the MLFO receives the network client's request, the MLFO queries the management control system MCS for ML operating environment information. When the MCS is an SDN controller, the SDN controller creates a client context (ClientContext) and communicates with the MLFO through that ClientContext. The ClientContext is the context the SDN controller provides for serving a client (here, the MLFO).
FIG. 7 is a flowchart of implementing a machine learning pipeline according to this embodiment. As shown in FIG. 7, it includes:
Step S702: the SDN controller provides machine learning pipeline (MLP) operating environment information to the MLFO.
The machine learning pipeline operating environment information describes the configuration information for simulating the MLP and the network, and for running the MLP.
In this embodiment, the operating environment information includes:
1) Data sources and data characteristics: data sources include components in the SDN controller that provide data for analysis, such as databases, and also transport resources that are controlled by the SDN controller and can provide data for analysis. Data characteristics include data types (such as topology, links, connections, alarms, and performance), transfer mode, and bandwidth. In this embodiment, the database provides data types such as the VN's historical and current topology, links, and connection configurations; historical data can be transferred in batch mode, and current data can be transferred after it changes; bandwidth may be disregarded when it does not affect data delivery.
2) Execution units and configuration commands: execution units include components in the SDN controller that can execute the policies or configurations output by the MLP, and also other SDN controllers or transport resources that are controlled by the SDN controller and can be configured. Configuration commands include command types, interface parameters, and so on.
In this embodiment, the execution unit includes the NCC in the SDN controller, configured to receive dynamic connection scheduling requests; the corresponding configuration command includes initiating connection creation, and the interface parameters include the new bandwidth.
Other components inside the SDN controller, such as the RC, CC, and TAP, can act as execution units when they can receive and execute configurations output by the MLP. Transport resources controlled by the SDN controller, such as transport network element devices, can also receive and execute configurations output by the MLP (such as link resource states and timeslot cross-connects) and can likewise act as execution units.
3) Resources: these include the computing resources, storage resources, and network resources that the SDN controller can use or control; the MLP can be deployed onto these resources.
In this embodiment, the SDN controller may be deployed in a cloud environment, using the computing, storage, and network resources in the cloud, and the MLP may be deployed in the same cloud environment as the SDN controller.
Alternatively, computing and storage resources may be allocated to the SDN controller at the time it is deployed, and these resources may be used to deploy the MLP.
4) Operation and maintenance preferences or policies: requirements raised, for management or security reasons, on the use of data and resources, such as that data must not leave the controller, or that SDN controller resources should be used preferentially.
In this embodiment, the operation and maintenance preference requires using the SDN controller's resources; therefore, when the MLP is subsequently deployed, it cannot be deployed outside those resources.
The SDN controller may describe and convey the operating environment information through a Representational State Transfer (REST) interface, may convey it through a remote procedure call (RPC) message interface, or may implement this through Network Configuration Protocol (NETCONF) configuration.
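As a non-normative illustration, the four kinds of operating environment information listed above could be serialized into a single JSON document for transfer over a REST or RPC interface; every field name below is an assumption, not defined by the specification:

```python
import json

# Illustrative operating-environment-information document; all field names
# are assumptions for this sketch.
environment_info = {
    "data_sources": [{
        "name": "controller-db",
        "data_types": ["topology", "link", "connection"],
        "transfer": {"history": "batch", "current": "on-change"},
    }],
    "execution_units": [{
        "name": "NCC",
        "commands": [{"type": "create-connection", "params": ["bandwidth"]}],
    }],
    "resources": {"compute": "4xGPU", "storage": "8G", "network": "cloud"},
    "om_policy": {"data_leaves_controller": False,
                  "prefer_controller_resources": True},
}

payload = json.dumps(environment_info)          # what the REST reply would carry
restored = json.loads(payload)
print(restored["execution_units"][0]["name"])   # -> NCC
```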
Step S704: the MLFO constructs the machine learning pipeline and the simulated underlay network, and simulates and trains the ML model.
One method for the MLFO to construct the machine learning pipeline is for the MLFO to decompose the machine learning intent input by the client into concrete constraint information for constructing the machine learning pipeline. The constraint information includes: specifying the source node SRC and the specific data format; configuring data-collection commands on the collector node (C) so that the collector node can collect data, such as network topology, from one or more source nodes; specifying the pre-processing algorithm of the pre-processor node (PP), such as abnormal-data filtering, so that after pre-processing the data can be used or consumed by the machine learning model; and specifying the machine learning model, for example supervised learning with a specific algorithm and parameters such as a Graph Neural Network (GNN), which expresses the data-processing rules and logic. The policy node (P) generates concrete application policies and applies them to the output of the model node. The distributor node (D) determines the sink node (SINK) and distributes the model output (such as configuration commands) to it. The sink node executes the model output.
One method for the MLFO to construct the simulated underlay network, for example constructing a multi-node Optical Transport Network (OTN), requires specifying the nodes and the inter-node links (ports, bandwidth, link cost, etc.); it also requires specifying which data sources in this underlay network can provide data, such as databases in the nodes, and specifying the nodes and interfaces that execute the machine learning model's configuration commands, such as an interface for adjusting the link cost between two nodes.
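A simulated multi-node OTN underlay of the kind described above could be represented minimally as follows; the node names, attribute names, and values are illustrative assumptions:

```python
# Hypothetical simulated OTN underlay: nodes plus inter-node links carrying
# port, bandwidth and link-cost attributes, as the text describes.
simulated_network = {
    "nodes": ["N1", "N2", "N3"],
    "links": [
        {"a": "N1", "z": "N2", "port": 1, "bandwidth_gbps": 100, "cost": 10},
        {"a": "N2", "z": "N3", "port": 2, "bandwidth_gbps": 100, "cost": 20},
    ],
    "data_sources": ["N1/db"],               # elements that can supply data
    "config_interfaces": ["set-link-cost"],  # commands the model may issue
}

def set_link_cost(net, a, z, cost):
    """Configuration interface: adjust the cost of the link between two nodes."""
    for link in net["links"]:
        if {link["a"], link["z"]} == {a, z}:
            link["cost"] = cost
            return True
    return False                             # no such link in the underlay

set_link_cost(simulated_network, "N1", "N2", 5)
print(simulated_network["links"][0]["cost"])  # -> 5
```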
Methods for training the ML model include supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and so on. In supervised learning, labeled sample data must be provided to the model; based on this data, supervised learning can generate the mapping between samples and targets. The sample data may come from an existing running network or from a simulated network.
Based on the operating environment information provided by the SDN controller, suitable data sources and data characteristics, execution units and configuration commands, resources, operation and maintenance preferences, and the ML model are determined to satisfy the ML application requirements, and the machine learning pipeline and the simulated underlay network are constructed.
Preferably, in accordance with the operation and maintenance preferences in the operating environment information, the simulated MLP and the simulated underlay network are deployed on computing, storage, and network resources controlled by the SDN controller.
In this embodiment, based on the operating environment information provided by the SDN controller, the following information is determined to satisfy the ML application requirements and to construct the machine learning pipeline and the simulated underlay network:
data such as the virtual network VN's historical and current topology, links, and connection configurations are obtained from the database in the SDN controller; the execution unit is determined to be the NCC in the SDN controller, with the configuration command being to initiate connection route adjustment; the computing and storage resources to be used for constructing the machine learning pipeline and the simulated underlay network are determined, for example a computer with 8 GB of video memory and 4 GPUs, or a virtual machine; according to the operation and maintenance preference, if the collected data and the trained model are allowed to be deployed on computing and storage resources outside the SDN controller, they may be deployed there; if not, the machine learning pipeline and the simulated underlay network can only be constructed within the computing and storage resources provided by the SDN controller; and the machine learning model is determined, for example using a graph neural network (specifically, a Graph Spatial-Temporal Network) for model training, which, given the data, predicts which connections need rerouting adjustment.
Step S706: the MLFO deploys the trained ML model and the MLP.
During deployment, according to the operating environment information, the trained ML model and the MLP are deployed into the given resources. The deployment process includes integrating the ML model into the computing resources determined in the operating environment information and also integrating the logical nodes of the MLP into those computing resources; in this process, the binding relationship between the source node SRC in the MLP and the data source, and the binding relationship between the SINK node and the execution unit, are established.
Preferably, the binding relationship between the source node SRC and the data source, and the binding relationship between the SINK and the execution unit, may be established directly, or bound indirectly through an agent. The binding relationships provide an interface for accessing the data source through the SRC, and an interface for issuing execution commands to the execution unit through the SINK.
In this embodiment, the data source is the database in the SDN controller and the execution unit is the NCC; the binding relationship between the SRC in the MLP and the database, and the binding relationship between the SINK and the NCC, are established. For direct binding, one implementation is shown in FIG. 8, a schematic diagram of the connection between the source/sink nodes of the machine learning pipeline and the SDN controller according to this embodiment: an access interface is established between the SRC and the database, and an access interface is established between the SINK and the NCC. For indirect binding through an agent, one implementation is shown in FIG. 9, a schematic diagram of machine learning pipeline deployment according to this embodiment: the SRC and the SINK establish access interfaces with the database and the NCC, respectively, through a common agent.
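The agent-based binding style described above can be sketched as follows; all class names are illustrative assumptions, not defined by the specification:

```python
class Database:
    """Stand-in data source inside the controller."""
    def read(self):
        return ["topology-record"]

class NCC:
    """Stand-in execution unit: accepts configuration commands."""
    def __init__(self):
        self.executed = []
    def configure(self, command):
        self.executed.append(command)

class Agent:
    """Common agent: SRC and SINK bind to it instead of the controller directly."""
    def __init__(self, data_source, execution_unit):
        self._source, self._unit = data_source, execution_unit
    def fetch(self):                 # access interface used by the SRC
        return self._source.read()
    def execute(self, command):      # access interface used by the SINK
        self._unit.configure(command)

ncc = NCC()
agent = Agent(Database(), ncc)

# SRC pulls input through its binding; SINK pushes model output through its binding.
records = agent.fetch()
agent.execute({"cmd": "reroute", "based_on": records})
print(ncc.executed)
```

Direct binding would simply drop the `Agent` and give the SRC and SINK their own access interfaces to the database and the NCC.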
Preferably, the binding relationship between the SRC and the data source, and the binding relationship between the SINK and the execution unit, are established through the client context (ClientContext).
FIG. 10 is a schematic diagram of the connection between the machine learning pipeline and the SDN controller through an agent according to this embodiment. As shown in FIG. 10, the SRC and SINK in the MLP, after passing through the ClientContext, establish access interfaces with the data source (in this embodiment, the database) and the execution unit (in this embodiment, the NCC), respectively.
In the above step S706, during deployment the MLFO deploys the trained ML model and the MLP into the given resources according to the operating environment information.
The dashed line in FIG. 8 indicates the scope of the SDN controller. In FIG. 8, the machine learning pipeline and the simulated MLP, MCS, and resources are deployed outside the SDN controller.
FIG. 11 is a schematic diagram of the connection between the machine learning pipeline and the SDN controller through a client context according to this embodiment. The dashed line in FIG. 11 indicates the scope of the SDN controller. In FIG. 11, the machine learning pipeline and the simulated MLP, MCS, and resources are deployed inside the SDN controller.
According to another aspect of this embodiment, an intelligent pipeline processing apparatus applied to a second unit is also provided. FIG. 12 is a first block diagram of an intelligent pipeline processing apparatus according to this embodiment. As shown in FIG. 12, the apparatus includes:
a receiving module 122, configured to receive operating environment information of an intelligent pipeline sent by a first unit;
a construction module 124, configured to construct the intelligent pipeline according to the operating environment information, and to simulate and train an intelligent model in the intelligent pipeline; and
a deployment module 126, configured to deploy the intelligent model and the intelligent pipeline.
In an exemplary embodiment, the above receiving module 122 includes:
a sending sub-module, configured to send, to the first unit, a query request for querying the operating environment information, wherein the query request is used to instruct the first unit to create a client context; and
a receiving sub-module, configured to receive, through the client context, the operating environment information sent by the first unit.
In an exemplary embodiment, the construction module 124 includes:
a determining sub-module, configured to determine intelligent application requirements based on the operating environment information; and
a construction sub-module, configured to construct the intelligent pipeline according to the intelligent application requirements.
In an exemplary embodiment, the determining sub-module is further configured to determine the intelligent application requirements according to the data sources, the data characteristics of the data sources, the execution units, the configuration commands of the execution units, the resources, and the operation and maintenance policies, wherein the operating environment information includes the data sources, data characteristics, execution units, configuration commands, resources, and operation and maintenance policies.
In an exemplary embodiment, the construction module 124 is further configured to construct a simulated network according to the operating environment information, wherein the simulated network is used for running the intelligent model.
In an exemplary embodiment, the deployment module 126 is further configured to run the intelligent model in the simulated network and obtain a running result, and to deploy the intelligent model and the intelligent pipeline when the running result is normal.
In an exemplary embodiment, the deployment module 126 is further configured such that the second unit deploys the intelligent model and the intelligent pipeline on computing resources, storage resources, and network resources controlled by the first unit.
In an exemplary embodiment, the apparatus further includes:
an establishing module, configured to establish, in the process of deploying the intelligent model and the intelligent pipeline, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit.
In an exemplary embodiment, the establishing module includes:
a first establishing sub-module, configured to establish, through an agent, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit; or
a second establishing sub-module, configured to establish, through a client context, the binding relationship between the SRC in the machine learning pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit.
In an exemplary embodiment, the establishing module includes:
a third establishing sub-module, configured to establish the binding relationship between the SRC and the data source by establishing an access interface between the SRC and the data source; and
a fourth establishing sub-module, configured to establish the binding relationship between the SINK and the execution unit by establishing an access interface between the SINK and the NCC.
In an exemplary embodiment, the intelligent pipeline includes an artificial intelligence pipeline and a machine learning pipeline, and the intelligent model includes an artificial intelligence model and a machine learning model.
According to another aspect of this embodiment, an intelligent pipeline processing apparatus applied to a first unit is also provided. FIG. 13 is a second block diagram of an intelligent pipeline processing apparatus according to this embodiment. As shown in FIG. 13, the apparatus includes:
a sending module 132, configured to send operating environment information of an intelligent pipeline to a second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulated network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
In an exemplary embodiment, the above sending module 132 includes:
a receiving sub-module, configured to receive a query request, sent by the second unit, for querying the operating environment information, and to create a client context for communication according to the query request; and
a sending sub-module, configured to send the operating environment information to the second unit through the client context.
Embodiments of the present disclosure also provide a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any one of the above method embodiments.
In one exemplary embodiment, the above computer-readable storage medium may include, but is not limited to, various media that can store computer programs, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Embodiments of the present disclosure also provide an electronic apparatus, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
In one exemplary embodiment, the above electronic apparatus may further include a transmission device and an input-output device, wherein the transmission device is connected to the above processor, and the input-output device is connected to the above processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary implementations, which will not be repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present disclosure can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed across a network composed of multiple computing devices; they can be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; and in some cases the steps shown or described can be performed in a different order than here, or they can be respectively made into individual integrated-circuit modules, or a plurality of the modules or steps among them can be made into a single integrated-circuit module. Thus, the present disclosure is not limited to any particular combination of hardware and software.
The above are only preferred embodiments of the present disclosure and are not intended to limit the present disclosure; for those skilled in the art, the present disclosure may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (17)

  1. An intelligent pipeline processing method, applied to a second unit, the method comprising:
    receiving operating environment information of an intelligent pipeline sent by a first unit;
    constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline; and
    deploying the intelligent model and the intelligent pipeline.
  2. The method according to claim 1, wherein receiving the operating environment information of the intelligent pipeline sent by the first unit comprises:
    sending, to the first unit, a query request for querying the operating environment information, wherein the query request is used to instruct the first unit to create a client context; and
    receiving, through the client context, the operating environment information sent by the first unit.
  3. The method according to claim 1, wherein constructing the intelligent pipeline according to the operating environment information comprises:
    determining intelligent application requirements based on the operating environment information; and
    constructing the intelligent pipeline according to the intelligent application requirements.
  4. The method according to claim 3, wherein determining the intelligent application requirements based on the operating environment information comprises:
    determining the intelligent application requirements according to data sources, data characteristics of the data sources, execution units, configuration commands of the execution units, resources, and operation and maintenance policies, wherein the operating environment information comprises the data sources, data characteristics, execution units, configuration commands, resources, and operation and maintenance policies.
  5. The method according to claim 1, wherein the method further comprises:
    constructing a simulated network according to the operating environment information, wherein the simulated network is used for running the intelligent model.
  6. The method according to claim 5, wherein deploying the intelligent model and the intelligent pipeline comprises:
    running the intelligent model in the simulated network, and obtaining a running result; and
    deploying the intelligent model and the intelligent pipeline when the running result is normal.
  7. The method according to claim 1, wherein deploying the intelligent model and the intelligent pipeline comprises:
    deploying the intelligent model and the intelligent pipeline on computing resources, storage resources, and network resources controlled by the first unit.
  8. The method according to claim 1, wherein the method further comprises:
    in the process of deploying the intelligent model and the intelligent pipeline, establishing a binding relationship between an SRC in the intelligent pipeline and a data source of the first unit, and a binding relationship between a SINK and an execution unit of the first unit.
  9. The method according to claim 8, wherein establishing the binding relationship between the SRC in the intelligent pipeline and the data source, and the binding relationship between the SINK and the execution unit, comprises:
    establishing, through an agent, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit; or
    establishing, through a client context, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit.
  10. The method according to claim 8, wherein establishing the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit, comprises:
    establishing the binding relationship between the SRC and the data source by establishing an access interface between the SRC and the data source; and
    establishing the binding relationship between the SINK and the execution unit by establishing an access interface between the SINK and an NCC.
  11. The method according to any one of claims 1 to 10, wherein the intelligent pipeline comprises an artificial intelligence pipeline and a machine learning pipeline, and the intelligent model comprises an artificial intelligence model and a machine learning model.
  12. An intelligent pipeline processing method, applied to a first unit, the method comprising:
    sending operating environment information of an intelligent pipeline to a second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulated network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
  13. The method according to claim 12, wherein sending the operating environment information of the intelligent pipeline to the second unit comprises:
    receiving a query request, sent by the second unit, for querying the operating environment information, and creating a client context for communication according to the query request; and
    sending the operating environment information to the second unit through the client context.
  14. An intelligent pipeline processing apparatus, applied to a second unit, the apparatus comprising:
    a receiving module, configured to receive operating environment information of an intelligent pipeline sent by a first unit;
    a construction module, configured to construct the intelligent pipeline according to the operating environment information, and to simulate and train an intelligent model in the intelligent pipeline; and
    a deployment module, configured to deploy the intelligent model and the intelligent pipeline.
  15. An intelligent pipeline processing apparatus, applied to a first unit, the apparatus comprising:
    a sending module, configured to send operating environment information of an intelligent pipeline to a second unit, wherein the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulated network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
  16. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 11 or 12 to 13.
  17. An electronic apparatus, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 11 or 12 to 13.
PCT/CN2022/074034 2021-02-07 2022-01-26 Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus WO2022166715A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110169290.2A CN112801303A (zh) 2021-02-07 2021-02-07 Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus
CN202110169290.2 2021-02-07

Publications (1)

Publication Number Publication Date
WO2022166715A1 true WO2022166715A1 (zh) 2022-08-11

Family

ID=75814735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074034 WO2022166715A1 (zh) 2021-02-07 2022-01-26 Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN112801303A (zh)
WO (1) WO2022166715A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801303A (zh) * 2021-02-07 2021-05-14 中兴通讯股份有限公司 Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529673A (zh) * 2016-11-17 2017-03-22 北京百度网讯科技有限公司 Artificial-intelligence-based deep learning network training method and apparatus
CN109947567A (zh) * 2019-03-14 2019-06-28 深圳先进技术研究院 Multi-agent reinforcement learning scheduling method and system, and electronic device
US20200167607A1 (en) * 2018-11-22 2020-05-28 Thales Holdings Uk Plc Methods and Systems for Determining One or More Actions to Carry Out in an Environment
CN111488254A (zh) * 2019-01-25 2020-08-04 顺丰科技有限公司 Machine learning model deployment and monitoring apparatus and method
CN111555907A (zh) * 2020-04-19 2020-08-18 北京理工大学 Reinforcement-learning-based data center network energy consumption and service quality optimization method
CN111598237A (zh) * 2020-05-21 2020-08-28 上海商汤智能科技有限公司 Quantization training and image processing method and apparatus, and storage medium
CN111666713A (zh) * 2020-05-15 2020-09-15 清华大学 Power grid reactive voltage control model training method and system
US10816978B1 (en) * 2018-02-22 2020-10-27 Msc.Software Corporation Automated vehicle artificial intelligence training based on simulations
CN112801303A (zh) * 2021-02-07 2021-05-14 中兴通讯股份有限公司 Intelligent pipeline processing method and apparatus, storage medium, and electronic apparatus


Also Published As

Publication number Publication date
CN112801303A (zh) 2021-05-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22748994

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.12.2023)