CN112801303A - Intelligent pipeline processing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN112801303A
CN112801303A (application number CN202110169290.2A)
Authority
CN
China
Prior art keywords
intelligent
unit
environment information
pipeline
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110169290.2A
Other languages
Chinese (zh)
Inventor
牛小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202110169290.2A priority Critical patent/CN112801303A/en
Publication of CN112801303A publication Critical patent/CN112801303A/en
Priority to PCT/CN2022/074034 priority patent/WO2022166715A1/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network


Abstract

Embodiments of the invention provide an intelligent pipeline processing method and device, a storage medium, and an electronic device. The method includes: receiving operating environment information of an intelligent pipeline sent by a first unit; constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline; and deploying the intelligent model and the intelligent pipeline. This solves the problem in the related art of how to introduce the MLFO into the management and control system (MCS) to deploy machine learning functions: communication with the first unit is established, the operating environment information is received, deployment is performed according to that information, and the MLFO is thereby introduced into the MCS to deploy the machine learning function.

Description

Intelligent pipeline processing method and device, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of communication, in particular to an intelligent pipeline processing method, an intelligent pipeline processing device, a storage medium and an electronic device.
Background
In a bearer network, service connections can be created through control, thereby providing connection services. Common control methods include distributed control and centralized control.
A distributed controller, such as the control plane of an Automatic Switched Optical Network (ASON), completes distributed connection scheduling through interaction between control components, based on mechanisms such as signaling, routing, and auto-discovery. Its advantages include distributed processing and flexible, dynamic network control. Relative to the management plane and transport plane, ASON is referred to as the control plane.
A centralized controller, such as a Software Defined Networking (SDN) controller, completes centralized connection scheduling through control components such as a Connection Controller (CC) and a Routing Controller (RC), based on a logically centralized control architecture. Because it holds global network resource information and can process all connection requests cooperatively, it can optimize resource configuration globally.
Fig. 1 is a schematic diagram of related components of an SDN controller in the related art. As shown in Fig. 1, these include a Network Call Controller (NCC), a Link Resource Manager (LRM), a Connection Controller (CC), a Routing Controller (RC), a Notification Component, a Termination and Adaptation Performer (TAP) component, and forwarding-plane network element devices connected to the TAP. The ASON control plane also has these functional components.
Considering that network management functions and control functions are substantially the same, the above distributed controllers and centralized controllers, together with the network management system, are collectively referred to as a Management Control System (MCS). An MCS contains a number of management control components that perform management and control functions (hereinafter, for brevity, referred to simply as MC components).
In an SDN controller architecture, a client context (Client Context) represents the component in a server-layer controller that provides interactive management services between client and server controllers; a server context (Server Context) represents the corresponding component in a client-layer controller for the same interaction.
Artificial Intelligence (AI) and Machine Learning (ML) are making software more intelligent. With the development of AI/ML, introducing artificial intelligence, and machine learning in particular, into a distributed control plane or a centralized controller can greatly raise the intelligence level of the control network and thereby improve the efficiency of service scheduling and maintenance. Introducing AI/ML into a distributed control plane or a centralized controller affects the interaction relationships and interfaces of the existing management and control system components.
Fig. 2 is a diagram of the overall machine learning architecture in the related art. As shown in Fig. 2, it includes a management subsystem, a machine learning sandbox subsystem (ML sandbox subsystem), a machine learning pipeline subsystem (ML pipeline subsystem), and machine learning underlay networks (ML underlay networks).
In the management subsystem, a Machine Learning Function Orchestrator (MLFO) is responsible for configuring the ML pipeline and the simulated underlay network (simulated ML underlay network) in the ML sandbox subsystem, and for performing model training, testing, and verification based on them. The ML pipeline and the simulated ML underlay network in the sandbox subsystem are generated by the MLFO.
Generally, a Machine Learning Pipeline (MLP) includes logical nodes such as a source (SRC), a collector (C), a pre-processor (PP), a model (M), a policy (P), a distributor (D), and a sink (SINK). The source node provides data input to the MLP. The collector node is responsible for collecting data from one or more source nodes. The pre-processor node pre-processes data so that it can be used or consumed by the machine learning model. The model node specifies data processing rules and logic. The policy node generates a specific policy and applies it to the output of the model node. The distributor node is responsible for determining the sink node and distributing the model output to it. The sink node is responsible for executing the model output.
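As a purely illustrative sketch (not part of the patent), the logical node chain above can be expressed in Python; every class and function name here is a hypothetical assumption:

```python
from dataclasses import dataclass
from typing import Callable, List

# Sketch of the MLP node chain: SRC -> collector -> pre-processor ->
# model -> policy -> (distributor -> SINK). Names are illustrative.

@dataclass
class Source:
    name: str
    data: List[float]

    def emit(self) -> List[float]:
        return self.data

@dataclass
class Pipeline:
    sources: List[Source]
    preprocess: Callable[[List[float]], List[float]]
    model: Callable[[List[float]], float]
    policy: Callable[[float], str]

    def run(self) -> str:
        # Collector: gather data from one or more source nodes.
        collected = [x for s in self.sources for x in s.emit()]
        # Pre-processor: make the data consumable by the model.
        clean = self.preprocess(collected)
        # Model: apply the data processing rules and logic.
        output = self.model(clean)
        # Policy: turn the model output into an actionable command,
        # which a distributor would then route to a SINK for execution.
        return self.policy(output)

pipe = Pipeline(
    sources=[Source("topology-db", [0.9, 0.95, 1.2])],
    preprocess=lambda xs: [x for x in xs if x >= 0],   # filter abnormal data
    model=lambda xs: max(xs),                          # e.g. peak utilization
    policy=lambda peak: "reroute" if peak > 1.0 else "no-op",
)
print(pipe.run())  # -> reroute
```

The chain is deliberately minimal; a real MLP would also carry the distributor's sink-selection logic and the sink's execution interface.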
In the existing machine learning architecture, the constraints to be considered when introducing the MLFO into the management and control system MCS to deploy machine learning functions have not been clearly described.
Disclosure of Invention
The embodiments of the invention provide an intelligent pipeline processing method and device, a storage medium, and an electronic device, which at least solve the problem in the related art of how to introduce the MLFO into the management and control system MCS to deploy machine learning functions.
According to an embodiment of the invention, an intelligent pipeline processing method is provided, which is applied to a second unit, and comprises the following steps:
receiving operating environment information of the intelligent pipeline sent by a first unit;
constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline;
and deploying the intelligent model and the intelligent pipeline.
According to another embodiment of the present invention, there is also provided an intelligent pipeline processing method applied to a first unit, the method including:
sending operating environment information of the intelligent pipeline to a second unit, wherein the operating environment information is used for instructing the second unit to construct the intelligent pipeline and a simulation network, to simulate and train an intelligent model in the intelligent pipeline, and to deploy the intelligent model and the intelligent pipeline.
According to another embodiment of the present invention, there is also provided an intelligent pipeline processing apparatus applied to a second unit, the apparatus including:
a receiving module, configured to receive operating environment information of the intelligent pipeline sent by a first unit;
a construction module, configured to construct the intelligent pipeline according to the operating environment information, and to simulate and train an intelligent model in the intelligent pipeline;
and a deployment module, configured to deploy the intelligent model and the intelligent pipeline.
According to another embodiment of the present invention, there is also provided an intelligent pipeline processing apparatus applied to a first unit, the apparatus including:
a sending module, configured to send operating environment information of the intelligent pipeline to a second unit, wherein the operating environment information is used for instructing the second unit to construct the intelligent pipeline and a simulation network, to simulate and train an intelligent model in the intelligent pipeline, and to deploy the intelligent model and the intelligent pipeline.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
By receiving operating environment information of an intelligent pipeline sent by a first unit, constructing the intelligent pipeline according to the operating environment information, simulating and training an intelligent model in the intelligent pipeline, and deploying the intelligent model and the intelligent pipeline, the embodiments solve the problem in the related art of how to introduce the MLFO into a management and control system MCS (including an SDN controller and an ASON control plane) to deploy machine learning functions: communication with the first unit is established, the operating environment information is received, deployment is performed according to that information, and the MLFO is thereby introduced into the MCS to deploy the machine learning function.
Drawings
Fig. 1 is a schematic diagram of SDN controller-related components in the related art;
FIG. 2 is a diagram of a machine learning architecture in the related art;
fig. 3 is a block diagram of a hardware configuration of a mobile terminal of the intelligent pipeline processing method according to the embodiment of the present invention;
FIG. 4 is a first flowchart of an intelligent pipeline processing method according to an embodiment of the invention;
FIG. 5 is a second flowchart of an intelligent pipeline processing method according to an embodiment of the invention;
fig. 6 is a schematic diagram of the relationship between the MLFO and an SDN controller according to the present embodiment;
FIG. 7 is a flow diagram of an implementation of a machine learning pipeline according to the present embodiment;
fig. 8 is a schematic diagram of the connection relationship between the source and sink nodes of a machine learning pipeline and an SDN controller according to the present embodiment;
FIG. 9 is a schematic diagram of a machine learning pipeline deployment according to the present embodiment;
fig. 10 is a schematic diagram of the connection relationship of a machine learning pipeline with an SDN controller through an agent according to the present embodiment;
fig. 11 is a schematic diagram of the connection relationship of a machine learning pipeline with an SDN controller through a client context according to the present embodiment;
FIG. 12 is a first block diagram of an intelligent pipeline processing apparatus according to the present embodiment;
fig. 13 is a second block diagram of the intelligent pipeline processing apparatus according to the present embodiment.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, a network device, or a similar computing device. Taking a mobile terminal as an example, fig. 3 is a hardware structure block diagram of the mobile terminal of the intelligent pipeline processing method according to the embodiment of the present invention, and as shown in fig. 3, the mobile terminal may include one or more processors 102 (only one is shown in fig. 3) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA or a graphics processor GPU) and a memory 104 for storing data, where the mobile terminal may further include a transmission device 106 for communication function and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 3 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 3, or have a different configuration than shown in FIG. 3.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the intelligent pipeline processing method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, carrier networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices via a base station, an optical fiber, and so on to communicate with other transmission devices. In one example, the transmission device 106 may be an optical module that is used to communicate with other transmission devices over optical fibers.
In this embodiment, an intelligent pipeline processing method operating in the mobile terminal or the network architecture is provided. Fig. 4 is a first flowchart of the intelligent pipeline processing method according to the embodiment of the present invention. As shown in Fig. 4, the method is applied to a second unit, and the flow includes the following steps:
step S402, receiving the operating environment information of the intelligent pipeline sent by the first unit;
step S404, constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline;
step S406, deploying the intelligent model and the intelligent pipeline.
Through the above steps S402 to S406, the problem in the related art of how to introduce the MLFO into the management and control system MCS to deploy machine learning functions can be solved: communication with the first unit is established, the operating environment information is received, and deployment is performed according to it, thereby introducing the MLFO into the MCS to deploy the machine learning function.
In this embodiment, the step S402 may specifically include:
sending a query request for querying the operating environment information to the first unit, wherein the query request is used for instructing the first unit to create a client context;
and receiving the operating environment information sent by the first unit through the client context.
In this embodiment, the step S404 may specifically include:
determining an intelligent application requirement based on the operating environment information, where the intelligent application requirement is determined according to a data source, the data characteristics of the data source, an execution unit, and the execution unit's configuration commands, resources, and operation and maintenance policies, and where the operating environment information includes the data source, the data characteristics, the execution unit, the configuration commands, the resources, and the operation and maintenance policies;
and constructing the intelligent pipeline according to the intelligent application requirement.
In an optional embodiment, the method further includes: constructing a simulation network according to the operating environment information, where the simulation network is used for running the intelligent model. Correspondingly, the step S406 may specifically include: running the intelligent model in the simulation network and obtaining a running result; and deploying the intelligent model and the intelligent pipeline when the running result is normal.
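The optional simulate-then-deploy flow above can be sketched as follows; this is an illustrative Python sketch under assumed names and interfaces, not an implementation defined by the patent:

```python
# Run the trained model against a simulation network first, and deploy the
# model and pipeline only when the simulated running result is normal.
# All function names and the network representation are assumptions.

def run_in_simulation(model, sim_network):
    # Hypothetical "normal result" check: the model's predicted load keeps
    # every simulated link at or below its capacity.
    return all(model(load) <= cap for load, cap in sim_network)

def deploy_if_normal(model, sim_network, deploy):
    if run_in_simulation(model, sim_network):
        deploy(model)   # hand the model to the deployment step
        return True
    return False        # abnormal result: do not deploy

# Simulation network as (offered load, link capacity) pairs.
sim = [(0.4, 1.0), (0.7, 1.0)]
deployed = []
ok = deploy_if_normal(lambda load: load * 1.2, sim, deployed.append)
print(ok)  # -> True
```

The gate mirrors the text: an abnormal simulation result leaves the production environment untouched.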
In another optional embodiment, the step S406 may further include: deploying the intelligent model and the intelligent pipeline on computing resources, storage resources, and network resources controlled by the first unit.
In an exemplary embodiment, in the process of deploying the intelligent model and the intelligent pipeline, a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit and a binding relationship between the SINK and the execution unit of the first unit are established.
In an optional embodiment, a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit and a binding relationship between the SINK and the execution unit of the first unit may be established by an agent; or establishing a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit and a binding relationship between the SINK and the execution unit of the first unit through a client context.
In another optional embodiment, the binding relationship between the SRC and the data source may also be established by establishing an access interface between the SRC and the data source, and the binding relationship between the SINK and the execution unit may also be established by establishing an access interface between the SINK and the NCC.
In this embodiment, the intelligent pipeline includes an artificial intelligence pipeline and a machine learning pipeline, and the intelligent model includes an artificial intelligence model and a machine learning model.
According to another aspect of the present embodiment, there is also provided an intelligent pipeline processing method applied to the first unit. Fig. 5 is a second flowchart of the intelligent pipeline processing method according to the embodiment of the present invention. As shown in Fig. 5, the flow includes the following steps:
step S502, sending running environment information of the intelligent pipeline to a second unit, wherein the running environment information is used for instructing the second unit to construct the intelligent pipeline and a simulation network, simulating and training an intelligent model in the intelligent pipeline, and deploying the intelligent model and the intelligent pipeline.
In an exemplary embodiment, the step S502 may specifically include:
receiving a query request for querying the operating environment information sent by the second unit, and creating a client context for communication according to the query request; and sending the operating environment information to the second unit through the client context.
The present embodiment will be described in detail below with the first unit being an SDN controller and the second unit being an MLFO as an example.
Fig. 6 is a schematic diagram of the relationship between the MLFO and the management and control system MCS (which may be an SDN controller or an ASON control plane) according to this embodiment. As shown in Fig. 6, in this embodiment a network client asks the MLFO to dynamically monitor the traffic characteristics carried in a virtual network (VN); after the MLP predicts that the deployed traffic will cause some link capacities in the transport resources to exceed a specified threshold, it triggers dynamic adjustment of the service connections, so that the transport resources are prepared for the predicted traffic.
After receiving the network client's request, the MLFO queries the management and control system MCS for ML operating environment information. When the MCS is an SDN controller, the SDN controller creates a client context (ClientContext) and communicates with the MLFO through it. The ClientContext is the context the SDN controller provides for the service client (here, the MLFO).
Fig. 7 is a flowchart of implementing a machine learning pipeline according to the present embodiment, as shown in fig. 7, including:
step S702, the SDN controller provides machine learning pipeline (MLP) operating environment information to the MLFO;
the machine learning pipeline runtime environment information describes configuration information for simulating the MLP and the network, and running the MLP.
In this embodiment, the operating environment information includes:
1) Data source and data characteristics: the data source includes components in the SDN controller that can provide data for analysis, such as a database, as well as transport resources controlled by the SDN controller that can provide such data. Data characteristics include the data type (such as topology, link, connection, alarm, and performance data), transfer mode, bandwidth, and so on. In this embodiment, the database provides data types such as the historical and current topology, links, and connection configuration of the VN; historical data can be transferred in batch mode, while current data can be transferred upon change; bandwidth may be ignored when it does not affect data transfer.
2) Execution unit and configuration commands: the execution unit includes components in the SDN controller that can execute the policies or configurations output by the MLP, as well as other configurable SDN controllers or transport resources controlled by the SDN controller. Configuration commands include the command type, interface parameters, and so on.
In this embodiment, the execution unit includes the NCC in the SDN controller, which is configured to receive dynamic connection scheduling requests; the corresponding configuration command includes initiating connection creation, and the interface parameters include the new bandwidth.
Other components within the SDN controller, such as the RC, CC, and TAP, may act as execution units when they can receive and execute the configuration output by the MLP. Transport resources controlled by the SDN controller, such as transport network element devices, can also receive and execute configuration output by the MLP (such as link resource state or timeslot cross-connects) and can likewise serve as execution units.
3) Resources: including computing resources, storage resources, and network resources that are available to or controlled by the SDN controller and to which the MLP may be deployed.
In this embodiment, the SDN controller may be deployed in a cloud environment, and the MLP may be deployed in the same cloud environment as the SDN controller using computing resources, storage resources, and network resources in the cloud.
When the SDN controller is deployed, the SDN controller may also be allocated with computing resources and storage resources, and these resources may be used for deploying the MLP.
4) Operation and maintenance preferences or policies: requirements placed on the use of data and resources for regulatory or security reasons, for example that the data cannot leave the controller, or that SDN controller resources should preferably be used.
In this embodiment, the operation and maintenance preference requires the use of SDN controller resources; the MLP therefore cannot be deployed beyond these resources in the subsequent deployment.
The SDN controller may describe and transfer the operating environment information through a Representational State Transfer (REST) interface, through a remote procedure call (RPC) message interface, or through Network Configuration Protocol (NETCONF) configuration.
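For illustration only, the four categories of operating environment information above could be serialized as a JSON body carried over such a REST interface. Every field name below is an assumption for the sketch, not a schema defined by the patent or by NETCONF:

```python
import json

# Hypothetical JSON shape covering the four categories of operating
# environment information described in the text.
env_info = {
    "data_sources": [{
        "component": "database",
        "data_types": ["topology", "link", "connection", "alarm", "performance"],
        "transfer_mode": {"history": "batch", "current": "on-change"},
    }],
    "execution_units": [{
        "component": "NCC",
        "commands": [{"type": "initiate-connection-creation",
                      "parameters": ["new-bandwidth"]}],
    }],
    "resources": {"compute": "cloud", "storage": "cloud", "network": "cloud"},
    "policies": {"data_must_stay_in_controller": True,
                 "prefer_controller_resources": True},
}

body = json.dumps(env_info)        # what a REST response body might carry
print(json.loads(body)["execution_units"][0]["component"])  # -> NCC
```

The same structure could equally be modeled in YANG and transferred via NETCONF; the JSON form is shown only because it is compact.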
Step S704, the MLFO constructs a machine learning production line and a simulation lower layer network, and simulates and trains an ML model;
one way that MLFO builds a machine learning pipeline may be that MLFO disassembles constraint information for a particular build machine learning pipeline based on the machine learning intent input by the customer. Wherein the constraint information includes: appointing a source node SRC and a specific data format; configuring a collect data command to a collector node (collector C) so that the collector node can collect data from one or more source nodes, such as collecting a network topology; a pre-processing algorithm of the definite pre-processor node (pre-processor PP), such as abnormal data filtering, enables data to be used or consumed by a machine learning model through pre-processing; specifying a machine learning model, for example, using supervised learning and specific algorithms and parameters, such as Graph Neural Network (GNN), to indicate data processing rules and logic; the policy node (policy P) will generate an application specific policy and apply the policy to the output of the model node. The distributor node (distributor D) is responsible for determining the SINK node (SINK) and distributing the model output (e.g., configuration commands) to the SINK node. The sink node is responsible for executing the model output.
One way for the MLFO to construct the simulated underlay network, for example an Optical Transport Network (OTN) with multiple nodes, is to define the nodes and the links between them (ports, bandwidths, link costs, etc.); to identify, in this underlay network, the data sources that can provide data, such as databases in the nodes; and to specify the nodes and interfaces that execute the machine learning model's configuration commands, such as the interface for adjusting the link cost between two nodes.
Methods for training the ML model include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. In supervised learning, labeled sample data must be provided to the model; based on this data, supervised learning can learn the mapping between samples and targets. The sample data may come from an existing operating network or from a simulated network.
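To make the supervised-learning case concrete, here is a deliberately trivial illustration (not the patent's GNN approach): labeled (utilization, needs-reroute) samples are used to learn a decision threshold. The data and the "model" are both assumptions:

```python
# Toy supervised learning: from labeled samples of (link utilization,
# needs_reroute), learn the lowest utilization that was labeled positive
# and use it as a reroute threshold.
def train_threshold(samples):
    positives = [x for x, y in samples if y]
    return min(positives) if positives else float("inf")

# Labeled samples, which could come from a real or a simulated network.
samples = [(0.5, False), (0.8, False), (0.9, True), (1.1, True)]
threshold = train_threshold(samples)
print(threshold)  # -> 0.9
```

The embodiment itself uses a graph neural network for this mapping; the threshold learner above only shows the sample-to-target structure of supervised training.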
Based on the operating environment information provided by the SDN controller, the MLFO determines the appropriate data sources and data characteristics, execution units and configuration commands, resources, operation and maintenance preferences, and ML model to meet the ML application requirements, and constructs the machine learning pipeline and the simulated underlay network;
preferably, according to the operation and maintenance preferences in the operating environment information, the simulated MLP and the simulated underlay network are deployed on computing, storage, and network resources controlled by the SDN controller;
in this embodiment, based on the operating environment information provided by the SDN controller, the following information is determined to meet the ML application requirements, and a machine learning pipeline and a simulated lower-layer network are constructed:
Data such as the historical and current topology, links, and connection configuration of a virtual network (VN) are acquired from a database in the SDN controller. The execution unit is determined to be the NCC in the SDN controller, and the configuration command is to initiate a connection-route adjustment. The computing and storage resources used to build the machine learning pipeline and the simulated underlying network are determined, for example a computer with 8 GB of video memory and 4 GPUs; a virtual machine may also be used. Regarding the operation-and-maintenance preferences: if the collected data and the trained model are allowed to be deployed on computing and storage resources outside the SDN controller, they may be deployed there; if not, the machine learning pipeline and the simulated lower-layer network can be built only on the computing and storage resources provided by the SDN controller. Finally, the machine learning model is determined, for example a graph neural network (specifically, a Graph Spatial-Temporal Network) is adopted for model training to predict, based on the given data, which connections need rerouting adjustments.
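The operation-and-maintenance preference described above amounts to a simple placement rule: resources outside the SDN controller may be used only if the preference allows it. A minimal sketch of that rule (the function and parameter names are invented, not from the patent):

```python
# Sketch of the placement rule implied by the operation-and-maintenance
# preference: the collected data and model training may use resources outside
# the SDN controller only when the preference permits it. Names are invented.

def choose_placement(allow_external: bool, external_available: bool) -> str:
    """Return where the ML pipeline and simulated network may be built."""
    if allow_external and external_available:
        return "external-resources"          # outside the SDN controller
    return "sdn-controller-resources"        # only resources the controller provides

print(choose_placement(allow_external=False, external_available=True))
# -> sdn-controller-resources
```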
Step S706, the MLFO deploys the trained ML model and MLP.
During deployment, the trained ML model and the MLP are deployed to the given resources according to the operating environment information. In the deployment process, the ML model is integrated into the computing resources determined in the operating environment information, as are the logical nodes in the MLP, and the binding relationship between the source node SRC in the MLP and the data source, and between the SINK node and the execution unit, is established.
Preferably, the binding relationship between the source node SRC and the data source, and between the SINK and the execution unit, may be established directly, or indirectly through an agent. The binding provides an interface for accessing the data source through the SRC and an interface for issuing execution commands to the execution unit through the SINK.
In this embodiment, the data source is a database in the SDN controller and the execution unit is the NCC; the binding relationships between the SRC in the MLP and the database, and between the SINK and the NCC, are established. When the binding is established directly, in one implementation, fig. 8 is a schematic diagram of the connection between the source/SINK nodes in the machine learning pipeline and the SDN controller according to this embodiment: as shown in fig. 8, an access interface is established between the SRC and the database, and an access interface is established between the SINK and the NCC. In another implementation, fig. 9 is a schematic diagram of machine learning pipeline deployment according to this embodiment: as shown in fig. 9, the SRC and the SINK establish access interfaces with the database and the NCC, respectively, through a common agent.
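The two binding styles, direct access interfaces (fig. 8 style) versus a common agent (fig. 9 style), can be sketched as follows. The classes and method names are illustrative assumptions, not an API defined by the patent:

```python
# Illustrative sketch of direct vs. agent-mediated binding between the
# pipeline's SRC/SINK nodes and the controller's database/NCC.
# All class and method names are invented for illustration.

class Database:                      # data source inside the SDN controller
    def read(self):
        return {"topology": "..."}

class NCC:                           # execution unit inside the SDN controller
    def __init__(self):
        self.commands = []
    def execute(self, cmd):
        self.commands.append(cmd)

class Agent:
    """Common agent through which SRC and SINK reach the controller (fig. 9 style)."""
    def __init__(self, db, ncc):
        self.db, self.ncc = db, ncc
    def fetch(self):
        return self.db.read()
    def dispatch(self, cmd):
        self.ncc.execute(cmd)

db, ncc = Database(), NCC()

# Direct binding (fig. 8 style): SRC/SINK hold interfaces to db/ncc directly.
src_read, sink_exec = db.read, ncc.execute

# Indirect binding (fig. 9 style): SRC/SINK go through the shared agent.
agent = Agent(db, ncc)
agent.dispatch("reroute connection-7")

sink_exec("adjust link-cost A-B 15")
```

Either way, the pipeline code only ever calls the bound interface; whether the call reaches the database/NCC directly or through the agent is invisible to the SRC and SINK nodes, which is what makes the two deployments interchangeable.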
Preferably, the binding relationship between the SRC and the data source, and between the SINK and the execution unit, is established through a ClientContext.
Fig. 10 is a schematic diagram of the connection between the machine learning pipeline and the SDN controller through an agent according to this embodiment. As shown in fig. 10, the SRC and the SINK in the MLP establish access interfaces, via the ClientContext, with the data source (in this embodiment, a database) and the execution unit (in this embodiment, the NCC), respectively.
In step S706, the MLFO deploys the trained ML model and the MLP to the given resources according to the operating environment information.
The dotted line in fig. 8 indicates the boundary of the SDN controller. In fig. 8, the machine learning pipeline and the simulated MLP, MCS, and resources are deployed outside the SDN controller.
Fig. 11 is a schematic diagram of the connection between the machine learning pipeline and the SDN controller through the client context according to this embodiment. The dotted line in fig. 11 indicates the boundary of the SDN controller. In fig. 11, the machine learning pipeline and the simulated MLP, MCS, and resources are deployed within the SDN controller.
According to another aspect of this embodiment, an intelligent pipeline processing apparatus applied to a second unit is also provided. Fig. 12 is a first block diagram of the intelligent pipeline processing apparatus according to this embodiment; as shown in fig. 12, the apparatus includes:
a receiving module 122, configured to receive the running environment information of the intelligent pipeline sent by the first unit;
a building module 124, configured to build the intelligent pipeline according to the operating environment information, and simulate and train an intelligent model in the intelligent pipeline;
a deployment module 126 for deploying the intelligent model and the intelligent pipeline.
In an exemplary embodiment, the receiving module 122 includes:
a sending submodule, configured to send a query request for querying the operating environment information to the first unit, where the query request is used to instruct the first unit to create a client context;
and the receiving submodule is used for receiving the operating environment information sent by the first unit through the client context.
In an exemplary embodiment, the building module 124 includes:
the determining submodule is used for determining the intelligent application requirement based on the operating environment information;
and the construction submodule is used for constructing the intelligent pipeline according to the intelligent application requirements.
In an exemplary embodiment, the determining submodule is further configured to determine the intelligent application requirements according to a data source, the data characteristics of the data source, an execution unit, and the configuration commands, resources, and operation-and-maintenance policy of the execution unit, wherein the operating environment information includes the data source, the data characteristics, the execution unit, the configuration commands, the resources, and the operation-and-maintenance policy.
In an exemplary embodiment, the building module 124 is further configured to construct a simulation network according to the operating environment information, wherein the simulation network is used for operating the intelligent model.
In an exemplary embodiment, the deployment module 126 is further configured to operate the intelligent model in the simulation network and obtain an operation result, and to deploy the intelligent model and the intelligent pipeline when the operation result is normal.
In an exemplary embodiment, the deployment module 126 is further configured to deploy the intelligent model and the intelligent pipeline on the computing resources, storage resources, and network resources controlled by the first unit.
In an exemplary embodiment, the apparatus further comprises:
an establishing module, configured to establish, in the process of deploying the intelligent model and the intelligent pipeline, a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and a binding relationship between the SINK and the execution unit of the first unit.
In an exemplary embodiment, the establishing module includes:
a first establishing submodule, configured to establish, through an agent, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit; or
a second establishing submodule, configured to establish, through the client context, the binding relationship between the SRC in the intelligent pipeline and the data source of the first unit, and the binding relationship between the SINK and the execution unit of the first unit.
In an exemplary embodiment, the establishing module includes:
a third establishing submodule, configured to establish the binding relationship between the SRC and the data source by establishing an access interface between the SRC and the data source;
and a fourth establishing submodule, configured to establish the binding relationship between the SINK and the execution unit by establishing an access interface between the SINK and the NCC.
In an exemplary embodiment, the intelligent pipeline includes an artificial intelligence pipeline and a machine learning pipeline, and the intelligent model includes an artificial intelligence model and a machine learning model.
According to another aspect of this embodiment, an intelligent pipeline processing apparatus applied to a first unit is also provided. Fig. 13 is a second block diagram of the intelligent pipeline processing apparatus according to this embodiment; as shown in fig. 13, the apparatus includes:
a sending module 132, configured to send operating environment information of the intelligent pipeline to a second unit, where the operating environment information is used to instruct the second unit to construct the intelligent pipeline and a simulation network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
In an exemplary embodiment, the sending module 132 includes:
a receiving submodule, configured to receive a query request, sent by the second unit, for querying the operating environment information, and to create a client context for communication according to the query request;
and a sending submodule, configured to send the operating environment information to the second unit through the client context.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices; and they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device. In some cases, the steps shown or described may be performed in an order different from that described herein, or the modules or steps may be separately fabricated as individual integrated-circuit modules, or multiple ones of them may be fabricated as a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (17)

1. An intelligent pipeline processing method, applied to a second unit, the method comprising:
receiving operating environment information of the intelligent pipeline sent by a first unit;
constructing the intelligent pipeline according to the operating environment information, and simulating and training an intelligent model in the intelligent pipeline;
and deploying the intelligent model and the intelligent pipeline.
2. The method of claim 1, wherein receiving runtime environment information for the intelligent pipeline sent by the first unit comprises:
sending a query request for querying the running environment information to the first unit, wherein the query request is used for instructing the first unit to create a client context;
and receiving the operating environment information sent by the first unit through the client context.
3. The method of claim 1, wherein building the intelligent pipeline from the runtime environment information comprises:
determining intelligent application requirements based on the operating environment information;
and constructing the intelligent pipeline according to the intelligent application requirements.
4. The method of claim 3, wherein determining smart application requirements based on the runtime environment information comprises:
and determining the intelligent application requirements according to a data source, the data characteristics of the data source, an execution unit, and a configuration command, a resource, and an operation and maintenance strategy of the execution unit, wherein the operating environment information includes the data source, the data characteristics, the execution unit, the configuration command, the resource, and the operation and maintenance strategy.
5. The method of claim 1, further comprising:
and constructing a simulation network according to the operating environment information, wherein the simulation network is used for operating the intelligent model.
6. The method of claim 5, wherein deploying the intelligent model and the intelligent pipeline comprises:
operating the intelligent model in the simulation network and obtaining an operation result;
and under the condition that the operation result is normal, deploying the intelligent model and the intelligent pipeline.
7. The method of claim 1, wherein deploying the intelligent model and the intelligent pipeline comprises:
deploying the intelligent model and the intelligent pipeline on computing resources, storage resources, and network resources controlled by the first unit.
8. The method of claim 1, further comprising:
and in the process of deploying the intelligent model and the intelligent pipeline, establishing a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit and a binding relationship between the SINK and the execution unit of the first unit.
9. The method of claim 8, wherein establishing the binding relationship between the SRC in the intelligent pipeline and the data source, and establishing the binding relationship between the SINK and the execution unit comprises:
establishing a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit and a binding relationship between the SINK and the execution unit of the first unit through an agent; or
and establishing a binding relationship between the SRC in the intelligent pipeline and the data source of the first unit and a binding relationship between the SINK and the execution unit of the first unit through a client context.
10. The method of claim 8, wherein establishing the binding of the SRC and the data source of the first unit in the intelligent pipeline, and wherein establishing the binding of the SINK and the execution unit of the first unit comprises:
establishing a binding relationship between the SRC and the data source in a manner of establishing an access interface between the SRC and the data source;
and establishing the binding relationship between the SINK and the execution unit by establishing an access interface between the SINK and the NCC.
11. The method of any of claims 1 to 10, wherein the intelligent pipeline comprises an artificial intelligence pipeline and a machine learning pipeline, and wherein the intelligent models comprise an artificial intelligence model and a machine learning model.
12. An intelligent pipeline processing method, applied to a first unit, the method comprising:
and sending the operating environment information of the intelligent pipeline to a second unit, wherein the operating environment information is used for indicating the second unit to construct the intelligent pipeline and a simulation network, simulating and training an intelligent model in the intelligent pipeline, and deploying the intelligent model and the intelligent pipeline.
13. The method of claim 12, wherein sending runtime environment information for the intelligent pipeline to the second unit comprises:
receiving a query request for querying the operating environment information sent by the second unit, and creating a client context for communication according to the query request;
sending the operating environment information to the second unit through the client context.
14. An intelligent pipeline processing apparatus, applied to a second unit, the apparatus comprising:
the receiving module is used for receiving the operating environment information of the intelligent pipeline sent by the first unit;
the construction module is used for constructing the intelligent pipeline according to the operating environment information and simulating and training an intelligent model in the intelligent pipeline;
and the deployment module is used for deploying the intelligent model and the intelligent pipeline.
15. An intelligent pipeline processing apparatus, applied to a first unit, the apparatus comprising:
and the sending module is used for sending the operating environment information of the intelligent pipeline to the second unit, wherein the operating environment information is used for instructing the second unit to construct the intelligent pipeline and the simulation network, simulate and train an intelligent model in the intelligent pipeline, and deploy the intelligent model and the intelligent pipeline.
16. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method of any one of claims 1 to 11, 12 to 13 when the computer program is executed.
17. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of any one of claims 1 to 11 and 12 to 13.
CN202110169290.2A 2021-02-07 2021-02-07 Intelligent pipeline processing method and device, storage medium and electronic device Pending CN112801303A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110169290.2A CN112801303A (en) 2021-02-07 2021-02-07 Intelligent pipeline processing method and device, storage medium and electronic device
PCT/CN2022/074034 WO2022166715A1 (en) 2021-02-07 2022-01-26 Intelligent pipeline processing method and apparatus, and storage medium and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110169290.2A CN112801303A (en) 2021-02-07 2021-02-07 Intelligent pipeline processing method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN112801303A true CN112801303A (en) 2021-05-14

Family

ID=75814735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110169290.2A Pending CN112801303A (en) 2021-02-07 2021-02-07 Intelligent pipeline processing method and device, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN112801303A (en)
WO (1) WO2022166715A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022166715A1 (en) * 2021-02-07 2022-08-11 中兴通讯股份有限公司 Intelligent pipeline processing method and apparatus, and storage medium and electronic apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529673B (en) * 2016-11-17 2019-05-03 北京百度网讯科技有限公司 Deep learning network training method and device based on artificial intelligence
US10816978B1 (en) * 2018-02-22 2020-10-27 Msc.Software Corporation Automated vehicle artificial intelligence training based on simulations
GB2584380A (en) * 2018-11-22 2020-12-09 Thales Holdings Uk Plc Methods for generating a simulated enviroment in which the behaviour of one or more individuals is modelled
CN111488254A (en) * 2019-01-25 2020-08-04 顺丰科技有限公司 Deployment and monitoring device and method of machine learning model
CN109947567B (en) * 2019-03-14 2021-07-20 深圳先进技术研究院 Multi-agent reinforcement learning scheduling method and system and electronic equipment
CN111555907B (en) * 2020-04-19 2021-04-23 北京理工大学 Data center network energy consumption and service quality optimization method based on reinforcement learning
CN111666713B (en) * 2020-05-15 2022-07-08 清华大学 Power grid reactive voltage control model training method and system
CN111598237B (en) * 2020-05-21 2024-06-11 上海商汤智能科技有限公司 Quantization training, image processing method and device, and storage medium
CN112801303A (en) * 2021-02-07 2021-05-14 中兴通讯股份有限公司 Intelligent pipeline processing method and device, storage medium and electronic device


Also Published As

Publication number Publication date
WO2022166715A1 (en) 2022-08-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination