WO2023168976A1 - 光传送网性能预测方法、系统、电子设备及存储介质 - Google Patents

光传送网性能预测方法、系统、电子设备及存储介质 Download PDF

Info

Publication number
WO2023168976A1
Authority
WO
WIPO (PCT)
Prior art keywords
domain
model
performance prediction
prediction model
performance
Prior art date
Application number
PCT/CN2022/131433
Other languages
English (en)
French (fr)
Inventor
王大江
周晓慧
王其磊
薄开涛
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2023168976A1 publication Critical patent/WO2023168976A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • H04B10/0795Performance monitoring; Measurement of transmission parameters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B17/00Monitoring; Testing
    • H04B17/30Monitoring; Testing of propagation channels
    • H04B17/391Modelling the propagation channel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B17/00Monitoring; Testing
    • H04B17/30Monitoring; Testing of propagation channels
    • H04B17/391Modelling the propagation channel
    • H04B17/3912Simulation models, e.g. distribution of spectral power density or received signal strength indicator [RSSI] for a given geographic region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B17/00Monitoring; Testing
    • H04B17/30Monitoring; Testing of propagation channels
    • H04B17/391Modelling the propagation channel
    • H04B17/3913Predictive models, e.g. based on neural network models

Definitions

  • Embodiments of the present application relate to the field of communications, and in particular to an optical transport network performance prediction method, system, electronic device, and storage medium.
  • The main purpose of the embodiments of this application is to propose an optical transport network performance prediction method, system, electronic device, and storage medium, which can establish a simulation network for the optical transport network.
  • To this end, embodiments of the present application provide an optical transport network performance prediction method, applied to a single-domain management server, which includes the following steps: obtaining a performance prediction model; obtaining model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in the single domain and is used to update the performance prediction model; and reporting the model update information and single-domain topology information to a multi-domain management server, so that the multi-domain management server can generate an optical transport network digital twin model based on the model update information and the single-domain topology information.
  • The optical transport network digital twin model is used to predict the performance of the optical transport network.
  • Embodiments of the present application also provide an optical transport network performance prediction method, applied to a multi-domain management server, which includes the following steps: obtaining the model update information and single-domain topology information reported by a single-domain management server, where the model update information is obtained, through horizontal federated learning technology, by training the performance prediction model of the single-domain management server on the performance data of each network element in the single domain and is used to update the performance prediction model; and generating an optical transport network digital twin model based on the model update information and the single-domain topology information, where the optical transport network digital twin model is used to predict the performance of the optical transport network.
  • Embodiments of the present application also provide an optical transport network performance prediction system, including a single-domain management server and a multi-domain management server that are communicatively connected. The single-domain management server is used to obtain the performance prediction model; obtain the model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in the single domain; and report the parameter update amount and the single-domain topology information to the multi-domain management server, so that the multi-domain management server can generate an optical transport network digital twin model. The multi-domain management server is used to obtain the model update information and single-domain topology information reported by the single-domain management server, where the model update information is obtained, through horizontal federated learning technology, by training the performance prediction model of the single-domain management server on the performance data of each network element in the single domain; and to generate the optical transport network digital twin model based on the model update information and the single-domain topology information, where the optical transport network digital twin model is used to predict the performance of the optical transport network.
  • Embodiments of the present application also provide an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above optical transport network performance prediction method.
  • Embodiments of the present application also provide a computer-readable storage medium that stores a computer program.
  • When the computer program is executed by a processor, the above optical transport network performance prediction method is implemented.
  • In the optical transport network performance prediction method proposed in this application, the single-domain management server obtains model update information and reports it to the multi-domain management server, and the multi-domain management server finally generates the optical transport network digital twin model. In other words, horizontal federated learning is used to overcome the training difficulty caused by the data being distributed across the individual single domains. Because the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain, updating the performance prediction model based on the model update information brings it closer to the actual processing capabilities of the real network elements. The single-domain management server therefore reports the model update information and single-domain topology information to the multi-domain management server, and the multi-domain management server can generate the optical transport network digital twin model based on the updated performance prediction model. This digital twin model of the optical transport network is consistent with the actual processing and transmission capabilities of the optical transport network, thereby establishing a simulation network for the optical transport network.
  • Figure 1 is a schematic flowchart of applying the optical transport network performance prediction method to a single domain management server according to an embodiment of the present application.
  • Figure 2 is a schematic diagram 1 of the architecture of an optical transport network performance prediction system provided according to an embodiment of the present application.
  • Figure 3 is a schematic diagram 2 of the architecture of an optical transport network performance prediction system provided according to an embodiment of the present application.
  • Figure 4 is a schematic diagram of a network element structure provided according to an embodiment of the present application.
  • Figure 5 is a schematic diagram of a performance prediction model provided according to an embodiment of the present application.
  • Figure 6 is a schematic diagram of performance data sampling provided according to an embodiment of the present application.
  • Figure 7 is a schematic flowchart 1 of a performance prediction model training method provided according to an embodiment of the present application.
  • Figure 8 is a schematic flowchart 2 of a performance prediction model training method provided according to an embodiment of the present application.
  • Figure 9 is a schematic flowchart three of a performance prediction model training method provided according to an embodiment of the present application.
  • Figure 10 is a schematic flowchart 4 of a performance prediction model training method provided according to an embodiment of the present application.
  • Figure 11 is a schematic flowchart of applying the optical transport network performance prediction method to a multi-domain management server according to an embodiment of the present application.
  • Figure 12 is a schematic structural diagram of an optical transport network performance prediction system provided according to an embodiment of the present application.
  • Figure 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • An embodiment of the present application relates to an optical transport network performance prediction method, as shown in Figure 1, including the following steps:
  • Step 101: Obtain the performance prediction model.
  • Step 102: Obtain the model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain and is used to update the performance prediction model.
  • Step 103: Report the model update information and single-domain topology information to the multi-domain management server, so that the multi-domain management server can generate an optical transport network digital twin model based on the model update information.
  • The optical transport network digital twin model is used to predict the performance of the optical transport network.
  • the optical transport network performance prediction method in this embodiment is applied to a single-domain management server.
  • the single-domain management server is a management server that manages each network element in a domain in the optical transport network.
  • Each single-domain management server can run the single-domain management and control system of the vendor corresponding to that single domain.
  • the optical transport network (OTN for short) can be divided into different management domains in the horizontal direction.
  • A single management domain can be composed of the OTN equipment of a single equipment manufacturer, or of a certain network or sub-network of the operator.
  • a single management domain is a single domain.
  • Each single domain has a single-domain management server that communicates with the multi-domain management server, and a multi-domain orchestration system can run on the multi-domain management server to generate the optical transport network digital twin model.
  • the digital twin model of the optical transport network can predict the performance of the optical transport network or implement other data simulation needs.
  • the global communications industry is moving from the Internet era and the cloud era to the intelligent era. New opportunities and new challenges are driving the network to accelerate comprehensive transformation and upgrading.
  • Against this background, the concept of Autonomous Networks (AN) was proposed in 2019.
  • The AN concept aims to enable the digital transformation of operator networks through the integration of network technology and digital technology, to provide vertical industries and consumer users with zero-wait, zero-touch, zero-fault services and user experience, and to create self-configuring, self-healing, and self-optimizing network capabilities for the entire life cycle of operators' network operations.
  • Digital Twins (DT) technology was proposed and is considered by the industry to be the foundation of digital transformation and an important support and component for realizing AN architecture and technology.
  • By accurately sensing the status of the network itself and its external environment, the network digital twin establishes a digital mirror of the internal and external environment, integrates capabilities such as simulation and preventive prediction, and plays a key enabling role in scenarios such as "low-cost trial and error", "intelligent decision-making", and "predictive maintenance".
  • Digital twin technology can be understood as constructing a digital (virtual) model of a physical object in a digital space, and using data from the physical object to continuously correct the model and update its state so that it stays consistent with the physical object throughout its life cycle and mirrors the real-time operating status of the physical object with high fidelity; such a model becomes the digital twin of the physical object (referred to simply as a digital twin). Based on the digital twin, monitoring, analysis, prediction, diagnosis, training, and simulation can be performed, and the simulation results can be fed back to the physical object to help optimize it and make decisions about it.
  • The related technologies involving digital twin model construction, real-time updating of the digital model state, and digital-twin-based simulation analysis and control decisions can be collectively referred to as digital twin (DT) technology. It follows that how to build a DT model by combining the structural and state characteristics of the mirrored physical object with the DT application scenario is the key technology of DT.
  • The DT network layer is a model abstraction of the entire cross-domain physical network. Communication between DT network elements is not limited by physical network space, and the visibility and operability of DT network elements are not subject to spatial or control restrictions such as physical network domain division. This requires the OTN DT model to be able to describe, in a generic and abstract way, the functional mechanisms of the physical network elements and network element devices in each domain.
  • The network element equipment comes from different equipment manufacturers. Although each network element performs the same network element functions in the same network domain environment, attributes such as device batch, model, and service life differ from one network element to another, so the optical performance of equipment, components, optical fibers, and other devices changes differently in actual operation. A DT performance prediction model trained by sampling only some network elements, or a single network element, can only reflect the performance change characteristics of those network elements; it is not universal and does not generalize to other network element devices within the single domain or to network element devices in other domains.
  • The modeling pain points that federated learning can overcome are: 1. Data cannot leave the local site: data security and privacy protection prevent the data from being aggregated centrally. 2. Model generalization ability: a small labeled training data set leads to weak model generalization. 3. Model training efficiency: placing too much emphasis on centralized cloud training while ignoring the computing capabilities of edge devices leads to inefficient model training.
  • The advantages of federated learning are: 1. Only local data is used for training; the data itself is not exchanged, and the updated model parameters are exchanged in encrypted form. 2. Data from devices in different environments (times and locations) is used for model training, and the updated public model is sent back to the devices, improving model generalization. 3. The computing capabilities of edge devices are used for parallel training, improving model training efficiency.
  • The process of horizontal federated learning includes: Step 1: each participant locally computes the model gradient and uses encryption techniques such as homomorphic encryption, differential privacy, or secret sharing to mask the gradient information, and sends the masked result (the encrypted gradient) to the aggregation server. Step 2: the server performs a secure aggregation operation, for example a weighted average based on homomorphic encryption. Step 3: the server sends the aggregated result to each participant. Step 4: each participant decrypts the received gradient and uses the decrypted gradient to update its own model parameters.
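As an illustration of these four steps, the following Python sketch simulates one horizontal federated learning round. The pairwise-cancelling additive masks merely stand in for a real homomorphic-encryption or secret-sharing scheme, and all function and variable names are illustrative rather than taken from the patent.

```python
import numpy as np

def local_gradient(theta, features, labels):
    # Step 1 (participant side): gradient of a squared-error loss for a
    # stand-in linear model; the real model would be the performance predictor.
    preds = features @ theta
    return features.T @ (preds - labels) / len(labels)

def federated_round(theta, participants, lr=0.1, rng=np.random.default_rng(0)):
    """One horizontal-FL round. Masking stands in for homomorphic encryption:
    participants add random masks chosen so that the weighted sum of masks is
    zero, so the server only sees masked gradients yet aggregates exactly."""
    weights = np.array([len(y) for _, y in participants], dtype=float)
    weights /= weights.sum()
    masks = [rng.normal(size=theta.shape) / w for w in weights[:-1]]
    masks.append(-np.sum([w * m for w, m in zip(weights[:-1], masks)], axis=0)
                 / weights[-1])
    masked_grads = [local_gradient(theta, X, y) + m
                    for (X, y), m in zip(participants, masks)]
    # Step 2: secure aggregation -- weighted average; the masks cancel out.
    agg = np.sum([w * g for w, g in zip(weights, masked_grads)], axis=0)
    # Steps 3-4: broadcast the aggregate; every participant applies the update.
    return theta - lr * agg
```

For example, `theta_next = federated_round(theta, [(X1, y1), (X2, y2)])` would perform one aggregation round over two hypothetical participants.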
  • this application proposes a method and mechanism for constructing an OTN DT network function model using horizontal federated learning technology.
  • In the optical transport network performance prediction method of this application, the single-domain management server obtains model update information and reports it to the multi-domain management server, and the multi-domain management server finally generates the optical transport network digital twin model. In other words, horizontal federated learning is used to overcome the training difficulty caused by the data being distributed across the individual single domains. Because the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain, updating the performance prediction model based on the model update information brings it closer to the actual processing capabilities of the real network elements. The single-domain management server therefore reports the model update information and single-domain topology information to the multi-domain management server, and the multi-domain management server can generate the optical transport network digital twin model based on the updated performance prediction model.
  • This digital twin model of the optical transport network is consistent with the actual processing and transmission capabilities of the optical transport network, thereby establishing a simulation network for the optical transport network.
  • The implementation details of the optical transport network performance prediction method in this embodiment are described below. The following content is provided only to aid understanding and is not required to implement this solution.
  • In step 101, the single-domain management server obtains the performance prediction model.
  • the performance prediction model may be pre-stored in the single-domain management server, or may be obtained by the single-domain management server from the multi-domain management server or other electronic devices.
  • In one example, as shown in Figure 2, the optical transport network performance prediction method can be applied in a scenario where every network element in a single domain has training capability (hereinafter referred to as solution A).
  • In solution A, the performance prediction model can be a single-domain performance prediction model, and the model update information can be a single-domain digital twin model obtained from the updated single-domain performance prediction model.
  • The single-domain management server obtains the model update information of the performance prediction model as follows: it delivers the single-domain performance prediction model to each network element in the single domain and obtains the parameter update amounts of the single-domain performance prediction model reported by the network elements, where each network element trains the single-domain performance prediction model on its local performance data and computes its parameter update amount; it then updates the parameters of the single-domain performance prediction model according to the parameter update amounts reported by the network elements to obtain the updated single-domain performance prediction model, and obtains the single-domain digital twin model from the updated single-domain performance prediction model and the single-domain topology information of the domain.
  • In this embodiment, by taking the single-domain performance prediction model as the performance prediction model and the single-domain digital twin model obtained from the updated single-domain performance prediction model as the model update information, the single-domain management server can deliver the single-domain performance prediction model to each network element in the single domain, and each network element trains the single-domain performance prediction model separately, so that the updated single-domain performance prediction model fits the performance characteristics of each network element. This solves the problem that differences in device batch, model, service life, and other attributes of each network element lead to poor generalization of performance prediction model training and to over-fitting during inference, and achieves strong generalization ability and high prediction accuracy for the performance prediction model.
  • In another example, the optical transport network performance prediction method can be applied in a scenario where, within a single domain, only the single-domain management server has training capability and the other network elements do not (hereinafter referred to as solution B).
  • In solution B, the performance prediction model can be a network performance prediction model, and the model update information can be the parameter update amount of the network performance prediction model.
  • The single-domain management server can obtain the model update information of the performance prediction model as follows: it trains the network performance prediction model on the performance data of each network element to obtain the parameter update amount of the network performance prediction model; the parameter update amount is reported to the multi-domain management server, and the multi-domain management server updates the network performance prediction model according to the parameter update amounts.
  • In this embodiment, by taking the network performance prediction model as the performance prediction model and the parameter update amount of the network performance prediction model as the model update information, the single-domain management server can train the network performance prediction model itself on the performance data of each network element to obtain the parameter update amount, and the multi-domain management server integrates the parameter update amounts of the network performance prediction model from each single domain to update the network performance prediction model. Having the single-domain management server perform the model training reduces the number of devices required for training and saves computing resources, so the optical transport network performance prediction method can be implemented even when computing resources are scarce. At the same time, it also solves the problem that differences in device batch, model, service life, and other attributes of each network element lead to poor generalization of performance prediction model training and to over-fitting during inference, and achieves strong generalization ability and high prediction accuracy for the performance prediction model.
  • In step 102, the single-domain management server obtains the model update information of the performance prediction model, where the model update information is obtained by training the performance prediction model on the performance data of each network element in the single domain and is used to update the performance prediction model.
  • The model update information can be computed by the single-domain management server itself or by other network elements in the single domain where the single-domain management server is located.
  • In one example, the performance prediction model is updated according to the following formula: $\theta_{j+1} = \theta_j - \eta \sum_{k=1}^{K} \frac{n_k}{n} \nabla F_k(\theta_j)$, where $\eta$ is the learning rate.
  • Here, K devices participate in updating the performance prediction model through horizontal federated learning, the performance data sampling sequence set of device k is $P_k$ with $n_k = |P_k|$ sample sequences and total sample sequence count $n = \sum_{k=1}^{K} n_k$, and for $i \in P_k$, $\hat{x}_{t+1,k}^{i}$ is the performance data prediction vector of device k for the i-th sample sequence at time t+1 and $x_t^{i}$ is the performance data vector of the i-th sample sequence at time t.
  • A sample sequence is a set of samples of the same prediction object taken in time order from time 0 to time t.
  • In one example, device k is a network element node in solution A and the single-domain management server in solution B.
  • The performance prediction model is updated by comparing the prediction $\hat{x}_{t+1,k}^{i}$ with the actual performance data vector $x_{t+1}^{i}$ at time t+1 through the per-sequence loss $f_i(\theta)$ and the per-device average loss $F_k(\theta) = \frac{1}{n_k} \sum_{i \in P_k} f_i(\theta)$.
  • The performance data vector is sampled in four dimensions: input optical power $a_i$, output optical power $b_i$, receive-end optical attenuation $c_i$, and receive-end optical signal-to-noise ratio $d_i$.
  • The performance data vector can be sampled in any of the following three ways.
  • Method 1: sample the performance values of each optical-layer link in the four dimensions within the same sampling period, and apply a weighted-summation convolution feature extraction to the performance values of each dimension, where $\alpha_i$, $\beta_i$, $\chi_i$, $\delta_i$, $\epsilon_i$ are preset parameters of the convolution feature extraction operation and m is the number of optical-layer links of device k.
  • Method 2: concatenate the performance values of the four dimensions of every optical-layer link at time t into one long vector to obtain $x_t^{i}$ and $\hat{x}_{t+1,k}^{i}$.
  • Method 3: randomly sample the performance values of the four dimensions of the optical-layer links, where the performance values of the same sample sequence at different times within the same time period must come from the same optical-layer link.
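The following Python sketch illustrates the three sampling options described above for a node whose per-link measurements are arranged as an m-by-4 array. The exact weighted-summation rule and the weight names are assumptions, since the original formula is not reproduced in this excerpt.

```python
import numpy as np

# link_perf: shape (m, 4) -- one row per optical-layer link, columns are
# [input power a, output power b, receive-end attenuation c, receive-end OSNR d].

def sample_weighted_convolution(link_perf, alpha, beta, chi, delta):
    """Method 1: weighted-summation 'convolution' feature extraction over the
    m links, yielding a single 4-dimensional vector; the per-dimension weight
    vectors (length m) are assumed preset parameters."""
    w = np.stack([alpha, beta, chi, delta], axis=1)   # shape (m, 4)
    return np.sum(w * link_perf, axis=0)              # shape (4,)

def sample_concatenated(link_perf, m_max):
    """Method 2: concatenate the 4 per-link values of every link at time t
    into one long vector, padded with zeros up to 4 * m_max dimensions."""
    flat = link_perf.reshape(-1)
    return np.pad(flat, (0, 4 * m_max - flat.size))

def sample_random_link(link_perf, link_index):
    """Method 3: pick one link per sample sequence and keep using that same
    link for every time step of the sequence (the caller fixes link_index)."""
    return link_perf[link_index]
```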
  • In step 103, the single-domain management server reports the model update information and the single-domain topology information to the multi-domain management server, so that the multi-domain management server can generate an optical transport network digital twin model based on the model update information and the single-domain topology information; the optical transport network digital twin model is used to predict the performance of the optical transport network.
  • After receiving the model update information and single-domain topology information reported by multiple single-domain management servers, the multi-domain management server generates the optical transport network digital twin model.
  • The optical transport network digital twin model is a digital twin model of the entire optical transport network, that is, a digital twin model that includes the multiple reported single domains and is used to perform digital simulation of the optical transport network.
  • In one example, the parameters of the single-domain performance prediction model are updated according to the parameter update amounts reported by the network elements to obtain the updated single-domain performance prediction model.
  • Specifically, the single-domain management server needs to update the single-domain performance prediction model iteratively. An iterative update includes: delivering the updated single-domain performance prediction model to each network element in the single domain, obtaining the parameter update amounts reported again by the network elements, updating the single-domain performance prediction model according to the newly reported parameter update amounts, and starting the next round of iteration until the condition for stopping iteration is met.
  • The condition for stopping iteration includes: the parameter update amount of the performance prediction model converges.
  • In this embodiment, the single-domain performance prediction model is updated iteratively so that its prediction results are more consistent with the actual performance of the devices, and the optical transport network digital twin model generated by the multi-domain management server therefore delivers better simulation results.
  • In one example, the parameters of the single-domain performance prediction model are updated according to the parameter update amounts reported by the network elements using the federated aggregation formula $\theta_{j+1} = \theta_j - \eta \sum_{k=1}^{K} \frac{n_k}{n} \nabla F_k(\theta_j)$, where the single domain contains K network elements.
  • Here, the performance data sampling sequence set of network element k is $P_k$ with $n_k = |P_k|$, and for $i \in P_k$, $\hat{x}_{t+1,k}^{i}$ is the performance data prediction vector of network element k for the i-th sample sequence at time t+1 and $x_t^{i}$ is the performance data vector of the i-th sample sequence at time t.
  • A sample sequence is a set of samples of the same prediction object taken in time order from time 0 to time t, and the sampling period of a sample sequence is [0, ..., t, t+1].
  • In one example, the network element structure of each single domain in the OTN is shown in Figure 4. Assume that OTN network element node k has 6 optical-layer links, whose main optical-layer transmission performance parameters are input optical power, output optical power, receive-end optical attenuation, and optical signal-to-noise ratio (OSNR); the performance parameters are listed in Table 1, which gives the values of these four dimensions for each optical-layer link.
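A minimal data-structure sketch for the per-link performance sample described above; the field and class names are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OpticalLinkSample:
    """One optical-layer link of a network element at a single sampling time."""
    input_power_dbm: float      # incoming optical power
    output_power_dbm: float     # outgoing optical power
    rx_attenuation_db: float    # receive-end optical attenuation
    rx_osnr_db: float           # receive-end optical signal-to-noise ratio

@dataclass
class NodeSample:
    """All optical-layer links of network element k at time t (6 in the example)."""
    node_id: str
    time_index: int
    links: List[OpticalLinkSample]
```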
  • In one example, the performance prediction model is an RNN model whose parameters are $\theta = (w_a, w_x, b)$. For each sample sequence, the performance data sampled in every period is fed into the RNN model for training, where $\hat{x}_{t+1,k}^{i}$ is the performance data prediction vector of network element k for the i-th sample sequence at time t+1 and $x_t^{i}$ is the performance data vector of the i-th sample sequence at time t; the RNN model parameters are optimized so that the prediction vector approaches the sample label vector.
  • After sampling $n_k$ sample sequences, the average of the performance prediction losses obtained through the RNN model for network element node (i.e. network element) k is
    $F_k(\theta) = \frac{1}{n_k} \sum_{i \in P_k} f_i(\theta)$,
    where $f_i(\theta)$ is the prediction loss of the i-th sample sequence, $x_{t+1}^{i}$ is the sample label vector used when network element node k predicts the performance of the i-th performance data sample sequence (hereinafter, sample sequence) at time t+1, and $\nabla F_k(\theta_j)$ is the gradient of the performance prediction loss of network element node k, after sampling $n_k$ sample sequences, with respect to the parameters $\theta_j$ of the manufacturer management and control system's public performance prediction model in the j-th iteration.
  • The loss function of the OTN optical-layer performance prediction RNN public model of the manufacturer's OTN single-domain management and control system is the average of the cumulative sum of the performance prediction losses of the RNN models of the network element nodes in this network domain (i.e. single domain), defined as:
    $F(\theta) = \frac{1}{n} \sum_{k=1}^{K} \sum_{i \in P_k} f_i(\theta) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(\theta)$
  • The objective function for training the OTN optical-layer performance prediction RNN public model of the manufacturer's OTN single-domain management and control system is:
    $\min_{\theta} F(\theta)$
  • The model parameters obtained in the (j+1)-th iterative update of the OTN optical-layer performance prediction RNN public model of the manufacturer's OTN single-domain management and control system are:
    $\theta_{j+1} = \theta_j - \eta \sum_{k=1}^{K} \frac{n_k}{n} \nabla F_k(\theta_j)$
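The following sketch shows, under simplifying assumptions, how these quantities could fit together: a minimal Elman-style RNN cell with parameters (w_a, w_x, b), a squared-error loss standing in for the unspecified per-sequence loss f_i, and the sample-weighted aggregation of per-node gradients used for the public-model update. The names and the choice of squared error are assumptions, not details taken from the patent.

```python
import numpy as np

def rnn_predict(theta, sequence):
    """One-step-ahead prediction with a minimal Elman-style cell; theta is
    assumed to be (w_a, w_x, b) with the hidden size equal to the 4-dimensional
    performance vector, so the final hidden state is taken as x-hat_{t+1}."""
    w_a, w_x, b = theta
    h = np.zeros_like(b)
    for x_t in sequence:                 # sequence holds x_0 .. x_t
        h = np.tanh(w_a @ h + w_x @ x_t + b)
    return h

def node_loss(theta, sequences, labels):
    """F_k(theta): average prediction loss of node k over its n_k sequences
    (squared error assumed as the per-sequence loss f_i)."""
    errs = [np.sum((rnn_predict(theta, seq) - y) ** 2)
            for seq, y in zip(sequences, labels)]
    return np.mean(errs)

def public_model_update(theta_j, node_grads, node_counts, lr=0.01):
    """theta_{j+1}: sample-weighted aggregation of per-node gradients reported
    in iteration j, matching the update formula above."""
    n = sum(node_counts)
    agg = [sum((n_k / n) * g[p] for g, n_k in zip(node_grads, node_counts))
           for p in range(len(theta_j))]
    return tuple(t - lr * g for t, g in zip(theta_j, agg))
```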
  • Sampling method 1, "zero padding in series": taking node (i.e. network element) k as an example, the performance vectors of all 6 optical-layer links contained in node k at time t are concatenated into one long vector used as the RNN input at time t. The sample of the i-th sample sequence at time t is expressed as
    $x_t^{i} = (a_1, b_1, c_1, d_1, \dots, a_m, b_m, c_m, d_m)$,
    where m represents the maximum number of optical-layer links deployed on a single node in this domain. If the number of links on node k is less than m (that is, m > 6), the remaining 4*(m-6) elements of the vector are filled with zeros. $\hat{x}_{t+1,k}^{i}$ is the link performance prediction value of node k for the i-th sample sequence at time t+1, and the length of this vector is 4*m dimensions.
  • Sampling method 2, random sampling: taking node k as an example, the sampling method for one sample sequence is to randomly select one of the 6 optical-layer links contained in node k as the performance prediction object and training sample, and then sample the performance of that link object at each moment of the same time period in sequence. This method must ensure that the performance samples of the same sample sequence at different times within the same time period come from the same link object.
  • Sampling method 3, weighted-summation convolution feature extraction: still taking node k as an example, the sampling method for one sample sequence is to apply a weighted-summation convolution feature extraction to each dimension of the performance parameters in the performance vectors of all 6 optical-layer links contained in node k, finally obtaining a single processed 4-dimensional optical-layer link performance vector that is used as the optical-layer link performance vector of node k.
  • Each parameter in Table 1 participates in this convolution feature extraction.
  • the optical transport network performance prediction system is applied to solution A.
  • the specific steps include:
  • Step 1 Each manufacturer's single domain obtains the local OTN network performance prediction function model (i.e., single-domain digital twin model) through horizontal federated learning technology training.
  • Step 2 Each manufacturer's single domain builds the local DT network layer based on the local network topology model (i.e., single domain topology information), each network element node basic model, local network performance prediction and other functional models. And report according to the request of the multi-domain orchestration system.
  • Step 3 The multi-domain orchestration system splices the DT network layers of each domain, and then builds and generates a DT network layer model of the entire network (that is, the optical transport network digital twin model).
  • The single-domain management and control system initializes the parameter model θ0 and sends it to each node in the domain; the single-domain management and control system then determines whether the aggregated parameter update reported by the nodes tends to 0.
  • If it tends to 0, the single-domain management and control system ends the iteration. If it does not tend to 0, the single-domain management and control system computes the updated public model parameters and sends the results to each network element node, which then executes iteration round j+1.
  • Node k reports its parameter update amount to the management and control system of its domain.
  • After this domain's management and control system obtains the parameter update amounts of all K nodes, it aggregates them to update the public model parameters θj+1 and sends them to each network element node, ending this round of iteration.
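A sketch of this per-round interaction between the single-domain management and control system and its K nodes, with the "parameter update tends to 0" stop condition approximated by a norm threshold; the node interface (train_and_report, sample_count) and the aggregate callback are hypothetical, and the parameters are assumed to be flat NumPy vectors.

```python
import numpy as np

def run_single_domain_rounds(theta_0, nodes, aggregate, tol=1e-4, max_rounds=100):
    """Solution A iteration: broadcast theta_j to every node, collect the
    parameter updates each node trains locally, aggregate them, and stop once
    the aggregated update effectively converges to zero."""
    theta = theta_0
    for j in range(max_rounds):
        updates = [node.train_and_report(theta) for node in nodes]  # per-node delta-theta
        counts = [node.sample_count for node in nodes]              # n_k = |P_k|
        delta = aggregate(updates, counts)                          # weighted aggregation
        if np.linalg.norm(delta) < tol:                             # "update tends to 0"
            break
        theta = theta + delta                                       # start round j+1
    return theta
```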
  • Each network element node in a single domain has its own AI training capabilities. Through performance sampling and model training of this node, it can timely sense and predict the performance changes of this network element.
  • the single-domain management and control system also has AI training capabilities. From the perspective of federated learning technology, it plays the role of an edge server in this solution.
  • Each network element node in the single domain reports the parameter gradient (or parameter update amount) of the performance prediction model trained by its own AI algorithm to the management and control system of its domain; that management and control system aggregates the model parameter gradients (or parameter update amounts) reported by all network elements and at the same time updates the parameters of the local public performance prediction model (i.e. the single-domain performance prediction model) that it maintains.
  • the single-domain management and control system broadcasts and delivers the updated local performance prediction public model parameters to each network element node in the local domain.
  • Each network element node uses the public model parameters to refresh the network element performance prediction model parameters, and iteratively initiates the next round of model training and interaction with the single-domain management and control system accordingly.
  • This solution can solve problems such as poor generalization ability of performance prediction model training due to differences in device batches, models, and service years of each network element node in the domain, and prone to over-fitting in model prediction reasoning.
  • Each single-domain management and control system will generate the DT model of each network element in the domain (including the DT infrastructure model of the local network element, the DT performance prediction function model, etc.) based on the public performance prediction model of the domain and the network topology information of the domain obtained through final training.
  • the local OTN DT network layer is formed according to the local OTN physical network topology connection relationship and is sent to the multi-domain orchestration system.
  • the multi-domain orchestration system splices the OTN DT network layers reported by each single domain to obtain the OTN DT network layer of the entire network. Since it is a spliced DT network layer of the entire network, the performance prediction models among the single-domain parts included are different.
  • the optical transport network performance prediction system is applied to solution B.
  • the specific steps include:
  • Step 1 Multi-domain orchestration and each single domain are trained through horizontal federated learning technology to obtain a multi-domain OTN network performance prediction function model (ie, network performance prediction model).
  • Step 2 The multi-domain orchestration system generates the entire network OTN DT network layer model (i.e., optical transport network digital twin model) based on the multi-domain OTN network performance prediction public function model and the encrypted topology information reported by each single domain.
  • The multi-domain orchestration system initializes the parameter model θ0 and sends it to each single domain.
  • The multi-domain orchestration system then determines whether the aggregated parameter update reported by the single-domain management servers tends to 0. If it tends to 0, the multi-domain orchestration system ends the iteration; if it does not tend to 0, the multi-domain orchestration system computes the updated public model parameters and issues them to each single domain, and each single domain executes iteration round j+1.
  • After the multi-domain orchestration system obtains the encrypted parameter update amounts of all the single domains, it aggregates them to update the public model parameters θj+1 and sends them to each single domain, ending this round of iteration.
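A sketch of one multi-domain round under solution B; the encrypt/decrypt pair below is only a placeholder for a real additively homomorphic scheme (for example Paillier), and the domain interface (train_update, sample_count, shared_key) is hypothetical.

```python
import numpy as np

def encrypt(update, key):
    # Placeholder for homomorphic encryption: a real deployment would use a
    # scheme whose ciphertexts the orchestrator can aggregate without decrypting.
    return update + key

def decrypt(value, key):
    return value - key

def multi_domain_round(theta_j, domains, lr=0.01):
    """Solution B round: each single-domain controller trains on its own data,
    encrypts its gradient-style update, and the multi-domain orchestrator
    aggregates the encrypted updates into the public model theta_{j+1}."""
    keys = [d.shared_key for d in domains]                # assumed key material
    enc_updates = [encrypt(d.train_update(theta_j), k)    # per-domain gradient of F_k
                   for d, k in zip(domains, keys)]
    counts = np.array([d.sample_count for d in domains], dtype=float)
    weights = counts / counts.sum()
    enc_agg = np.sum([w * u for w, u in zip(weights, enc_updates)], axis=0)
    key_agg = np.sum([w * k for w, k in zip(weights, keys)], axis=0)
    agg = decrypt(enc_agg, key_agg)                       # weighted plaintext aggregate
    return theta_j - lr * agg                             # broadcast to every domain
```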
  • Suppose the entire optical transport network has K single domains, the performance data sampling sequence set of each single-domain management server for its own single domain is $P_k$, and for $i \in P_k$, $\hat{x}_{t+1,k}^{i}$ is the performance data prediction vector of single domain k for the i-th sample sequence at time t+1 and $x_t^{i}$ is the performance data vector of the i-th sample sequence at time t.
  • A sample sequence is a set of samples of the same prediction object taken in time order from time 0 to time t.
  • Each network element node in a single domain does not have AI training and modeling capabilities.
  • Each single-domain management and control system has AI training and modeling capabilities.
  • Multi-domain orchestration systems across vendors also have AI training and modeling capabilities.
  • the multi-domain orchestration system plays the role of edge server in this solution.
  • Each single-domain management and control system reports the parameter gradient (or parameter update amount) of the performance prediction model trained by its own AI algorithm to the multi-domain orchestration system using homomorphic encryption technology; the multi-domain orchestration system aggregates the encrypted model parameter gradients (or parameter update amounts) reported by all single-domain management and control systems and at the same time updates the parameters of the multi-domain public performance prediction model it maintains.
  • the multi-domain orchestration system broadcasts the updated multi-domain performance prediction public model parameters to each single-domain management and control system.
  • Each single-domain management and control system uses public model parameters to refresh the performance prediction model parameters of its own domain, and iteratively initiates the next round of model training and interaction with the multi-domain orchestration system accordingly.
  • the multi-domain orchestration system will generate the cross-domain OTN DT network layer for the entire network based on the multi-domain public performance prediction model obtained through final training and the encrypted network topology information reported by each domain.
  • the performance prediction models of each OTN domain are the same.
  • Because the domains are networked and built by different equipment manufacturers, the management and control system of each domain needs to encrypt the performance prediction model parameter gradient (or parameter update amount) of its domain before reporting it; likewise, the local network topology information reported by each domain to the multi-domain management and control system is also encrypted as necessary.
  • Embodiments of the present application also relate to an optical transport network performance prediction method applied to a multi-domain management server, as shown in Figure 11, including:
  • Step 1101: Obtain the model update information and single-domain topology information reported by the single-domain management server, where the model update information is obtained, through horizontal federated learning technology, by training the performance prediction model of the single-domain management server on the performance data of each network element in the single domain and is used to update the performance prediction model.
  • Step 1102: Generate an optical transport network digital twin model based on the model update information and the single-domain topology information, where the optical transport network digital twin model is used to predict the performance of the optical transport network.
  • the optical transport network performance prediction method of this embodiment is applied to a multi-domain management server.
  • the multi-domain management server is a server that manages all single-domain management servers in the optical transport network.
  • A multi-domain orchestration system can run on the multi-domain management server to implement the optical transport network performance prediction method.
  • the performance prediction model is a single-domain performance prediction model
  • the model update information is a single-domain digital twin model obtained based on the updated single-domain performance prediction model.
  • In this case, the multi-domain management server generates the optical transport network digital twin model from the model update information and the single-domain topology information as follows: based on the single-domain topology information, the single-domain digital twin models reported by the single-domain management servers of the individual single domains are spliced together to obtain the optical transport network digital twin model.
  • the performance prediction model is a network performance prediction model
  • the model update information is the parameter update amount of the network performance prediction model.
  • In this case, the multi-domain management server generates the optical transport network digital twin model from the model update information and the single-domain topology information as follows: it updates the network performance prediction model according to the parameter update amounts of the network performance prediction model, and generates the optical transport network digital twin model based on the updated network performance prediction model and the single-domain topology information.
  • The multi-domain management server updates the network performance prediction model according to the parameter update amounts of the network performance prediction model in the same way that the parameters of the single-domain performance prediction model are updated according to the parameter update amounts reported by the network elements. Specifically, suppose K devices (here, the single domains) participate, the performance data sampling sequence set of device k is $P_k$, and for $i \in P_k$, $\hat{x}_{t+1,k}^{i}$ is the performance data prediction vector of device k for the i-th sample sequence at time t+1 and $x_t^{i}$ is the performance data vector of the i-th sample sequence at time t; the public model parameters are updated as $\theta_{j+1} = \theta_j - \eta \sum_{k=1}^{K} \frac{n_k}{n} \nabla F_k(\theta_j)$.
  • A sample sequence is a set of samples of the same prediction object taken in time order from time 0 to time t.
  • this embodiment is an embodiment corresponding to the above-mentioned embodiment, and this embodiment can be implemented in cooperation with the above-mentioned embodiment.
  • the relevant technical details mentioned in the above embodiment are still valid in this embodiment, and will not be described again in order to reduce duplication.
  • the relevant technical details mentioned in this embodiment can also be applied to the above embodiments.
  • Embodiments of the present application also relate to an optical transport network performance prediction system, as shown in Figure 12, including:
  • Single domain management server 1201, multi-domain management server 1202; single domain management server 1201 and multi-domain management server 1202 are communicatively connected;
  • The single-domain management server 1201 is used to obtain the performance prediction model; obtain the model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in the single domain; and report the parameter update amount and the single-domain topology information to the multi-domain management server 1202, so that the multi-domain management server 1202 can generate an optical transport network digital twin model.
  • The multi-domain management server 1202 is used to obtain the model update information and single-domain topology information reported by the single-domain management server 1201, where the model update information is obtained, through horizontal federated learning technology, by training the performance prediction model of the single-domain management server 1201 on the performance data of each network element in the single domain; and to generate the optical transport network digital twin model based on the model update information and the single-domain topology information, where the optical transport network digital twin model is used to predict the performance of the optical transport network.
  • this embodiment is a system embodiment corresponding to the above-mentioned embodiment, and this embodiment can be implemented in cooperation with the above-mentioned embodiment.
  • the relevant technical details mentioned in the above embodiment are still valid in this embodiment, and will not be described again in order to reduce duplication.
  • the relevant technical details mentioned in this embodiment can also be applied to the above embodiments.
  • Embodiments of the present application also relate to an electronic device, as shown in Figure 13, including: at least one processor 1301; and a memory 1302 communicatively connected to the at least one processor 1301, where the memory 1302 stores instructions executable by the at least one processor 1301, and the instructions are executed by the at least one processor 1301 to execute the method of any of the above embodiments.
  • the memory 1302 and the processor 1301 are connected using a bus.
  • the bus may include any number of interconnected buses and bridges.
  • the bus connects various circuits of one or more processors 1301 and the memory 1302 together.
  • the bus may also connect various other circuits together such as peripherals, voltage regulators, and power management circuits, which are all well known in the art and therefore will not be described further herein.
  • the bus interface provides the interface between the bus and the transceiver.
  • a transceiver may be one element or may be multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium.
  • the information processed by the processor 1301 is transmitted on the wireless medium through the antenna. Further, the antenna also receives the information and transmits the information to the processor 1301.
  • Processor 1301 is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
  • Memory 1302 may be used to store information used by the processor when performing operations.
  • Embodiments of the present application relate to a computer-readable storage medium storing a computer program.
  • the above method embodiments are implemented when the computer program is executed by the processor.
  • The program is stored in a storage medium and includes several instructions to cause a device (which may be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of this application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application relate to the field of communications and disclose an optical transport network performance prediction method, system, electronic device, and storage medium. In the present application, the optical transport network performance prediction method includes the following steps: obtaining a performance prediction model; obtaining model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain and is used to update the performance prediction model; and reporting the model update information and single-domain topology information to a multi-domain management server, so that the multi-domain management server generates an optical transport network digital twin model based on the model update information and the single-domain topology information, the optical transport network digital twin model being used to predict the performance of the optical transport network.

Description

Optical transport network performance prediction method, system, electronic device, and storage medium
Related Application
This application claims priority to Chinese patent application No. 202210234032.2, filed on March 10, 2022.
Technical Field
Embodiments of the present application relate to the field of communications, and in particular to an optical transport network performance prediction method, system, electronic device, and storage medium.
Background
With the development of communication technology, the global communications industry is moving from the Internet era and the cloud era to the intelligent era, and new opportunities and challenges are driving the accelerated, comprehensive transformation and upgrading of networks. To realize the digital transformation of operator networks and provide vertical industries and consumer users with zero-wait, zero-touch, zero-fault services and user experience, autonomous network technology has been proposed to create self-configuring, self-healing, and self-optimizing network capabilities for the entire life cycle of operators' network operations. Realizing the digital transformation of the network requires establishing a digital mirror of the network's internal and external environment that integrates capabilities such as simulation and preventive prediction; how to establish such a simulation network is a problem that urgently needs to be solved.
Summary
The main purpose of the embodiments of this application is to propose an optical transport network performance prediction method, system, electronic device, and storage medium, which can establish a simulation network for the optical transport network.
To achieve the above purpose, an embodiment of the present application provides an optical transport network performance prediction method applied to a single-domain management server, including the following steps: obtaining a performance prediction model; obtaining model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain and is used to update the performance prediction model; and reporting the model update information and single-domain topology information to a multi-domain management server, so that the multi-domain management server generates an optical transport network digital twin model based on the model update information and the single-domain topology information, the optical transport network digital twin model being used to predict the performance of the optical transport network.
To achieve the above purpose, an embodiment of the present application further provides an optical transport network performance prediction method applied to a multi-domain management server, including the following steps: obtaining model update information and single-domain topology information reported by a single-domain management server, where the model update information is obtained, through horizontal federated learning technology, by training the performance prediction model of the single-domain management server on the performance data of each network element in the single domain and is used to update the performance prediction model; and generating an optical transport network digital twin model based on the model update information and the single-domain topology information, where the optical transport network digital twin model is used to predict the performance of the optical transport network.
To achieve the above purpose, an embodiment of the present application further provides an optical transport network performance prediction system, including a single-domain management server and a multi-domain management server that are communicatively connected. The single-domain management server is used to obtain the performance prediction model; obtain the model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in the single domain; and report the parameter update amount and the single-domain topology information to the multi-domain management server, so that the multi-domain management server can generate an optical transport network digital twin model. The multi-domain management server is used to obtain the model update information and single-domain topology information reported by the single-domain management server, where the model update information is obtained, through horizontal federated learning technology, by training the performance prediction model of the single-domain management server on the performance data of each network element in the single domain; and to generate the optical transport network digital twin model based on the model update information and the single-domain topology information, where the optical transport network digital twin model is used to predict the performance of the optical transport network.
Embodiments of the present application further provide an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above optical transport network performance prediction method.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above optical transport network performance prediction method.
In the optical transport network performance prediction method proposed in this application, the single-domain management server obtains model update information and reports it to the multi-domain management server, and the multi-domain management server finally generates the optical transport network digital twin model; that is, horizontal federated learning is used to overcome the training difficulty caused by the data being distributed across the individual single domains. Because the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain, updating the performance prediction model based on the model update information brings it closer to the actual processing capabilities of the real network elements. The single-domain management server therefore reports the model update information and single-domain topology information to the multi-domain management server, and the multi-domain management server can generate the optical transport network digital twin model based on the updated performance prediction model. This digital twin model of the optical transport network is consistent with the actual processing and transmission capabilities of the optical transport network, thereby establishing a simulation network for the optical transport network.
Brief Description of the Drawings
Figure 1 is a schematic flowchart of the optical transport network performance prediction method applied to a single-domain management server according to an embodiment of the present application;
Figure 2 is a first schematic diagram of the architecture of an optical transport network performance prediction system according to an embodiment of the present application;
Figure 3 is a second schematic diagram of the architecture of an optical transport network performance prediction system according to an embodiment of the present application;
Figure 4 is a schematic diagram of a network element structure according to an embodiment of the present application;
Figure 5 is a schematic diagram of a performance prediction model according to an embodiment of the present application;
Figure 6 is a schematic diagram of performance data sampling according to an embodiment of the present application;
Figure 7 is a first schematic flowchart of a performance prediction model training method according to an embodiment of the present application;
Figure 8 is a second schematic flowchart of a performance prediction model training method according to an embodiment of the present application;
Figure 9 is a third schematic flowchart of a performance prediction model training method according to an embodiment of the present application;
Figure 10 is a fourth schematic flowchart of a performance prediction model training method according to an embodiment of the present application;
Figure 11 is a schematic flowchart of the optical transport network performance prediction method applied to a multi-domain management server according to an embodiment of the present application;
Figure 12 is a schematic structural diagram of an optical transport network performance prediction system according to an embodiment of the present application;
Figure 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art can understand that many technical details are presented in the embodiments of this application to help the reader better understand the application; even without these technical details and the various changes and modifications based on the following embodiments, the technical solutions claimed in this application can still be implemented. The division into the following embodiments is for convenience of description and should not constitute any limitation on the specific implementation of this application, and the embodiments can be combined with and refer to each other provided they do not contradict one another.
An embodiment of the present application relates to an optical transport network performance prediction method, as shown in Figure 1, including the following steps:
Step 101: obtain a performance prediction model;
Step 102: obtain model update information of the performance prediction model through horizontal federated learning technology, where the model update information is obtained by training the performance prediction model on the performance data of each network element in a single domain and is used to update the performance prediction model;
Step 103: report the model update information and single-domain topology information to a multi-domain management server, so that the multi-domain management server generates an optical transport network digital twin model based on the model update information; the optical transport network digital twin model is used to predict the performance of the optical transport network.
The optical transport network performance prediction method of this embodiment is applied to a single-domain management server. The single-domain management server is the management server that manages the network elements in one domain of the optical transport network, and each single-domain management server can run the single-domain management and control system of the vendor corresponding to that single domain. The optical transport network (OTN) can be divided horizontally into different management domains, where a single management domain can be composed of the OTN equipment of a single equipment manufacturer, or of a certain network or sub-network of the operator. A single management domain is a single domain. Each single domain has a single-domain management server used to communicate with the multi-domain management server, and a multi-domain orchestration system can run on the multi-domain management server to generate the optical transport network digital twin model. The optical transport network digital twin model can predict the performance of the optical transport network or serve other data simulation needs.
全球通信产业正在从互联时代、云时代迈向智能时代,新机遇与新挑战驱动网络加速全面转型升级。在此背景下,2019年自智网络(Autonomous Networks,简称“AN”)的理念被提出,AN理念旨在通过网络技术和数字技术融合,使能运营商网络数字化转型,为垂直行业和消费者用户提供零等待、零接触、零故障的服务和用户体验,为运营商网络营运全生命周期打造自配置、自修复、自优化的网络能力。为实现AN,数字孪生(Digital Twins,简称“DT”)技术被提出,被业界认为是数字化转型的基础、实现AN架构及技术的重要支撑和组成部分。网络数字孪生通过对自身及外部环境状态的精准感知,建立内外部环境的数字镜像,融合仿真与预防预测等能力,在“低成本试错”、“智能决策”、“预测性维护”等场景发挥关键使能技术作用。
数字孪生即DT技术可以理解为——数字空间中构建物理对象的数字(虚拟)模型,并利用来自物理对象的数据不断修正该模型和更新模型状态,使其与物理对象在全生命周期保持一致,并能高保真地镜像物理对象的实时运行状态,该模型就成为了物理对象的数字孪生体(简称数字孪生)。基于数字孪生可以进行监测、分析、预测、诊断、训练、仿真,并将仿真结果反馈给物理对象,从而帮助对物理对象进行优化和决策。涉及数字孪生模型构造、数字模型状态的实时更新、基于数字孪生的仿真分析和控制决策等相关技术则可统称为数字孪生DT技术。由此可以看出,如何结合被孪生镜像的物理对象的结构特征、状态特征及DT的应用场景,构建DT模型是DT的关键技术。
在自智网络体系架构下,需要在OTN多域复杂网络环境中,构建DT网络模型,生成OTN DT网络层,以通过DT技术实现对整网OTN的分析能力。DT网络层是对整个跨域物理网络的模型抽象,DT网元间通信不受物理网络空间限制、对DT网元的可视性与可操作性也不受物理网络分域等空间限制、管控限制。这就需要OTN DT模型具备对各域物理网元和网元器件功能机制的通用抽象描述能力。
以单域OTN网络域内,对属于DT感知类算法模型的OTN性能预测算法建模为例,网元设备来自不同设备厂商,各网元设备虽然在同一网络域环境中具备相同的网元运行功能,但各网元自身的器件批次、型号、使用年限等属性不尽相同,造成设备器件、光纤等光性能在实际运行使用中变化规律不一,仅对部分网元或单一网元器件采样训练获得的DT性能预测模型仅能反映这一部分网元或这一网元的性能变化特征,对于单域内其它网元设备、或其它跨域网络网元设备而言不具备普适性,泛化能力差,在推理过程中容易出现过拟合等问题,由此造成对OTN网络性能预测分析的准确度低、权威性不足。但如果将各域各网元性能数据上报给多域编排系统进行统一训练,又存在数据隐私泄密的问题:由于各OTN物理网络域分属不同设备厂商的单域管控体系,在构建跨域数字孪生层模型训练中,各物理单域有数据隐私诉求,不便于将各域内部的OTN网络性能数据直接上报给多域编排系统进行统一训练。如何既解决局部训练造成的性能预测功能模型过拟合问题,又解决集中训练引起的样本数据隐私泄密问题,成为OTN DT case功能模型建模的瓶颈,亟待解决。联邦学习技术的出现为解决上述问题提供了一种有效途径。
联邦学习能克服的建模痛点在于:1、数据不出本地:由于数据安全、隐私保护而导致的数据不出本地,无法汇聚的问题突出。2、模型泛化能力:由于经过标注的训练数据集小而导致的模型泛化能力弱的问题。3、模型训练效率:由于过于重视集中式云计算训练而忽视边缘设备计算能力而导致的模型训练效率问题。
联邦学习的优势在于:1、只利用本地数据训练,不交换数据本身,用加密方式交换更新的模型参数。2、利用设备在不同环境(时间、地点)的数据进行模型训练,公共模型更新后下发至设备,提升模型泛化能力。3、利用边缘设备计算能力进行并行训练,提升模型训练效率。
横向联邦学习的流程包括:步骤1、各参与方在本地计算模型梯度,并使用同态加密、差分隐私或秘密共享等加密技术,对梯度信息进行掩饰,并将掩饰后的结果(简称为加密梯度)发送给聚合服务器。步骤2、服务器进行安全聚合(secure aggregation)操作,如使用基于同态加密的加权平均。步骤3、服务器将聚合后的结果发送给各参与方。步骤4、各参与方对收到的梯度进行解密,并使用解密后的梯度结果更新各自的模型参数。综上所述,本申请提出了一种采用横向联邦学习技术构建OTN DT网络功能模型的方法机制。
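为便于理解上述四个步骤的交互关系,下面给出一个仅作示意的Python极简实现;其中mask/unmask只是占位函数,用来代表同态加密、差分隐私等任一掩饰手段,并非真实加密实现,参与方本地训练也简化为以线性模型为例的单步梯度计算,这些均为便于说明所作的假设,并非对本申请方案的限定。

```python
import numpy as np

def mask(update):        # 占位:实际系统中应替换为同态加密/差分隐私等掩饰手段
    return update

def unmask(update):      # 占位:与 mask 对应的解掩饰操作
    return update

def local_step(w, X, y, lr=0.01):
    """步骤1:参与方本地计算梯度,并返回掩饰后的参数更新量。"""
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)     # 以线性模型为例的本地梯度
    return mask(-lr * grad)              # 参数更新量 = -a * g_k

def secure_aggregate(masked_updates, weights):
    """步骤2:聚合服务器按权重(如样本量占比)对各参与方更新做加权聚合。"""
    return sum(w * u for w, u in zip(weights, masked_updates))

def apply_round(w, masked_agg):
    """步骤3、4:服务器下发聚合结果,参与方解掩饰后更新本地模型参数。"""
    return w + unmask(masked_agg)
```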
本申请光传送网性能预测方法,由单域管理服务器获取模型更新信息,上报至多域管理服务器,由多域管理服务器最终生成光传送网数字孪生模型,即是以横向联邦学习克服数据分布在各单域导致的训练困难问题。由于模型更新信息,是对性能预测模型基于单域中的各网元性能数据进行训练后得到,所以基于模型更新信息,性能预测模型的更新可以与实际网元的实际处理能力更加贴合。因此,单域管理服务器将模型更新信息和单域拓扑信息,上报至多域管理服务器,多域管理服务器可以基于更新后的性能预测模型,生成光传送网数字孪生模型,此光传送网数字孪生模型也是与光传送网络中的实际处理传输能力贴合的模型,从而实现光传送网仿真网络的建立。
下面对本实施例的光传送网性能预测方法实现细节进行具体的说明,以下内容仅为方便理解提供的实现细节,并非实施本方案的必须。
在步骤101中,单域管理服务器获取性能预测模型。其中,性能预测模型可以是预先存储在单域管理服务器中的,也可以是由单域管理服务器从多域管理服务器处或者其他电子设备获得。
在一个例子中,如图2所示,光传送网性能预测方法可以应用在单域中每个网元都有训练能力的场景中(以下简称为方案A),在方案A中性能预测模型可以是单域性能预测模型,模型更新信息可以是基于更新后的所述单域性能预测模型得到的单域数字孪生模型,单域管理服务器通过以下方式实现获取性能预测模型的模型更新信息:将单域性能预测模型下发至本单域中的各个网元,获取各个网元上报的单域性能预测模型的参数更新量;其中,各个网元根据本地性能数据,分别对单域性能预测模型进行训练,并计算得到参数更新量;根据各个网元上报的参数更新量,更新单域性能预测模型的参数,得到更新后的单域性能预测模型,根据所述更新后的所述单域性能预测模型,和本单域的单域拓扑信息,得到所述单域数字孪生模型。
本实施例中,通过以单域性能预测模型为性能预测模型,以基于更新后的所述单域性能预测模型得到的单域数字孪生模型为模型更新信息,单域管理服务器可以将单域性能预测模型下发至本单域中的各个网元,由各个网元分别对单域性能预测模型进行训练,从而使更新后的单域性能预测模型,贴合各个网元的性能特点,并且可以解决各网元自身的器件批次、型号、使用年限等属性不尽相同,造成性能预测模型训练泛化能力差,在推理过程中容易出现过拟合等问题,实现性能预测模型泛化能力强,预测准确度高等效果。
在另一个例子中,光传送网性能预测方法可以应用在单域中仅有单域管理服务器有训练能力,其他网元没有训练能力的场景中(以下简称为方案B),在方案B中性能预测模型可以是网络性能预测模型,模型更新信息可以是网络性能预测模型的参数更新量,单域管理服务器可以通过以下方式实现获取性能预测模型的模型更新信息:根据各网元性能数据,对网络性能预测模型进行训练,得到网络性能预测模型的参数更新量;其中,网络性能预测模型的参数更新量用于上报至多域管理服务器,供多域管理服务器根据网络性能预测模型的参数更新量,更新网络性能预测模型。
本实施例中,通过以网络性能预测模型为性能预测模型,以网络性能预测模型的参数更新量为模型更新信息,单域管理服务器可以自行根据各网元性能数据对网络性能预测模型进行训练,得到网络性能预测模型的参数更新量,由多域管理服务器综合各单域的网络性能预测模型的参数更新量,更新网络性能预测模型,由单域管理服务器进行模型训练,可以减少训练所需的设备数,节约计算资源,即使在计算资源较为匮乏的场景也可以实现光传送网性能预测方法,同时也可以解决各网元自身的器件批次、型号、使用年限等属性不尽相同,造成性能预测模型训练泛化能力差,在推理过程中容易出现过拟合等问题,实现性能预测模型泛化能力强,预测准确度高等效果。
在步骤102中,单域管理服务器获取性能预测模型的模型更新信息;其中,模型更新信息,是对性能预测模型基于单域中的各网元性能数据进行训练后得到,用于对性能预测模型进行更新。模型更新信息可以是由单域管理服务器计算出的,也可以是由单域管理服务器所在单域中的其他网元计算得到。
在一个例子中,性能预测模型的更新,根据以下公式实现:
$$\omega_{j+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,\omega_{j+1}^{k},\qquad \omega_{j+1}^{k}=\omega_{j}+\Delta\omega_{j+1}^{k}=\omega_{j}-a\,g_{k}$$

其中,设共有K个设备以横向联邦学习的方式,进行所述性能预测模型的更新,每个设备的性能数据采样序列集为 $P_k$;对于 $i\in P_k$,$\hat{y}_{k,i}^{t+1}$ 是设备k对第i个样本序列在t+1时刻的性能数据预测向量,$x_{k,i}^{t}$ 是t时刻第i个样本序列的性能数据向量,所述样本序列是指从0到t时刻对同一预测对象按时序采样的样本集合,一个样本序列的采样周期为[0...t,t+1],样本序列量为 $n_k=|P_k|$,则单域的总样本序列量为 $n=\sum_{k=1}^{K}n_k$;$\omega_j$、$\omega_{j+1}$ 分别是所述性能预测模型在第j轮迭代和第j+1轮迭代后得到的参数取值,$\omega_{j+1}^{k}$ 是在横向联邦学习过程中,所述设备k第j+1轮迭代得到的性能预测模型的参数取值,$\Delta\omega_{j+1}^{k}=-a\,g_{k}$ 是所述设备k计算得到的性能预测模型的参数 $\omega_j$ 更新至 $\omega_{j+1}$ 的参数更新量,a是性能预测模型的学习率,$g_k$ 是 $\omega_j$ 的梯度。其中,设备k在方案A中为网元节点,在方案B中为单域管理服务器。
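作为对上式的参考说明,下面给出一个基于NumPy的示意性实现,假设模型参数已展平为一维向量、梯度g_k由外部给出;代码仅演示本地更新与按样本序列量加权聚合的计算方式,函数名与数值均为示例性假设。

```python
import numpy as np

def local_update(w_j, g_k, a):
    """设备k的本地更新: w_{j+1}^k = w_j - a * g_k,返回新参数及参数更新量。"""
    w_k = w_j - a * g_k
    return w_k, w_k - w_j          # 参数更新量 Δw_{j+1}^k = -a * g_k

def federated_average(w_j, deltas, n_ks):
    """按各设备样本序列量 n_k 加权聚合: w_{j+1} = w_j + Σ (n_k/n) Δw^k。"""
    n = sum(n_ks)
    agg = sum((n_k / n) * d for n_k, d in zip(n_ks, deltas))
    return w_j + agg

# 用法示意: 3个设备,参数维度为4
w_j = np.zeros(4)
grads = [np.random.randn(4) for _ in range(3)]
n_ks = [120, 80, 200]
deltas = [local_update(w_j, g, a=0.01)[1] for g in grads]
w_next = federated_average(w_j, deltas, n_ks)
```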
具体地,性能预测模型的更新,基于 $\hat{y}_{k,i}^{t+1}$ 与t+1时刻的实际性能数据向量 $y_{k,i}^{t+1}$ 的损失函数和 $F_k(\omega)$ 实现,其中,

$$F_k(\omega)=\frac{1}{n_k}\sum_{i\in P_k}\mathcal{L}\big(\hat{y}_{k,i}^{t+1},\,y_{k,i}^{t+1}\big)$$

所述性能数据向量 $x_{k,i}^{t}$ 从入光功率 $a_l$、出光功率 $b_l$、收端光衰 $c_l$ 和收端光信噪比 $d_l$ 四个维度进行采样取值(l为光层链路编号),所述性能数据向量,通过以下三种方式的任意一种进行采样:

方式一,从所述四个维度依次采样各光层链路在同一采样周期内的性能值,对每个维度的性能值进行加权求和的卷积化特征提取处理,所述卷积化特征提取处理得到的 $x_{k,i}^{t}$ 及其预测向量 $\hat{y}_{k,i}^{t+1}$ 如下:

$$x_{k,i}^{t}=\Big(\sum_{l=1}^{m}\alpha_{l}a_{l}^{t},\ \sum_{l=1}^{m}\beta_{l}b_{l}^{t},\ \sum_{l=1}^{m}\gamma_{l}c_{l}^{t},\ \sum_{l=1}^{m}\delta_{l}d_{l}^{t}\Big)$$

其中,$\hat{y}_{k,i}^{t+1}$ 是与 $x_{k,i}^{t}$ 同维度(4维)的t+1时刻预测向量,$\alpha_{l},\beta_{l},\gamma_{l},\delta_{l}$ 是卷积化特征提取处理运算中的预设参数,m是所述设备k中,光层链路的条数;

方式二,将各所述光层链路的所述四个维度在t时刻的性能值,串联成长向量,得到的 $x_{k,i}^{t}$ 及其预测向量 $\hat{y}_{k,i}^{t+1}$ 如下:

$$x_{k,i}^{t}=\big(a_{1}^{t},b_{1}^{t},c_{1}^{t},d_{1}^{t},\ a_{2}^{t},b_{2}^{t},c_{2}^{t},d_{2}^{t},\ \dots,\ a_{m}^{t},b_{m}^{t},c_{m}^{t},d_{m}^{t}\big)$$

其中,m是所述设备k中,光层链路的条数,$a_{1}^{t}$ 是l=1即第一条光层链路在t时刻的入光功率取值,$b_{1}^{t}$、$c_{1}^{t}$、$d_{1}^{t}$ 及其余各链路的分量同理,$\hat{y}_{k,i}^{t+1}$ 是与 $x_{k,i}^{t}$ 同维度(4*m维)的t+1时刻预测向量;

方式三,将各所述光层链路的所述四个维度的性能值进行随机抽样取值,其中,同一样本序列在同一时间周期内不同时刻的性能取值来自于同一条光层链路。
在步骤103中,单域管理服务器将模型更新信息和单域拓扑信息,上报至多域管理服务器,供多域管理服务器基于模型更新信息和单域拓扑信息,生成光传送网数字孪生模型,光传送网数字孪生模型,用于对光传送网进行性能预测。其中,多域管理服务器接收到多个单域管理服务器上报的模型更新信息和单域拓扑信息后,生成光传送网数字孪生模型,光传送网数字孪生模型是整个光传送网的数字孪生模型,即,包括上报的多个单域在内的数字孪生模型,以对光传送网进行数字模拟仿真。
在一个例子中,根据各个网元上报的所述参数更新量,更新所述单域性能预测模型的参数,得到所述更新后的所述单域性能预测模型,单域管理服务器具体需要对单域性能预测模型进行迭代更新;其中,迭代更新包括:将更新的单域性能预测模型下发给本单域内的各个网元,获取各个网元再次上报的参数更新量,根据再次上报的参数更新量对单域性能预测模型进行更新,并发起下一轮迭代更新,直到满足停止迭代条件。其中,停止迭代条件包括:性能预测模型的参数更新量收敛。
本实施例中,通过对所述单域性能预测模型进行迭代更新,从而使单域性能预测模型的预测效果更加贴合设备实际性能,来实现单域性能预测模型的更新,使得多域管理服务器生成的光传送网数字孪生模型有更好的仿真效果。
在一实施方式中,根据各个网元上报的参数更新量,更新单域性能预测模型的参数,根据以下公式实现:
$$\omega_{j+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,\omega_{j+1}^{k}$$

其中,设单域有K个网元,每个网元的性能数据采样序列集为 $P_k$;对于 $i\in P_k$,$\hat{y}_{k,i}^{t+1}$ 是网元k对第i个样本序列在t+1时刻的性能数据预测向量,$x_{k,i}^{t}$ 是t时刻第i个样本序列的性能数据向量,样本序列是指:从0到t时刻对同一预测对象按时序采样的样本集合,一个样本序列的采样周期为[0...t,t+1],样本序列量为 $n_k=|P_k|$,则单域的总样本序列量为 $n=\sum_{k=1}^{K}n_k$。
在一个具体实施例子中,OTN中各单域的网元构造如图4所示,假设OTN网元节点k有6条光层链路,其主要光层传输性能参数包括入光功率、出光功率、收端光衰、光信噪比(Optical Signal Noise Ratio,简称“OSNR”),则性能参数表示如下表所示:
表一
  Link1 Link2 Link3 Link4 Link5 Link6
入光功率 a1 a2 a3 a4 a5 a6
出光功率 b1 b2 b3 b4 b5 b6
收端光衰 c1 c2 c3 c4 c5 c6
收端OSNR d1 d2 d3 d4 d5 d6
以循环神经网络(Recurrent Neural Network,简称"RNN")模型为例,性能预测功能模型建模如图5所示:RNN模型参数是ω(w_a,w_x,b),样本序列中每个周期采样到的性能数据都要输入至RNN模型中供模型训练使用,其中,$\hat{y}_{k,i}^{t+1}$ 是网元k对第i个样本序列在t+1时刻的性能数据预测向量,$x_{k,i}^{t}$ 是t时刻第i个样本序列的性能数据向量,RNN算法模型参数需要向预测向量 $\hat{y}_{k,i}^{t+1}$ 损失减小的方向改进,网元节点(即网元)k在采样 $n_k$ 个样本序列后,通过RNN算法模型得出的性能预测损失函数和的平均式 $F_k(\omega)$ 如下:

$$F_k(\omega)=\frac{1}{n_k}\sum_{i\in P_k}\mathcal{L}\big(\hat{y}_{k,i}^{t+1},\,y_{k,i}^{t+1}\big)$$

其中,$y_{k,i}^{t+1}$ 是网元节点k对第i个性能数据样本序列(以下简称样本序列)在t+1时刻性能预测的样本标记向量,$g_{k}=\nabla_{\omega_{j}}F_{k}(\omega_{j})$ 是网元节点k在采样 $n_k$ 个样本序列后的性能预测损失函数和对该厂商管控系统的性能预测公共模型第j次迭代的参数 $\omega_j$ 的梯度,$\Delta\omega_{j+1}^{k}=-a\,g_{k}$ 是由网元节点k的第j+1次迭代上传给该厂商管控系统(即单域管理服务器)的该网元的模型参数更新量,a是模型的学习率。该厂商OTN单域管控系统的OTN光层性能预测RNN算法公共模型的损失函数是本网络域(即单域)各网元节点RNN算法模型的性能预测损失函数累加和的平均,定义为:

$$f(\omega)=\sum_{k=1}^{K}\frac{n_k}{n}F_k(\omega)$$

该厂商OTN单域管控系统的OTN光层性能预测RNN算法公共模型训练的目标函数即为

$$\min_{\omega}f(\omega)$$

该厂商OTN单域管控系统的OTN光层性能预测RNN算法公共模型第j+1次迭代更新获得的模型参数即为:

$$\omega_{j+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,\omega_{j+1}^{k}=\omega_{j}+\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$
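为便于理解上述RNN建模方式,下面给出一个不依赖深度学习框架的NumPy示意实现:隐藏状态按 h_t = tanh(W_a·h_{t-1} + W_x·x_t + b) 递推,再经一个线性输出层给出t+1时刻的性能预测,损失以均方误差为例;其中隐藏维度、输出层结构等均为示例性假设,并非对模型结构的限定。

```python
import numpy as np

class SimpleRNNPredictor:
    """极简RNN性能预测模型示意: 参数 ω = (W_a, W_x, b, W_o, b_o)。"""

    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_a = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # 隐状态递推权重
        self.W_x = rng.normal(0, 0.1, (hidden_dim, in_dim))      # 输入权重
        self.b   = np.zeros(hidden_dim)
        self.W_o = rng.normal(0, 0.1, (in_dim, hidden_dim))      # 输出层: 预测t+1时刻性能向量
        self.b_o = np.zeros(in_dim)

    def predict(self, sequence):
        """sequence: [x_0, ..., x_t],每个x为一个性能数据向量,返回t+1时刻的预测向量。"""
        h = np.zeros(self.b.shape)
        for x in sequence:
            h = np.tanh(self.W_a @ h + self.W_x @ x + self.b)
        return self.W_o @ h + self.b_o

def mse_loss(y_pred, y_true):
    """单个样本序列的性能预测损失;F_k(ω) 为各样本序列损失的平均。"""
    return float(np.mean((y_pred - y_true) ** 2))
```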
考虑光层链路性能向量采样,由于组网结构设计原因,各网元节点的光层链路数互不相同,为保证横向联邦学习对样本特征向量维度、元素属性一致性的要求,继而保证所有网元节点及单域管控系统的RNN模型输入的一致性,故而对方案A、B设计如下三种采样方案。
采样方法1,“串联补零”:以节点(即网元)k为例,可把节点k包含的所有6条光层链路在t时刻的性能向量串联构建一个长向量作为t时刻RNN的输入,第i个样本序列在第t时刻的样本表示为:
$$x_{k,i}^{t}=\big(a_{1}^{t},b_{1}^{t},c_{1}^{t},d_{1}^{t},\ a_{2}^{t},b_{2}^{t},c_{2}^{t},d_{2}^{t},\ \dots,\ a_{6}^{t},b_{6}^{t},c_{6}^{t},d_{6}^{t},\ 0,\dots,0\big)$$

其中,$x_{k,i}^{t}$ 是节点k第i个样本序列在t时刻的链路性能向量采样,该向量的长度是4*m维。此处,m表示本域内,单个节点上所部署的最多的光层链路条数。此时,如果节点k上的链路条数小于m,即m>6,则该向量剩余4*(m-6)维的元素做补零处理。$\hat{y}_{k,i}^{t+1}$ 是节点k对第i个样本序列在t+1时刻的链路性能预测值,该向量的长度同样是4*m维,补零处理方式与采样向量相同。$y_{k,i}^{t+1}$ 是节点k对第i个样本序列在t+1时刻性能预测的样本标记,该向量维数与上述预测值向量的维数相同。则节点k在采样 $n_k$ 个样本序列后,通过RNN算法模型得出的性能预测损失函数和 $F_k(\omega)$ 按下式计算:

$$F_k(\omega)=\frac{1}{n_k}\sum_{i\in P_k}\mathcal{L}\big(\hat{y}_{k,i}^{t+1},\,y_{k,i}^{t+1}\big)$$
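下面给出"串联补零"采样的一个示意性Python函数,假设每条光层链路在t时刻的性能值以(入光功率,出光功率,收端光衰,收端OSNR)四元组给出,m为本域单节点最多链路条数;函数名与数据组织方式均为示例性假设。

```python
import numpy as np

def concat_pad_sample(link_vectors, m):
    """串联补零: 将节点各链路4维性能向量串联成4*m维长向量,不足部分补零。

    link_vectors: 形如 [(a1,b1,c1,d1), (a2,b2,c2,d2), ...] 的链路性能列表
    m:            本域内单个节点所部署的最多光层链路条数
    """
    x = np.zeros(4 * m)
    for idx, v in enumerate(link_vectors):
        x[4 * idx: 4 * idx + 4] = v       # 第idx条链路的4维性能值
    return x

# 用法示意: 节点k有6条链路,本域最大链路数 m = 8,剩余 4*(8-6) 维补零
links_t = [(1.0, 0.8, 0.2, 18.5)] * 6
x_t = concat_pad_sample(links_t, m=8)     # 长度为32的输入向量
```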
采样方法2,随机抽样:以节点k为例,对同一个样本序列的采样方法:对节点k包含的所有6条光层链路进行随机抽样,随机选出某条性能预测链路作训练样本,依次采样得出该链路对象在同一个时间周期内各时刻的性能样本。此种方法必须保证,同一样本序列在同一时间周期内不同时刻的性能采样来自于同一条链路对象。
$$x_{k,i}^{t}=\big(a_{l}^{t},\,b_{l}^{t},\,c_{l}^{t},\,d_{l}^{t}\big)$$

其中,l是对节点k的6条光层链路随机抽样选定的链路编号,$x_{k,i}^{t}$ 是节点k第i个样本序列在t时刻的链路性能向量采样,该向量的长度即是单条光层链路性能向量的实际维数,全网络域统一,此处即是4。$\hat{y}_{k,i}^{t+1}$ 是节点k对第i个样本序列在t+1时刻的链路性能预测值,该向量的长度如上,即是4。$y_{k,i}^{t+1}$ 是节点k对第i个样本序列在t+1时刻性能预测的样本标记。$F_k(\omega)$ 是节点k在采样 $n_k$ 个样本序列后,通过RNN算法模型得出的性能预测损失函数和:

$$F_k(\omega)=\frac{1}{n_k}\sum_{i\in P_k}\mathcal{L}\big(\hat{y}_{k,i}^{t+1},\,y_{k,i}^{t+1}\big)$$
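随机抽样方式可示意如下:在采样一个样本序列前随机选定一条链路,并保证该序列在整个采样周期内始终取该链路的性能值;数据组织方式同样仅为示例性假设。

```python
import random

def random_link_sequence(link_history):
    """随机抽样: link_history[l][t] 为第l条链路在t时刻的4维性能向量。

    先随机选定一条链路l,再取该链路在各时刻的性能值构成一个样本序列,
    从而保证同一样本序列的各时刻取值来自同一条光层链路。
    """
    l = random.randrange(len(link_history))
    return [link_history[l][t] for t in range(len(link_history[l]))]
```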
采样方法3,加权求和,卷积化特征提取处理:仍以节点k为例,对同一个样本序列的采样方法:对节点k包含的所有6条光层链路性能向量中的每维性能参数元素做加权求和的卷积化特征提取处理,最终得出处理后的单个4维的光层链路性能向量,作为本节点k的光层链路性能向量。依次采样网元节点k的6条光层链路在同一个时间周期内各时刻的性能样本,并用上述卷积化特征提取处理方法得出一个“网元节点k在各时刻的光层链路性能向量”的样本序列。
如图6所示,表一中的各参数如下进行卷积化特征提取,

$$a^{t}=\sum_{l=1}^{6}\alpha_{l}a_{l}^{t},\quad b^{t}=\sum_{l=1}^{6}\beta_{l}b_{l}^{t},\quad c^{t}=\sum_{l=1}^{6}\gamma_{l}c_{l}^{t},\quad d^{t}=\sum_{l=1}^{6}\delta_{l}d_{l}^{t}$$

得到

$$x_{k,i}^{t}=\big(a^{t},\,b^{t},\,c^{t},\,d^{t}\big)$$

其中,$x_{k,i}^{t}$ 是节点k第i个样本序列在t时刻的经卷积化处理的光层链路性能向量采样。该向量的长度即是单条光层链路性能向量的维数,全网络域统一,此处即是4。$\hat{y}_{k,i}^{t+1}$ 是节点k对第i个样本序列在t+1时刻的光层链路性能预测值。该向量的长度如上,即是4。$y_{k,i}^{t+1}$ 是节点k对第i个样本序列在t+1时刻性能预测的样本标记(节点k在t+1时刻各链路性能标记的加权求和)。$F_k(\omega)$ 是节点k在采样 $n_k$ 个样本序列后,通过RNN算法模型得出的性能预测损失函数和:

$$F_k(\omega)=\frac{1}{n_k}\sum_{i\in P_k}\mathcal{L}\big(\hat{y}_{k,i}^{t+1},\,y_{k,i}^{t+1}\big)$$
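加权求和的卷积化特征提取可示意如下:对每个性能维度,把各链路在同一时刻的取值按预设权重加权求和,得到单个4维向量;权重α、β、γ、δ的取值方式(示例中取简单平均)为示例性假设。

```python
import numpy as np

def conv_feature(link_vectors, alpha, beta, gamma, delta):
    """卷积化特征提取: 对m条链路的 (a, b, c, d) 四个维度分别做加权求和。

    link_vectors: m x 4 数组,每行为一条链路在t时刻的 (a_l, b_l, c_l, d_l)
    alpha/beta/gamma/delta: 长度为m的预设权重
    """
    v = np.asarray(link_vectors, dtype=float)
    return np.array([
        np.dot(alpha, v[:, 0]),   # Σ α_l * a_l  入光功率
        np.dot(beta,  v[:, 1]),   # Σ β_l * b_l  出光功率
        np.dot(gamma, v[:, 2]),   # Σ γ_l * c_l  收端光衰
        np.dot(delta, v[:, 3]),   # Σ δ_l * d_l  收端OSNR
    ])

# 用法示意: 6条链路取简单平均权重
m = 6
w = np.full(m, 1.0 / m)
x_t = conv_feature(np.random.rand(m, 4), w, w, w, w)   # 得到4维性能向量
```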
在一个例子中,光传送网性能预测系统应用于方案A,具体步骤包括:
步骤1:各厂商单域通过横向联邦学习技术训练获得本域OTN网络性能预测功能模型(即单域数字孪生模型)。
步骤2:各厂商单域根据本域网络拓扑模型(即单域拓扑信息)、各网元节点基础模型、本域网络性能预测等功能模型,构建本域DT网络层。并根据多域编排系统的请求进行上报。
步骤3:多域编排系统拼接各域DT网络层,进而构建生成整网的DT网络层模型(即光传送网数字孪生模型)。
其中,假设厂商A的单域管控系统将第j次迭代获得的RNN性能预测模型(即单域性能预测模型)参数 $\omega_j$ 已经下发给本域各网元节点,在此基础上各网元节点k与单域管控系统的横向联邦学习交互流程如图7所示:

当j=0,单域管控系统将模型参数初始化为 $\omega_0$,并下发给域内各节点;此后每轮迭代由单域管控系统判断各节点上报的参数更新量的加权和

$$\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$

若此式趋于0,则单域管控系统结束此次迭代;若此式不趋于0,则单域管控系统计算下式,并将计算结果下发给各网元节点,由各网元节点执行j++轮迭代:

$$\omega_{j+1}=\omega_{j}+\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$
其中,以j+1轮迭代过程为例,每轮迭代步骤流程如图8所示,包括:
在j+1轮迭代开始时,由节点k的RNN模型的损失函数对 $\omega_j$ 求梯度,

$$g_{k}=\nabla_{\omega_{j}}F_{k}(\omega_{j})$$

再计算模型参数更新量

$$\Delta\omega_{j+1}^{k}=-a\,g_{k}$$

并由节点k将该参数更新量上报给本域管控系统。本域管控系统在获得所有K个节点的 $\Delta\omega_{j+1}^{k}$ 后,采用

$$\omega_{j+1}=\omega_{j}+\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$

更新公共模型参数 $\omega_{j+1}$ 并下发给各网元节点,从而结束本轮的迭代。
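结合图7、图8的流程,方案A中单域管控系统与各网元节点的多轮迭代可用如下Python示意代码说明,其中本地梯度的计算方式、"趋于0"的判定阈值等均为示例性假设,目的仅在于展示"下发参数→各节点上报更新量→加权聚合→判断收敛"的循环结构。

```python
import numpy as np

def run_scheme_a(nodes, w0, a=0.01, eps=1e-4, max_rounds=100):
    """nodes: 列表,每个元素含 'n_k'(样本序列量)和 'grad_fn'(返回该节点对w的梯度)。"""
    w = w0
    n = sum(node["n_k"] for node in nodes)
    for j in range(max_rounds):
        # 各网元节点本地计算梯度与参数更新量 Δw^k = -a * g_k,并上报
        deltas = [-a * node["grad_fn"](w) for node in nodes]
        # 单域管控系统加权聚合: Σ (n_k/n) Δw^k
        agg = sum((node["n_k"] / n) * d for node, d in zip(nodes, deltas))
        if np.linalg.norm(agg) < eps:        # 聚合更新量趋于0则停止迭代
            break
        w = w + agg                          # 更新公共模型参数并下发给各节点
    return w
```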
方案A的特征如下:
1、单域内各网元节点自身具备AI训练能力,可通过对本节点的性能采样、模型训练,能及时感知、预测本网元性能变化量。
2、单域管控系统也具备AI训练能力,从联邦学习技术角度讲,在本方案中起到边缘服务器的作用。单域各网元节点将自身AI算法训练的性能预测模型参数梯度(或参数更新量)上报给本域管控系统,并由本域管控系统对上报的所有网元的模型参数梯度(或参数更新量)进行聚合处理,同时更新本域管控系统构建的本域性能预测公共模型(即单域性能预测模型)的参数。
3、单域管控系统将更新后的本域性能预测公共模型参数,广播下发给本域各网元节点。
4、各网元节点用公共模型参数刷新本网元性能预测模型参数,并据此迭代发起下一轮模型训练及与单域管控系统的交互。
5、如某个网元节点与单域管控系统之间出现通讯故障或者节点自身出现故障,从而不能上传本网元节点的模型参数梯度(或参数更新量),这种情况不会影响单域管控系统对自身公共模型参数的更新及与其它网元节点的交互。
6、这种方案可以解决因本域各网元节点器件批次、型号,使用年限等差异性造成的性能预测模型训练泛化能力差,模型预测推理容易出现的过拟合等问题。
7、各单域管控系统将根据最终训练获得的本域公共性能预测模型及本域网络拓扑信息,生成由本域各网元DT模型(包括本网元DT基础结构模型、DT性能预测功能模型等)按本域OTN物理网络拓扑连接关系组建的本域OTN DT网络层,并发送给多域编排系统。多域编排系统将各单域上报的OTN DT网络层拼接,进而获得整网OTN DT网络层。由于是拼接获得的整网DT网络层,所包含的各单域部分之间的性能预测模型不尽相同。
8、单域管控系统与本域内部各网元由同一设备制造商组网建设,那么本网元在上报本网元模型参数梯度(或参数更新量)之前,可不需要对本网元的性能预测模型参数梯度进行加密处理。
9、采用横向联邦学习训练公共性能预测模型,要确保单域内分布的各网元节点的RNN(此处以RNN为例)训练模型和在训练中起到边缘服务器作用的单域管控系统的RNN训练模型的结构是相同的:包括RNN模型输入的所有向量元素属性、向量元素个数、RNN模型的层数,每层的神经元个数,层与层之间的激活函数、连接关系,输出的向量属性、输出的向量元素个数等等,以确保RNN模型参数的统一训练和同步刷新。
该方案所能带来的技术效果体现为解决各网元自身的器件批次、型号、使用年限等属性不尽相同,造成DT性能预测模型训练泛化能力差,在推理过程中容易出现过拟合等问题,实现OTN DT性能预测模型泛化能力强,预测准确度高等效果。
在一个例子中,光传送网性能预测系统应用于方案B,具体步骤包括:
步骤1:多域编排系统与各单域通过横向联邦学习技术训练获得多域OTN网络性能预测功能模型(即网络性能预测模型)。
步骤2:多域编排系统根据多域OTN网络性能预测公共功能模型与各单域上报的加密拓扑信息,生成整网OTN DT网络层模型(即光传送网数字孪生模型)。
其中,以j+1轮迭代过程为例,每轮迭代步骤流程如图9所示,包括:
当j=0,多域编排系统将模型参数初始化为 $\omega_0$,并下发给各单域;多域编排系统判断各单域管理服务器上报的参数更新量的加权和

$$\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$

若此式趋于0,则多域编排系统结束此次迭代;若此式不趋于0,则多域编排系统计算

$$\omega_{j+1}=\omega_{j}+\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$

并下发给各单域,各单域执行j++轮迭代。
其中,以j+1轮迭代过程为例,每轮迭代步骤流程如图10所示,包括:
在j+1轮迭代开始时,由单域k的RNN模型的损失函数对 $\omega_j$ 求梯度,

$$g_{k}=\nabla_{\omega_{j}}F_{k}(\omega_{j})$$

再计算模型参数更新量

$$\Delta\omega_{j+1}^{k}=-a\,g_{k}$$

并由单域k将该参数更新量上报给多域编排系统。多域编排系统在获得所有单域的加密 $\Delta\omega_{j+1}^{k}$ 后,采用

$$\omega_{j+1}=\omega_{j}+\sum_{k=1}^{K}\frac{n_k}{n}\,\Delta\omega_{j+1}^{k}$$

更新公共模型参数 $\omega_{j+1}$ 并下发给各单域,从而结束本轮的迭代。
在方案B中,网络性能预测模型的更新,具体根据以下公式实现:
$$\omega_{j+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,\omega_{j+1}^{k}$$

其中,设整个光传送网有K个单域,每个单域管理服务器对各自所属单域的性能数据采样序列集为 $P_k$;对于 $i\in P_k$,$\hat{y}_{k,i}^{t+1}$ 是单域k对第i个样本序列在t+1时刻的性能数据预测向量,$x_{k,i}^{t}$ 是t时刻第i个样本序列的性能数据向量,样本序列是指:从0到t时刻对同一预测对象按时序采样的样本集合,一个样本序列的采样周期为[0...t,t+1],样本序列量为 $n_k=|P_k|$,则总样本序列量为 $n=\sum_{k=1}^{K}n_k$。
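方案B中"各单域加密上报参数更新量、多域编排系统在密文上聚合"的过程,可用如下示意代码帮助理解;这里假设采用开源库python-paillier(phe)实现加法同态加密,密钥的分发方式、私钥应由哪一方持有等工程细节在此省略,代码仅用于说明密文上加权聚合在计算上是可行的,并非对加密方案的限定。

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_update(delta, weight):
    """单域k: 对参数更新量逐分量加密,并预先乘以权重 n_k/n(明文标量乘密文)。"""
    return [public_key.encrypt(float(x)) * weight for x in delta]

def aggregate_encrypted(encrypted_updates):
    """多域编排系统: 在密文上逐分量相加,得到 Σ (n_k/n)Δw^k 的密文,全程不接触明文。"""
    agg = encrypted_updates[0]
    for enc in encrypted_updates[1:]:
        agg = [a + b for a, b in zip(agg, enc)]
    return agg

# 用法示意: 两个单域,各自上报2维参数更新量
n_ks = [300, 100]
n = sum(n_ks)
updates = [[0.02, -0.01], [-0.04, 0.03]]
ciphertexts = [encrypt_update(d, n_k / n) for d, n_k in zip(updates, n_ks)]
agg_cipher = aggregate_encrypted(ciphertexts)
agg_plain = [private_key.decrypt(c) for c in agg_cipher]   # 示意: 实际应由持有私钥的一方解密
```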
方案B的特征如下:
1、单域内各网元节点不具备AI训练建模能力,各单域管控系统具备AI训练建模能力,跨厂商的多域编排系统也具备AI训练建模能力。
2、从联邦学习技术角度讲,在本方案中多域编排系统起到边缘服务器的作用。各单域管控系统将自身AI算法训练的性能预测模型参数梯度(或参数更新量)经同态加密技术上报给多域编排系统,并由多域编排系统对上报的所有单域管控系统的模型参数梯度(或参数更新量)进行同态加密聚合处理,同时更新多域编排系统构建的多域性能预测公共模型参数。
3、多域编排系统将更新后的多域性能预测公共模型参数,广播下发给各单域管控系统。
4、各单域管控系统用公共模型参数刷新本域性能预测模型参数,并据此迭代发起下一轮模型训练及与多域编排系统的交互。
5、如某个单域管控系统与多域编排系统之间出现通讯故障或者节点自身出现故障,从而不能上传本域管控系统的模型参数梯度(或参数更新量),这种情况不会影响多域编排系统对自身公共模型参数的更新及与其它单域管控系统的交互。
6、这种方案可以解决因各域间各网元节点器件批次、型号,使用年限等差异性造成的性能预测模型训练泛化能力差,模型预测推理容易出现的过拟合等问题,性能预测模型跨域通用,具有更强的泛化能力。
7、多域编排系统将根据最终训练获得的多域公共性能预测模型及各域上报的加密网络拓扑信息,生成整网跨域OTN DT网络层,各OTN域的性能预测模型皆相同。
8、各域由不同设备制造商组网建设,因此各域管控系统在上报本域模型参数梯度(或参数更新量)之前,需要对本域的性能预测模型参数梯度(或参数更新量)做加密处理;同时各域上报给多域管控系统的本域网络拓扑信息,也视需要做加密处理。
9、采用横向联邦学习训练公共性能预测模型,要确保各单域管控系统的RNN(此处以RNN为例)训练模型和在训练中起到边缘服务器作用的多域编排系统的RNN训练模型的结构是相同的:包括RNN模型输入的所有向量元素属性、向量元素个数、RNN模型的层数,每层的神经元个数,层与层之间的激活函数、连接关系,输出的向量属性、输出的向量元素个数等等,以确保RNN模型参数的统一训练和同步刷新。
该方案所能带来的技术效果体现为解决各厂商网元自身的器件批次、型号、使用年限等属性不尽相同,造成跨域OTN DT性能预测模型训练泛化能力差,在推理过程中容易出现过拟合等问题,实现跨域OTN DT性能预测模型泛化能力强,预测准确度高等效果。
上面各种方法的步骤划分,只是为了描述清楚,实现时可以合并为一个步骤或者对某些步骤进行拆分,分解为多个步骤,只要包括相同的逻辑关系,都在本专利的保护范围内;对算法中或者流程中添加无关紧要的修改或者引入无关紧要的设计,但不改变其算法和流程的核心设计都在该专利的保护范围内。
本申请的实施例还涉及一种光传送网性能预测方法,应用于多域管理服务器,如图11所示,包括:
步骤1101,获取单域管理服务器上报的模型更新信息和单域拓扑信息;其中,模型更新信息,是通过横向联邦学习技术,对单域管理服务器的性能预测模型基于单域中的各网元性能数据进行训练后得到,用于对性能预测模型进行更新;
步骤1102,基于模型更新信息和单域拓扑信息,生成光传送网数字孪生模型;其中,光传送网数字孪生模型,用于对光传送网进行性能预测。
本实施例的光传送网性能预测方法,应用于多域管理服务器,多域管理服务器是光传送网中管理所有单域管理服务器的服务器,多域管理服务器上可以运行多域编排系统,以实现光传送网性能预测方法。
在一个例子中,性能预测模型为单域性能预测模型,模型更新信息为基于更新后的所述单域性能预测模型得到的单域数字孪生模型,多域管理服务器通过以下方式实现基于模型更新信息和单域拓扑信息,生成光传送网数字孪生模型:根据单域拓扑信息,将各单域的单域管理服务器上报的单域数字孪生模型,拼接得到光传送网数字孪生模型。
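多域管理服务器按单域拓扑信息拼接各单域数字孪生模型的过程,可用如下示意代码说明:这里把每个单域孪生模型抽象为"节点集合+域内链路集合+本域性能预测功能模型",并假设跨域链路作为拓扑信息的一部分一并上报;数据结构与字段名均为示例性假设。

```python
def stitch_digital_twins(domain_twins, inter_domain_links):
    """将各单域数字孪生模型按拓扑拼接为整网光传送网数字孪生模型。

    domain_twins: {域ID: {"nodes": {...}, "links": [...], "model": 本域性能预测模型}}
    inter_domain_links: [(域A节点, 域B节点), ...] 跨域链路
    """
    whole = {"nodes": {}, "links": [], "models": {}}
    for domain_id, twin in domain_twins.items():
        whole["nodes"].update(twin["nodes"])          # 合并各域网元孪生体
        whole["links"].extend(twin["links"])          # 保留域内拓扑连接
        whole["models"][domain_id] = twin["model"]    # 各域各自的性能预测功能模型
    whole["links"].extend(inter_domain_links)         # 按上报的拓扑信息补充跨域连接
    return whole
```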
在另一个例子中,性能预测模型为网络性能预测模型,模型更新信息为网络性能预测模型的参数更新量,多域管理服务器通过以下方式实现基于模型更新信息和单域拓扑信息,生成光传送网数字孪生模型:根据网络性能预测模型的参数更新量,更新网络性能预测模型;根据更新后的网络性能预测模型,和单域拓扑信息,生成光传送网数字孪生模型。
具体地,多域管理服务器根据网络性能预测模型的参数更新量,更新网络性能预测模型,可以是根据各个单域管理服务器上报的参数更新量,更新网络性能预测模型的参数,具体根据以下公式实现:

$$\omega_{j+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,\omega_{j+1}^{k},\qquad \omega_{j+1}^{k}=\omega_{j}-a\,g_{k}$$

其中,设共有K个设备以横向联邦学习的方式,进行所述性能预测模型的更新,每个设备的性能数据采样序列集为 $P_k$;对于 $i\in P_k$,$\hat{y}_{k,i}^{t+1}$ 是设备k对第i个样本序列在t+1时刻的性能数据预测向量,$x_{k,i}^{t}$ 是t时刻第i个样本序列的性能数据向量,所述样本序列是指从0到t时刻对同一预测对象按时序采样的样本集合,一个样本序列的采样周期为[0...t,t+1],样本序列量为 $n_k=|P_k|$,则单域的总样本序列量为 $n=\sum_{k=1}^{K}n_k$;$\omega_j$、$\omega_{j+1}$ 分别是所述性能预测模型在第j轮迭代和第j+1轮迭代后得到的参数取值,$\omega_{j+1}^{k}$ 是在横向联邦学习过程中,所述设备k第j+1轮迭代得到的性能预测模型的参数取值,$\Delta\omega_{j+1}^{k}=-a\,g_{k}$ 是所述设备k计算得到的性能预测模型的参数 $\omega_j$ 更新至 $\omega_{j+1}$ 的参数更新量,a是性能预测模型的学习率,$g_k$ 是 $\omega_j$ 的梯度。
不难发现,本实施方式为与上述实施例相对应的实施例,本实施例可与上述实施例互相配合实施。上述实施例中提到的相关技术细节在本实施例中依然有效,为了减少重复,这里不再赘述。相应地,本实施例中提到的相关技术细节也可应用在上述实施例中。
本申请的实施例还涉及一种光传送网性能预测系统,如图12所示,包括:
单域管理服务器1201、多域管理服务器1202;单域管理服务器1201与多域管理服务器1202通信连接;
其中,单域管理服务器1201,用于获取性能预测模型;通过横向联邦学习技术,获取性能预测模型的模型更新信息;其中,模型更新信息,是对性能预测模型基于单域中的各网元性能数据进行训练后得到;将参数更新量和单域拓扑信息,上报至多域管理服务器1202,供多域管理服务器1202生成光传送网数字孪生模型;
多域管理服务器1202,用于获取单域管理服务器1201上报的模型更新信息和单域拓扑信息;其中,模型更新信息,是通过横向联邦学习技术,对单域管理服务器1201的性能预测模型基于单域中的各网元性能数据进行训练后得到;根据模型更新信息和单域拓扑信息,生成光传送网数字孪生模型;其中,光传送网数字孪生模型,用于对光传送网进行性能预测。
不难发现,本实施方式为与上述实施例相对应的系统实施例,本实施例可与上述实施例互相配合实施。上述实施例中提到的相关技术细节在本实施例中依然有效,为了减少重复,这里不再赘述。相应地,本实施例中提到的相关技术细节也可应用在上述实施例中。
本申请的实施例还涉及一种电子设备,如图13所示,包括:至少一个处理器1301;与至少一个处理器通信连接的存储器1302;其中,存储器1302存储有可被至少一个处理器1301执行的指令,指令被至少一个处理器1301执行,以使至少一个处理器1301能够执行上述任一实施例的方法。
其中,存储器1302和处理器1301采用总线方式连接,总线可以包括任意数量的互联的总线和桥,总线将一个或多个处理器1301和存储器1302的各种电路连接在一起。总线还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路连接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口在总线和收发机之间提供接口。收发机可以是一个元件,也可以是多个元件,比如多个接收器和发送器,提供用于在传输介质上与各种其他装置通信的单元。经处理器1301处理的信息通过天线在无线介质上进行传输,进一步,天线还接收信息并将信息传送给处理器1301。
处理器1301负责管理总线和通常的处理,还可以提供各种功能,包括定时,外围接口,电压调节、电源管理以及其他控制功能。而存储器1302可以被用于存储处理器在执行操作时所使用的信息。
本申请的实施例涉及一种计算机可读存储介质,存储有计算机程序。计算机程序被处理器执行时实现上述方法实施例。
即,本领域技术人员可以理解,实现上述实施例方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。

Claims (12)

  1. 一种光传送网性能预测方法,应用于单域管理服务器,包括:
    获取性能预测模型;
    通过横向联邦学习技术,获取所述性能预测模型的模型更新信息;其中,所述模型更新信息,是对所述性能预测模型基于单域中的各网元性能数据进行训练后得到,用于对所述性能预测模型进行更新;
    将所述模型更新信息和单域拓扑信息,上报至多域管理服务器,供所述多域管理服务器基于所述模型更新信息和所述单域拓扑信息,生成光传送网数字孪生模型,所述光传送网数字孪生模型,用于对所述光传送网进行性能预测。
  2. 根据权利要求1所述的光传送网性能预测方法,其中,所述性能预测模型,包括:单域性能预测模型;
    所述模型更新信息,包括:基于更新后的所述单域性能预测模型得到单域数字孪生模型;
    所述获取所述性能预测模型的模型更新信息,包括:
    将所述单域性能预测模型下发至本单域中的各个网元;
    获取各个网元上报的所述单域性能预测模型的参数更新量;其中,各个网元根据本地性能数据,分别对所述单域性能预测模型进行训练,并计算得到所述参数更新量;
    根据各个网元上报的所述参数更新量,更新所述单域性能预测模型的参数,得到所述更新后的所述单域性能预测模型;
    根据所述更新后的所述单域性能预测模型,和本单域的单域拓扑信息,得到所述单域数字孪生模型。
  3. 根据权利要求2所述的光传送网性能预测方法,其中,所述根据各个网元上报的所述参数更新量,更新所述单域性能预测模型的参数,得到所述更新后的所述单域性能预测模型,包括:
    对所述单域性能预测模型进行迭代更新;
    其中,所述迭代更新包括:将更新的所述单域性能预测模型下发给本单域内的各个网元,获取各个网元再次上报的参数更新量,根据所述再次上报的参数更新量对所述单域性能预测模型进行更新,并发起下一轮迭代更新,直到满足停止迭代条件。
  4. 根据权利要求1所述的光传送网性能预测方法,其中,所述性能预测模型,包括:网络性能预测模型;
    所述模型更新信息,包括:所述网络性能预测模型的参数更新量;
    所述获取所述性能预测模型的模型更新信息,包括:
    根据各网元性能数据,对所述网络性能预测模型进行训练,得到所述网络性能预测模型的参数更新量;
    其中,所述网络性能预测模型的参数更新量用于上报至所述多域管理服务器,供所述多域管理服务器根据所述网络性能预测模型的参数更新量,更新所述网络性能预测模型。
  5. 根据权利要求1至4中任一项所述的光传送网性能预测方法,其中,所述性能预测模型的更新,根据以下公式实现:
    $$\omega_{j+1}=\sum_{k=1}^{K}\frac{n_k}{n}\,\omega_{j+1}^{k},\qquad \omega_{j+1}^{k}=\omega_{j}-a\,g_{k}$$
    其中,设共有K个设备以横向联邦学习的方式,进行所述性能预测模型的更新,每个设备的性能数据采样序列集为 $P_k$;对于 $i\in P_k$,$\hat{y}_{k,i}^{t+1}$ 是设备k对第i个样本序列在t+1时刻的性能数据预测向量,$x_{k,i}^{t}$ 是t时刻第i个样本序列的性能数据向量,所述样本序列是指从0到t时刻对同一预测对象按时序采样的样本集合,一个样本序列的采样周期为[0...t,t+1],样本序列量为 $n_k=|P_k|$,则单域的总样本序列量为 $n=\sum_{k=1}^{K}n_k$;$\omega_j$、$\omega_{j+1}$ 分别是所述性能预测模型在第j轮迭代和第j+1轮迭代后得到的参数取值,$\omega_{j+1}^{k}$ 是在横向联邦学习过程中,所述设备k第j+1轮迭代得到的性能预测模型的参数取值,$\Delta\omega_{j+1}^{k}=-a\,g_{k}$ 是所述设备k计算得到的性能预测模型的参数 $\omega_j$ 更新至 $\omega_{j+1}$ 的参数更新量,a是性能预测模型的学习率,$g_k$ 是 $\omega_j$ 的梯度。
  6. 根据权利要求5所述的光传送网性能预测方法,其中,所述性能预测模型的更新,基于
    $\hat{y}_{k,i}^{t+1}$ 与t+1时刻的实际性能数据向量 $y_{k,i}^{t+1}$ 的损失函数和 $F_k(\omega)$ 实现,其中,
    $$F_k(\omega)=\frac{1}{n_k}\sum_{i\in P_k}\mathcal{L}\big(\hat{y}_{k,i}^{t+1},\,y_{k,i}^{t+1}\big)$$
    所述性能数据向量 $x_{k,i}^{t}$ 从入光功率 $a_l$、出光功率 $b_l$、收端光衰 $c_l$ 和收端光信噪比 $d_l$ 四个维度进行采样取值(l为光层链路编号),所述性能数据向量,通过以下三种方式的任意一种进行采样,
    方式一,从所述四个维度依次采样各光层链路在同一采样周期内的性能值,对每个维度的性能值进行加权求和的卷积化特征提取处理,所述卷积化特征提取处理得到的 $x_{k,i}^{t}$ 及其预测向量 $\hat{y}_{k,i}^{t+1}$ 如下:
    $$x_{k,i}^{t}=\Big(\sum_{l=1}^{m}\alpha_{l}a_{l}^{t},\ \sum_{l=1}^{m}\beta_{l}b_{l}^{t},\ \sum_{l=1}^{m}\gamma_{l}c_{l}^{t},\ \sum_{l=1}^{m}\delta_{l}d_{l}^{t}\Big)$$
    其中,$\hat{y}_{k,i}^{t+1}$ 是与 $x_{k,i}^{t}$ 同维度的t+1时刻预测向量,$\alpha_{l},\beta_{l},\gamma_{l},\delta_{l}$ 是卷积化特征提取处理运算中的预设参数,m是所述设备k中,光层链路的条数;
    方式二,将各所述光层链路的所述四个维度在t时刻的性能值,串联成长向量,得到的 $x_{k,i}^{t}$ 及其预测向量 $\hat{y}_{k,i}^{t+1}$ 如下:
    $$x_{k,i}^{t}=\big(a_{1}^{t},b_{1}^{t},c_{1}^{t},d_{1}^{t},\ a_{2}^{t},b_{2}^{t},c_{2}^{t},d_{2}^{t},\ \dots,\ a_{m}^{t},b_{m}^{t},c_{m}^{t},d_{m}^{t}\big)$$
    m是所述设备k中,光层链路的条数,$a_{1}^{t}$ 是l=1即第一条光层链路在t时刻的入光功率取值,$b_{1}^{t}$、$c_{1}^{t}$、$d_{1}^{t}$ 及其余各链路的分量同理;
    方式三,将各所述光层链路的所述四个维度的性能值进行随机抽样取值,其中,同一样本序列在同一时间周期内不同时刻的性能取值来自于同一条光层链路。
  7. 一种光传送网性能预测方法,应用于多域管理服务器,包括:
    获取单域管理服务器上报的模型更新信息和单域拓扑信息;其中,所述模型更新信息,是通过横向联邦学习技术,对所述单域管理服务器的性能预测模型基于单域中的各网元性能数据进行训练后得到,用于对所述性能预测模型进行更新;
    基于所述模型更新信息和所述单域拓扑信息,生成光传送网数字孪生模型;其中,所述光传送网数字孪生模型,用于对所述光传送网进行性能预测。
  8. 根据权利要求7所述的光传送网性能预测方法,其中,所述性能预测模型,包括:单域性能预测模型;
    所述模型更新信息,包括:基于更新后的所述单域性能预测模型得到单域数字孪生模型;
    所述基于所述模型更新信息和所述单域拓扑信息,生成光传送网数字孪生模型,包括:
    根据所述单域拓扑信息,将各单域的所述单域管理服务器上报的所述单域数字孪生模型,拼接得到所述光传送网数字孪生模型。
  9. 根据权利要求7所述的光传送网性能预测方法,其中,所述性能预测模型,包括:网络性能预测模型;
    所述模型更新信息,包括:所述网络性能预测模型的参数更新量;
    所述基于所述模型更新信息和所述单域拓扑信息,生成光传送网数字孪生模型,包括:
    根据所述网络性能预测模型的参数更新量,更新所述网络性能预测模型;
    根据更新后的所述网络性能预测模型,和所述单域拓扑信息,生成光传送网数字孪生模型。
  10. 一种光传送网性能预测系统,包括:
    单域管理服务器、多域管理服务器;所述单域管理服务器与所述多域管理服务器通信连接;
    其中,所述单域管理服务器,用于获取性能预测模型;通过横向联邦学习技术,获取所述性能预测模型的模型更新信息;其中,所述模型更新信息,是对所述性能预测模型基于单域中的各网元性能数据进行训练后得到;将所述模型更新信息和单域拓扑信息,上报至多域管理服务器,供所述多域管理服务器生成光传送网数字孪生模型;
    所述多域管理服务器,用于获取所述单域管理服务器上报的所述模型更新信息和所述单域拓扑信息;其中,所述模型更新信息,是通过所述横向联邦学习技术,对所述单域管理服务器的性能预测模型基于单域中的各网元性能数据进行训练后得到;根据所述模型更新信息和所述单域拓扑信息,生成所述光传送网数字孪生模型;其中,所述光传送网数字孪生模型,用于对所述光传送网进行性能预测。
  11. 一种电子设备,包括:
    至少一个处理器;
    与所述至少一个处理器通信连接的存储器;
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行如权利要求1至6中任一项所述的光传送网性能预测方法,或者,如权利要求7至9中任一项所述的光传送网性能预测方法。
  12. 一种计算机可读存储介质,存储有计算机程序,其中,所述计算机程序被处理器执行时实现如权利要求1至6中任一项所述的光传送网性能预测方法,或者,如权利要求7至9中任一项所述的光传送网性能预测方法。
PCT/CN2022/131433 2022-03-10 2022-11-11 光传送网性能预测方法、系统、电子设备及存储介质 WO2023168976A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210234032.2 2022-03-10
CN202210234032.2A CN116781153A (zh) 2022-03-10 2022-03-10 光传送网性能预测方法、系统、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023168976A1 true WO2023168976A1 (zh) 2023-09-14

Family

ID=87937115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/131433 WO2023168976A1 (zh) 2022-03-10 2022-11-11 光传送网性能预测方法、系统、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN116781153A (zh)
WO (1) WO2023168976A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515837A (zh) * 2021-03-30 2021-10-19 清华大学 仿真测试平台的建立方法、装置和电子设备
US11190266B1 (en) * 2020-05-27 2021-11-30 Pivotal Commware, Inc. RF signal repeater device management for 5G wireless networks
WO2021238505A1 (zh) * 2020-05-27 2021-12-02 华北电力大学 基于联邦学习的区域光伏功率概率预测方法及协同调控系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11190266B1 (en) * 2020-05-27 2021-11-30 Pivotal Commware, Inc. RF signal repeater device management for 5G wireless networks
WO2021238505A1 (zh) * 2020-05-27 2021-12-02 华北电力大学 基于联邦学习的区域光伏功率概率预测方法及协同调控系统
CN113515837A (zh) * 2021-03-30 2021-10-19 清华大学 仿真测试平台的建立方法、装置和电子设备

Also Published As

Publication number Publication date
CN116781153A (zh) 2023-09-19

Similar Documents

Publication Publication Date Title
Zhang et al. Federated learning for the internet of things: Applications, challenges, and opportunities
Rusek et al. RouteNet: Leveraging graph neural networks for network modeling and optimization in SDN
Almasan et al. Digital twin network: Opportunities and challenges
US20230039182A1 (en) Method, apparatus, computer device, storage medium, and program product for processing data
CN109510760A (zh) 一种面向物联网应用的区块链网关及用该网关管理物联网的方法
US10027536B2 (en) System and method for affinity-based network configuration
Bouzinis et al. Wireless Federated Learning (WFL) for 6G Networks⁴Part I: Research Challenges and Future Trends
US11050634B2 (en) Systems and methods for contextual transformation of analytical model of IoT edge devices
Lü et al. Event‐triggered discrete‐time distributed consensus optimization over time‐varying graphs
EP4002231A1 (en) Federated machine learning as a service
WO2023093235A1 (zh) 通信网络架构的生成方法、装置、电子设备及介质
US20200219014A1 (en) Distributed learning using ensemble-based fusion
Saquetti et al. Toward in-network intelligence: Running distributed artificial neural networks in the data plane
CN109948373A (zh) 一种多方业务数据交互方法
Huang et al. Collective reinforcement learning based resource allocation for digital twin service in 6G networks
Wu et al. Leader-following consensus of nonlinear discrete-time multi-agent systems with limited communication channel capacity
WO2023168976A1 (zh) 光传送网性能预测方法、系统、电子设备及存储介质
US20240046147A1 (en) Systems and methods for administrating a federated learning network
US20230132213A1 (en) Managing bias in federated learning
Hong et al. Retracted: Artificial intelligence point‐to‐point signal communication network optimization based on ubiquitous clouds
González et al. Weighted predictor‐feedback formation control in local frames under time‐varying delays and switching topology
WO2023179073A1 (zh) 基于纵向联邦学习的otn数字孪生网络生成方法及系统
Gu et al. Consensus control and feedback graph co-design for MIMO discrete-time multi-agent systems
Moutai et al. An Optimal Approach for Testing Control in The Distributed Cloud
WO2022105374A1 (zh) 信息处理方法、模型的生成及训练方法、电子设备和介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930599

Country of ref document: EP

Kind code of ref document: A1