WO2020001220A1 - 物理网元节点的虚拟化方法、装置、设备及存储介质 - Google Patents

物理网元节点的虚拟化方法、装置、设备及存储介质

Info

Publication number
WO2020001220A1
WO2020001220A1 (PCT/CN2019/088967)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
node
layer
delay
link
Prior art date
Application number
PCT/CN2019/088967
Other languages
English (en)
French (fr)
Inventor
王大江
王振宇
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to EP19824449.3A priority Critical patent/EP3813303B1/en
Priority to JP2020572830A priority patent/JP7101274B2/ja
Publication of WO2020001220A1 publication Critical patent/WO2020001220A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/12Shortest path evaluation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0895Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/145Network analysis or design involving simulating, designing, planning or modelling of a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003Managing SLA; Interaction between SLA and QoS
    • H04L41/5019Ensuring fulfilment of SLA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/70Virtual switches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/16Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852Delays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/64Routing or path finding of packets in data switching networks using an overlay routing layer

Definitions

  • the present invention relates to, but is not limited to, the field of communication technologies, and in particular, to a method, an apparatus, a device, and a storage medium for virtualizing a physical network element node.
  • 5G network services have higher requirements on performance indicators such as clock accuracy, delay, and reliability.
  • Optical Transport Network (OTN) is usually considered for deployment on 5G midhaul and backhaul networks.
  • By using the Service Level Agreement (SLA) delay level as the slice optimization strategy and applying a resource optimization algorithm, OTN network resources are sliced to meet the delay requirements of the various 5G services that occupy the corresponding slice resources.
  • The layer of the Open System Interconnection (OSI) network model affects latency.
  • Before the network slicing algorithm is implemented, the cumulative delay value between each pair of adjacent network element nodes needs to be calculated on the basis of the OTN physical network topology in the related art, according to the delay-affecting factors at each OSI layer, so as to form an OTN delay attribute topology map; a network slicing algorithm with the delay optimization strategy as the objective function is then implemented on this basis.
  • In the related art, the cumulative delay value between each pair of adjacent network element nodes and the switching delay value through each network element node depend on the device attributes of the corresponding node. For a single-scheduling network element node with only one switching technology capability, these delay parameters are single and fixed.
  • However, for a network element node with mixed layer-0 (L0)/L1/L2 scheduling capability, the delay differs depending on which OSI layer the service passes through. The OTN physical network topology therefore cannot cover the networking scenario of mixed-scheduling network element nodes, in which multiple switching delays occur as services pass through different switching layers of a node, so the shortest-delay-path calculation in the delay optimization strategy cannot be implemented because of the "uncertainty of the delay attribute of the calculation object".
  • In view of this, the embodiments of the present invention are expected to provide a method, apparatus, device, and storage medium for virtualizing a physical network element node, to solve the problem in the related art that the physical network topology cannot cover the multiple switching delays that occur when services pass through different switching layers of mixed-scheduling network element nodes.
  • An embodiment of the present invention provides a method for virtualizing a physical network element node.
  • The method includes: establishing a switching delay link structure corresponding to a scheduling link of the physical network element node, where the scheduling link is a link to which a service can be scheduled at the corresponding switching layer when the service passes through the physical network element node; and generating a virtualized delay model of the physical network element node according to the switching delay link structure.
  • An embodiment of the present invention provides a device for virtualizing a physical network element node, where the device includes:
  • a scheduling link mapping module, configured to establish a switching delay link structure corresponding to the scheduling link of the physical network element node, where the scheduling link is a link to which a service can be scheduled at the corresponding switching layer when the service passes through the physical network element node; and
  • a generating module, configured to generate a virtualized delay model of the physical network element node according to the switching delay link structure.
  • An embodiment of the present invention provides a physical network element node device. The device includes a memory and a processor; the memory stores a computer program for virtualizing a physical network element node, and the processor is configured to execute the computer program to implement the method for virtualizing a physical network element node provided by the embodiments of the present invention.
  • An embodiment of the present invention provides a computer-readable storage medium that stores a computer program for virtualizing a physical network element node; the computer program can be executed by at least one processor to implement the method for virtualizing a physical network element node provided by the embodiments of the present invention.
  • In the embodiments of the present invention, physical links are established for a physical network element node and a switching delay link structure is established for the scheduling links of the node, so that a virtualized delay model of the physical network element node can be generated from the physical links and the switching delay link structure. This effectively solves the problem in the related art that the OTN physical network topology cannot cover the multiple switching delays that occur when services pass through different switching layers of mixed-scheduling network element nodes, and effectively meets the shortest-delay-path calculation requirements in 5G slicing application scenarios.
  • FIG. 1 is a flowchart of a method for virtualizing a physical network element node according to an embodiment of the present invention
  • Figure 2 is an example of an OTN delay attribute topology diagram including L2 / L1 / L0 nodes with mixed scheduling capabilities
  • FIG. 3 is a schematic diagram of a device node model having a mixed scheduling function of L0 / L1 / L2;
  • FIG. 4 is a virtual delay model of a physical network element node when it is used as a head or tail node of a service connection or an OVPN virtual link in an embodiment of the present invention
  • FIG. 5 is a virtualized delay model of a physical network element node when an intermediate node of a service connection or an OVPN virtual link has only two physical external fiber links according to an embodiment of the present invention
  • FIG. 6 is a virtual delay model of a physical network element node when there are three physical external optical fiber links as an intermediate node of a service connection or an OVPN virtual link according to an embodiment of the present invention
  • FIG. 7 is a virtualized delay model of a physical network element node when an intermediate node of a service connection or an OVPN virtual link has more than three physical external optical fiber links according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram of a virtual network delay topology when physical nodes A and E are respectively used as the first or last nodes of a service connection or an OVPN virtual link according to an embodiment of the present invention
  • FIG. 9 is a schematic diagram of a virtual network delay topology when physical nodes B and D are respectively used as the first or last nodes of a service connection or an OVPN virtual link according to an embodiment of the present invention.
  • FIG. 10 is a flowchart of a method for virtualizing a physical network element node according to an embodiment of the present invention
  • FIG. 11 is a schematic diagram of a self-loop phenomenon and judgment in a virtual node according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a virtualization device for a physical network element node according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a physical network element node device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for virtualizing a physical network element node. As shown in FIG. 1, the method includes:
  • the scheduling link is a link that can be scheduled to a corresponding switching function when a service passes through the physical network element node;
  • In the embodiment of the present invention, a switching delay link structure is established for the scheduling links of a network element node, so that a virtualized delay model of the physical network element node can be generated according to the switching delay link structure. This effectively solves the problem in the related art that the topology of a mobile communication network, in particular the OTN physical network topology, cannot cover the multiple switching delays that occur when services pass through different switching layers of mixed-scheduling network element nodes, and can effectively meet the shortest-delay-path calculation requirements in 5G slicing application scenarios.
  • In the embodiments of the present invention, a switching layer generally corresponds to one switching function, for example an OSI network model layer, where the OSI network model layers may include the L0 layer (layer 0 of the OSI network model), the L1 layer (layer 1), the L2 layer (layer 2), and the L3 layer (layer 3).
  • a hybrid scheduling network element node refers to a network element node having a scheduling capability of a hybrid switching function, for example, a network element node having each of the OSI network model layers described above.
  • the switching delay link structure is a virtual link structure corresponding to a preset delay. In some embodiments, it may also include configuring a delay value for the inter-layer adaptation delay link.
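  • To make the notion concrete, the following minimal Python sketch shows one way such a switching delay link structure could be represented in software; the class and field names are illustrative assumptions made for this description only:

    from dataclasses import dataclass, field

    @dataclass
    class DelayLink:
        """A virtual link carrying a preset delay value."""
        src: str          # virtual node name, e.g. "A'01"
        dst: str          # virtual node name, e.g. "A'02"
        delay_ns: int     # configured switching or adaptation delay
        kind: str         # "intra_layer" (switching) or "inter_layer" (adaptation)

    @dataclass
    class SwitchingDelayLinkStructure:
        """Switching delay link structure for one scheduling link of a node."""
        node_id: str
        virtual_nodes: list = field(default_factory=list)
        links: list = field(default_factory=list)

        def add_link(self, src, dst, delay_ns, kind):
            # register the virtual nodes on first use, then record the link
            for n in (src, dst):
                if n not in self.virtual_nodes:
                    self.virtual_nodes.append(n)
            self.links.append(DelayLink(src, dst, delay_ns, kind))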
  • 5G network services have higher requirements for performance indicators such as clock accuracy, delay, and reliability: clock accuracy reaches the nanosecond level, and delay is required to reach the microsecond level.
  • To meet service requirements for bandwidth, delay, and reliability, service concepts such as ultra-reliable low-latency communication (uRLLC), massive machine-type communication (mMTC), and enhanced mobile broadband (eMBB) have emerged, and traditional Quality of Service (QoS) policies can no longer satisfy these requirements.
  • Network slicing technology can allocate different network resources to different services by cutting multiple logical networks out of one independent physical network; resources can then be pre-allocated and pre-optimized according to the Service Level Agreement (SLA) level of each slice, and the bandwidth and delay of the services on different slices can be precisely controlled, so that network resources are used fully and effectively.
  • As an important component of the 5G networking architecture, the OTN optical transport network is usually considered for deployment on 5G midhaul and backhaul networks. Using the SLA delay level as the slice optimization strategy and applying resource optimization algorithms to slice OTN network resources, so as to meet the delay requirements of the various 5G services occupying the corresponding slice resources, is the key technique for OTN to meet 5G networking requirements. The factors that affect delay in the OSI network model are shown in Table 1.
  • Table 1: Delay at each layer of the OSI network model (the table itself is provided as an image in the original publication).
  • Before the slicing algorithm is implemented, the cumulative delay value between each pair of adjacent network element nodes is usually calculated, based on the OTN physical network topology in the related art and the delay-affecting factors described in Table 1, to form an OTN delay attribute topology map, as shown in Figure 2; a network slicing algorithm with the delay optimization strategy as the objective function is then implemented on this basis.
  • The network slicing algorithm usually includes a sub-algorithm, with Figure 2 as the calculation object, that computes the shortest delay path between a specified pair of network element nodes.
  • The shortest delay requires that the sum of the cumulative delay values between each pair of adjacent network element nodes along the path between the specified node pair, plus the switching delays of every node the path passes through, be minimal.
  • Normally, the cumulative delay value between each pair of adjacent network element nodes and the switching delay value of each node passed depend on the device attributes of the corresponding node: for a single-scheduling network element node with only one switching technology capability, these delay parameters are all single and fixed.
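  • The quantity minimized by the shortest-delay sub-algorithm can be stated very compactly; the following minimal Python sketch (illustrative names and numbers, not taken from the original description) shows what is summed for one candidate path:

    def path_delay_us(link_delays_us, node_switch_delays_us):
        """Total delay of one candidate path: the cumulative delay of every
        adjacent node pair (link) on the path plus the switching delay of
        every node the path passes through, all in microseconds."""
        return sum(link_delays_us) + sum(node_switch_delays_us)

    # Purely illustrative numbers: three links of 20 us each and two
    # intermediate nodes that each switch the service with a 5 us delay.
    print(path_delay_us([20, 20, 20], [5, 5]))   # -> 70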
  • However, for a network element device node with mixed L0/L1/L2 scheduling capability as shown in Figure 3, the delay differs depending on which switching layer of the node the service passes through. The topology in Figure 2 therefore cannot cover the mixed-scheduling networking scenario, in which multiple switching delays occur as services pass through different switching layers of such nodes, so the shortest-delay-path calculation "cannot be implemented because of the uncertainty of the delay attribute of the calculation object".
  • The embodiments of the present invention can effectively meet the shortest-delay-path calculation requirements in 5G slicing application scenarios.
  • the switching delay link structure is one or more switching delay matrices; and establishing the switching delay link structure corresponding to the scheduling link of the physical network element node includes:
  • when the physical network element node is a head node or a tail node, establishing one switching delay matrix corresponding to the scheduling link of the physical network element node;
  • when the physical network element node is an intermediate node, establishing, between every two physical links, one switching delay matrix corresponding to the scheduling link of the physical network element node.
  • In detail, as shown in Figure 4, when the physical network element node is a head node or a tail node, one switching delay matrix can be established (shown as the enclosed structure in the figure), where A'0, A'1, A'2, and A'2' respectively represent the first, second, third, and fourth virtual layers, and each virtual layer consists of a single virtual node.
  • As shown in Figure 5, when the physical network element node is an intermediate node with two physical links, one switching delay matrix can be established, where A'01 and A'02 are the virtual node pair of the first virtual layer, A'11 and A'12 are the virtual node pair of the second virtual layer, A'21 and A'22 are the virtual node pair of the third virtual layer, and A'2'1' and A'2'2' are the virtual node pair of the fourth virtual layer.
  • As shown in Figure 6, when the physical network element node is an intermediate node with three physical links, one switching delay matrix can be established between each pair of the three physical links.
  • Similarly, as shown in Figure 7, when the physical network element node is an intermediate node with more than three physical links, one switching delay matrix can be established between each pair of physical links.
  • In 5G application scenarios, the scheduling links of services passing through a mixed-scheduling physical network element node fall into the following types: L0xL0; L0—L1xL1—L0; L0—L1—L2xL2—L1—L0; L0—L2xL2—L0 (here "x" denotes the switching function at the corresponding layer). Because each scheduling type passes through different switching layers, the total switching delay through a mixed-scheduling node differs between types, as illustrated by the sketch below.
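  • As a rough illustration of why the scheduling type matters, the following Python sketch computes the total switching delay incurred inside one mixed-scheduling node for each of the four types, using the example per-layer switching and adaptation delay values quoted later for the Figure 5 model; the function and dictionary names are illustrative only:

    # Illustrative per-layer switching delays and inter-layer adaptation delays
    # in nanoseconds, taken from the example values of the Figure 5 model below.
    SWITCH_NS = {"L0": 500, "L1": 5_000, "L2": 10_000}
    ADAPT_NS = {("L0", "L1"): 1_000, ("L1", "L2"): 3_000, ("L0", "L2"): 2_000}

    # Each scheduling type: the layer that performs the switching plus the
    # adaptation hops taken on the way in (mirrored on the way out).
    SCHEDULING_TYPES = {
        "L0xL0":             ("L0", []),
        "L0-L1xL1-L0":       ("L1", [("L0", "L1")]),
        "L0-L1-L2xL2-L1-L0": ("L2", [("L0", "L1"), ("L1", "L2")]),
        "L0-L2xL2-L0":       ("L2", [("L0", "L2")]),
    }

    def total_switching_delay_ns(sched_type):
        """Delay inside one mixed-scheduling node for a scheduling type:
        ingress adaptations + switching at the top layer + egress adaptations."""
        switch_layer, adapt_hops = SCHEDULING_TYPES[sched_type]
        adapt = sum(ADAPT_NS[hop] for hop in adapt_hops)
        return adapt + SWITCH_NS[switch_layer] + adapt

    for name in SCHEDULING_TYPES:
        print(name, total_switching_delay_ns(name), "ns")
    # L0xL0 -> 500, L0-L1xL1-L0 -> 7000,
    # L0-L1-L2xL2-L1-L0 -> 18000, L0-L2xL2-L0 -> 14000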
  • In some embodiments, the switching layer is an Open System Interconnection (OSI) model layer, and establishing one switching delay matrix corresponding to the scheduling link of the physical network element node includes:
  • establishing multiple virtual layers according to the OSI model layers of the physical network element node; establishing inter-layer adaptation delay links between the multiple virtual layers according to the scheduling link; and establishing the switching delay matrix from the multiple virtual layers and the inter-layer adaptation delay links.
  • The multiple virtual layers include a first virtual layer corresponding to the L0 layer, a second virtual layer corresponding to the L1 layer, and a third virtual layer corresponding to the L2 layer.
  • In some embodiments, the third virtual layer includes a first virtual sublayer and a second virtual sublayer, and establishing the inter-layer adaptation delay links between the multiple virtual layers, as shown in Figures 4-7, may include: establishing a first inter-layer adaptation delay link between the first virtual layer and the second virtual layer; establishing a second inter-layer adaptation delay link between the second virtual layer and the first virtual sublayer; and establishing a third inter-layer adaptation delay link between the first virtual layer and the second virtual sublayer.
  • In some embodiments, establishing multiple virtual layers according to the OSI model layers of the physical network element node includes: when the physical network element node is a head node or a tail node, as shown in Figure 4, establishing multiple virtual nodes according to the OSI model layers, each virtual node constituting one virtual layer; and when the physical network element node is an intermediate node, as shown in Figures 5-7, establishing multiple virtual layers according to the OSI model layers, where each virtual layer consists of a virtual node pair and an intra-layer adaptation delay link is established between the virtual node pair of each virtual layer.
  • In some embodiments, a delay value may also be configured for each intra-layer adaptation delay link.
  • In some embodiments, the virtual node pair includes a first virtual node and a second virtual node, and the inter-layer adaptation delay links between any two virtual layers include a first inter-layer adaptation delay link and a second inter-layer adaptation delay link; establishing the inter-layer adaptation delay links between the multiple virtual layers, as shown in Figures 5-7, further includes: establishing the first inter-layer adaptation delay link between the first virtual nodes of the two virtual layers, and establishing the second inter-layer adaptation delay link between the second virtual nodes of the two virtual layers.
  • For example, taking the OTN delay attribute topology of Figure 2, if network element node A is a node with mixed L0/L1/L2 scheduling capability, then physical network element nodes like node A can be abstracted into the virtualized delay models shown in Figures 4, 5, 6, and 7. The virtualized delay model may also be referred to as a virtualization structure, a virtualization model, and the like.
  • When the physical network element node serves as the head or tail node of a service connection or an OVPN virtual link, its virtualized delay model (Figure 4) is described as follows:
  • Link1, Link2, ..., Linkn denote the physical links corresponding to the physical fiber links by which this network element node connects to the external topology;
  • the nodes A'0, A'1, A'2, and A'2' respectively represent the virtual layers corresponding to the L0, L1, L2, and L2 layers of the mixed-scheduling node (the first virtual layer, the second virtual layer, the first virtual sublayer, and the second virtual sublayer), where A'2' can be regarded as a mirror image of A'2;
  • L'01 denotes the inter-layer adaptation delay link between the L0 and L1 layers, with a delay of 600 ns; L'12 denotes the inter-layer adaptation delay link between the L1 and L2 layers, with a delay of 500 ns; and L'02 denotes the adaptation link between the L0 and L2 layers, with a delay of 400 ns. A delay value can be preset for each inter-layer adaptation delay link.
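  • A minimal sketch of how these Figure 4 adaptation delays could be held as configuration data is shown below; the dictionary layout, and the attachment of L'02 to the mirror layer A'2', are assumptions made for illustration, while the names and delay values are those quoted above:

    # Figure 4 head/tail-node model held as plain configuration data.
    head_tail_model = {
        "virtual_layers": ["A'0", "A'1", "A'2", "A'2'"],   # L0, L1, L2, mirror of L2
        "adaptation_links": [
            {"name": "L'01", "ends": ("A'0", "A'1"),  "delay_ns": 600},
            {"name": "L'12", "ends": ("A'1", "A'2"),  "delay_ns": 500},
            {"name": "L'02", "ends": ("A'0", "A'2'"), "delay_ns": 400},  # bypasses L1 (assumed endpoint)
        ],
    }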
  • When this physical device node serves as an intermediate node of a service connection or an OVPN virtual link and has only two physical external fiber links, its virtualized delay model (Figure 5) is described as follows:
  • Link1 and Link2 denote the physical links corresponding to the physical fiber links by which this node connects to the external topology; the delay values of Link1 and Link2 depend on factors such as the external transmission distance and are not shown here;
  • in addition, the virtualized delay model structure of this network element node contains four virtual node pairs, four intra-node switching delay links, and three pairs of intra-node inter-layer adaptation delay links. Each virtual node pair represents the ingress or egress of the corresponding switching layer; each switching delay link is labeled with the switching delay of this device node at that switching layer; and each internal inter-layer adaptation delay link is labeled with its adaptation delay value. The internal switching delay links (that is, the intra-layer adaptation delay links) can be described as follows:
  • the virtual node pair corresponding to L0-layer switching: ingress/egress nodes A'01 and A'02; the switching delay of the switching delay link L'00 between them is, for example, 500 ns;
  • the virtual node pair corresponding to L1-layer switching: ingress/egress nodes A'11 and A'12; the switching delay of the switching delay link L'11 between them is, for example, 5 us;
  • the virtual node pair corresponding to L2-layer switching: ingress/egress nodes A'21 and A'22; the switching delay of the switching delay link L'22 between them is, for example, 10 us;
  • the inter-layer adaptation delay link pair L'01 and L'10 between the L0 and L1 layers, whose delay values are, for example, both 1 us;
  • the inter-layer adaptation delay link pair L'12 and L'21 between the L1 and L2 layers, whose delay values are, for example, both 3 us;
  • A'2'1' and A'2'2' are the mirror nodes of A'21 and A'22, respectively; L'2'2' is the mirrored switching delay link of L'22, and its delay value is necessarily the same as that of L'22; L'02 and L'20 are the adaptation delay link pair between the L0 and L2 layers, and their delay values are both 2 us; the node pair A'2'1' and A'2'2', the switching delay link L'2'2' between them, and the adaptation delay link pair L'02 and L'20 are used to describe the L0—L2xL2—L0 service scheduling type that does not pass through the L1 layer.
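  • Put together, the Figure 5 model can be written down as a plain list of link records, which is convenient when feeding the model to a path computation engine; the sketch below uses the names and example delay values quoted above, while the record layout and the exact pairing of ingress/egress ends of the adaptation links are assumptions made for illustration:

    # Links of the Figure 5 intermediate-node model as (name, ends, delay_ns, kind).
    figure5_links = [
        ("L'00",   ("A'01", "A'02"),      500,   "switching"),   # L0 switching
        ("L'11",   ("A'11", "A'12"),    5_000,   "switching"),   # L1 switching
        ("L'22",   ("A'21", "A'22"),   10_000,   "switching"),   # L2 switching
        ("L'2'2'", ("A'2'1'", "A'2'2'"), 10_000, "switching"),   # mirror of L'22
        ("L'01",   ("A'01", "A'11"),    1_000,   "adaptation"),  # L0 <-> L1
        ("L'10",   ("A'12", "A'02"),    1_000,   "adaptation"),
        ("L'12",   ("A'11", "A'21"),    3_000,   "adaptation"),  # L1 <-> L2
        ("L'21",   ("A'22", "A'12"),    3_000,   "adaptation"),
        ("L'02",   ("A'01", "A'2'1'"),  2_000,   "adaptation"),  # L0 <-> L2, bypassing L1
        ("L'20",   ("A'2'2'", "A'02"),  2_000,   "adaptation"),
    ]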
  • In some embodiments, before or after establishing the switching delay link structure corresponding to the scheduling link of the physical network element node, the method includes:
  • when the first virtual layer consists of a single virtual node, setting a physical link of the physical network element node on that virtual node;
  • when the first virtual layer consists of a virtual node pair, setting physical links of the physical network element node on the two virtual nodes of the virtual node pair; or establishing an external virtual node corresponding to each physical port of the physical network element node, establishing a physical link of the physical network element node on each external virtual node, and setting, on the two virtual nodes of the virtual node pair, internal virtual links that connect to the corresponding external virtual nodes.
  • For example, when this physical device node serves as an intermediate node of a service connection or an OVPN virtual link and has three physical external fiber links, its virtualized delay model (Figure 6) is described as follows:
  • Link1, Link2, and Link3 denote the physical links corresponding to the physical fiber links by which this node connects to the external topology; their link delay values depend on factors such as the external transmission distance and are not considered here;
  • the fiber-link physical ports of this node are abstracted into the external virtual nodes P'1, P'2, and P'3; any two external virtual nodes are connected through exactly one switching delay matrix, namely A', B', and C';
  • LinkP'11 and LinkP'12, LinkP'21 and LinkP'22, and LinkP'31 and LinkP'32 are the internal virtual links connecting the external virtual nodes P'1, P'2, and P'3 to the switching delay matrices A', B', and C'; they are a topological abstraction expressing the overall relationship of the node's virtual model, and their delay value can be expressed as 0 us;
  • the switching delay matrices A', B', and C' have the same model meaning: each represents the delay characteristics of the different switching scheduling types of services passing through the L0, L1, and L2 layers.
  • Taking the switching delay matrix A' as an example: the matrix A' contains four virtual child node pairs, four intra-matrix switching delay links, and three pairs of intra-matrix inter-layer adaptation delay links. Each virtual child node pair represents the ingress or egress of the corresponding switching layer; each switching delay link carries the switching delay of services at that layer, obtainable through link attribute configuration; and each intra-matrix inter-layer adaptation delay link also carries its adaptation delay value, obtainable through link attribute configuration. The virtual child nodes and links inside the switching delay matrix A' are defined as follows:
  • the virtual node pair corresponding to L0-layer switching: ingress/egress nodes A'01 and A'02; the switching delay link between them is A'L'0;
  • the virtual node pair corresponding to L1-layer switching: ingress/egress nodes A'11 and A'12; the switching delay link between them is A'L'1;
  • the virtual node pair corresponding to L2-layer switching: ingress/egress nodes A'21 and A'22; the switching delay link between them is A'L'2;
  • the adaptation delay link pair A'L'01 and A'L'10 between the L0 and L1 layers; the adaptation delay link pair A'L'12 and A'L'21 between the L1 and L2 layers;
  • A'2'1' and A'2'2' are the mirror nodes of A'21 and A'22, respectively; A'L'2' is the mirrored switching delay link of A'L'2, and its delay value is necessarily the same as that of A'L'2; A'L'02 and A'L'20 are the adaptation delay link pair between the L0 and L2 layers; the virtual node pair A'2'1' and A'2'2', the switching delay link A'L'2' between them, and the adaptation delay link pair A'L'02 and A'L'20 are used to describe the L0—L2xL2—L0 service scheduling type that goes from L0 to L2 without passing through the L1 layer.
  • When this physical device node serves as an intermediate node of a service connection or an OVPN virtual link and has more than three physical external fiber links, the virtualized delay model it is abstracted into (Figure 7) is described as follows: the dashed lines inside the model represent how this node's virtualization model expands in a similar way as the number of the physical node's external physical fiber links increases; the switching delay matrices are defined in the same way as in Figure 6; and the virtual nodes representing fiber-link physical ports inside the virtualized delay model are necessarily paired two by two, each pair being connected through two internal virtual links and one switching delay matrix.
  • This model structure guarantees that when a service passes through an intermediate node with mixed L2/L1/L0 scheduling capability, it necessarily traverses one internal switching delay matrix between any pair of ingress and egress fiber-link physical ports of the node; in this way, the delay generated by the service when passing through the node is accurately expressed in graph topology language through the form of the model structure.
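  • A small Python sketch of this pairwise construction is given below; the function and variable names are illustrative, and the only point it demonstrates is that an intermediate node with n external fiber links needs one switching delay matrix per pair of external virtual ports, each attached through two internal virtual links of 0 us delay:

    from itertools import combinations

    def build_intermediate_node_model(node_id, n_external_links):
        """Every pair of external virtual ports of the node is joined through
        exactly one switching delay matrix via two internal virtual links of
        0 us delay, as in Figures 6 and 7."""
        ports = [f"P'{i}" for i in range(1, n_external_links + 1)]
        matrices, internal_links = [], []
        for a, b in combinations(ports, 2):
            matrix_id = f"{node_id}_matrix_{a}_{b}"   # plays the role of A', B', C' in Figure 6
            matrices.append(matrix_id)
            internal_links.append((a, matrix_id, 0))  # internal virtual link, 0 us
            internal_links.append((matrix_id, b, 0))
        return ports, matrices, internal_links

    ports, matrices, links = build_intermediate_node_model("A", 3)
    print(len(matrices))   # -> 3 switching delay matrices for 3 external links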
  • Based on the virtualized delay models described in Figures 4-7, and assuming that nodes A, C, and E in the "OTN delay attribute topology map" of Figure 2 are physical network element nodes with mixed L0/L1/L2 scheduling capability while nodes B and D only have L0-layer scheduling capability, Figure 2 can be abstracted, depending on the scenario, into the virtualized delay models in Figure 8 or Figure 9.
  • Figure 8 shows the virtualized delay model used for the path delay optimization calculation when physical network element nodes A and E are respectively the head or tail node of a service connection or an OVPN virtual link; Figure 9 shows the virtualized delay model used when nodes B and D are respectively the head or tail node.
  • The nodes in the virtualized delay models on the right of Figures 8 and 9 are graph topology nodes with the same algorithmic meaning as those in the topology map, and the links in those models are graph topology links with the same algorithmic meaning.
  • An embodiment of the present invention provides a method for virtualizing a physical network element node.
  • Based on the virtualized delay model generated above, a method for performing path delay optimization calculation, as shown in FIG. 10, includes:
  • S201: Establish a switching delay link structure corresponding to the scheduling link of the physical network element node, where the scheduling link is a link to which a service can be scheduled at the corresponding switching layer when the service passes through the physical network element node;
  • S202: Generate a virtualized delay model of the physical network element node according to the physical links and the switching delay link structure;
  • S203: Generate a virtual network delay topology map of the optical transport network according to the virtualized delay models of the physical network element nodes in the optical transport network;
  • S204: Traverse the path branches of the virtual network delay topology map and perform the path delay optimization calculation.
  • The embodiment of the present invention introduces the virtualized delay model into the optical transport network, so that optimization based on path delay can effectively solve the problem in the related art that the OTN physical network topology cannot cover the multiple switching delays that occur when services pass through different switching layers of mixed-scheduling network element nodes, and can effectively meet the shortest-delay-path calculation requirements in 5G slicing application scenarios.
  • In some embodiments, the switching delay link structure is one or more switching delay matrices, and performing the path delay optimization calculation may further include:
  • when traversing the current topology node of the current path, if the next-hop topology node of the current topology node and the current topology node belong to the same physical network element node but to different switching delay matrices, filtering out that next-hop topology node and continuing to traverse the other next-hop topology nodes.
  • For example, taking the network scenario described in Figure 9, the path delay optimization calculation for mapping a service connection or an OVPN virtual link proceeds as follows (a code sketch of steps 3-6 is given after this list):
  • Step 1: Convert the network topology of Figure 2 into the virtual network delay topology on the right of Figure 9;
  • Step 2: Take delay optimization as the objective function and rely on the algorithm engine to calculate the delay-optimized path between nodes B' and D';
  • Step 3: When running the Dijkstra or KSP algorithm for path branch traversal and passing a node in the topology, examine the attributes of its next-hop node;
  • Step 4: If the next-hop topology node and a topology node that the path has already passed belong to the same physical network element node (for example, within the dash-dot circle indicated by node A on the right) but to different switching delay matrices, skip this topology node (this step prevents the path from forming a self-loop inside the virtual topology belonging to a single physical node during the calculation); for example, as shown in Figure 11, the next-hop branch of topology node A2' marked "×" is skipped while the next-hop branch marked "√" is selected by the algorithm, otherwise the delay path between nodes B' and D' would form a loop inside the virtual topology corresponding to A;
  • Step 5: Continue traversing the other next-hop nodes of this node until a next-hop node that does not meet the condition of step 4 is found;
  • Step 6: Continue the calculation according to the processing mechanism of the Dijkstra or KSP algorithm.
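  • The following compact Python sketch mirrors steps 3-6 above as a best-first search over the virtual network delay topology; the graph representation, dictionary names, and function name are illustrative assumptions, and the sketch favours clarity over the performance of a real path computation engine:

    import heapq

    def shortest_delay_path(graph, phys, matrix, src, dst):
        """Best-first search applying the step-4 filter: a next hop inside a
        switching delay matrix is skipped when the path has already visited a
        different matrix of the same physical node, which prevents self-loops
        inside one node's virtual topology (the branch marked "x" in Figure 11).

        graph:  {node: [(neighbor, delay_ns), ...]}
        phys:   {node: physical_node_id}
        matrix: {node: matrix_id or None}   # None for external virtual ports
        """
        heap = [(0, src, (src,))]
        while heap:
            delay, node, path = heapq.heappop(heap)
            if node == dst:
                return delay, list(path)
            for nxt, link_delay in graph.get(node, []):
                if nxt in path:
                    continue                      # never revisit a topology node
                m_nxt = matrix.get(nxt)
                if m_nxt is not None and any(
                    phys.get(seen) == phys.get(nxt)
                    and matrix.get(seen) not in (None, m_nxt)
                    for seen in path
                ):
                    continue                      # step 4: skip this branch
                heapq.heappush(heap, (delay + link_delay, nxt, path + (nxt,)))
        return None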
  • An embodiment of the present invention provides a device for virtualizing a physical network element node. As shown in FIG. 12, the device includes:
  • a scheduling link mapping module 1201, configured to establish a switching delay link structure corresponding to the scheduling link of the physical network element node, where the scheduling link is a link to which a service can be scheduled at the corresponding switching layer when the service passes through the physical network element node; and
  • a generating module 1202, configured to generate a virtualized delay model of the physical network element node according to the switching delay link structure.
  • In the embodiment of the present invention, a switching delay link structure is established for the scheduling links of a network element node, so that a virtualized delay model of the physical network element node can be generated according to the switching delay link structure. This effectively solves the problem in the related art that the topology of a mobile network, in particular of the OTN physical network, cannot cover the multiple switching delays that occur when services pass through different switching layers of mixed-scheduling network element nodes, and can effectively meet the shortest-delay-path calculation requirements in 5G slicing application scenarios.
  • In some embodiments, the switching delay link structure is one or more switching delay matrices;
  • the scheduling link mapping module 1201 is further configured to: when the physical network element node is a head node or a tail node, establish one switching delay matrix corresponding to the scheduling link of the physical network element node; and
  • when the physical network element node is an intermediate node, establish, between every two physical links, one switching delay matrix corresponding to the scheduling link of the physical network element node.
  • In some embodiments, the switching layer is an Open System Interconnection (OSI) model layer; when establishing one switching delay matrix corresponding to the scheduling link of the physical network element node, the physical link mapping module 1202 is configured to establish multiple virtual layers according to the OSI model layers of the physical network element node, establish inter-layer adaptation delay links between the multiple virtual layers according to the scheduling link, and establish the switching delay matrix from the multiple virtual layers and the inter-layer adaptation delay links.
  • In some embodiments, the OSI model layers include the L0, L1, and L2 layers; the multiple virtual layers include a first virtual layer corresponding to the L0 layer, a second virtual layer corresponding to the L1 layer, and a third virtual layer corresponding to the L2 layer.
  • In some embodiments, the third virtual layer includes a first virtual sublayer and a second virtual sublayer; when establishing the inter-layer adaptation delay links between the multiple virtual layers, the scheduling link mapping module 1201 is configured to establish a first inter-layer adaptation delay link between the first virtual layer and the second virtual layer, establish a second inter-layer adaptation delay link between the second virtual layer and the first virtual sublayer, and establish a third inter-layer adaptation delay link between the first virtual layer and the second virtual sublayer.
  • In some embodiments, when establishing multiple virtual layers according to the OSI model layers of the physical network element node, the scheduling link mapping module 1201 is configured to: when the physical network element node is a head node or a tail node, establish multiple virtual nodes according to the OSI model layers, each virtual node constituting one virtual layer; and when the physical network element node is an intermediate node, establish multiple virtual layers according to the OSI model layers, where each virtual layer consists of a virtual node pair and an intra-layer adaptation delay link is established between the virtual node pair of each virtual layer.
  • In some embodiments, the virtual node pair includes a first virtual node and a second virtual node, and the inter-layer adaptation delay links between any two virtual layers include a first inter-layer adaptation delay link and a second inter-layer adaptation delay link; when establishing the inter-layer adaptation delay links between the multiple virtual layers, the scheduling link mapping module 1202 is further configured to establish the first inter-layer adaptation delay link between the first virtual nodes of the two virtual layers and establish the second inter-layer adaptation delay link between the second virtual nodes of the two virtual layers.
  • In some embodiments, the apparatus further includes a physical link mapping module configured to: when the first virtual layer consists of a single virtual node, set a physical link of the physical network element node on that virtual node; when the first virtual layer consists of a virtual node pair, set physical links of the physical network element node on the two virtual nodes of the virtual node pair; or establish an external virtual node corresponding to each physical port of the physical network element node, set a physical link of the physical network element node on each external virtual node, and set, on the two virtual nodes of the virtual node pair, internal virtual links that connect to the corresponding external virtual nodes.
  • In some embodiments, the apparatus further includes a delay optimization module configured to generate a virtual network delay topology map of the optical transport network according to the virtualized delay models of the physical network element nodes in the optical transport network, traverse the path branches of the virtual network delay topology map, and perform the path delay optimization calculation.
  • In some embodiments, the switching delay link structure is one or more switching delay matrices; when performing the path delay optimization calculation, the delay optimization module is further configured to: when traversing the current topology node of the current path, if the next-hop topology node of the current topology node and the current topology node belong to the same physical network element node but to different switching delay matrices, filter out that next-hop topology node and continue to traverse the other next-hop topology nodes.
  • An embodiment of the present invention further provides a physical network element node device.
  • As shown in FIG. 13, the device includes a memory 1301 and a processor 1302; the memory 1301 stores a computer program for virtualizing a physical network element node, and the processor 1302 executes the computer program to implement the method for virtualizing a physical network element node provided by the embodiments of the present invention.
  • An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; the computer program can be executed by at least one processor to implement the method for virtualizing a physical network element node provided by the embodiments of the present invention.
  • sequence numbers of the foregoing embodiments of the present invention are merely for description, and do not represent the superiority or inferiority of the embodiments.
  • Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods in the above embodiments can be implemented by software plus a necessary universal hardware platform, or of course by hardware, although in many cases the former is the better implementation.
  • Based on such an understanding, the technical solution of the present invention, in essence or in the part that contributes over the related art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or a CD-ROM) and includes a number of instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention discloses a method, apparatus, device, and storage medium for virtualizing a physical network element node. The method includes: establishing a switching delay link structure corresponding to a scheduling link of the physical network element node, where the scheduling link is a link to which a service can be scheduled at the corresponding switching layer when the service passes through the physical network element node; and generating a virtualized delay model of the physical network element node according to the switching delay link structure.

Description

物理网元节点的虚拟化方法、装置、设备及存储介质
相关申请的交叉引用
本申请基于申请号为201810695749.0、申请日为2018年06月29日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本发明涉及但不限于通信技术领域,尤其涉及一种物理网元节点的虚拟化方法、装置、设备及存储介质。
背景技术
第五代移动通信技术(5th-Generation,5G)网络业务对时钟精度、时延、可靠性等性能指标都有了更高的要求。光传送网络(Optical Transport Network,OTN)通常被考虑部署在5G的中传和回传网上,通过将服务等级协议(Service Level Agreement,SLA)时延等级作为切片优化策略、采用资源优化算法对OTN网络资源进行切片处理,以满足占用对应切片资源的各种5G业务的时延要求。开放式系统互联(Open System Interconnection,OSI)网络模型的层次影响时延。通常在网络切片算法实施前,需要以相关技术中OTN物理网络拓扑为基础,根据OSI层次影响时延的因素,计算得出每个相邻网元节点对之间的时延累积值,并形成一张OTN时延属性拓扑图,并在此基础上实施以时延优化策略为目标函数的网络切片算法。
相关技术中,每个相邻网元节点对之间的时延累积值、经过每个网元节点的交换时延值都和对应节点的设备属性有关;对于只有一种交换技术能力的单一调度属性的网元节点而言,时延参数是单一的、且固定不变的。但对于具备层0(L0,layer)/L1/L2混合调度功能属性的网元节点而言,经 过节点不同的OSI层面时,对应的时延值是不同的,此时OTN物理网络拓无法涵盖混合调度网元节点组网场景下、业务经过网元节点不同交换层面时所出现的多种交换时延的情况,因而在时延优化策略中最短时延路径计算就会出现“因计算对象的时延属性的不确定性”而无法实施的问题。
发明内容
有鉴于此,本发明实施例期望提供一种物理网元节点的虚拟化方法、装置、设备及存储介质,用以解决相关技术中物理网络拓无法涵盖业务,经过混合调度网元节点不同交换层面时所出现的多种交换时延的问题。
本发明实施例提供一种物理网元节点的虚拟化方法,所述方法包括:
建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路;
根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型。
本发明实施例提供一种物理网元节点的虚拟化装置,所述装置包括:
调度链路映射模块,配置为建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路;
生成模块,配置为根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型。
本发明实施例提供一种物理网元节点设备,所述设备包括存储器和处理器,所述存储器存储有物理网元节点的虚拟化计算机程序,所述处理器配置为执行所述计算机程序,以实现本发明实施例提供的物理网元节点的虚拟化方法。
本发明实施例提供一种计算机可读存储介质,所述存储介质存储有物理网元节点的虚拟化计算机程序,所述计算机程序可被至少一个处理器执行,以实现本发明实施例提供的物理网元节点的虚拟化方法。
本发明实施例通过对物理网元节点建立物理链路,对网元节点的调度链路建立交换时延链路结构,从而可以根据物理链路和交换时延链路结构生成物理网元节点的虚拟化时延模型,进而可以有效解决相关技术中OTN物理网络拓无法涵盖业务,经过混合调度网元节点不同交换层面时所出现的多种交换时延的问题,可以在5G切片技术应用场景下的,有效满足时延最短路径计算要求。
附图说明
图1为本发明实施例中一种物理网元节点的虚拟化方法的流程图;
图2为包含L2/L1/L0混合调度能力节点的OTN时延属性拓扑图实例;
图3为具备L0/L1/L2混合调度功能的设备节点模型示意图;
图4为本发明实施例中作为业务连接或者OVPN虚拟链路的首或尾节点时的物理网元节点的虚拟化时延模型;
图5为本发明实施例中作为业务连接或者OVPN虚拟链路的中间节点且只有两条物理外部光纤链路时的物理网元节点的虚拟化时延模型;
图6为本发明实施例中作为业务连接或者OVPN虚拟链路的中间节点且有三条物理外部光纤链路时的物理网元节点的虚拟化时延模型;
图7为本发明实施例中作为业务连接或者OVPN虚拟链路的中间节点且有超过3条的多条物理外部光纤链路时的物理网元节点的虚拟化时延模型;
图8为本发明实施例中当物理节点A和E分别作为业务连接或者OVPN虚拟链路的首或尾节点时虚拟网络时延拓扑图示意图;
图9为本发明实施例中当物理节点B和D分别作为业务连接或者OVPN虚拟链路的首或尾节点时虚拟网络时延拓扑图示意图;
图10为本发明实施例中一种物理网元节点的虚拟化方法的流程图;
图11为本发明实施例中虚拟节点内部自环现象及判断示意图;
图12为本发明实施例中一种物理网元节点的虚拟化装置的结构示意图;
图13为本发明实施例中一种物理网元节点设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的示例性实施例。虽然附图中显示了本公开的示例性实施例,然而应当理解,可以以各种形式实现本公开而不应被这里阐述的实施例所限制。相反,提供这些实施例是为了能够更透彻地理解本公开,并且能够将本公开的范围完整的传达给本领域的技术人员。
本发明实施例提供一种物理网元节点的虚拟化方法,如图1所示,所述方法包括:
S101,建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换功能的链路;
S102,根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型。
本发明实施例通过对网元节点的调度链路建立交换时延链路结构,从而可以根据交换时延链路结构生成物理网元节点的虚拟化时延模型,进而可以有效解决相关技术中移动通信网络,特别OTN物理网络的拓扑无法涵盖业务经过混合调度网元节点不同交换层面时所出现的多种交换时延的问题,可以在5G切片技术应用场景下的,有效满足时延最短路径计算要求。
本发明实施例中交换层一般对应一种交换功能,例如OSI网络模型中的OSI网络模型层,其中OSI网络模型层可以包括L0层,即OSI网络模型中的第0层;L1层,即OSI网络模型中的第1层;L2层,即OSI网络模型中的第2层;L3层,即OSI网络模型中的第3层。本发明实施例中混合调度网元节点指代具有混合交换功能调度能力的网元节点,例如具有上 述各个OSI网络模型层的网元节点。本发明实施例中交换时延链路结构为对应预设时延的一种虚拟的链路结构。在一些实施例中,也可以包括对层间适配时延链路配置时延值。
5G网络业务对时钟精度、时延、可靠性等性能指标都有了更高的要求:时钟精度达到纳秒级,时延要求到微秒级。为满足带宽、时延和可靠性等业务需求,涌现出了超高可靠超低时延通信(Ultra Reliable&Low Latency Communication,uRLLC)、海量机器类通信(massive Machine Type of Communication,mMTC)、和增强移动宽带(Enhance Mobile Broadband,eMBB)等服务理念,而传统的服务质量(Quality of Service,QoS)策略已经不能应对上述需求。网络切片技术可以给不同的业务分配不同的网络资源,在一个独立的物理网络上切分出多个逻辑的网络,进而根据切片的服务等级协议(Service Level Agreement,SLA)等级,实现资源的预分配、预优化,对不同切片上业务的带宽、时延等进行精确控制,以实现对网络资源的充分和有效利用。
作为5G系统组网架构中的重要组成部分,OTN光传送网通常被考虑部署在5G的中传和回传网上,通过将SLA时延等级作为切片优化策略、采用资源优化算法对OTN网络资源进行切片处理,以满足占用对应切片资源的各种5G业务的时延要求,是OTN满足5G组网要求的关键技术。OSI网络模型中影响时延的因素如表1所示。
表1:OSI网络模型对应的各层时延情况
（表1的内容在原始公布文本中以图像形式提供，此处未以文本再现。）
通常在切片算法实施前,需要以相关技术中OTN物理网络拓扑为基础,根据表1中描述的影响时延的因素,计算得出每个相邻网元节点对之间的时延累积值,并形成一张OTN时延属性拓扑图,如图2所示,并在此基础上实施以时延优化策略为目标函数的网络切片算法。
网络切片算法通常包含以图2为计算对象的、指定网元节点对之间的最短时延路径子算法,最短时延要求指定网元节点对之间的路径所经过的每个相邻网元节点对之间的时延累积值、经过的每个节点的交换时延的总和最短。通常情况下,每个相邻网元节点对之间的时延累积值、经过的每个网元节点的交换时延值都和对应节点的设备属性有关:对于只有一种交换技术能力的单一调度网元节点而言,这些时延参数都是单一的、且固定不变的。
但对于如图3所示的具备L0/L1/L2混合调度功能的网元设备节点模型而言,经过节点不同的交换层面时的时延值是不同的,此时图2的拓扑无法涵盖混合调度节点组网场景下、业务经过节点不同交换层面时所出现的多种交换时延的情况,因而最短时延路径计算就会出现“因计算对象的时延属性的不确定性”而无法实施的问题,而本发明实施例可以在5G切片技术应用场景下的,有效地满足时延最短路径计算要求。
在一些实施例中,所述交换时延链路结构为一个或多个交换时延矩阵;所述建立与所述物理网元节点的调度链路对应的交换时延链路结构,包括:
当所述物理网元节点为首节点或尾节点时,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵;
当所述物理网元节点为中间节点时,在每两个物理链路之间,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵。
详细地,如图4所示,当物理网元节点为首节点或尾节点,可以建立了一个交换时延矩阵,其中图中虚心中所示的为交换时延矩阵,A'0、A'1、A'2、A'2'分别表示第一虚拟层、第二虚拟层、第三虚拟层、第四虚拟层,各个虚拟层又一个虚拟节点构成。如图5所示,当物理网元节点为中间节点,且具有两个物理链路,可以在建立一个交换时延矩阵,其中A'01、A'02为第一虚拟层的虚拟节点对,A'11、A'12为第二虚拟层的虚拟节点对,A'21、A'22为第三虚拟层的虚拟节点对,A'2'1'、A'2'2'为第四虚拟层的虚拟节点对。如图6所示,当物理网元节点为中间节点,且具有三个物理链路,从而可以在三个物理链路对应的物理链路之间,两两建立一个交换时延矩阵。同理,如图7所示,当物理网元节点为中间节点,且具有三个以上物理链路,可以在三个物理链路对应的物理链路之间,两两建立一个交换时延矩阵。
在5G技术应用场景中,业务经过混合物理网元节点的调度链路分以下几种:L0xL0;L0—L1xL1—L0;L0—L1—L2xL2—L1—L0;L0—L2xL2—L0(此处的“x”表示在对应层面的交换功能);
由于每种业务调度经过不同的交换层面,因而不同的调度类型,其经过混合调度节点的总交换时延是不同的;在软件定义网络(Software Defined Network,SDN)或基于波分复用(Wavelength Division Multiplexing,WDM)/OTN的自动交换光网络(WDM/OTN Automatically Switched Optical Network,WASON)中按时延优化策略算路时,需要考虑混合调度节点的上述交换调度类型特征,如图2所示的单一的节点模型表述是无法满足时 延算路要求的,因此本发明实施例中通过节点虚拟化技术生成虚拟化时延模型,将具备上述几种调度类型特征的物理网元节点能以算路拓扑的形式被表示出来,从而满足以时延优化为目标策略的算法要求。
基于此,在一些实施例中,所述交换层为开放式系统互联OSI模型层;所述建立一个与所述物理网元节点的调度链路对应的交换时延矩阵,包括:
根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层;
根据所述调度链路,在所述多个虚拟层之间建立层间适配时延链路;
根据所述多个虚拟层和所述层间适配时延链路,建立所述交换时延矩阵。
其中,所述多个虚拟层包括与所L0层对应的第一虚拟层、与所述L1层对应的第二虚拟层以及与所述L2层对应的第三虚拟层。
在一些实施例中,所述第三虚拟层包括第一虚拟子层和第二虚拟子层;所述在所述多个虚拟层之间建立层间适配时延链路,如图4-图7所示,可以包括:
在所述第一虚拟层和所述第二虚拟层之间建立第一层间适配时延链路;
在所述第二虚拟层和所述第一虚拟子层之间建立第二层间适配时延链路;
在所述第一虚拟层和所述第二虚拟子层之间建立第三层间适配时延链路。
在一些实施例中,所述根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层,包括:
当所述物理网元节点为首节点或尾节点时,如图4所示,可以根据所述OSI模型层建立多个虚拟节点,每个虚拟节点构成一虚拟层;
当所述物理网元节点为中间节点时,如图5-图7所示,根据所述OSI模型层建立多个虚拟层;其中,每个虚拟层由虚拟节点对构成;在所述每个虚拟层的虚拟节点对之间建立层内适配时延链路。
在一些实施例中,也可以包括对层内适配时延链路配置时延值。
在一些实施例中,所述虚拟节点对包括第一虚拟节点和第二虚拟节点;任意两个虚拟层之间的层间适配时延链路包括第一层间适配时延链路和第二层间适配时延链路;所述在所述多个虚拟层之间建立层间适配时延链路,如图5-图7所示,还包括:
在所述任意两个虚拟层的第一虚拟节点之间建立所述第一层间适配时延链路,在所述任意两个虚拟层的第二虚拟节点之间建立所述第二层间适配时延链路。
例如,以图2中的OTN时延属性拓扑为例,如网元节点A为具有L0/L1/L2混合调度功能节点,则类似节点A对应的物理网元节点可分别被抽象成如图4、图5、图6、图7所示的虚拟化时延模型。其中虚拟化时延模型也可以表述为虚拟化结构、虚拟化模型等。
其中,当物理网元节点作为业务连接或者OVPN虚拟链路的首或尾节点时,物理网元节点的虚拟化时延模型描述如图3所示:
1、对应于网络设备意义上的物理网元节点,被抽象成点化线范畴内的虚拟化时延模型;
2、链路(Link)1,Link2,......直到Linkn表示本网元节点与外部拓扑相连接的物理光纤链路所对应的物理链路;
3、节点A'0、A'1、A'2、A'2'分别表示该混合调度节点的L0、L1、L2、L2层对应的虚拟层(第一虚拟层、第二虚拟层、第一虚拟子层、第二虚拟子层),A'2'可以表示A'2的镜像;
4、L'01表示L0和L1层之间的层间适配时延链路,时延是600ns;L'12表示L1和L2层之间的层间适配时延链路,时延是500ns;L'02——L0和L2层之间的适配链路,时延是400ns。其中,每个层间适配时延链路可以预设的时延值。
当本物理设备节点作为业务连接或者OVPN虚拟链路的中间节点且只 有两条物理外部光纤链路时,物理网元节点的虚拟化时延模型描述如图4所示:
1、对应于网络设备意义上的物理节点,被抽象成点化线范畴内的虚拟化时延模型;
2、Link1,Link2表示本节点与外部拓扑相连接的物理光纤链路所对应的物理链路;Link1,Link2所对应的物理链路时延值依赖于外部传送距离等因素,此处不做表示;
3、此外,该网元节点虚拟化时延模型结构共包含4个虚拟节点对、4条节点内部交换时延链路,3对节点内部层间适配时延链路;每个虚拟节点对代表了对应交换层面的入端或出端;每条交换时延链路上都标识了该设备节点在该交换层面的交换时延值;每条内部层间适配时延链路上也都标识了适配时延值;其中,内部交换时延链路(即层内适配时延链路)具体情况可描述为:
L0层交换对应的虚拟化节点对:入端或出端节点A'01、A'02,两者之间的交换时延链路L'00的交换时延值举例为500ns;
L1层交换对应的虚拟化节点对:入端或出端节点A'11、A'12,两者之间的交换时延链路L'11的交换时延值举例为5us;
具备L2层交换功能对应的虚拟化节点对:入端与出端A'21、A'22,两者之间的交换时延链路L'22的交换时延值举例为10us;
L0层与L1层之间对应的层间适配时延链路对L'01和L'10,其时延值举例均为1us;
L1层与L2层之间对应的层间适配时延链路对L'12和L'21,其时延值举例均为3us;
A'2'1'、A'2'2',这个节点对分别是A'21、A'22的镜像节点;L'2'2'是L'22的镜像交换时延链路,其时延值和L'22必然相同;L'02和L'20是L0层与L2层之间的适配时延链路对,其时延值均为2us;A'2'1'、A'2'2'这个节点对 和它们之间的交换时延链路L'2'2'、以及适配时延链路对L'02和L'20,用以描述不经过L1层的L0—L2xL2—L0的业务调度类型。
在一些实施例中,所述建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路之前或之后,包括:
当所述第一虚拟层由一虚拟节点构成时,在该虚拟节点上设置所述物理网元节点的物理链路;
当所述第一虚拟层由虚拟节点对构成时,在该虚拟节点对的两个虚拟节点上设置所述物理网元节点的物理链路;或者,建立与所述物理网元节点的各个物理端口对应的外接虚拟节点,在各个外接虚拟节点上建立所述物理网元节点的物理链路,在所述虚拟节点对的两个虚拟节点上分别设置用于连接相应外接虚拟节点的内接虚拟链路。
例如,当本物理设备节点作为业务连接或者OVPN虚拟链路的中间节点且有三条物理外部光纤链路时,物理网元节点的虚拟化时延模型描述如图6所示:
1、对应于网络设备意义上的物理节点,被抽象成大点化线圆圈范畴内的虚拟化模型结构;
2、Link1、Link2,和Link3表示本节点与外部拓扑相连接的物理光纤链路所对应的物理链路;Link1、Link2,和Link3所对应的链路时延值依赖于外部传送距离等因素,此处不予考虑;
3、本节点的光纤链路物理端口分别被抽象成外接虚拟节点P'1、P'2,和P'3;
4、任意两个外接虚拟节点之间必经过一个交换时延矩阵,分别是交换时延矩阵A'、B'和C';
5、LinkP'11和LinkP'12、LinkP'21和LinkP'22、LinkP'31和LinkP'32分别是外接虚拟节点P'1、P'2和P'3与交换时延矩阵A'、B',和C'相连接的 内接虚拟链路,是表示节点虚拟模型整体关系的拓扑化抽象描述,其时延值可表示为0us;
6、交换时延矩阵A'、B'和C'的模型意义相同,都是表示:业务在经过L0、L1、L2层时所对应的不同交换调度模型的时延特征。现以交换时延矩阵A'为例进行阐述:交换时延矩阵A'共包含4个虚拟子节点对、4条矩阵内部交换时延链路和3对矩阵内部层间适配时延链路;每个虚拟子节点对代表了对应交换层面的入端或出端;每条交换时延链路都有对应的业务在该交换层面的交换时延值,可通过链路属性配置来获得;每条矩阵内部层间适配时延链路也都有对应的适配时延值,可通过链路属性配置来获得。交换时延矩阵A'内部各个虚拟子节点和链路的定义如下:
L0层交换对应的虚拟化节点对:入端或出端节点A'01、A'02,两者之间的交换时延链路是A'L'0;
L1层交换对应的虚拟化节点对:入端或出端节点A'11、A'12,两者之间的交换时延链路是A'L'1;
具备L2层交换功能对应的虚拟化节点对:入端与出端节点A'21、A'22,两者之间的交换时延链路是A'L'2;
L0层与L1层之间对应的适配时延链路对A'L'01和A'L'10;
L1层与L2层之间对应的适配时延链路对A'L'12和A'L'21;
A'2'1'、A'2'2',这个虚拟节点对分别是A'21、A'22的镜像节点;A'L'2'是A'L'2的镜像交换时延链路,其时延值和A'L'2必然相同;A'L'02和A'L'20是L0层与L2层之间的适配时延链路对;A'2'1'、A'2'2'这个虚拟节点对和它们之间的交换时延链路A'L'2'、以及适配时延链路对A'L'02和A'L'20,用以描述从L0到L2层且不经过L1层的L0—L2xL2—L0的业务调度类型。
当然,当本物理设备节点作为业务连接或者OVPN虚拟链路的中间节点且有超过3条的多条物理外部光纤链路时,物理设备节点被抽象成的虚拟化时延模型描述如图7所示:
1、对应于网络设备意义上的物理网元节点,被抽象成大点化线圆圈范畴内的虚拟化模型结构;
2、点化线内的虚线代表随着该物理节点的外部物理光纤链路条数的增加,本节点虚拟化模型的相似拓展;
3、本图中交换时延矩阵定义和图6描述相同;
4、值得说明的是,虚拟化时延模型内部用来表示光纤链路物理端口的虚节点相互间必两两结对,并通过两条内部虚拟链路和一个交换时延矩阵相连接。
5、该模型结构保证了业务在经过具备L2/L1/L0混合调度能力的中间节点时,在经过该节点任意一对入端和出端光纤链路物理端口时,必然经过一个内部时延交换矩阵:从而将业务在经过该节点时产生的时延通过该模型结构的形式,准确地用图拓扑语言表示出来。
基于如图4-图7描述的虚拟化时延模型,假设图2的“OTN时延属性拓扑图”中的A、C、E为具备L0/L1/L2层混合调度功能的物理网元节点,节点B和D为仅具有L0层调度功能的物理网元节点,则结合具备场景,图2可被抽象定义成图8或图9中的虚拟化时延模型模型。其中,图8表示当物理网元节点A和E分别作为业务连接或者OVPN虚拟链路的首或尾节点时,路径时延优化计算所对应的虚拟化时延模型;图9表示当物理网元节点B和D分别作为业务连接或者OVPN虚拟链路的首或尾节点时,路径时延优化计算所对应的虚拟化时延模型。其中图8和图9内右侧的虚拟化时延模型中的节点是拓扑图中的具有相同算法逻辑意义的图拓扑节点;图7和图8内右侧的虚拟化时延模型中的链路是拓扑图中的具有相同算法逻辑意义的图拓扑链路。
本发明实施例提供一种物理网元节点的虚拟化方法,基于前述生成的虚拟化时延模型进行如路径时延优化计算的方法,如图10所示,包括:
S201,建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路;
S202,根据物理链路和所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型;
S203,根据所述光传送网络中各个物理网元节点的虚拟化时延模型生成所述光传送网络的虚拟网络时延拓扑图;
S204,遍历所述虚拟网络时延拓扑图的路径分支,并进行路径时延优化计算。
本发明实施例将虚拟化时延模型引入到光传送网络中,从而在基于行路径时延优化计算可以有效解决相关技术中OTN物理网络拓无法涵盖业务经过混合调度网元节点不同交换层面时所出现的多种交换时延的问题,可以在5G切片技术应用场景下的,有效满足时延最短路径计算要求。
在一些实施例中,所述交换时延链路结构为一个或多个交换时延矩阵;所述进行路径时延优化计算,还可以包括:
在遍历当前路径的当前拓扑节点时,如果所述当前拓扑节点的下一跳拓扑节点与所述当前拓扑节点属于同一物理网元节点,且属于不同的交换时延矩阵,则滤过所述下一跳拓扑节点,继续遍历其他的下一跳拓扑节点。
例如,以图9描述的网络场景为例,业务连接或者OVPN虚拟链路映射的路径时延优化计算步骤如下:
步骤1:将图2的网络拓扑转换成图9中的右侧虚拟网络时延拓扑;
步骤2:以时延优化为目标函数,依靠算法引擎,计算节点B'和D'之间的时延优化路径;
步骤3:当运行Dijkstra算法或KSP算法进行路径分支遍历,当经过某个拓扑中的节点时,判断它的下一跳节点属性;
步骤4:如果下一跳拓扑节点与该条路径已经经过的拓扑节点属于同一 物理网元节点(如右侧A节点指示的点化线圈内)范畴、且属于不同的交换时延矩阵,则略过这个拓扑节点(该步骤用于防止“算法计算过程中出现路径在属于同一个物理节点的虚拓扑内形成自环”的情况);例如,如图11所示,拓扑节点A2'的带“×”的下一跳分支即被略过、而带“√”的下一跳分支将被算法选用,否则节点B'和D'之间的时延路径将在A对应的虚拟拓扑中形成环路;
步骤5:继续遍历该节点的其他的下一跳节点,直到找到不满足步骤4条件的下一跳节点;
步骤6:继续按Dijkstra算法或KSP算法的处理机制进行计算处理。
本发明实施例提供一种物理网元节点的虚拟化装置,如图12所示,所述装置包括:
调度链路映射模块1201,配置为建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路;
生成模块1202,配置为根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型。
本发明实施例对网元节点的调度链路建立交换时延链路结构,从而可以根据交换时延链路结构生成物理网元节点的虚拟化时延模型,进而可以有效解决相关技术中移动网络特别是OTN物理网络的拓扑无法涵盖业务经过混合调度网元节点不同交换层面时所出现的多种交换时延的问题,可以在5G切片技术应用场景下的,有效满足时延最短路径计算要求。
在一些实施例中,所述交换时延链路结构为一个或多个交换时延矩阵;所述调度链路映射模块1201,还配置为当所述物理网元节点为首节点或尾节点时,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵;
当所述物理网元节点为中间节点时,在每两个物理链路之间,建立一 个与所述物理网元节点的调度链路对应的交换时延矩阵。
在一些实施例中,所述交换层为开放式系统互联OSI模型层;所述物理链路映射模块1202在建立一个与所述物理网元节点的调度链路对应的交换时延矩阵时,配置为根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层;根据所述调度链路,在所述多个虚拟层之间建立层间适配时延链路;根据所述多个虚拟层和所述层间适配时延链路,建立所述交换时延矩阵。
在一些实施例中,所述OSI模型层包括L0层、L1层和L2层;所述多个虚拟层包括与所L0层对应的第一虚拟层、与所述L1层对应的第二虚拟层以及与所述L2层对应的第三虚拟层。
在一些实施例中,所述第三虚拟层包括第一虚拟子层和第二虚拟子层;所述调度链路映射模块1201在所述多个虚拟层之间建立层间适配时延链路时,配置为在所述第一虚拟层和所述第二虚拟层之间建立第一层间适配时延链路;在所述第二虚拟层和所述第一虚拟子层之间建立第二层间适配时延链路;在所述第一虚拟层和所述第二虚拟子层之间建立第三层间适配时延链路。
在一些实施例中,所述调度链路映射模块1201在根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层时,配置为当所述物理网元节点为首节点或尾节点时,根据所述OSI模型层建立多个虚拟节点,每个虚拟节点构成一虚拟层;当所述物理网元节点为中间节点时,根据所述OSI模型层建立多个虚拟层;其中,每个虚拟层由虚拟节点对构成;在所述每个虚拟层的虚拟节点对之间建立层内适配时延链路。
在一些实施例中,所述虚拟节点对包括第一虚拟节点和第二虚拟节点;任意两个虚拟层之间的层间适配时延链路包括第一层间适配时延链路和第二层间适配时延链路;所述调度链路映射模块1202在所述多个虚拟层之间建立层间适配时延链路时,还配置为在所述任意两个虚拟层的第一虚拟节 点之间建立所述第一层间适配时延链路,在所述任意两个虚拟层的第二虚拟节点之间建立所述第二层间适配时延链路。
在一些实施例中,所述装置还包括物理链路映射模块,所述物理链路映射模块,配置为当所述第一虚拟层由一虚拟节点构成时,在该虚拟节点上设置所述物理网元节点的物理链路;当所述第一虚拟层由虚拟节点对构成时,在该虚拟节点对的两个虚拟节点上建立所述物理网元节点的物理链路;或者,建立与所述物理网元节点的各个物理端口对应的外接虚拟节点,在各个外接虚拟节点上设置所述物理网元节点的物理链路,在所述虚拟节点对的两个虚拟节点上分别设置用于连接相应外接虚拟节点的内接虚拟链路。
在一些实施例中,所述装置还包括时延优化模块,配置为根据所述光传送网络中各个物理网元节点的虚拟化时延模型生成所述光传送网络的虚拟网络时延拓扑图;遍历所述虚拟网络时延拓扑图的路径分支,并进行路径时延优化计算。
在一些实施例中,所述交换时延链路结构为一个或多个交换时延矩阵;所述时延优化模块配置为进行路径时延优化计算,还配置为在遍历当前路径的当前拓扑节点时,如果所述当前拓扑节点的下一跳拓扑节点与所述当前拓扑节点属于同一物理网元节点,且属于不同的交换时延矩阵,则滤过所述下一跳拓扑节点,继续遍历其他的下一跳拓扑节点。
本发明实施例还提供一种物理网元节点设备,如图13所示,所述设备包括存储器1301和处理器1302,所述存储器1301存储有物理网元节点的虚拟化计算机程序,所述处理器1302执行所述计算机程序以实现本发明实施例提供的物理网元节点的虚拟化方法。
本发明实施例提供一种提供计算机可读存储介质,所述计算机可读存 储介质上存储有计算机程序,所述计算机程序可被至少一个处理器执行,以实现本发明实施例提供的物理网元节点的虚拟化方法。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对相关技术中技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本发明各个实施例所述的方法。
上面结合附图对本发明的实施例进行了描述,但是本发明并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本发明的启示下,在不脱离本发明宗旨和权利要求所保护的范围情况下,还可做出很多形式,这些均属于本发明的保护之内。

Claims (22)

  1. 一种物理网元节点的虚拟化方法,所述方法包括:
    建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路;
    根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型。
  2. 如权利要求1所述的方法,其中,所述交换时延链路结构为一个或多个交换时延矩阵;所述建立与所述物理网元节点的调度链路对应的交换时延链路结构,包括:
    当所述物理网元节点为首节点或尾节点时,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵;
    当所述物理网元节点为中间节点时,在每两个物理链路之间,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵。
  3. 如权利要求2所述的方法,其中,所述交换层为开放式系统互联OSI模型层;所述建立一个与所述物理网元节点的调度链路对应的交换时延矩阵,包括:
    根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层;
    根据所述调度链路,在所述多个虚拟层之间建立层间适配时延链路;
    根据所述多个虚拟层和所述层间适配时延链路,建立所述交换时延矩阵。
  4. 如权利要求3所述的方法,其中,所述OSI模型层包括L0层、L1层和L2层;所述多个虚拟层包括与所L0层对应的第一虚拟层、与所述L1层对应的第二虚拟层以及与所述L2层对应的第三虚拟层。
  5. 如权利要求4所述的方法,其中,所述第三虚拟层包括第一虚拟子层和第二虚拟子层;所述在所述多个虚拟层之间建立层间适配时延链路,包括:
    在所述第一虚拟层和所述第二虚拟层之间建立第一层间适配时延链路;
    在所述第二虚拟层和所述第一虚拟子层之间建立第二层间适配时延链路;
    在所述第一虚拟层和所述第二虚拟子层之间建立第三层间适配时延链路。
  6. 如权利要求3所述的方法,其中,所述根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层,包括:
    当所述物理网元节点为首节点或尾节点时,根据所述OSI模型层建立多个虚拟节点,每个虚拟节点构成一虚拟层;
    当所述物理网元节点为中间节点时,根据所述OSI模型层建立多个虚拟层;其中,每个虚拟层由虚拟节点对构成;在所述每个虚拟层的虚拟节点对之间建立层内适配时延链路。
  7. 如权利要求6所述的方法,其中,所述虚拟节点对包括第一虚拟节点和第二虚拟节点;任意两个虚拟层之间的层间适配时延链路包括第一层间适配时延链路和第二层间适配时延链路;所述在所述多个虚拟层之间建立层间适配时延链路,还包括:
    在所述任意两个虚拟层的第一虚拟节点之间建立所述第一层间适配时延链路,在所述任意两个虚拟层的第二虚拟节点之间建立所述第二层间适配时延链路。
  8. 如权利要求4或5所述的方法,其中,所述方法还包括:
    当所述第一虚拟层由一虚拟节点构成时,在该虚拟节点上设置所述物理网元节点的物理链路;
    当所述第一虚拟层由虚拟节点对构成时,在该虚拟节点对的两个虚拟节点上设置所述物理网元节点的物理链路;或者,建立与所述物理网元节点的各个物理端口对应的外接虚拟节点,在各个外接虚拟节点上设置所述物理网元节点的物理链路,在所述虚拟节点对的两个虚拟节点上分别设置 用于连接相应外接虚拟节点的内接虚拟链路。
  9. 如权利要求1-7中任意一项所述的方法,其中,所述根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型之后,包括:
    根据所述光传送网络中各个物理网元节点的虚拟化时延模型生成所述光传送网络的虚拟网络时延拓扑图;
    遍历所述虚拟网络时延拓扑图的路径分支,并进行路径时延优化计算。
  10. 如权利要求9所述的方法,其中,所述交换时延链路结构为一个或多个交换时延矩阵;所述进行路径时延优化计算,还包括:
    在遍历当前路径的当前拓扑节点时,如果所述当前拓扑节点的下一跳拓扑节点与所述当前拓扑节点属于同一物理网元节点,且属于不同的交换时延矩阵,则滤过所述下一跳拓扑节点,继续遍历其他的下一跳拓扑节点。
  11. 一种物理网元节点的虚拟化装置,所述装置包括:
    调度链路映射模块,配置为建立与所述物理网元节点的调度链路对应的交换时延链路结构;所述调度链路为业务经过所述物理网元节点时可被调度到相应交换层的链路;
    生成模块,配置为根据所述交换时延链路结构生成所述物理网元节点的虚拟化时延模型。
  12. 如权利要求11所述的装置,其中,所述交换时延链路结构为一个或多个交换时延矩阵;所述调度链路映射模块,还配置为当所述物理网元节点为首节点或尾节点时,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵;
    当所述物理网元节点为中间节点时,在每两个物理链路之间,建立一个与所述物理网元节点的调度链路对应的交换时延矩阵。
  13. 如权利要求12所述的装置,其中,所述交换层为开放式系统互联OSI模型层;所述物理链路映射模块在建立一个与所述物理网元节点的调度链路对应的交换时延矩阵时,配置为根据所述物理网元节点的开放式系统 互联OSI模型层建立多个虚拟层;根据所述调度链路,在所述多个虚拟层之间建立层间适配时延链路;根据所述多个虚拟层和所述层间适配时延链路,建立所述交换时延矩阵。
  14. 如权利要求13所述的装置,其中,所述OSI模型层包括L0层、L1层和L2层;所述多个虚拟层包括与所L0层对应的第一虚拟层、与所述L1层对应的第二虚拟层以及与所述L2层对应的第三虚拟层。
  15. 如权利要求14所述的装置,其中,所述第三虚拟层包括第一虚拟子层和第二虚拟子层;所述调度链路映射模块在所述多个虚拟层之间建立层间适配时延链路时,配置为在所述第一虚拟层和所述第二虚拟层之间建立第一层间适配时延链路;在所述第二虚拟层和所述第一虚拟子层之间建立第二层间适配时延链路;在所述第一虚拟层和所述第二虚拟子层之间建立第三层间适配时延链路。
  16. 如权利要求13所述的装置,其中,所述调度链路映射模块在根据所述物理网元节点的开放式系统互联OSI模型层建立多个虚拟层时,配置为当所述物理网元节点为首节点或尾节点时,根据所述OSI模型层建立多个虚拟节点,每个虚拟节点构成一虚拟层;当所述物理网元节点为中间节点时,根据所述OSI模型层建立多个虚拟层;其中,每个虚拟层由虚拟节点对构成;在所述每个虚拟层的虚拟节点对之间建立层内适配时延链路。
  17. The apparatus according to claim 16, wherein the virtual node pair comprises a first virtual node and a second virtual node, the inter-layer adaptation delay links between any two virtual layers comprise a first inter-layer adaptation delay link and a second inter-layer adaptation delay link, and the scheduling link mapping module, when establishing the inter-layer adaptation delay links between the plurality of virtual layers, is further configured to establish the first inter-layer adaptation delay link between the first virtual nodes of the any two virtual layers, and establish the second inter-layer adaptation delay link between the second virtual nodes of the any two virtual layers.
  18. The apparatus according to claim 14 or 15, wherein the apparatus further comprises:
    a physical link mapping module, configured to: when the first virtual layer is constituted by one virtual node, set the physical links of the physical network element node on the virtual node; when the first virtual layer is constituted by a virtual node pair, set the physical links of the physical network element node on the two virtual nodes of the virtual node pair; or, establish external virtual nodes corresponding to respective physical ports of the physical network element node, establish the physical links of the physical network element node on the respective external virtual nodes, and set, on the two virtual nodes of the virtual node pair, internal virtual links respectively for connecting to the corresponding external virtual nodes.
  19. The apparatus according to any one of claims 11 to 17, wherein the apparatus further comprises:
    a delay optimization module, configured to generate a virtual network delay topology graph of an optical transport network according to the virtualization delay models of the physical network element nodes in the optical transport network, traverse path branches of the virtual network delay topology graph, and perform path delay optimization calculation.
  20. The apparatus according to claim 19, wherein the switching delay link structure is one or more switching delay matrices, and the delay optimization module, configured to perform the path delay optimization calculation, is further configured to: when traversing a current topology node of a current path, if a next-hop topology node of the current topology node belongs to the same physical network element node as the current topology node but to a different switching delay matrix, filter out the next-hop topology node and continue to traverse other next-hop topology nodes.
  21. A physical network element node device, comprising a memory and a processor, wherein the memory stores a virtualization computer program for the physical network element node, and the processor is configured to execute the computer program to implement the steps of the method according to any one of claims 1 to 10.
  22. A computer-readable storage medium storing a virtualization computer program for a physical network element node, wherein the computer program is executable by at least one processor to implement the steps of the method according to any one of claims 1 to 10.
PCT/CN2019/088967 2018-06-29 2019-05-29 Virtualization method and apparatus for physical network element node, and device and storage medium WO2020001220A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19824449.3A EP3813303B1 (en) 2018-06-29 2019-05-29 Virtualization method and apparatus for physical network element node, and device and storage medium
JP2020572830A JP7101274B2 (ja) 2018-06-29 2019-05-29 Virtualization method, apparatus, device and storage medium for physical network element node

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810695749.0A CN110661633B (zh) 2018-06-29 Virtualization method and apparatus for physical network element node, and device and storage medium
CN201810695749.0 2018-06-29

Publications (1)

Publication Number Publication Date
WO2020001220A1 true WO2020001220A1 (zh) 2020-01-02

Family

ID=68985434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088967 WO2020001220A1 (zh) Virtualization method and apparatus for physical network element node, and device and storage medium 2018-06-29 2019-05-29

Country Status (4)

Country Link
EP (1) EP3813303B1 (zh)
JP (1) JP7101274B2 (zh)
CN (1) CN110661633B (zh)
WO (1) WO2020001220A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412463B (zh) * 2021-05-28 2024-06-04 中国移动通信有限公司研究院 Delay measurement method and apparatus, and digital twin network
US20240276361A1 (en) * 2022-03-30 2024-08-15 Rakuten Mobile, Inc. Communication route determination system and communication route determination method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001218631A1 (en) * 2000-12-11 2002-06-24 Nokia Corporation Configuring a data transmission interface in a communication network
US9392565B2 (en) * 2010-03-05 2016-07-12 Samsung Electronics Co., Ltd. Method and system for accurate clock synchronization through interaction between communication layers and sub-layers for communication systems
CN101841420B (zh) * 2010-05-24 2011-11-23 中国人民解放军国防科学技术大学 Low-latency router architecture for network-on-chip
JP5439297B2 (ja) * 2010-06-30 2014-03-12 株式会社日立製作所 Control server and network system
US10129839B2 (en) * 2014-12-05 2018-11-13 Qualcomm Incorporated Techniques for synchronizing timing of wireless streaming transmissions to multiple sink devices
EP3121997B3 (en) * 2015-07-20 2024-04-10 Koninklijke KPN N.V. Service provisioning in a communication network
CN106713141B (zh) * 2015-11-18 2020-04-28 华为技术有限公司 Method and network node for obtaining a target transmission path

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104685838A (zh) * 2012-10-05 2015-06-03 华为技术有限公司 Software-defined network virtualization using service-specific topology abstraction and interfaces
CN104486235A (zh) * 2014-11-26 2015-04-01 北京华力创通科技股份有限公司 Method for reducing delay in an AFDX network
US20180091251A1 (en) * 2015-03-25 2018-03-29 Tevetron, Llc Communication Network Employing Network Devices with Packet Delivery Over Pre-Assigned Optical Channels
CN107888425A (zh) * 2017-11-27 2018-04-06 北京邮电大学 Network slice deployment method and apparatus for a mobile communication system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3813303A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112104438A (zh) * 2020-08-20 2020-12-18 武汉光迅科技股份有限公司 Configuration method and apparatus for a ROADM device, electronic device, and storage medium
JP7436747B2 (ja) 2020-08-31 2024-02-22 中興通訊股份有限公司 OTN network resource optimization method and apparatus, computer device and storage medium
CN113938434A (zh) * 2021-10-12 2022-01-14 上海交通大学 Method and system for constructing a large-scale high-performance RoCEv2 network
CN115514657A (zh) * 2022-11-15 2022-12-23 阿里云计算有限公司 Network modeling method, network problem analysis method, and related device
CN115514657B (zh) * 2022-11-15 2023-03-24 阿里云计算有限公司 Network modeling method, network problem analysis method, and related device

Also Published As

Publication number Publication date
CN110661633B (zh) 2022-03-15
JP2021530893A (ja) 2021-11-11
CN110661633A (zh) 2020-01-07
EP3813303B1 (en) 2024-07-31
EP3813303A4 (en) 2021-07-28
JP7101274B2 (ja) 2022-07-14
EP3813303A1 (en) 2021-04-28

Similar Documents

Publication Publication Date Title
WO2020001220A1 (zh) Virtualization method and apparatus for physical network element node, and device and storage medium
US11082262B2 (en) Flow entry generating method and apparatus
US10547537B1 (en) Batched path computation in resource-constrained networks
US9800507B2 (en) Application-based path computation
CN109417512B (zh) Network device for a communication network and related method
EP3673629B1 (en) Topology-aware controller associations in software-defined networks
US9485550B2 (en) Systems and methods for selection of optimal routing parameters for DWDM network services in a control plane network
KR102653760B1 (ko) Network slicing implementation method, apparatus, and controller
EP3621243B1 (en) Virtual network creation method, apparatus and transport network system
CN104283791A (zh) Method and device for determining Layer-3 topology in an SDN network
WO2021129085A1 (zh) Network slice creation method, packet forwarding method, and apparatus thereof
CN109286563B (zh) Data transmission control method and apparatus
WO2018177256A1 (zh) Method and apparatus for advertising delay information
US10447399B2 (en) Method and system for restoring optical layer service
Penna et al. A clustered SDN architecture for large scale WSON
US9185042B2 (en) System and method for automated quality of service configuration through the access network
CN109698982B (zh) Control channel implementation method, apparatus, device, storage medium, and processing method
Nakagawa et al. Hierarchical time-slot allocation for dynamic bandwidth control in optical layer-2 switch network
CN107181694B (zh) Routing and spectrum allocation method implemented using multi-threading technology
Martinez et al. Assessing the performance of multi-layer path computation algorithms for different PCE architectures
US12068923B2 (en) Path computation with direct enforcement of non-local constraints
CN107615719A (zh) Method, apparatus and system for path computation in a network
WO2018014274A1 (zh) Path establishment method and node
CN116017210A (zh) Path computation method, service provisioning method, electronic device, and readable storage medium
Shrivastava et al. NOVEL RECONFIGURATION TECHNIQUE FOR HIGH CAPACITY WDM NETWORK

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19824449
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2020572830
    Country of ref document: JP
    Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2019824449
    Country of ref document: EP
    Effective date: 20210121