WO2021083228A1 - Message forwarding method, apparatus, and computer storage medium


Info

Publication number: WO2021083228A1
Authority: WIPO (PCT)
Prior art keywords: vpnsid, data center, proxy, center routing, vpnsids
Application number: PCT/CN2020/124463
Other languages: English (en), French (fr)
Inventor: 闫朝阳
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2021083228A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/74: Address processing for routing
    • H04L 45/24: Multipath
    • H04L 45/38: Flow based routing
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Definitions

  • This application relates to the technical field of network function virtualization, and in particular to a method, device and computer storage medium for message forwarding.
  • VNF: virtualized network function
  • In the related art, a cell site gateway (CSG) obtains in advance the virtual private network segment identifier (VPNSID) of each of the multiple virtual machines (VMs) deployed for the VNF, thereby obtaining multiple VPNSIDs, and also obtains the private network route of the VNF, which uniquely identifies the VNF.
  • the CSG establishes the correspondence between the multiple VPNSIDs and the private network route of the VNF.
  • When the CSG receives a message, if the message carries the private network route of the VNF, the CSG maps the message, according to the corresponding relationship and through a multipath hash algorithm, to one VPNSID among the multiple VPNSIDs, and then forwards the message with that VPNSID as the destination address, so that the message reaches the VM indicated by that VPNSID.
  • However, the foregoing correspondence includes at most 8 VPNSIDs, which limits the efficiency of load sharing.
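The related-art behaviour described above can be illustrated with a minimal sketch. This is not the patented implementation; all class, variable, and SID names are illustrative assumptions, and the flow hash is a generic one.

```python
# Sketch of the related-art CSG: at most 8 VPNSIDs per VNF private-network
# route, with a flow-based multipath hash selecting one of them.
import hashlib

MAX_ECMP_PATHS = 8  # the limit that caps load-sharing efficiency

class LegacyCsg:
    def __init__(self):
        self.route_to_vpnsids = {}  # VNF private-network route -> [VPNSID]

    def add_vpnsid(self, vnf_route, vpnsid):
        sids = self.route_to_vpnsids.setdefault(vnf_route, [])
        if len(sids) < MAX_ECMP_PATHS:  # entries beyond 8 are simply unused
            sids.append(vpnsid)

    def select_vpnsid(self, vnf_route, flow_key):
        # A flow-based hash keeps all packets of one flow on one path.
        sids = self.route_to_vpnsids[vnf_route]
        digest = hashlib.sha256(flow_key.encode()).digest()
        return sids[digest[0] % len(sids)]

csg = LegacyCsg()
for i in range(20):  # 20 VMs deployed for the VNF ...
    csg.add_vpnsid("vnf-1", f"A1::{i:x}")
paths = len(csg.route_to_vpnsids["vnf-1"])  # 8: only 8 VMs ever receive load
```

Even with 20 VMs deployed, only 8 paths carry traffic, which is the inefficiency the proxy-VPNSID scheme below removes.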
  • the present application provides a message forwarding method, device, and computer storage medium, which can improve the efficiency of load sharing.
  • the technical solution is as follows:
  • a message forwarding method is provided, and the method is applied to a CSG in a communication network.
  • the communication network also includes multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF.
  • Each of the multiple VRFs is connected to one or more of the aforementioned multiple VMs.
  • Each of the aforementioned multiple VMs is configured with a VPNSID
  • each of the aforementioned multiple data center routing nodes is configured with a proxy VPNSID.
  • The CSG receives a message that carries the identifier of the target VNF; the CSG selects one proxy VPNSID from the multiple proxy VPNSIDs included in the first forwarding table as the target proxy VPNSID; and the CSG forwards the message with the target proxy VPNSID as the destination address, so that the message reaches the data center routing node indicated by the target proxy VPNSID, which is thereby instructed to forward the message according to the multiple VPNSIDs included in the second forwarding table.
  • the first forwarding table is the forwarding table corresponding to the identifier of the target VNF
  • the multiple proxy VPNSIDs included in the first forwarding table refer to proxy VPNSIDs configured corresponding to the VPNSIDs of multiple VMs.
  • The multiple VPNSIDs included in the second forwarding table refer to those VPNSIDs, among the VPNSIDs of the multiple VMs, that correspond to the target proxy VPNSID.
  • In order to avoid being restricted by the CSG's maximum number of load-sharing paths, a proxy VPNSID can be configured for each data center routing node.
  • In other words, the multiple proxy VPNSIDs are configured in correspondence with the VPNSIDs of the VMs.
  • The proxy VPNSIDs can be used in the local forwarding table of the CSG in place of the original VM VPNSIDs, so that during load sharing the CSG only needs to share load across the data center routing nodes, and each data center routing node completes the actual load sharing.
  • The maximum number of load-sharing paths of a data center routing node can be as high as 128. Therefore, the message forwarding method provided by the embodiments of the present application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving the efficiency of load sharing.
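The two-stage load sharing described above can be sketched as follows. This is an illustrative model only, not the patented implementation: the hash function, table layout, and the 4-node/32-VM topology are assumptions chosen to show how the usable path count grows.

```python
# Stage 1: the CSG hashes only across proxy VPNSIDs (one per data center
# routing node). Stage 2: the routing node owning the chosen proxy VPNSID
# hashes across the VM VPNSIDs behind it (up to 128 paths per node).
import hashlib

def flow_hash(flow_key, n):
    return hashlib.sha256(flow_key.encode()).digest()[0] % n

def forward(first_table, second_tables, flow_key):
    # CSG: pick the target proxy VPNSID from the first forwarding table.
    proxy_sid = first_table[flow_hash(flow_key, len(first_table))]
    # Routing node: pick a VM VPNSID from its second forwarding table.
    vm_sids = second_tables[proxy_sid]
    return vm_sids[flow_hash(flow_key + proxy_sid, len(vm_sids))]

# 4 routing nodes x 32 VMs each = 128 usable paths, versus 8 without proxies.
first_table = [f"P::{n}" for n in range(4)]
second_tables = {p: [f"{p}:VM{i}" for i in range(32)] for p in first_table}
vm = forward(first_table, second_tables, "10.0.0.1->10.0.0.2:443")
```

The CSG's own ECMP limit now bounds only the number of routing nodes, not the number of VMs, which is why the final path count can grow toward each node's 128-path limit.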
  • the CSG obtains proxy VPNSIDs corresponding to the VPNSIDs of multiple VMs, and adds the obtained proxy VPNSIDs to the first forwarding table.
  • Since the embodiments of this application use proxy VPNSIDs in the CSG's local forwarding table in place of the original VM VPNSIDs, the CSG needs to obtain, before forwarding messages, the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs, so as to construct the first forwarding table provided by the embodiments of this application, thereby improving the efficiency of subsequent load sharing.
  • The above-mentioned CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs specifically as follows: the CSG receives a first notification message sent by any one of the multiple VRFs, and the first notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
  • The VRF can thus actively report to the CSG the multiple proxy VPNSIDs corresponding to the VPNSID of each VM it is connected to, so that the CSG can construct the first forwarding table, improving the efficiency with which the CSG constructs the first forwarding table.
  • The communication network also includes a route reflector (RR).
  • the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs, specifically: the CSG receives the second notification message sent by the RR, and the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs.
  • the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of multiple VMs to the CSG, so that the CSG can construct the first forwarding table, which improves the flexibility of the CSG to construct the first forwarding table.
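Either path, a first notification message from a VRF or a second notification message from the RR, lets the CSG populate the first forwarding table. A minimal sketch, assuming a notification message is a plain dict with hypothetical `vnf_id` and `vm_to_proxies` fields:

```python
# How the CSG described above could build the first forwarding table
# (one entry per proxy VPNSID) from announced proxy VPNSIDs. Field names
# and the dict-based message format are illustrative assumptions.
class Csg:
    def __init__(self):
        # VNF identifier -> ordered, de-duplicated list of proxy VPNSIDs
        self.first_table = {}

    def on_notification(self, msg):
        # msg carries, per VM VPNSID, the proxy VPNSIDs configured for it
        table = self.first_table.setdefault(msg["vnf_id"], [])
        for proxy_sids in msg["vm_to_proxies"].values():
            for sid in proxy_sids:
                if sid not in table:  # each proxy VPNSID appears once
                    table.append(sid)

csg = Csg()
csg.on_notification({
    "vnf_id": "vnf-1",
    "vm_to_proxies": {"A1::1": ["P::1", "P::2"], "A1::2": ["P::1", "P::2"]},
})
```

Note the de-duplication: many VM VPNSIDs map to the same proxy VPNSID, but the first forwarding table needs each proxy VPNSID only once.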
  • a message forwarding method is provided, which is applied to any data center routing node among multiple data center routing nodes in a communication network.
  • the communication network also includes a CSG, multiple VRFs, and multiple VMs for executing the target VNF, and one or more of the multiple VMs are connected to each VRF.
  • each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • Any data center routing node receives a message sent by the CSG, and the message carries the target proxy VPNSID.
  • When the target proxy VPNSID carried in the message is the proxy VPNSID of that data center routing node, the node selects one VPNSID from the VPNSIDs of the multiple VMs included in the second forwarding table and forwards the message with the selected VPNSID as the destination address.
  • the multiple VPNSIDs included in the second forwarding table refer to the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of any data center routing node.
  • The embodiments of this application use proxy VPNSIDs in the CSG's local forwarding table in place of the original VM VPNSIDs, so that during load sharing the CSG only needs to share load across the data center routing nodes, and each data center routing node completes the actual load sharing.
  • Therefore, when a data center routing node receives a message, it forwards the message to one of the multiple VMs according to the second forwarding table to achieve load sharing. Since the maximum number of load-sharing paths of a data center routing node can be as high as 128, the message forwarding method provided by the embodiments of the present application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving the efficiency of load sharing.
  • The data center routing node obtains the VPNSIDs of the multiple VMs corresponding to its proxy VPNSID, and adds the obtained VM VPNSIDs to the second forwarding table.
  • Since the actual load sharing in the embodiments of this application is performed by each data center routing node, a data center routing node needs to obtain, before forwarding messages, the VPNSIDs of the multiple VMs corresponding to its proxy VPNSID, so as to construct the second forwarding table provided by the embodiments of this application, thereby improving the efficiency of subsequent load sharing.
  • The data center routing node obtains the VPNSIDs of the multiple VMs corresponding to its proxy VPNSID specifically as follows: the data center routing node receives a third announcement message sent by any one of the multiple VRFs, where the third announcement message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and the data center routing node obtains, from the third announcement message and according to its own proxy VPNSID, the VPNSIDs of the VMs corresponding to its proxy VPNSID.
  • The VRF can thus actively report to the data center routing nodes the multiple proxy VPNSIDs corresponding to the VPNSID of each VM it is connected to, so that each data center routing node can construct the second forwarding table, improving the efficiency with which the data center routing nodes construct the second forwarding table.
  • Alternatively, the VPNSIDs of the multiple VMs corresponding to the proxy VPNSID of a data center routing node are configured by the administrator directly on that node.
  • the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of any data center routing node can be directly configured manually, which improves the flexibility of the data center routing node to construct the second forwarding table.
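When the announcement path is used, the filtering step described above can be sketched as follows. The class, field names, and dict-based message format are illustrative assumptions, not the patented implementation.

```python
# A data center routing node populating its second forwarding table from a
# third announcement message: it keeps only the VM VPNSIDs whose announced
# proxy VPNSIDs include the node's own proxy VPNSID.
class DataCenterRoutingNode:
    def __init__(self, proxy_sid):
        self.proxy_sid = proxy_sid
        self.second_table = []  # VM VPNSIDs behind this node's proxy VPNSID

    def on_third_announcement(self, vm_to_proxies):
        for vm_sid, proxy_sids in vm_to_proxies.items():
            if self.proxy_sid in proxy_sids and vm_sid not in self.second_table:
                self.second_table.append(vm_sid)

node = DataCenterRoutingNode("P::1")
node.on_third_announcement({
    "A1::1": ["P::1", "P::2"],
    "A1::2": ["P::2"],        # not behind this node, so filtered out
    "A1::3": ["P::1"],
})
```

Each node thus ends up with only its own share of the VM VPNSIDs, which is exactly the set it load-shares across in stage two.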
  • a message forwarding method is provided, which is applied to any one of a plurality of VRFs in a communication network.
  • The communication network also includes a CSG, a plurality of data center routing nodes, and multiple VMs for executing the target VNF.
  • Each of the multiple VRFs is connected to one or more of the multiple VMs.
  • each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • The VRF obtains the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF; the VRF then publishes a first notification message to the CSG, and the first notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
  • The VRF can thus actively report to the CSG the multiple proxy VPNSIDs corresponding to the VPNSID of each VM it is connected to, so that the CSG can construct the first forwarding table, improving the efficiency with which the CSG constructs the first forwarding table.
  • After the VRF obtains the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF, the VRF also publishes a third notification message to the multiple data center routing nodes, and the third notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
  • The VRF can thus also actively report these proxy VPNSIDs to the data center routing nodes, so that each data center routing node can construct the second forwarding table, improving the efficiency with which the data center routing nodes construct the second forwarding table.
  • a message forwarding method is provided, which is applied to RR in a communication network.
  • the communication network also includes a CSG, multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF, and one or more of the multiple VMs are connected to each VRF.
  • Each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • The RR obtains the VPNSID of each of the multiple VMs connected to any one of the multiple VRFs; for each obtained VM VPNSID, the RR determines, based on the locally stored correspondence between VPNSIDs and proxy VPNSIDs, the proxy VPNSID corresponding to that VM's VPNSID, thereby obtaining the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to that VRF; the RR then sends a second notification message to the CSG, and the second notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to each of the multiple VRFs.
  • When the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs to the CSG so that the CSG can construct the first forwarding table, the RR needs to obtain the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs and then send the second notification message to the CSG, which facilitates construction of the first forwarding table and improves the flexibility with which the CSG constructs it.
  • the corresponding relationship between the VPNSID stored locally in the RR and the proxy VPNSID is configured by the administrator on the RR, thereby improving the flexibility of the CSG to construct the first forwarding table.
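The RR's role described above amounts to resolving each learned VM VPNSID through its locally stored, administrator-configured correspondence. A minimal sketch, assuming that correspondence is a plain VPNSID-to-proxy-VPNSIDs mapping (names are illustrative):

```python
# The RR resolves each learned VM VPNSID through the administrator-configured
# correspondence and builds the body of a second notification message for
# the CSG. The dict-based message shape is an assumption.
class RouteReflector:
    def __init__(self, sid_to_proxies):
        # locally stored correspondence, configured by the administrator
        self.sid_to_proxies = sid_to_proxies

    def build_second_notification(self, learned_vm_sids):
        # map each learned VM VPNSID to its configured proxy VPNSIDs
        return {vm: self.sid_to_proxies[vm] for vm in learned_vm_sids}

rr = RouteReflector({"A1::1": ["P::1"], "A1::2": ["P::2"]})
msg = rr.build_second_notification(["A1::1", "A1::2"])
```

Because the correspondence lives on the RR rather than on every VRF, changing the proxy layout requires reconfiguring only one node, which is the flexibility benefit the text points to.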
  • A CSG in a communication network is provided. The communication network also includes multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF.
  • Each of the multiple VRFs is connected to one or more of the aforementioned multiple VMs.
  • Each of the aforementioned multiple VMs is configured with a VPNSID
  • each of the aforementioned multiple data center routing nodes is configured with a proxy VPNSID.
  • CSG includes:
  • the receiving module is used to receive a message, and the message carries the identifier of the target VNF;
  • the selection module is used to select a proxy VPNSID from a plurality of proxy VPNSIDs included in the first forwarding table as the target proxy VPNSID;
  • The forwarding module is used to forward the message with the target proxy VPNSID as the destination address, so that the message reaches the data center routing node indicated by the target proxy VPNSID, which is thereby instructed to forward the message according to the multiple VPNSIDs included in the second forwarding table.
  • the first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the multiple proxy VPNSIDs included in the first forwarding table refer to proxy VPNSIDs configured corresponding to the VPNSIDs of multiple VMs.
  • The multiple VPNSIDs included in the second forwarding table refer to those VPNSIDs, among the VPNSIDs of the multiple VMs, that correspond to the target proxy VPNSID.
  • the CSG further includes an adding module for obtaining proxy VPNSIDs corresponding to VPNSIDs of multiple VMs, and adding the obtained proxy VPNSIDs to the first forwarding table.
  • The above-mentioned adding module is specifically configured to: receive a first notification message sent by any one of the multiple VRFs, where the first notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
  • The communication network also includes an RR.
  • the above-mentioned adding module is specifically configured to: receive a second announcement message sent by the RR, the second announcement message carrying proxy VPNSIDs corresponding to VPNSIDs of multiple VMs.
  • a data center routing node among multiple data center routing nodes in a communication network is provided.
  • the communication network also includes a CSG, multiple VRFs, and multiple VMs for executing the target VNF, and one or more of the multiple VMs are connected to each VRF.
  • each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • the data center routing node includes:
  • The receiving module is used to receive the message sent by the CSG, and the message carries the target proxy VPNSID.
  • The selection module is used to select one VPNSID from the VPNSIDs of the multiple VMs included in the second forwarding table when the target proxy VPNSID carried in the message is the proxy VPNSID of the data center routing node.
  • the forwarding module is used to forward the message with the selected VPNSID as the destination address.
  • the multiple VPNSIDs included in the second forwarding table refer to the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node.
  • the data center routing node further includes:
  • the adding module is used to obtain the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node, and add the obtained VPNSIDs of the VMs to the second forwarding table.
  • The above-mentioned adding module is specifically configured to: receive a third notification message sent by any one of the multiple VRFs, where the third notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and obtain, from the third notification message and according to the proxy VPNSID of the data center routing node, the VPNSIDs of the VMs corresponding to that proxy VPNSID.
  • the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured by the administrator on the data center routing node.
  • For the technical effects of the modules included in the data center routing node provided in the sixth aspect, reference may be made to the message forwarding method provided in the second aspect; details are not repeated here.
  • any one of multiple VRFs in a communication network is provided.
  • The communication network further includes a CSG, multiple data center routing nodes, and multiple VMs for executing the target VNF.
  • Each of the multiple VRFs is connected to one or more of the multiple VMs.
  • each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • the VRF includes:
  • the obtaining module is used to obtain multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected by the VRF;
  • the publishing module is configured to publish a first notification message to the CSG, and the first notification message carries multiple proxy VPNSIDs corresponding to the VPNSIDs of each of the multiple VMs connected by the VRF.
  • the publishing module is further configured to publish a third announcement message to multiple data center routing nodes, the third announcement message carrying multiple proxy VPNSIDs corresponding to the VPNSIDs of each of the multiple VMs connected by the VRF.
  • the technical effects of the various modules included in the VRF provided in the seventh aspect may refer to the message forwarding method provided in the third aspect, which is not described in detail here.
  • An RR in a communication network is provided. The communication network also includes a CSG, multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF, and each VRF is connected to one or more of the multiple VMs.
  • Each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • the RR includes:
  • The acquisition module is used to acquire the VPNSID of each of the multiple VMs connected to any one of the multiple VRFs and, for each acquired VM VPNSID, determine, based on the locally stored correspondence between VPNSIDs and proxy VPNSIDs, the proxy VPNSID corresponding to that VM's VPNSID, thereby obtaining the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to that VRF;
  • the publishing module is configured to send a second notification message to the CSG, where the second notification message carries multiple proxy VPNSIDs corresponding to the VPNSIDs of each of the multiple VMs connected to each of the multiple VRFs.
  • the corresponding relationship between the VPNSID stored locally in the RR and the proxy VPNSID is configured by the administrator on the RR, thereby improving the flexibility of the CSG to construct the first forwarding table.
  • the technical effects of the modules included in the RR provided in the eighth aspect may refer to the message forwarding method provided in the fourth aspect, which will not be described in detail here.
  • A CSG in a communication network is provided. The communication network further includes a plurality of data center routing nodes, a plurality of virtual routing and forwarding (VRF) instances, and a plurality of VMs for executing the target virtual network function (VNF); one or more of the multiple VMs are connected to each of the multiple VRFs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the CSG includes a memory and a processor
  • the memory is used to store a computer program
  • the processor is configured to execute a program stored in the memory to execute the method in any one of the foregoing first aspect.
  • A data center routing node in a communication network is provided. The communication network includes multiple data center routing nodes, a CSG, multiple VRFs, and multiple VMs for executing the target VNF; each VRF is connected to one or more of the multiple VMs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • Any one of the multiple data center routing nodes includes a memory and a processor
  • the memory is used to store a computer program
  • the processor is configured to execute a program stored in the memory to execute the method described in any one of the foregoing second aspect.
  • A VRF in a communication network is provided. The communication network includes a plurality of VRFs, a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing the target VNF.
  • Each of the plurality of VRFs is connected to one or more of the multiple VMs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • Any one of the multiple VRFs includes a memory and a processor
  • the memory is used to store a computer program
  • the processor is configured to execute a program stored in the memory to execute the method described in any one of the foregoing third aspect.
  • An RR in a communication network is provided. The communication network further includes a CSG, multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF; each VRF is connected to one or more of the multiple VMs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the RR includes a memory and a processor
  • the memory is used to store a computer program
  • the processor is configured to execute a program stored in the memory to execute the method described in any one of the foregoing fourth aspects.
  • A chip is provided. The chip is set in a CSG in a communication network, and the communication network further includes a plurality of data center routing nodes, a plurality of virtual routing and forwarding (VRF) instances, and multiple VMs for executing a target virtual network function (VNF); each of the multiple VRFs is connected to one or more of the multiple VMs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the chip includes a processor and an interface circuit
  • the interface circuit is used to receive instructions and transmit them to the processor
  • the processor is configured to execute the method described in any one of the foregoing first aspect.
  • A chip is provided. The chip is set in any data center routing node among the plurality of data center routing nodes included in a communication network, and the communication network further includes a CSG, a plurality of VRFs, and multiple VMs for executing the target VNF; each VRF is connected to one or more of the multiple VMs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the chip includes a processor and an interface circuit
  • the interface circuit is used to receive instructions and transmit them to the processor
  • the processor is configured to execute the method described in any one of the foregoing second aspect.
  • A chip is provided. The chip is set in any one of the plurality of VRFs included in a communication network, and the communication network further includes a CSG, a plurality of data center routing nodes, and multiple VMs for executing the target VNF.
  • One or more of the multiple VMs are connected to each VRF of the multiple VRFs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the chip includes a processor and an interface circuit
  • the interface circuit is used to receive instructions and transmit them to the processor
  • the processor is configured to execute the method described in any one of the foregoing third aspect.
  • A chip is provided. The chip is set in an RR of a communication network, and the communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and multiple VMs for executing the target VNF.
  • Each VRF is connected to one or more of the multiple VMs, each VM of the multiple VMs is configured with a VPNSID, and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the chip includes a processor and an interface circuit
  • the interface circuit is used to receive instructions and transmit them to the processor
  • the processor is configured to execute the method described in any one of the foregoing fourth aspect.
  • a message forwarding system includes a CSG, multiple data center routing nodes, multiple virtual routing and forwarding VRFs, and multiple VMs for executing a target virtual network function VNF.
  • One or more of the multiple VMs are connected to each VRF of the multiple VRFs, each VM of the multiple VMs is configured with a virtual private network segment identifier (VPNSID), and each data center routing node of the multiple data center routing nodes is configured with a proxy VPNSID;
  • the CSG is used to obtain the proxy VPNSID corresponding to the VPNSIDs of the multiple VMs, and add the obtained proxy VPNSID to the first forwarding table;
  • Any data center routing node among the multiple data center routing nodes is used to obtain the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node, and add the obtained VPNSIDs of the VMs to the second forwarding table.
  • The CSG is specifically used to: receive a first notification message sent by any one of the multiple VRFs, the first notification message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
  • Alternatively, the CSG is specifically configured to: receive a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs.
  • The data center routing node is configured to: receive a third notification message sent by any one of the multiple VRFs, where the third notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and obtain, from the third notification message and according to the proxy VPNSID of the data center routing node, the VPNSIDs of the VMs corresponding to that proxy VPNSID.
  • the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured by the administrator on the data center routing node.
  • For the technical effects of each node in the above message forwarding system, reference may also be made to the message forwarding methods of the above first, second, third, and fourth aspects; details are not repeated here.
  • FIG. 1 is a schematic diagram of the architecture of a communication network provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of another communication network architecture provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for forwarding a message according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a message forwarding process provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for configuring a first forwarding table and a second forwarding table according to an embodiment of the present application
  • Fig. 6 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another network device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an interface board in the network device shown in FIG. 7 provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a CSG provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a data center routing node provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another network device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of the architecture of a communication network provided by an embodiment of the present application.
  • the communication network 100 includes multiple CSGs, multiple provider edges (PE), multiple data center routing nodes, multiple data centers, and multiple virtual route forwarding (VRF) instances.
  • the data center may be an RDC, a central data center (CDC) or an edge data center (EDC), and Figure 1 takes the RDC as a data center as an example for illustration.
  • the data center routing node can be a data center gateway (DCGW) deployed between the PE and the data center, or a DC spine router deployed between the data center and the VRFs; this is not specifically limited here.
  • Figure 1 illustrates an example in which the data center routing node is a DCGW deployed between the PE and the data center.
  • any CSG communicates with any DCGW through multiple PEs in the backbone network, and any DCGW communicates with any RDC. Any RDC communicates with each connected VRF, and each VRF is connected to one or more VMs (in Fig. 1, each VRF is connected to one VM as an example). Multiple VMs may be used to execute the same VNF, and they may be connected to different VRFs. As shown in FIG. 1, three VMs are used to execute the VNF shown in FIG. 1, namely the three VMs connected to the first three VRFs from top to bottom.
  • an SID of type END.dx, also called a VPNSID, is configured for each VM used to execute the VNF. That is, in this embodiment of the application, each VPNSID uniquely identifies a VM.
  • the forwarding table of the CSG in the related art includes multiple VPNSIDs corresponding to the identifier of the VNF, so that the CSG forwards the message to the VM indicated by one of the VPNSIDs according to the forwarding table.
  • the current CSG supports at most 8-way load sharing; that is, when the CSG receives a message, it can forward the message to only one of at most 8 VMs for processing based on a multi-path hash algorithm, which seriously limits the efficiency of load sharing.
  • the embodiment of the present application provides a method for forwarding messages in this basic scenario, so as to improve the efficiency of load sharing.
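  • The capacity gain motivating this method can be sketched with some simple arithmetic. The following Python snippet is only an illustration of the limits described above (the 8-path CSG cap and the 128-path data center routing node cap from this document); the function name is an assumption, not from the patent.

```python
# Illustrative comparison of load-sharing capacity: direct per-VM load
# sharing at the CSG is capped at 8 paths, while two-stage sharing lets
# each data center routing node fan out over up to 128 VM paths.
CSG_MAX_PATHS = 8            # CSG limit stated in the text
DC_NODE_MAX_PATHS = 128      # per-node limit stated in the text

def effective_paths(num_dc_nodes: int, vms_per_node: int) -> int:
    """Total VM paths reachable when the CSG shares load across data
    center routing nodes (stage 1) and each node shares across its VMs
    (stage 2)."""
    first_stage = min(num_dc_nodes, CSG_MAX_PATHS)
    second_stage = min(vms_per_node, DC_NODE_MAX_PATHS)
    return first_stage * second_stage

print(effective_paths(2, 64))  # 2 nodes x 64 VMs each = 128 paths
```

With only two data center routing nodes, the reachable VM count already exceeds the CSG's direct 8-path limit, which is the effect the embodiment relies on.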
  • the VNF shown in Figure 1 can be an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), and so on.
  • any of the above-mentioned VRFs is connected to a VM through a specified access circuit (AC) Layer 3 interface or sub-interface, which will not be described in detail here.
  • each device shown in FIG. 1 is only for illustration, and does not constitute a limitation on the architecture of the communication network provided in the embodiment of the present application.
  • the communication network shown in FIG. 1 is simplified, and the simplified communication network is shown in FIG. 2.
  • the subsequent method of forwarding messages is illustrated by taking the communication network shown in FIG. 2 as an example.
  • the communication network 200 includes a CSG, multiple data center routing nodes (in FIG. 2, two data center routing nodes are taken as an example, labeled data center routing node 1 and data center routing node 2), multiple VRFs (two in FIG. 2, labeled VRF1 and VRF2), and multiple VMs used to execute the target VNF.
  • each VRF is connected to multiple VMs (Figure 2 takes two VMs connected to each VRF as an example).
  • the network is divided into multiple domains.
  • Each domain includes a group of hosts and a group of routers.
  • the hosts and routers in a domain are managed by a controller.
  • the CSG, data center routing node 1 and data center routing node 2 in FIG. 2 are located in the same domain, and the data center routing node 1, data center routing node 2, and VRF1 and VRF2 are located in another domain.
  • a route reflector (RR) is also deployed in each domain, which is marked as RR1 and RR2 in Figure 2.
  • the function of the route reflector in each domain is that any routing device in the domain can communicate with other routing devices through the route reflector, without directly establishing a network connection between the two routing devices, thereby reducing the consumption of network resources.
  • the following takes the communication network shown in FIG. 2 as an example to illustrate the message forwarding method provided by the embodiment of the present application.
  • the following embodiments can be used to implement message forwarding.
  • Fig. 3 is a flowchart of a message forwarding method provided by an embodiment of the present application. As shown in Figure 3, the method includes the following steps:
  • Step 301 The CSG receives a message, which carries the identifier of the target VNF.
  • in order to avoid being restricted by the CSG's maximum number of load sharing paths, a proxy VPNSID can be configured for each data center routing node.
  • multiple proxy VPNSIDs can be configured corresponding to the VPNSID of each VM.
  • the proxy VPNSIDs can replace the original VM VPNSIDs in the local forwarding table of the CSG, so that during load sharing the CSG only needs to distribute messages across the data center routing nodes, and each data center routing node completes the actual load sharing. Since the maximum number of load sharing paths of a data center routing node can be as high as 128, the packet forwarding method adopted in the embodiment of this application effectively increases the number of paths over which the CSG performs load sharing and improves the efficiency of forwarding messages.
  • a first forwarding table corresponding to the identifier of the target VNF is stored in the CSG, and the first forwarding table includes multiple proxy VPNSIDs, so that the CSG can forward the message through the following steps 302 and 303.
  • the multiple proxy VPNSIDs included in the first forwarding table refer to the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs used to execute the target VNF. The configuration process of the first forwarding table will be described in a later embodiment and is not detailed here.
  • Fig. 4 is a schematic diagram of packet forwarding provided by an embodiment of the present application.
  • the first forwarding table stored in the CSG includes two proxy VPNSIDs, namely DE::B100 and DF::B100.
  • DE::B100 is the proxy VPNSID of data center routing node 1, and DF::B100 is the proxy VPNSID of data center routing node 2.
  • in FIG. 4, proxy VPNSIDs are configured corresponding to the VPNSID of each VM. Two corresponding proxy VPNSIDs, DE::B100 and DF::B100, are configured for the VPNSID of the first VM from top to bottom (whose VPNSID is A8:1::B100); the same two proxy VPNSIDs are configured for the VPNSID of the second VM from top to bottom (A8:1::B101); and likewise for the remaining VMs. That is, for the VPNSID of each VM shown in FIG. 4, the two corresponding proxy VPNSIDs DE::B100 and DF::B100 are configured.
  • in this way, the two proxy VPNSIDs included in the first forwarding table (DE::B100 and DF::B100) are all of the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs used to execute the target VNF.
  • the process of configuring the proxy VPNSID corresponding to the VPNSID of each VM and the specific implementation of generating the first forwarding table will be described in detail in the following embodiment of generating the first forwarding table, which will not be repeated here.
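  • The first forwarding table described above can be sketched as a simple mapping; this is a minimal illustration only (the dictionary layout, key name, and function name are assumptions, not from the patent), using the proxy VPNSIDs from the Fig. 4 example.

```python
# Sketch of the first forwarding table on the CSG: the target VNF's
# identifier maps to the proxy VPNSIDs of the data center routing
# nodes, replacing the per-VM VPNSIDs of the related art.
first_forwarding_table = {
    "target-vnf": ["DE::B100", "DF::B100"],  # proxy VPNSIDs from Fig. 4
}

def lookup_proxy_vpnsids(vnf_id: str) -> list:
    """Return the proxy VPNSIDs configured for a VNF identifier."""
    return first_forwarding_table.get(vnf_id, [])

print(lookup_proxy_vpnsids("target-vnf"))  # ['DE::B100', 'DF::B100']
```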
  • Step 302 The CSG selects a proxy VPNSID from the multiple proxy VPNSIDs included in the first forwarding table as the target proxy VPNSID.
  • the CSG may select a proxy VPNSID from a plurality of proxy VPNSIDs included in the first forwarding table through a multi-path hash algorithm.
  • the multi-path hash algorithm may be an equal-cost multi-path routing (ECMP) algorithm.
  • multiple proxy VPNSIDs in the first forwarding table have the same probability of being selected, so that the CSG evenly distributes the received messages to each data center routing node.
  • the proxy VPNSID selected by CSG based on the multi-path hash algorithm is DE::B100, indicating that the message needs to be forwarded to data center routing node 1 at this time.
  • in other embodiments, different hash algorithms may be used so that the proxy VPNSIDs in the first forwarding table are selected with different probabilities; the specific type of hash algorithm may be determined according to the load balancing strategy.
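  • The multi-path hash selection in step 302 can be sketched as follows. This is an illustration under stated assumptions: real ECMP implementations differ in hash inputs and algorithm, and `zlib.crc32` over a flow key is used here only as a stand-in hash, not as the algorithm the patent specifies.

```python
# Illustrative ECMP-style selection: hash a flow key (e.g. the packet's
# 5-tuple) and index into the proxy VPNSID list, so packets of the same
# flow consistently pick the same data center routing node.
import zlib

def select_proxy_vpnsid(proxy_vpnsids: list, flow_key: bytes) -> str:
    index = zlib.crc32(flow_key) % len(proxy_vpnsids)
    return proxy_vpnsids[index]

proxies = ["DE::B100", "DF::B100"]
chosen = select_proxy_vpnsid(proxies, b"src,dst,sport,dport,proto")
print(chosen in proxies)  # prints True; same key always maps the same way
```

A uniform hash gives each proxy VPNSID the same selection probability, matching the even distribution described above; a weighted scheme would implement the alternative load balancing strategies mentioned.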
  • Step 303 The CSG forwards the message using the target proxy VPNSID as the destination address.
  • since the proxy VPNSID replaces the VPNSID of the related art, after the CSG selects a proxy VPNSID from the first forwarding table as the target proxy VPNSID, it can forward the message using the target proxy VPNSID as the message's destination address.
  • in addition, the message also includes a payload.
  • the steps labeled 1 in Figure 4 are used to illustrate the foregoing process.
  • the CSG can forward the message to the data center routing node indicated by the selected target proxy VPNSID.
  • the data center routing node indicated by the target proxy VPNSID is configured with a second forwarding table that includes multiple VPNSIDs, which instruct that data center routing node to forward the message according to the VPNSIDs included in the second forwarding table; the multiple VPNSIDs included in the second forwarding table are those VPNSIDs, among the VPNSIDs of the multiple VMs, that correspond to the target proxy VPNSID.
  • in this case, the received message can be processed through the following steps 304 to 306.
  • the configuration process of the second forwarding table will be described in the following embodiments, which will not be described here.
  • Step 304 For any data center routing node, the data center routing node receives a message sent by the CSG, and the message carries the target proxy VPNSID.
  • since any data center routing node in the network may receive the message sent by the CSG, when a data center routing node receives the message, it needs to determine whether the message should be processed by itself.
  • specifically, the data center routing node can compare whether the target proxy VPNSID carried in the message is consistent with its own configured proxy VPNSID. If they are inconsistent, the message is to be processed by another data center routing node, and the message is forwarded onward. If they are consistent, the message is to be processed by itself, and the data center routing node continues to forward the message through the following steps 305 and 306.
  • for example, the target proxy VPNSID carried in the message is DE::B100. When data center routing node 1 receives the message, the proxy VPNSID is its own, so data center routing node 1 continues to forward the message through the following steps 305 and 306. When data center routing node 2 receives the message, the target proxy VPNSID carried in the message is inconsistent with its own proxy VPNSID, so data center routing node 2 forwards the message onward to the data center routing node indicated by the target proxy VPNSID.
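  • The ownership check just described can be sketched as follows. The packet representation and field name are hypothetical, chosen only to illustrate the decision in step 304 with the Fig. 4 addresses.

```python
# Sketch of a data center routing node deciding whether to process a
# received message (step 304): compare the target proxy VPNSID carried
# in the message against the node's own proxy VPNSID.
def handle_packet(own_proxy_vpnsid: str, packet: dict) -> str:
    """Return the action the data center routing node takes."""
    if packet["target_proxy_vpnsid"] == own_proxy_vpnsid:
        return "process locally"  # continue with steps 305 and 306
    return "forward to " + packet["target_proxy_vpnsid"]

pkt = {"target_proxy_vpnsid": "DE::B100"}
print(handle_packet("DE::B100", pkt))  # prints "process locally"
print(handle_packet("DF::B100", pkt))  # prints "forward to DE::B100"
```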
  • Step 305: In the case that the target proxy VPNSID carried in the message is the proxy VPNSID of the data center routing node, the data center routing node selects a VPNSID from the VPNSIDs of the multiple VMs included in the second forwarding table, where the VPNSIDs of the multiple VMs included in the second forwarding table refer to the VPNSIDs of the VMs corresponding to the proxy VPNSID of that data center routing node.
  • since the data center routing node locally stores the second forwarding table corresponding to its own proxy VPNSID, and the second forwarding table includes the VPNSIDs of the multiple VMs corresponding to that proxy VPNSID, when the target proxy VPNSID carried in the message is its own proxy VPNSID, the data center routing node can directly select a VPNSID from the second forwarding table through a multi-path hash algorithm and forward the message through the following step 306.
  • the above-mentioned multi-path hash algorithm may also be an ECMP algorithm.
  • the VPNSID of each VM in the second forwarding table has the same probability of being selected, so that the data center routing node evenly distributes the received messages to each VM.
  • in other embodiments, different hash algorithms may be used so that the VPNSIDs of the VMs in the second forwarding table are selected with different probabilities, and the specific type of hash algorithm may be determined according to the load balancing strategy.
  • for example, the second forwarding table of data center routing node 1 includes two VPNSIDs, namely A8:1::B100 and A8:1::B101, so data center routing node 1 can select one of these two VPNSIDs through the ECMP algorithm.
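  • The second-stage selection can be sketched in the same illustrative style, using the Fig. 4 addresses. The table layout is an assumption, and `zlib.crc32` again stands in for whatever hash a real ECMP implementation uses.

```python
# Sketch of step 305 for data center routing node 1 from Fig. 4: its
# second forwarding table holds the VM VPNSIDs corresponding to its own
# proxy VPNSID, and one is chosen by hashing the flow key (ECMP-style).
import zlib

second_forwarding_table = {
    "DE::B100": ["A8:1::B100", "A8:1::B101"],  # VM VPNSIDs from Fig. 4
}

def select_vm_vpnsid(proxy_vpnsid: str, flow_key: bytes) -> str:
    vm_vpnsids = second_forwarding_table[proxy_vpnsid]
    return vm_vpnsids[zlib.crc32(flow_key) % len(vm_vpnsids)]

dest = select_vm_vpnsid("DE::B100", b"flow-1")
print(dest)  # either A8:1::B100 or A8:1::B101, stable per flow key
```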
  • Step 306 The data center routing node forwards the message using the selected VPNSID as the destination address.
  • since the message ultimately needs to be processed by a VM, after the data center routing node selects a VPNSID from the second forwarding table, it can forward the message with the selected VPNSID as the destination address, so that the VM indicated by the selected VPNSID processes the message.
  • for example, if the VPNSID selected by data center routing node 1 from the second forwarding table is A8:1::B100, data center routing node 1 uses A8:1::B100 as the destination address of the message (the destination address is marked as DA in Figure 4) and forwards it. If the selected VPNSID is A8:1::B101, as shown in Figure 4, data center routing node 1 instead uses A8:1::B101 as the destination address of the message and forwards it.
  • the two steps marked 2 in FIG. 4 are used to illustrate the foregoing process, and the two steps marked 2 are in an OR relationship.
  • since each VM is attached to a VRF, when the data center routing node forwards the message with the selected VPNSID as the destination address, the VRF to which the VM indicated by the selected VPNSID is attached first receives the message, and then forwards the message to that VM.
  • the step labeled 3 in Figure 4 is used to illustrate the foregoing process.
  • based on the message forwarding process described above, the first forwarding table needs to be configured on the CSG, and a second forwarding table needs to be configured in each data center routing node. The specific functions of the first forwarding table and the second forwarding table have been explained in the above embodiments; the configuration process of the first forwarding table and the second forwarding table is explained next.
  • Fig. 5 is a flowchart of a method for configuring a first forwarding table and a second forwarding table provided by an embodiment of the present application. As shown in Figure 5, the method includes the following steps:
  • Step 501 For any VRF, the VRF obtains the VPNSID configured for any one of the multiple VMs connected to the VRF.
  • the VPNSID configured for any one of the multiple VMs connected to the VRF can be configured by the network controller, or can be configured directly on the VRF by the administrator; this is not specifically limited in this application. If the network controller configures the VPNSID, then after configuring the VPNSID of any one of the VMs connected to the VRF, the network controller publishes the configured VPNSID to the VRF, so that the VRF can obtain the VPNSID configured for that VM.
  • the network controller or administrator configures the VPNSID for each connected VM according to the locator of the VRF.
  • for example, since the locator of VRF1 is A8:1::/64, the network controller or administrator can configure two VPNSIDs for the two VMs connected to VRF1: the VPNSID configured for the first VM from top to bottom in Figure 4 is A8:1::B100, and the VPNSID configured for the second VM from top to bottom is A8:1::B101.
  • Step 502 For any data center routing node, the data center routing node obtains the proxy VPNSID configured for the data center routing node.
  • the proxy VPNSID configured on the data center routing node can be configured by the network controller, or can be configured on the data center routing node by the administrator; this is not specifically limited in this application. If the network controller configures the proxy VPNSID, then after configuring the proxy VPNSID of the data center routing node, the network controller publishes the configured proxy VPNSID to the data center routing node, so that the data center routing node can obtain the proxy VPNSID configured for it.
  • the network controller or administrator configures the proxy VPNSID for the data center routing node according to the locator of the data center routing node.
  • for example, since the locator of data center routing node 1 is DE::/64, the network controller or administrator can configure the proxy VPNSID DE::B100 for data center routing node 1. Similarly, the proxy VPNSID configured for data center routing node 2 based on its locator DF::/64 is DF::B100.
  • Step 503 The CSG obtains the proxy VPNSID corresponding to the VPNSIDs of the multiple VMs used to execute the target VNF, and adds the obtained proxy VPNSID to the first forwarding table.
  • the CSG can obtain the proxy VPNSID corresponding to the VPNSIDs of the multiple VMs used to execute the target VNF through the following two specific implementation methods:
  • the first implementation manner: for any VRF, the VRF obtains the multiple proxy VPNSIDs configured corresponding to the VPNSID of any one of the multiple VMs connected to the VRF.
  • the VRF publishes a first notification message to the CSG, and the first notification message carries multiple proxy VPNSIDs corresponding to the VPNSID of any one of the multiple VMs connected to the VRF.
  • the CSG receives the first notification message sent by the VRF.
  • the CSG can obtain the proxy VPNSID corresponding to the VPNSID of multiple VMs according to the first notification message sent by each VRF.
  • the multiple proxy VPNSIDs corresponding to the VPNSID of a VM can be configured by the network controller, or configured directly on the VRF by the administrator; this is not specifically limited in this application. If the network controller configures them, then after configuring the multiple proxy VPNSIDs corresponding to the VPNSID of the VM, the network controller publishes them to the VRF, so that the VRF can obtain the multiple proxy VPNSIDs configured corresponding to the VPNSID of the VM.
  • in addition, the VRF and the CSG are located in different domains, so any of the above VRFs can publish the first notification message to the CSG through MP-BGP/EVPN, that is, the multiprotocol (MP) extension of the Border Gateway Protocol (BGP) for the Ethernet virtual private network (EVPN).
  • the VRF actively reports to the CSG multiple proxy VPNSIDs configured corresponding to the VPNSID of any one of the multiple VMs connected to the VRF.
  • the second implementation manner: the RR in the communication network locally pre-stores the correspondence between VPNSIDs and proxy VPNSIDs. The correspondence includes the VPNSIDs of the multiple VMs and the multiple proxy VPNSIDs corresponding to the VPNSID of each VM. The construction process of this correspondence will be described in detail below and is not elaborated here.
  • for any VRF, the RR obtains the VPNSID of each VM connected to the VRF, and can then determine the multiple proxy VPNSIDs corresponding to each VM's VPNSID based on the correspondence between VPNSIDs and proxy VPNSIDs. After the RR obtains the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to each VRF, it can send a second notification message to the CSG; the second notification message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to each of the multiple VRFs.
  • the CSG receives the second notification message sent by the RR, and the CSG can obtain the multiple proxy VPNSIDs according to the second notification message.
  • the above-mentioned RR is specifically RR2 in FIG. 4.
  • the second notification message sent by the RR to the CSG is also advertised through MP-BGP/EVPN.
  • the operation in which the RR obtains the VPNSID of each VM connected to the VRF can be implemented as follows: the RR sends a VPNSID obtaining request to the VRF, and the VPNSID obtaining request is used to instruct the VRF to send the VPNSID of each VM connected to the VRF to the RR. That is, in the second implementation manner, the VRF passively sends the VPNSID of each connected VM to the RR.
  • the correspondence between the VPNSID stored locally in the RR and the proxy VPNSID can be directly configured by the administrator on the RR.
  • that is, the administrator can configure, on the RR through the management system or the command line, the multiple proxy VPNSIDs corresponding to the VPNSID of any one of the multiple VMs connected to the VRF, so that the RR can obtain the multiple proxy VPNSIDs corresponding to the VPNSIDs of the VMs connected to the VRF and thereby construct the correspondence between VPNSIDs and proxy VPNSIDs.
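  • The correspondence the RR stores in this second implementation can be sketched as a mapping from VM VPNSIDs to proxy VPNSIDs. The structure and function name are assumptions for illustration; values follow the Fig. 4 example.

```python
# Sketch of the RR's locally stored correspondence between VM VPNSIDs
# and proxy VPNSIDs, and of collecting the proxy VPNSIDs it advertises
# to the CSG in the second notification message.
vpnsid_to_proxies = {
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
}

def proxies_for(vm_vpnsids: list) -> list:
    """Given VM VPNSIDs learned from a VRF, collect their proxy VPNSIDs
    (deduplicated, order preserved)."""
    result = []
    for sid in vm_vpnsids:
        for proxy in vpnsid_to_proxies.get(sid, []):
            if proxy not in result:
                result.append(proxy)
    return result

print(proxies_for(["A8:1::B100", "A8:1::B101"]))  # ['DE::B100', 'DF::B100']
```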
  • Step 504 For any data center routing node, the data center routing node obtains the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node, and the data center routing node adds the obtained VPNSIDs of the multiple VMs to In the second forwarding table corresponding to its own proxy VPNSID.
  • the data center routing node can obtain multiple VPNSIDs corresponding to the proxy VPNSID of the data center routing node through the following two specific implementation methods:
  • the first implementation manner: for any VRF, the VRF obtains the multiple proxy VPNSIDs corresponding to the VPNSID of any one of the multiple VMs connected to the VRF.
  • the VRF publishes a third notification message to each data center routing node, and the third notification message carries multiple proxy VPNSIDs corresponding to the VPNSID of any one of the multiple VMs connected by the VRF.
  • for any data center routing node, the data center routing node receives the third notification message sent by the VRF and, according to its own proxy VPNSID, obtains from the third notification message the VPNSIDs of the VMs corresponding to its proxy VPNSID. After the data center routing node has received all of the third notification messages issued by the VRFs, it can determine, from them, the VPNSIDs of all VMs corresponding to its proxy VPNSID.
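  • The filtering step above can be sketched as follows, assuming (purely for illustration) that a third notification message can be represented as a mapping from each VM VPNSID to its configured proxy VPNSIDs.

```python
# Sketch of a data center routing node extracting, from a third
# notification message, the VM VPNSIDs whose proxy list contains its
# own proxy VPNSID (first implementation of step 504).
def vm_vpnsids_for_proxy(own_proxy: str, third_notification: dict) -> list:
    """third_notification maps each VM VPNSID to its proxy VPNSIDs."""
    return [sid for sid, proxies in third_notification.items()
            if own_proxy in proxies]

msg = {
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
}
print(vm_vpnsids_for_proxy("DE::B100", msg))  # ['A8:1::B100', 'A8:1::B101']
```

The resulting list is what the node adds to the second forwarding table corresponding to its own proxy VPNSID.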
  • for the manner in which the VRF obtains the multiple proxy VPNSIDs corresponding to the VPNSID of any one of its connected VMs, reference may be made to the first implementation manner in step 503, which will not be described in detail here.
  • it should be noted that the VRF and the data center routing node are in the same domain, so the third notification message issued by the VRF to each data center routing node is issued through an interior gateway protocol (IGP).
  • the VRF actively reports to the data center routing node the multiple proxy VPNSIDs configured corresponding to the VPNSID of any one of the multiple VMs connected to the VRF.
  • the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured by the administrator on the data center routing node.
  • the administrator can directly configure the VPNSID of multiple VMs corresponding to the proxy VPNSID of the data center routing node on the data center routing node through the command line or the management system.
  • step 503 and step 504 can be used in combination.
  • the configuration mode can also be referred to as a fully dynamic configuration mode.
  • the VRF actively reports the VPNSID of the VM and the corresponding proxy VPNSID, so that the CSG and the data center routing node can obtain the VPNSID of the VM and the corresponding proxy VPNSID, and then configure their respective forwarding tables.
  • this configuration manner may also be referred to as a semi-dynamic configuration manner.
  • that is, the VRF actively reports the VPNSIDs of the VMs and the corresponding proxy VPNSIDs to the CSG so that the CSG configures the first forwarding table, while the administrator directly configures the proxy VPNSID and the VPNSIDs of the corresponding VMs on the data center routing node, and the data center routing node generates the second forwarding table.
  • this configuration manner may also be referred to as a static configuration manner.
  • that is, the VRF does not actively report any information, so the RR needs to obtain the VPNSID of each VM connected to the VRF from the VRF, determine the proxy VPNSIDs of each VM's VPNSID according to the locally stored correspondence between VPNSIDs and proxy VPNSIDs, and then notify the CSG, which generates the first forwarding table. Since the VRF does not actively report any information, the VPNSIDs of the VMs corresponding to a data center routing node's proxy VPNSID can only be configured manually on the data center routing node by the administrator.
  • FIG. 6 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • the network device 600 may be any node in the communication network in the embodiment shown in FIGS. 1-5, for example, it may be a CSG, a data center routing node, VRF etc.
  • the network device 600 may be a switch, a router, or other network devices that forward packets.
  • the network device 600 includes: a main control board 610, an interface board 630, and an interface board 640.
  • a switching network board (not shown in the figure) may be included, and the switching network board is used to complete data exchange between interface boards (interface boards are also called line cards or service boards).
  • the main control board 610 is used to perform functions such as system management, equipment maintenance, and protocol processing.
  • the interface boards 630 and 640 are used to provide various service interfaces (for example, POS interface, GE interface, ATM interface, etc.), and implement message forwarding.
  • the main control board 610, the interface board 630, and the interface board 640 are connected to the system backplane through a system bus to achieve intercommunication.
  • the interface board 630 includes one or more processors 631.
  • the processor 631 is used for controlling and managing the interface board, communicating with the central processing unit on the main control board, and for forwarding processing of messages.
  • the memory 632 on the interface board 630 is used to store forwarding entries, and the processor 631 forwards the message by searching for the forwarding entries stored in the memory 632.
  • the interface board 630 includes one or more network interfaces 633 for receiving packets sent by other devices, and sending packets according to instructions from the processor 631.
  • for the specific implementation process, refer to steps 301, 303, 304, and 306 in the embodiment shown in FIG. 3, which will not be repeated here.
  • the processor 631 is configured to execute the processing steps and functions of any node in the communication network described in the embodiment shown in FIGS. 1-5.
  • for example, the processing of step 305 in the embodiment shown in FIG. 3 (when used as a data center routing node), and of step 501 (when used as a VRF), step 502 (when used as a data center routing node), step 503 (when used as a CSG), and step 504 (when used as a data center routing node) in the embodiment shown in FIG. 5. These will not be repeated one by one here.
  • it should be noted that this embodiment includes multiple interface boards and adopts a distributed forwarding mechanism. Under this mechanism, the operations on the interface board 640 are basically similar to those on the interface board 630, and for the sake of brevity they are not repeated.
  • the processors 631 and/or 641 of the interface boards in FIG. 6 may be dedicated hardware or chips, such as a network processor or an application-specific integrated circuit (ASIC), to implement the above functions; this is the so-called forwarding-plane use of dedicated hardware or chips. Alternatively, the processor 631 and/or 641 may be a general-purpose processor, such as a general-purpose CPU, to implement the functions described above.
  • in addition, there may be one or more main control boards, and when there are more than one, they may include an active main control board and a standby main control board.
  • there may be one or more interface boards; the stronger the data processing capability of the device, the more interface boards are provided.
  • if there are multiple interface boards, they can communicate through one or more switching network boards, and when there are more than one, load sharing and redundant backup can be realized together.
  • under a centralized forwarding architecture, the device does not need a switching network board, and the interface board undertakes the processing of the service data of the entire system.
  • under a distributed forwarding architecture, the device includes multiple interface boards, which exchange data through the switching network board, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with a distributed architecture is greater than that of a device with a centralized architecture.
  • the specific architecture used depends on the specific networking deployment scenario, and is not restricted here.
  • the memory 632 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 632 may exist independently and is connected to the processor 631 through a communication bus.
  • the memory 632 may also be integrated with the processor 631.
  • The memory 632 is used to store program code, whose execution is controlled by the processor 631, so as to perform the message forwarding method provided in the foregoing embodiments.
  • the processor 631 is configured to execute the program code stored in the memory 632.
  • One or more software modules can be included in the program code.
  • the one or more software modules may be the software modules provided in any of the following embodiments of FIG. 9 and FIG. 10.
  • The network interface 633 may be any device such as a transceiver for communicating with other devices or a communication network, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • FIG. 7 is a schematic structural diagram of another network device provided by an embodiment of the present application.
  • The network device 700 may be any node in the communication network in the embodiments shown in FIGS. 1-5, for example, a CSG, a data center routing node, or a VRF.
  • the network device 700 may be a switch, a router, or other network devices that forward packets.
  • the network device 700 includes: a main control board 710, an interface board 730, a switching network board 720, and an interface board 740.
  • the main control board 710 is used to perform functions such as system management, equipment maintenance, and protocol processing.
  • the switching network board 720 is used to complete data exchange between various interface boards (interface boards are also called line cards or service boards).
  • the interface boards 730 and 740 are used to provide various service interfaces (for example, POS interface, GE interface, ATM interface, etc.), and implement data packet forwarding.
  • the control plane is composed of the management and control units of the main control board 710 and the management and control units on the interface boards 730 and 740.
  • the main control board 710, the interface boards 730 and 740, and the switching network board 720 are connected to the system backplane through the system bus to achieve intercommunication.
  • the central processing unit 731 on the interface board 730 is used to control and manage the interface board and communicate with the central processing unit on the main control board.
  • the forwarding entry memory 734 on the interface board 730 is used to store forwarding entries, and the network processor 732 forwards the message by searching for the forwarding entries stored in the forwarding entry memory 734.
  • the physical interface card 733 of the interface board 730 is used to receive packets.
  • The network processor 732 is configured to perform the processing steps and functions of any node described in the embodiments shown in FIGS. 1-5, for example, the processing of step 302 when the node serves as a CSG, of step 305 when it serves as a data center routing node, of step 501 when it serves as a VRF, of step 502 when it serves as a data center routing node, of step 503 when it serves as a CSG, and of step 504 when it serves as a data center routing node. They are not enumerated one by one here.
  • the message is sent to other devices through the physical interface card 733.
  • This embodiment includes multiple interface boards and adopts a distributed forwarding mechanism, under which the operations on the interface board 740 are basically similar to those on the interface board 730; for brevity, they are not described again.
  • the functions of the network processors 732 and 742 in FIG. 7 can be replaced by application specific integrated circuits.
  • There may be one or more main control boards; when there is more than one, they may include an active main control board and a standby main control board.
  • There may be one or more interface boards; the stronger the data processing capability of the device, the more interface boards it provides.
  • There may be no switching network board, or one or more; when there is more than one, load sharing and redundant backup can be implemented jointly. Under a centralized forwarding architecture, the device needs no switching network board, and an interface board handles the service data processing of the entire system.
  • Under a distributed forwarding architecture, the device can have at least one switching network board, through which data exchange among multiple interface boards is implemented, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with a distributed architecture is greater than that of a device with a centralized architecture.
  • Which architecture is used depends on the specific networking deployment scenario, and no restriction is imposed here.
  • FIG. 8 is a schematic structural diagram of an interface board 800 in the network device shown in FIG. 7 according to an embodiment of the present application.
  • The network device where the interface board 800 is located may be any node in the communication network in the embodiments shown in FIGS. 1-5, for example, a CSG, a data center routing node, or a VRF.
  • the interface board 800 may include a physical interface card (PIC) 830, a network processor (NP) 810, and a traffic management module (traffic management) 820.
  • The physical interface card (PIC) 830 is used to implement physical-layer interconnection; original traffic enters the interface board of the network device through it, and processed messages are sent out from it.
  • the network processor NP 810 is used to implement message forwarding processing.
  • The processing of uplink messages includes inbound-interface processing and forwarding table lookup (for example, against the first forwarding table or the second forwarding table in the foregoing embodiments); the processing of downlink messages includes forwarding table lookup (likewise against the first forwarding table or the second forwarding table in the foregoing embodiments) and so on.
  • The traffic management module (TM) 820 is used to implement QoS, wire-speed forwarding, large-capacity buffering, queue management, and other functions.
  • Upstream traffic management includes upstream QoS processing (such as congestion management and queue scheduling) and slicing processing.
  • Downstream traffic management includes packet assembly, multicast replication, and downstream QoS processing (such as congestion management and queue scheduling).
  • the multiple interface boards 800 can communicate with each other through the switching network 840.
  • FIG. 8 only shows a schematic processing flow or schematic modules inside the NP; the processing order of the modules in a specific implementation is not limited thereto, and other modules or processing flows can be deployed as needed in practical applications, which is not limited in the embodiments of this application.
  • FIG. 9 is a schematic structural diagram of a CSG provided by an embodiment of the present application.
  • the communication network also includes multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF.
  • Each of the multiple VRFs is connected to one or more of the aforementioned multiple VMs.
  • Each of the aforementioned multiple VMs is configured with a VPNSID
  • each of the aforementioned multiple data center routing nodes is configured with a proxy VPNSID.
  • the CSG 900 includes:
  • the receiving module 901 is configured to receive a message, and the message carries the identifier of the target VNF. For a specific implementation manner, refer to step 301 in the embodiment of FIG. 3.
  • the selection module 902 is configured to select a proxy VPNSID from the multiple proxy VPNSIDs included in the first forwarding table as the target proxy VPNSID. For a specific implementation manner, refer to step 302 in the embodiment of FIG. 3.
  • The forwarding module 903 is configured to forward the message with the target proxy VPNSID as the destination address, so as to forward the message to the data center routing node indicated by the target proxy VPNSID and instruct that node to forward the message according to the multiple VPNSIDs included in the second forwarding table.
  • The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the multiple proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the multiple VMs.
  • The multiple VPNSIDs included in the second forwarding table are the VPNSIDs, among the VPNSIDs of the multiple VMs, of the VMs corresponding to the target proxy VPNSID. For a specific implementation, refer to step 303 in the embodiment of FIG. 3.
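The relationship between the two forwarding tables described above can be sketched with a minimal data model. The variable names and SID strings below are illustrative assumptions, not identifiers from the patent:

```python
# First forwarding table (on the CSG): target VNF identifier -> proxy VPNSIDs,
# one proxy VPNSID per data center routing node.
first_forwarding_table = {
    "vnf-1": ["proxy-sid-A", "proxy-sid-B"],
}

# Second forwarding tables (one per data center routing node): the node's own
# proxy VPNSID -> VPNSIDs of the VMs that execute the VNF behind that node.
second_forwarding_tables = {
    "proxy-sid-A": ["vm-sid-1", "vm-sid-2", "vm-sid-3"],
    "proxy-sid-B": ["vm-sid-4", "vm-sid-5"],
}
```

In this sketch the CSG only ever chooses among the entries of `first_forwarding_table`, while each routing node resolves its own proxy VPNSID to concrete VM VPNSIDs.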
  • the CSG further includes an adding module for obtaining proxy VPNSIDs corresponding to VPNSIDs of multiple VMs, and adding the obtained proxy VPNSIDs to the first forwarding table.
  • The above-mentioned adding module is specifically configured to receive a first announcement message sent by any one of the multiple VRFs, where the first announcement message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
  • the communication network also includes RR.
  • the above-mentioned adding module is specifically configured to: receive a second announcement message sent by the RR, the second announcement message carrying proxy VPNSIDs corresponding to VPNSIDs of multiple VMs.
  • In the embodiments of the present application, to avoid being restricted by the maximum number of load-sharing paths of the CSG, a proxy VPNSID can be configured for each data center routing node.
  • For the VPNSID of any VM, multiple proxy VPNSIDs are configured corresponding to it.
  • The proxy VPNSIDs can then replace the original VM VPNSIDs in the local forwarding table of the CSG, so that during load sharing the CSG only needs to distribute load to the data center routing nodes, which complete the actual load sharing.
  • Since the maximum number of load-sharing paths of a data center routing node can be as high as 128, the message forwarding method provided in the embodiments of the present application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.
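The CSG-side step of this two-stage scheme can be illustrated with a toy flow hash. The hash function, flow-key format, and SID names are assumptions made for illustration; a real CSG would use its hardware ECMP hash:

```python
import zlib

def select_proxy_vpnsid(flow_key: bytes, proxy_vpnsids: list) -> str:
    """Stage 1 of load sharing: the CSG hashes the flow onto one of the
    proxy VPNSIDs (one per data center routing node) in its first
    forwarding table; the chosen SID becomes the packet's destination."""
    return proxy_vpnsids[zlib.crc32(flow_key) % len(proxy_vpnsids)]

# The packet is then steered to one data center routing node, which
# performs the second, up-to-128-way selection among VM VPNSIDs itself.
target = select_proxy_vpnsid(b"src=10.0.0.1,dst=10.0.1.1",
                             ["proxy-sid-A", "proxy-sid-B"])
```

Because the hash is computed over the flow key, packets of the same flow keep landing on the same routing node.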
  • FIG. 10 is a schematic structural diagram of any data center routing node among multiple data center routing nodes in a communication network provided by an embodiment of the present application.
  • the communication network also includes a CSG, multiple VRFs, and multiple VMs for executing the target VNF, and one or more of the multiple VMs are connected to each VRF.
  • each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • the data center routing node 1000 includes:
  • The receiving module 1001 is configured to receive a message sent by the CSG, the message carrying a target proxy VPNSID. For a specific implementation, refer to step 304 in the embodiment of FIG. 3.
  • the selection module 1002 is configured to select a VPNSID from the VPNSIDs of multiple VMs included in the second forwarding table when the target proxy VPNSID carried in the message is the proxy VPNSID of any data center routing node. For a specific implementation manner, refer to step 305 in the embodiment of FIG. 3.
  • the forwarding module 1003 is used to forward the message using the selected VPNSID as the destination address.
  • the multiple VPNSIDs included in the second forwarding table refer to the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node. For a specific implementation manner, refer to step 306 in the embodiment of FIG. 3.
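The node-side selection just described can be sketched analogously. Function and variable names are hypothetical, and the CRC hash merely stands in for the node's real ECMP hash:

```python
import zlib

def forward_at_dc_node(own_proxy_sid: str, dst_sid: str, flow_key: bytes,
                       second_forwarding_table: dict) -> str:
    """Stage 2 of load sharing: if the packet's destination is this node's
    own proxy VPNSID, hash the flow onto one of the VM VPNSIDs listed in
    the second forwarding table and use it as the new destination address."""
    if dst_sid != own_proxy_sid:
        return dst_sid  # not addressed to this node's proxy VPNSID
    vm_sids = second_forwarding_table[own_proxy_sid]  # up to 128 entries
    return vm_sids[zlib.crc32(flow_key) % len(vm_sids)]
```

The returned VM VPNSID is what the node would write into the packet's destination address before forwarding.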
  • the data center routing node further includes:
  • the adding module is used to obtain the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node, and add the obtained VPNSIDs of the VMs to the second forwarding table.
  • The aforementioned obtaining module is specifically configured to: receive a third announcement message sent by any one of the multiple VRFs, the third announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and, according to the proxy VPNSID of this data center routing node, obtain from the third announcement message the VPNSIDs of the VMs corresponding to that proxy VPNSID.
  • the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured by the administrator on the data center routing node.
  • In the embodiments of the present application, to avoid being restricted by the maximum number of load-sharing paths of the CSG, a proxy VPNSID can be configured for each data center routing node.
  • For the VPNSID of any VM, multiple proxy VPNSIDs are configured corresponding to it.
  • The proxy VPNSIDs can then replace the original VM VPNSIDs in the local forwarding table of the CSG, so that during load sharing the CSG only needs to distribute load to the data center routing nodes, which complete the actual load sharing.
  • Since the maximum number of load-sharing paths of a data center routing node can be as high as 128, the message forwarding method provided in the embodiments of the present application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.
  • When the data center routing node provided in the foregoing embodiment forwards messages, the division into the above functional modules is used only as an example for illustration.
  • In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • The data center routing node provided in the foregoing embodiment and the message forwarding method embodiment belong to the same concept; for its specific implementation process, refer to the method embodiment, and details are not repeated here.
  • An embodiment of the present application further provides any one of multiple VRFs in a communication network.
  • The communication network further includes a CSG, multiple data center routing nodes, and multiple VMs for executing the target VNF, where each of the multiple VRFs is connected to one or more of the multiple VMs.
  • each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • the VRF includes:
  • The obtaining module is configured to obtain the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
  • The publishing module is configured to publish a first announcement message to the CSG, the first announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
  • The publishing module is further configured to publish a third announcement message to the multiple data center routing nodes, the third announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
  • The VRF can actively report to the CSG the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the CSG can construct the first forwarding table, which improves the efficiency with which the CSG constructs the first forwarding table.
  • When the VRF provided in the foregoing embodiment forwards messages, the division into the above functional modules is used only as an example for illustration.
  • In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • The VRF provided in the foregoing embodiment and the message forwarding method embodiment belong to the same concept; for its specific implementation process, refer to the method embodiment, and details are not repeated here.
  • the embodiment of the present application also provides an RR in a communication network.
  • the communication network also includes a CSG, multiple data center routing nodes, multiple VRFs, and multiple VMs for executing the target VNF, and one or more of the multiple VMs are connected to each VRF.
  • Each VM in the multiple VMs is configured with a VPNSID
  • each data center routing node in the multiple data center routing nodes is configured with a proxy VPNSID.
  • the RR includes:
  • The obtaining module is configured to obtain the VPNSID of each of the multiple VMs connected to any one of the multiple VRFs; for the obtained VPNSID of any VM, the RR determines the proxy VPNSIDs corresponding to that VM's VPNSID based on a locally stored correspondence between VPNSIDs and proxy VPNSIDs, thereby obtaining the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to that VRF.
  • the publishing module is configured to send a second notification message to the CSG, where the second notification message carries multiple proxy VPNSIDs corresponding to the VPNSIDs of each of the multiple VMs connected to each of the multiple VRFs.
  • the corresponding relationship between the VPNSID stored locally in the RR and the proxy VPNSID is configured by the administrator on the RR, thereby improving the flexibility of the CSG to construct the first forwarding table.
  • When the RR reports to the CSG the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs and the CSG constructs the first forwarding table, the RR needs to first obtain the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs and then send the second announcement message to the CSG, so that the CSG can construct the first forwarding table, which improves the flexibility with which the CSG constructs the first forwarding table.
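The RR-side step above can be sketched as follows. The shape of the correspondence table and of the announcement message is an assumption for illustration only:

```python
def build_second_announcement(vm_sids_per_vrf: dict, sid_to_proxies: dict) -> dict:
    """For every VM VPNSID learned from the VRFs, look up the
    administrator-configured VPNSID -> proxy-VPNSID correspondence and
    collect the proxy VPNSIDs to carry in the second announcement message."""
    return {
        vrf: {vm_sid: sid_to_proxies[vm_sid] for vm_sid in vm_sids}
        for vrf, vm_sids in vm_sids_per_vrf.items()
    }
```

The resulting mapping is what the RR would announce to the CSG so that the CSG can populate its first forwarding table.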
  • When the RR provided in the foregoing embodiment forwards messages, the division into the above functional modules is used only as an example for illustration.
  • In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • The RR provided in the foregoing embodiment and the message forwarding method embodiment belong to the same concept; for its specific implementation process, refer to the method embodiment, and details are not repeated here.
  • An embodiment of the present application further provides a message forwarding system, which includes a CSG, multiple data center routing nodes, multiple virtual routing and forwarding instances (VRFs), and multiple VMs for executing the target virtual network function (VNF).
  • Each of the multiple VRFs is connected to one or more of the multiple VMs, each of the multiple VMs is configured with a virtual private network segment identifier (VPNSID), and each of the multiple data center routing nodes is configured with a proxy VPNSID.
  • the CSG is used to obtain the proxy VPNSID corresponding to the VPNSIDs of the multiple VMs, and add the obtained proxy VPNSID to the first forwarding table;
  • Any data center routing node among the multiple data center routing nodes is used to obtain the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node, and add the obtained VPNSIDs of the VMs to the second forwarding table.
  • The CSG is specifically configured to receive a first announcement message sent by any one of the multiple VRFs, the first announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
  • The CSG is specifically configured to receive a second announcement message sent by the RR, the second announcement message carrying the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs.
  • The data center routing node is configured to: receive a third announcement message sent by any one of the multiple VRFs, the third announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and, according to its own proxy VPNSID, obtain from the third announcement message the VPNSIDs of the VMs corresponding to that proxy VPNSID.
  • the VPNSIDs of multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured by the administrator on the data center routing node.
  • FIG. 11 is a schematic structural diagram of a network device 1100 provided by an embodiment of the present application. Any node in the communication network in the embodiments of FIGS. 1-5, such as a CSG, a data center routing node, or a VRF, can be implemented by the network device 1100 shown in FIG. 11; in this case, the network device 1100 may be a switch, a router, or another network device that forwards packets. In addition, the network controller in the embodiments of FIGS. 1-5 can also be implemented by the network device 1100 shown in FIG. 11; in this case, for the specific functions of the network device 1100, refer to the specific implementation of the network controller in any one of the embodiments of FIGS. 1-5, which is not repeated here. Referring to FIG. 11, the device includes at least one processor 1101, a communication bus 1102, a memory 1103, and at least one communication interface 1104.
  • the processor 1101 may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling program execution of the solution of the present application.
  • the communication bus 1102 may include a path for transferring information between the above-mentioned components.
  • The memory 1103 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory 1103 may exist independently, and is connected to the processor 1101 through a communication bus 1102.
  • the memory 1103 may also be integrated with the processor 1101.
  • The memory 1103 is used to store program code, whose execution is controlled by the processor 1101, so as to perform the message forwarding method provided in any of the foregoing embodiments.
  • the processor 1101 is configured to execute program codes stored in the memory 1103.
  • One or more software modules can be included in the program code. Any node in the communication network in the embodiments provided in FIGS. 1-5 can implement its functions through the processor 1101 and the one or more software modules in the program code stored in the memory 1103.
  • the one or more software modules may be the software modules provided in any of the embodiments in FIG. 9 and FIG. 10.
  • The communication interface 1104 uses any device such as a transceiver to communicate with other devices or a communication network, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • the network device may include multiple processors, such as the processor 1101 and the processor 1105 shown in FIG. 11.
  • Each of these processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented by software, they can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
  • the program can be stored in a computer-readable storage medium.
  • The storage medium mentioned can be a read-only memory, a magnetic disk, an optical disc, or the like.


Abstract

This application discloses a message forwarding method, apparatus, and computer storage medium, belonging to the technical field of network function virtualization. In the method, a CSG receives a message and selects one proxy VPNSID from multiple proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID for forwarding the message, so as to forward the message to the data center routing node indicated by the target proxy VPNSID. That is, in the method, during load sharing the CSG only needs to distribute load to the data center routing nodes, which complete the actual load sharing. Since the maximum number of load-sharing paths of a data center routing node can be as high as 128, the message forwarding method provided in the embodiments of this application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.

Description

Message forwarding method, apparatus, and computer storage medium
This application claims priority to Chinese Patent Application No. 201911046986.5, filed on October 30, 2019 and entitled "Message forwarding method, apparatus, and computer storage medium", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the technical field of network function virtualization, and in particular to a message forwarding method, apparatus, and computer storage medium.
Background
In 5G regional data center (RDC) technology, a virtualized network function (VNF) can be deployed on multiple virtual machines, each of which can independently execute the VNF, so as to implement load sharing for the VNF. Therefore, when a cell site service gateway (CSG) receives a message carrying the identifier of the VNF, the CSG needs to forward the message to one of the multiple virtual machines, which then executes the VNF based on the message.
In the related art, for any VNF, the CSG obtains in advance the virtual private network segment identifier (VPNSID) of each of the multiple virtual machines deployed for the VNF, thereby obtaining multiple VPNSIDs, and obtains the private network route of the VNF, which uniquely identifies the VNF. The CSG establishes a correspondence between the multiple VPNSIDs and the private network route of the VNF. When the CSG receives a message carrying the private network route of the VNF, it maps the message to one of the multiple VPNSIDs through a multipath hash algorithm according to the correspondence, and then forwards the message with that VPNSID as the destination address, so that the message is forwarded to the virtual machine indicated by that VPNSID.
However, in the above message forwarding method, since a CSG currently supports at most 8 load-sharing paths, the above correspondence includes at most 8 VPNSIDs, which limits the efficiency of load sharing.
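The 8-path limitation of the related art described above can be made concrete with a toy single-stage hash. The constant, names, and SID values are illustrative assumptions:

```python
import zlib

MAX_CSG_ECMP_PATHS = 8  # assumed per-route load-sharing limit of the CSG

def related_art_select(flow_key: bytes, vm_vpnsids: list) -> str:
    """Single-stage load sharing of the related art: the CSG hashes the flow
    directly onto VM VPNSIDs, so at most MAX_CSG_ECMP_PATHS of them can be
    installed for one private network route; extra VMs receive no traffic."""
    usable = vm_vpnsids[:MAX_CSG_ECMP_PATHS]
    return usable[zlib.crc32(flow_key) % len(usable)]
```

With, say, 20 VMs deployed for a VNF, only the first 8 installed VPNSIDs would ever be selected, which is exactly the inefficiency the embodiments below address.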
Summary
This application provides a message forwarding method, apparatus, and computer storage medium, which can improve the efficiency of load sharing. The technical solutions are as follows:
In a first aspect, a message forwarding method is provided, applied to a CSG in a communication network. The communication network further includes multiple data center routing nodes, multiple VRFs, and multiple VMs for executing a target VNF, where each of the multiple VRFs is connected to one or more of the multiple VMs. Each of the multiple VMs is configured with a VPNSID, and each of the multiple data center routing nodes is configured with a proxy VPNSID.
In the method, the CSG receives a message carrying the identifier of the target VNF; the CSG selects one proxy VPNSID from the multiple proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID, and forwards the message with the target proxy VPNSID as the destination address, so as to forward the message to the data center routing node indicated by the target proxy VPNSID and instruct that node to forward the message according to the multiple VPNSIDs included in a second forwarding table. The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the multiple proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the multiple VMs. The multiple VPNSIDs included in the second forwarding table are the VPNSIDs, among the VPNSIDs of the multiple VMs, of the VMs corresponding to the target proxy VPNSID.
In the embodiments of this application, to avoid being restricted by the maximum number of load-sharing paths of the CSG, a proxy VPNSID can be configured for each data center routing node. For the VPNSID of any VM, multiple proxy VPNSIDs are configured corresponding to it. In this way, the proxy VPNSIDs can replace the original VM VPNSIDs in the local forwarding table of the CSG, so that during load sharing the CSG only needs to distribute load to the data center routing nodes, which complete the actual load sharing. Since the maximum number of load-sharing paths of a data center routing node can be as high as 128, the message forwarding method provided in the embodiments of this application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.
Optionally, in the method, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs, and adds the obtained proxy VPNSIDs to the first forwarding table.
Since the embodiments of this application use proxy VPNSIDs in the local forwarding table of the CSG in place of the original VM VPNSIDs, the CSG needs to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs before forwarding messages, so as to construct the first forwarding table provided in the embodiments of this application and improve the efficiency of subsequent load sharing.
Optionally, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs as follows: the CSG receives a first announcement message sent by any one of the multiple VRFs, where the first announcement message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
In one implementation, a VRF can actively report to the CSG the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the CSG can construct the first forwarding table, which improves the efficiency with which the CSG constructs the first forwarding table.
Optionally, the communication network further includes an RR. In this case, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs as follows: the CSG receives a second announcement message sent by the RR, where the second announcement message carries the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs.
In another implementation, the RR reports to the CSG the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs, so that the CSG can construct the first forwarding table, which improves the flexibility with which the CSG constructs the first forwarding table.
In a second aspect, a message forwarding method is provided, applied to any one of multiple data center routing nodes in a communication network. The communication network further includes a CSG, multiple VRFs, and multiple VMs for executing a target VNF, where each VRF is connected to one or more of the multiple VMs. Each of the multiple VMs is configured with a VPNSID, and each of the multiple data center routing nodes is configured with a proxy VPNSID.
In the method, the data center routing node receives a message sent by the CSG, the message carrying a target proxy VPNSID. When the target proxy VPNSID carried in the message is the proxy VPNSID of this data center routing node, the node selects one VPNSID from the VPNSIDs of the multiple VMs included in a second forwarding table and forwards the message with the selected VPNSID as the destination address. The multiple VPNSIDs included in the second forwarding table are the VPNSIDs of the multiple VMs corresponding to the proxy VPNSID of this data center routing node.
Since the embodiments of this application use proxy VPNSIDs in the local forwarding table of the CSG in place of the original VM VPNSIDs, the CSG only needs to distribute load to the data center routing nodes during load sharing, and the data center routing nodes complete the actual load sharing. Therefore, when a data center routing node receives a message, it needs to forward the message to one of the multiple VMs according to the second forwarding table to implement load sharing. Since the maximum number of load-sharing paths of a data center routing node can be as high as 128, the message forwarding method provided in the embodiments of this application effectively increases the number of paths over which the CSG ultimately shares load, thereby improving load-sharing efficiency.
Optionally, in the method, the data center routing node obtains the VPNSIDs of the multiple VMs corresponding to its proxy VPNSID, and adds the obtained VM VPNSIDs to the second forwarding table.
Since in the embodiments of this application the actual load sharing is completed by the data center routing nodes, a data center routing node needs to obtain the VPNSIDs of the multiple VMs corresponding to its own proxy VPNSID before forwarding messages, so as to construct the second forwarding table provided in the embodiments of this application and improve the efficiency of subsequent load sharing.
Optionally, the data center routing node obtains the VPNSIDs of the multiple VMs corresponding to its proxy VPNSID as follows: the node receives a third announcement message sent by any one of the multiple VRFs, where the third announcement message carries the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and, according to its own proxy VPNSID, the node obtains from the third announcement message the VPNSIDs of the VMs corresponding to its proxy VPNSID.
In one implementation, a VRF can actively report to the data center routing nodes the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the data center routing nodes can construct the second forwarding table, which improves the efficiency with which the data center routing nodes construct the second forwarding table.
Optionally, the VPNSIDs of the multiple VMs corresponding to the proxy VPNSID of the data center routing node are configured on the node by an administrator.
In another implementation, the VPNSIDs of the multiple VMs corresponding to the proxy VPNSID of the data center routing node can be configured directly and manually, which improves the flexibility with which the data center routing node constructs the second forwarding table.
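Extracting its own entries from a third announcement message, as described above, can be sketched as follows. The message layout (a map from each VM VPNSID to its list of proxy VPNSIDs) is an assumption for illustration:

```python
def vm_sids_for_proxy(own_proxy_sid: str, third_announcement: dict) -> list:
    """Build the second forwarding table from a third announcement message:
    keep every VM VPNSID whose list of proxy VPNSIDs contains this data
    center routing node's own proxy VPNSID."""
    return [vm_sid for vm_sid, proxy_sids in third_announcement.items()
            if own_proxy_sid in proxy_sids]
```

Each data center routing node applies the same filter with its own proxy VPNSID, so the VM VPNSIDs end up partitioned (or shared) across nodes exactly as the announcement specifies.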
In a third aspect, a message forwarding method is provided, applied to any one of multiple VRFs in a communication network. The communication network further includes a CSG, multiple data center routing nodes, and multiple VMs for executing a target VNF, where each of the multiple VRFs is connected to one or more of the multiple VMs. Each of the multiple VMs is configured with a VPNSID, and each of the multiple data center routing nodes is configured with a proxy VPNSID.
In the method, for any VRF, the VRF obtains the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to it; the VRF publishes a first announcement message to the CSG, the first announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
In the embodiments of this application, the VRF can actively report to the CSG the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the CSG can construct the first forwarding table, which improves the efficiency with which the CSG constructs the first forwarding table.
Optionally, after the VRF obtains the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to it, the VRF also publishes a third announcement message to the multiple data center routing nodes, the third announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to the VRF.
In the embodiments of this application, the VRF can also actively report to the data center routing nodes the multiple proxy VPNSIDs corresponding to the VPNSID of each VM connected to it, so that the data center routing nodes can construct the second forwarding table, which improves the efficiency with which the data center routing nodes construct the second forwarding table.
In a fourth aspect, a message forwarding method is provided, applied to an RR in a communication network. The communication network further includes a CSG, multiple data center routing nodes, multiple VRFs, and multiple VMs for executing a target VNF, where each VRF is connected to one or more of the multiple VMs. Each of the multiple VMs is configured with a VPNSID, and each of the multiple data center routing nodes is configured with a proxy VPNSID.
In the method, the RR obtains the VPNSID of each of the multiple VMs connected to any one of the multiple VRFs; for the obtained VPNSID of any VM, the RR determines the proxy VPNSIDs corresponding to that VM's VPNSID based on a locally stored correspondence between VPNSIDs and proxy VPNSIDs, thereby obtaining the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to that VRF; the RR sends a second announcement message to the CSG, the second announcement message carrying the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs connected to each of the multiple VRFs.
When the RR reports to the CSG the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs and the CSG constructs the first forwarding table, the RR needs to first obtain the multiple proxy VPNSIDs corresponding to the VPNSID of each of the multiple VMs and then send the second announcement message to the CSG, so that the CSG can construct the first forwarding table, which improves the flexibility with which the CSG constructs the first forwarding table.
Optionally, in the method, the correspondence between VPNSIDs and proxy VPNSIDs stored locally on the RR is configured on the RR by an administrator, which improves the flexibility with which the CSG constructs the first forwarding table.
第五方面,提供了一种通信网络中的CSG。其中,该通信网络还包括多个数据中心路由节点、多个VRF、以及用于执行目标VNF的多个VM,这多个VRF中每个VRF上连接有前述多个VM中的一个或多个。前述多个VM中每个VM配置有一个VPNSID,前述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
CSG包括:
接收模块,用于接收报文,该报文中携带目标VNF的标识;
选择模块,用于从第一转发表包括的多个代理VPNSID中选择一个代理VPNSID作为目标代理VPNSID;
转发模块,用于将目标代理VPNSID作为目的地址转发报文,以将报文转发至目标代理VPNSID所指示的数据中心路由节点,用于指示目标代理VPNSID所指示的数据中心路由节点根据第二转发表包括的多个VPNSID转发报文。其中,第一转发表为目标VNF的标识对应的转发表,第一转发表包括的多个代理VPNSID是指与多个VM的VPNSID对应配置的代理VPNSID。第二转发表包括的多个VPNSID是指多个VM的VPNSID中与目标代理VPNSID对应的VM的VPNSID。
可选地,CSG还包括添加模块,用于获取与多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至第一转发表中。
可选地,上述添加模块,具体用于:接收多个VRF中任一VRF发送的第一通告消息,该第一通告消息携带该任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,该通信网络还包括RR。此时,上述添加模块,具体用于:接收RR发送的第二通告消息,第二通告消息携带与多个VM的VPNSID对应的代理VPNSID。
第五方面提供的CSG包括的各个模块的技术效果可以参考第一方面提供的报文转发方法,在此不再详细阐述。
第六方面、提供了一种通信网络中的多个数据中心路由节点中的任一数据中心路由节点。该通信网络还包括CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有多个VM中的一个或多个。其中,多个VM中每个VM配置有一个VPNSID,多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
该数据中心路由节点包括:
接收模块,用于接收CSG发送的报文,报文携带目标代理VPNSID。
选择模块,用于在报文携带的目标代理VPNSID为该任一数据中心路由节点的代理VPNSID的情况下,从第二转发表包括的多个VM的VPNSID中选择一个VPNSID。
转发模块,用于将选择的VPNSID作为目的地址转发报文。其中,第二转发表包括的多个VPNSID是指与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID。
可选地,该数据中心路由节点还包括:
添加模块,用于获取与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,将获取的VM的VPNSID添加至第二转发表中。
可选地,前述添加模块,具体用于:接收多个VRF中任一VRF发送的第三通告消息,第三通告消息携带任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID;根据该数据中心路由节点的代理VPNSID,从第三通告消息中获取与该数据中心路由节点的代理VPNSID对应的VM的VPNSID。
可选地,该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID由管理人员在该数据中心路由节点上配置。
第六方面提供的数据中心路由节点包括的各个模块的技术效果可以参考第二方面提供的报文转发方法,在此不再详细阐述。
第七方面、提供了一种通信网络中的多个VRF中的任一VRF,该通信网络还包括CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,多个VRF中每个VRF上连接有多个VM中的一个或多个。其中,多个VM中每个VM配置有一个VPNSID,多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
该VRF包括:
获取模块,用于获取该VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID;
发布模块,用于向CSG发布第一通告消息,第一通告消息携带该VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,发布模块,还用于向多个数据中心路由节点发布第三通告消息,第三通告消息携带该VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
第七方面提供的VRF包括的各个模块的技术效果可以参考第三方面提供的报文转发方法,在此不再详细阐述。
第八方面、提供了一种通信网络中的RR。该通信网络还包括CSG、多个数据中心路由节点、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有多个VM中的一个或多个。多个VM中每个VM配置有一个VPNSID,多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
该RR包括:
获取模块,用于获取多个VRF中任一VRF连接的多个VM中每个VM的VPNSID;对于获取的任一VM的VPNSID,RR基于本地存储的VPNSID和代理VPNSID之间的对应关系确定该VM的VPNSID对应的代理VPNSID,得到与任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID;
发布模块,用于向CSG发送第二通告消息,第二通告消息携带与多个VRF中每个VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,RR本地存储的VPNSID和代理VPNSID之间的对应关系由管理人员在该RR上配置,从而提高了CSG构建第一转发表的灵活性。
第八方面提供的RR包括的各个模块的技术效果可以参考第四方面提供的报文转发方法,在此不再详细阐述。
第九方面、提供了一种通信网络中的CSG,所述通信网络还包括多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述CSG包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器用于执行所述存储器中存储的程序以执行上述第一方面中任一所述的方法。
第十方面、提供了一种通信网络中的数据中心路由节点,所述通信网络包括多个数据中心路由节点、CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述多个数据中心路由节点中的任一数据中心路由节点包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器用于执行所述存储器中存储的程序以执行上述第二方面中任一所述的方法。
第十一方面、提供了一种通信网络中的VRF,所述通信网络包括多个VRF、CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述多个VRF中的任一VRF包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器用于执行所述存储器中存储的程序以执行上述第三方面中任一所述的方法。
第十二方面、提供了一种通信网络中的RR,所述通信网络还包括CSG、多个数据中心路由节点、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述RR包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器用于执行所述存储器中存储的程序以执行上述第四方面中任一所述的方法。
第十三方面、提供了一种芯片,所述芯片设置在通信网络中的CSG中,所述通信网络还包括多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述芯片包括处理器和接口电路;
所述接口电路用于接收指令并传输至所述处理器;
所述处理器用于执行上述第一方面中任一所述的方法。
第十四方面、提供了一种芯片,所述芯片设置在通信网络包括的多个数据中心路由节点中的任一数据中心路由节点中,所述通信网络还包括CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述芯片包括处理器和接口电路;
所述接口电路用于接收指令并传输至所述处理器;
所述处理器用于执行上述第二方面中任一所述的方法。
第十五方面、提供了一种芯片,所述芯片设置在通信网络包括的多个VRF中的任一VRF中,所述通信网络还包括CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述芯片包括处理器和接口电路;
所述接口电路用于接收指令并传输至所述处理器;
所述处理器用于执行上述第三方面中任一所述的方法。
第十六方面、提供了一种芯片,所述芯片设置在通信网络的RR中,所述通信网络还包括CSG、多个数据中心路由节点、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述 多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
所述芯片包括处理器和接口电路;
所述接口电路用于接收指令并传输至所述处理器;
所述处理器用于执行上述第四方面中任一所述的方法。
第十七方面、提供了一种报文转发系统,所述系统包括CSG、多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有该多个VM中的一个或多个,该多个VM中每个VM配置有一个虚拟私有网络段标识VPNSID,该多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
该CSG,用于获取与该多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至第一转发表中;
该多个数据中心路由节点中任一数据中心路由节点,用于获取与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,将获取的VM的VPNSID添加至第二转发表中。
可选地,CSG具体用于:接收该多个VRF中任一VRF发送的第一通告消息,该第一通告消息携带该任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,该通信网络还包括RR,此时CSG具体用于:接收该RR发送的第二通告消息,该第二通告消息携带与该多个VM的VPNSID对应的代理VPNSID。
可选地,该数据中心路由节点,用于接收该多个VRF中任一VRF发送的第三通告消息,该第三通告消息携带该任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID;根据该数据中心路由节点的代理VPNSID,从该第三通告消息中获取与该数据中心路由节点的代理VPNSID对应的VM的VPNSID。
可选地,该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID由管理人员在该数据中心路由节点上配置。
上述报文转发系统中各个节点的技术效果同样可以参考上述第一方面、第二方面、第三方面以及第四方面提供的报文转发方法的技术效果,在此不再赘述。
附图说明
图1是本申请实施例提供的一种通信网络的架构示意图;
图2是本申请实施例提供的另一种通信网络的架构示意图;
图3是本申请实施例提供的一种报文转发方法流程图;
图4是本申请实施例提供的一种报文转发流程示意图;
图5是本申请实施例提供的一种配置第一转发表和第二转发表的方法流程图;
图6是本申请实施例提供的一种网络设备的结构示意图;
图7是本申请实施例提供的另一种网络设备的结构示意图;
图8是本申请实施例提供的一种图7所示网络设备中的接口板的结构示意图;
图9是本申请实施例提供的一种CSG的结构示意图;
图10是本申请实施例提供的一种数据中心路由节点的结构示意图;
图11是本申请实施例提供的另一种网络设备的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
应当理解的是,本文提及的“多个”是指两个或两个以上。在本申请的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
在对本申请实施例提供的报文转发方法进行解释说明之前,先对本申请实施例涉及的通信网络进行解释说明。
图1是本申请实施例提供的一种通信网络的架构示意图。如图1所示,该通信网络100包括多个CSG、多个运营商边缘设备(provider edge,PE)、多个数据中心路由节点、多个数据中心以及多个虚拟路由转发(virtual route forwarding,VRF)。
其中,数据中心可以为区域数据中心(regional data center,RDC),也可以为中心数据中心(central data center,CDC)或者边缘数据中心(edge data center,EDC),图1是以RDC作为数据中心为例进行说明。数据中心路由节点可以为部署在PE和数据中心之间的数据中心网关(data center gateway,DCGW),还可以为部署在数据中心和VRF之间的DC Spine(脊)路由器,本申请实施例对此不做具体限定。图1是以数据中心路由节点为部署在PE和数据中心之间的DCGW为例进行说明。
如图1所示,任一CSG通过骨干网中的多个PE与任一DCGW进行通信。任一DCGW与任一RDC之间进行通信。任一RDC与所连接的各个VRF之间进行通信,每个VRF上连接有一个或多个VM(图1中是以每个VRF连接一个VM为例进行说明)。用于执行同一VNF的VM有多个,且可能分别连接在不同的VRF上。如图1所示,用于执行图1所示的VNF的VM有三个,分别是图1中从上到下的前三个VRF连接的三个VM。
为了能够精确实现针对某个VNF的负载分担,针对用于执行该VNF的各个VM中每个VM配置了END.dx类型的SID,也称为VPNSID。也即是,在本申请实施例中,每个VPNSID用于唯一标识一个VM。如此,相关技术中CSG的转发表包括与该VNF的标识对应的多个VPNSID,以便于CSG根据转发表将报文转发至其中的一个VPNSID所指示的VM。
如图1所示,目前的CSG最多支持8路负载分担,也即是,CSG接收到某个报文时,CSG基于多路径哈希算法将该报文最多能转发至8个VM中的一个VM进行处理,这严重影响了负载分担的效率。本申请实施例正是基于这个场景提供了一种转发报文的方法,以提高负载分担的效率。
另外,图1所示的VNF可以为接入管理功能(access management function,AMF),还可以为会话管理功能(session management function,SMF),还可以为用户面功能(user plane function,UPF)等等。
另外,上述任一VRF是通过指定的接入电路(access circuit,AC)三层接口或子接口与一个VM进行连接,在此不再详细说明。
需要说明的是,图1中所示的各个设备的数量仅仅用于举例说明,并不构成对本申请实施例提供的通信网络的架构的限定。
为了后续便于说明,将图1中所示的通信网络进行了简化,简化后的通信网络如图2所示。后续的转发报文的方法以图2所示的通信网络进行举例说明。如图2所示,该通信网络200包括CSG,多个数据中心路由节点(图2中以两个数据中心路由节点为例进行说明,分别标记为数据中心路由节点1和数据中心路由节点2)、多个VRF(图2中以两个VRF为例进行说明,分别标记为VRF1和VRF2)、以及用于执行目标VNF的多个VM,多个VRF中每个VRF上连接有多个VM中的一个或多个(图2中以每个VRF上连接有两个VM为例进行说明)。
此外,如图2所示,网络划分为多个域,每个域包括一组主机和一组路由器,一个域内的主机和路由器由一个控制器统一进行管理。图2中的CSG、数据中心路由节点1和数据中心路由节点2位于同一域内,数据中心路由节点1、数据中心路由节点2以及VRF1和VRF2位于另一个域内。每个域内还部署有一个路由反射器(route reflector,RR),在图2中分别标记为RR1和RR2。其中,每个域内的路由反射器的功能为:该域内任一路由设备均可以通过该路由反射器与其他路由设备进行通信,无需这两个路由设备之间直接建立网络连接,从而减少网络资源的消耗。
关于图2所示的通信网络中各个节点的功能将在下述实施例中详细说明,在此先不一一展开说明。
下面以图2所示的通信网络为例来说明本申请实施例提供的报文转发方法,对于图1所述的通信网络中的其他节点部署情况,均可以参考下述实施例来实现报文转发。
图3是本申请实施例提供的一种报文转发方法流程图。如图3所示,该方法包括如下步骤:
步骤301:CSG接收报文,该报文中携带目标VNF的标识。
在本申请实施例中,为了避免受到CSG的最大负载分担路数的限制,可以为每个数据中心路由节点配置一个代理VPNSID。针对任一VM的VPNSID,可以为该VM的VPNSID配置多个代理VPNSID。如此,在CSG的本地转发表中可以采用代理VPNSID来替代原来的VM的VPNSID,以使CSG在负载分担时只需负责将负载分担至各个数据中心路由节点即可,由各个数据中心路由节点来完成实际的负载分担,而数据中心路由节点的最大负载分担路数可以高达128,因此,通过本申请实施例提供的报文转发方法,相当于增大了CSG最终进行负载分担的路数,从而提高了转发报文的效率。
因此,CSG中存储有与该目标VNF的标识对应的第一转发表,第一转发表包括多个代理VPNSID,以便于CSG通过下述步骤302和步骤303转发该报文。第一转发表包括的多个代理VPNSID是指与用于执行目标VNF的多个VM的VPNSID对应的代理VPNSID。其中,第一转发表的配置过程将在下述实施例中说明,在此就不先不阐述。
图4是本申请实施例提供的一种报文转发示意图。如图4所示,该CSG中存储的第一转发表中包括两个代理VPNSID,分别为DE::B100和DF::B100。DE::B100是数据中心路由节点1的代理VPNSID。DF::B100是数据中心路由节点2的代理VPNSID。
其中,针对任一VM的VPNSID,为该VM的VPNSID配置多个代理VPNSID。如图4所示,针对从上到下的第一个VM的VPNSID(该VPNSID为A8:1::B100)配置了两个对应的代理VPNSID,分别为DE::B100和DF::B100。针对从上到下的第二个VM的VPNSID(该VPNSID为A8:1::B101)也配置了两个对应的代理VPNSID,分别为DE::B100和DF::B100。此外,针对从上到下的第三个VM和第四个VM,同样可以配置这两个对应的代理VPNSID。也即是,对于图4所示的每个VM的VPNSID,均配置了两个对应的代理VPNSID,分别为DE::B100和DF::B100。
当各个VM的VPNSID按照上述方式配置对应的代理VPNSID之后,第一转发表中包括的两个代理VPNSID(分别为DE::B100和DF::B100)即为与执行目标VNF的多个VM的VPNSID对应的所有代理VPNSID。关于配置各个VM的VPNSID对应的代理VPNSID的过程以及生成第一转发表的具体实现方式将在下述生成第一转发表的实施例中详细说明,在此先不赘述。
步骤302:CSG从第一转发表包括的多个代理VPNSID中选择一个代理VPNSID作为目标代理VPNSID。
在一种具体的实现方式中,CSG可以通过多路径哈希算法从第一转发表包括的多个代理VPNSID中选择一个代理VPNSID。多路径哈希算法可以为等价多路径(equal-cost multi path routing,ECMP)算法。此时,第一转发表中的多个代理VPNSID被选择的概率相同,以实现CSG将接收到的报文均匀地分担至各个数据中心路由节点。比如,对于图4所示的转发流程,CSG基于多路径哈希算法选择的代理VPNSID为DE::B100,表明此时需要将报文转发至数据中心路由节点1。
可以理解,在另外的实现方式下,可以采用不同的哈希算法使第一转发表中的各个代理VPNSID被选择的概率不相同,具体可以根据负载均衡策略确定具体类型的哈希算法。
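上述CSG基于流哈希从第一转发表中选择目标代理VPNSID的过程,可用下面的Python片段示意。其中的流标识格式、函数名以及以md5作为哈希函数均为本文之外的假设,仅用于说明"同流同路径、不同流近似均匀散列"的ECMP思想,并非本申请限定的实现:

```python
import hashlib

def ecmp_select(flow_key: str, candidates: list) -> str:
    """基于流特征哈希从候选表项中选择一项(ECMP思想的示意实现)。

    同一条流的flow_key相同,因而总是命中同一表项;
    不同流则近似均匀地散列到各候选项上。
    """
    digest = hashlib.md5(flow_key.encode("utf-8")).digest()
    return candidates[int.from_bytes(digest[:4], "big") % len(candidates)]

# 图4中CSG的第一转发表:目标VNF的标识 -> 代理VPNSID列表(示意)
first_forwarding_table = {"VNF-1": ["DE::B100", "DF::B100"]}
target_proxy = ecmp_select("CSG|VNF-1|flow-1001", first_forwarding_table["VNF-1"])
```

选出的target_proxy即作为报文的目的地址,用于后续步骤303的转发。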
步骤303:CSG将目标代理VPNSID作为目的地址转发该报文。
在本申请实施例中,由于是采用了代理VPNSID来替换相关技术中的VPNSID,因此,CSG在从第一转发表中选择出代理VPNSID作为目标代理VPNSID之后,便可将目标代理VPNSID作为报文的目的地址以转发报文。
比如,对于图4所示的转发流程,CSG将代理VPNSID=DE::B100作为报文的目的地址(图4中将目的地址标记为DA),该报文的源地址(图4中将源地址标记为SA)为CSG。另外,如图4所示,报文中还包括有效载荷(payload)。如图4中标号为①的步骤用于说明前述过程。
通过上述步骤301至步骤303,CSG可以将报文转发至选择的目标代理VPNSID所指示的数据中心路由节点。目标代理VPNSID所指示的数据中心路由节点中配置有第二转发表,第二转发表中包括多个VPNSID,用于指示目标代理VPNSID对应的数据中心路由节点根据第二转发表包括的多个VPNSID转发报文,第二转发表包括的多个VPNSID是指多个VM的VPNSID中与目标代理VPNSID对应的VPNSID。因此,对于任一数据中心路由节点,如果该数据中心路由节点为CSG选择的目标代理VPNSID所指示的数据中心路由节点,则可以通过下述步骤304至步骤306来对接收到的报文进行处理。关于第二转发表的配置过程将在下述实施例中说明,在此先不阐述。
步骤304:对于任一数据中心路由节点,该数据中心路由节点接收CSG发送的报文,该 报文携带目标代理VPNSID。
由于网络中的任一数据中心路由节点均可能接收到CSG发送的报文,因此,对于任一数据中心路由节点,当该数据中心路由节点接收到CSG发送的报文,需判断该报文是否由自身来处理。在一种具体的实现方式中,由于报文携带的目的地址为目标代理VPNSID,因此,该数据中心路由节点可以比对报文中携带的目标代理VPNSID和自身的配置的代理VPNSID是否一致。如果不一致,表明该报文是由其他数据中心路由节点来处理的,此时,则将该报文转发至其他数据中心路由节点来处理。如果一致,表明该报文是由自身来处理,此时,该数据中心路由节点则可以通过下述步骤305和步骤306继续转发该报文。
比如,对于图4所示的转发流程,当数据中心路由节点1接收到该报文时,由于报文携带的代理VPNSID为DE::B100,该代理VPNSID正是自身的代理VPNSID,因此,数据中心路由节点1可以继续通过下述步骤305和步骤306来转发该报文。
当数据中心路由节点2接收到该报文时,由于该报文携带的目标代理VPNSID和自身的代理VPNSID不一致,因此,数据中心路由节点2将该报文继续转发至目标代理VPNSID所指示的数据中心路由节点。
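数据中心路由节点收到报文后对目的地址的上述判断逻辑,可以用下面的Python片段示意(其中packet的字段组织与返回值均为说明用的假设,并非本申请限定的实现):

```python
def handle_at_dc_node(node_proxy_vpnsid: str, packet: dict) -> str:
    """数据中心路由节点对报文目的地址(目标代理VPNSID)的判断示意。

    目的地址与本节点的代理VPNSID一致则本地处理(继续步骤305、306),
    否则将报文继续转发给目标代理VPNSID所指示的数据中心路由节点。
    """
    if packet["da"] == node_proxy_vpnsid:
        return "local"
    return "forward:" + packet["da"]

# 节点1(DE::B100)收到DA=DE::B100的报文,本地处理;
# 节点2(DF::B100)收到同一报文,则继续向目标节点转发。
r1 = handle_at_dc_node("DE::B100", {"da": "DE::B100", "sa": "CSG"})
r2 = handle_at_dc_node("DF::B100", {"da": "DE::B100", "sa": "CSG"})
```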
步骤305:在该报文携带的目标代理VPNSID为该数据中心路由节点的代理VPNSID的情况下,该数据中心路由节点从第二转发表包括的多个VM的VPNSID中选择一个VPNSID,第二转发表包括的多个VM的VPNSID是指与任一数据中心路由节点的代理VPNSID对应的VM的VPNSID。
由于数据中心路由节点本地存储有与自身的代理VPNSID对应的第二转发表,且第二转发表中包括与自身的代理VPNSID对应的多个VM的VPNSID,因此,在报文携带的代理VPNSID为任一数据中心路由节点的代理VPNSID的情况下,该数据中心路由节点可以直接通过多路径哈希算法从第二转发表中选择一个VPNSID,以通过下述步骤306转发该报文。
上述多路径哈希算法同样可以为ECMP算法。此时,第二转发表中的各个VM的VPNSID被选择的概率相同,以实现数据中心路由节点将接收到的报文均匀地分担至各个VM。可以理解,在另外的实现方式下,可以采用不同的哈希算法使第二转发表中的各个VM的VPNSID被选择的概率不相同,具体可以根据负载均衡策略确定具体类型的哈希算法。
比如,对于图4所示的转发流程,假设数据中心路由节点1的第二转发表中包括两个VPNSID,分别为A8:1::B100和A8:1::B101,因此,数据中心路由节点1可以通过ECMP算法从这两个VPNSID中选择一个。
步骤306:该数据中心路由节点将选择的VPNSID作为目的地址转发该报文。
由于该报文最终是需要由VM来处理的,因此,该数据中心路由节点在从第二转发表中选择一个VPNSID之后,便可将选择的VPNSID作为目的地址转发报文,以使选择的VPNSID所指示的VM来处理该报文。
比如,对于图4所示的转发流程,如果数据中心路由节点1从第二转发表中选择的VPNSID为A8:1::B100,此时如图4所示,数据中心路由节点1可以将A8:1::B100作为报文的目的地址(图4中将目的地址标记为DA)进行转发。如果数据中心路由节点1从第二转发表中选择的VPNSID为A8:1::B101,此时如图4所示,数据中心路由节点1可以将A8:1::B101作为报文的目的地址进行转发。需要说明的是,图4中标号为②的两个步骤用于说明前述过程,且标号为②的两个步骤是或者的关系。
此外,由于各个VM是部署在VRF上的,因此,当数据中心路由节点将选择的VPNSID作为目的地址转发该报文时,是由部署该选择的VPNSID所指示的VM的VRF先接收该报文,然后将该报文转发至该选择的VPNSID所指示的VM。如图4中标号为③的步骤用于说明前述过程。
在图3所示的转发报文的过程中,需要在CSG上配置第一转发表,在各个数据中心路由节点中配置第二转发表,第一转发表和第二转发表的具体功能已在上述实施例进行了解释说明,接下来对第一转发表和第二转发表的配置过程进行解释说明。
图5是本申请实施例提供的一种配置第一转发表和第二转发表的方法流程图。如图5所示,该方法包括如下几个步骤:
步骤501:对于任一VRF,该VRF获取为该VRF连接的多个VM中任一VM配置的VPNSID。
其中,为该VRF连接的多个VM中任一VM配置的VPNSID可以由网络控制器来配置,也可以通过管理人员在VRF上直接配置,本申请对此不做具体限定。如果是网络控制器来配置VPNSID,则网络控制器在配置该VRF连接的多个VM中任一VM的VPNSID之后,将配置的VPNSID发布给该VRF,以使该VRF获取到为该VRF连接的多个VM中任一VM配置的VPNSID。
在一种具体的实现方式中,网络控制器或管理人员根据该VRF的定位标识(Locator)为连接的各个VM配置VPNSID。比如,对于图4所示的通信网络,VRF1的定位标识为A8:1::/64,如图4所示,网络控制器或管理人员可以为VRF1连接的两个VM分别配置两个VPNSID,其中,针对图4中从上到下第一个VM配置的VPNSID为A8:1::B100,针对从上到下第二个VM配置的VPNSID为A8:1::B101。
步骤502:对于任一数据中心路由节点,该数据中心路由节点获取为该数据中心路由节点配置的代理VPNSID。
该数据中心路由节点的代理VPNSID可以由网络控制器来配置,也可以由管理人员在数据中心路由节点上配置,本申请对此不做具体限定。如果是网络控制器来配置代理VPNSID,则网络控制器在配置该数据中心路由节点的代理VPNSID之后,将配置的代理VPNSID发布给该数据中心路由节点,以使该数据中心路由节点获取到为其配置的代理VPNSID。
在一种具体的实现方式中,网络控制器或管理人员根据该数据中心路由节点的定位标识(Locator)为该数据中心路由节点配置代理VPNSID。比如,对于图4所示的通信网络,数据中心路由节点1的定位标识为DE::/64,如图4所示,网络控制器或管理人员可以为数据中心路由节点1配置一个代理VPNSID,为DE::B100。按照前述同样的方式,如图4所示,基于数据中心路由节点2的定位标识DF::/64为数据中心路由节点2配置的代理VPNSID为DF::B100。
步骤503:CSG获取与用于执行目标VNF的多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至第一转发表中。
在本申请实施中,CSG可以通过下述两种具体的实现方式来获取与用于执行目标VNF的多个VM的VPNSID对应的代理VPNSID:
第一种实现方式:对于任一VRF,该VRF获取该VRF连接的多个VM中任一VM的VPNSID对应配置的多个代理VPNSID。该VRF向CSG发布第一通告消息,第一通告消息携带该VRF连接的多个VM中任一VM的VPNSID对应的多个代理VPNSID。CSG接收该VRF发送的第一通告消息。CSG根据各个VRF发送的第一通告消息便可获取到多个VM的VPNSID对应的代理VPNSID。
其中,对于任一VM,为该VM的VPNSID对应配置的多个代理VPNSID可以由网络控制器来配置,也可以通过管理人员在VRF上直接配置,本申请对此不做具体限定。如果是网络控制器来配置该VPNSID对应的多个代理VPNSID,则网络控制器在配置完该VM的VPNSID对应的多个代理VPNSID之后,将配置的该VM的VPNSID对应的多个代理VPNSID发布给该VRF,以使该VRF获取到为该VM的VPNSID对应配置的多个代理VPNSID。
如图2所示,VRF与CSG位于不同的域内,因此上述任一VRF向CSG发布第一通告消息可以是通过MP-BGP/EVPN(即边界网关协议(Border Gateway Protocol,BGP)的多协议扩展(Multiprotocol Extensions for BGP,MP-BGP)结合以太网虚拟私有网络(Ethernet Virtual Private Network,EVPN))来发布的。
在上述第一种实现方式中,VRF是主动向CSG上报该VRF连接的多个VM中任一VM的VPNSID对应配置的多个代理VPNSID的。
第二种实现方式:通信网络中的RR本地预先存储有VPNSID和代理VPNSID之间的对应关系,该对应关系包括多个VM的VPNSID和与每个VM的VPNSID对应的多个代理VPNSID,该对应关系的构建过程将在下述详细说明,在此先不展开阐述。对于任一VRF,RR获取该VRF连接的各个VM中每个VM的VPNSID,对于获取的任一VM的VPNSID,RR基于VPNSID和代理VPNSID之间的对应关系,便可确定与该VM的VPNSID对应的多个代理VPNSID。RR在获取到各个VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID之后,便可向CSG发送第二通告消息,第二通告消息携带与多个VRF中每个VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID。CSG接收RR发送的第二通告消息,CSG根据第二通告消息便可获取到这多个代理VPNSID。
上述RR具体为图2中的RR2。此外,上述RR向CSG发送第二通告消息也是通过MP-BGP/EVPN来发布的。
另外,对于任一VRF,RR获取该VRF连接的各个VM中每个VM的VPNSID的实现方式可以为:RR向该VRF发送VPNSID获取请求,该VPNSID获取请求用于指示该VRF将该VRF连接的各个VM中每个VM的VPNSID发送至RR。也即是,在第二种实现方式中,VRF是被动来向RR发送该VRF连接的各个VM的VPNSID的。
另外,在第二种实现方式中,RR本地存储的VPNSID和代理VPNSID之间的对应关系可以由管理人员在RR上直接配置。在一种具体的实现方式中,管理人员可以通过管理系统或命令行在RR上配置该VRF连接的多个VM中任一VM的VPNSID对应的多个代理VPNSID,以使RR获取到该VRF连接的多个VM中任一VM的VPNSID对应的多个代理VPNSID,从而构建VPNSID和代理VPNSID之间的对应关系。
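上述第二种实现方式中,RR根据本地存储的对应关系为各VM的VPNSID查出代理VPNSID并组装第二通告消息的过程,可用如下Python片段示意。其中对应关系的取值沿用图4中的示例,数据结构与函数名为说明用的假设:

```python
# RR本地存储的VPNSID与代理VPNSID之间的对应关系(管理人员静态配置的示意)
sid_to_proxies = {
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
}

def build_second_notification(vm_vpnsids: list) -> dict:
    """RR为从VRF获取到的各VM的VPNSID查出对应的多个代理VPNSID,
    组装成发往CSG的第二通告消息内容。"""
    return {sid: sid_to_proxies[sid] for sid in vm_vpnsids}

# RR从某VRF获取到两个VM的VPNSID后,生成第二通告消息的内容
notification = build_second_notification(["A8:1::B100", "A8:1::B101"])
```

CSG收到该通告后,即可将其中出现的代理VPNSID去重后写入第一转发表。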
步骤504:对于任一数据中心路由节点,该数据中心路由节点获取与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,该数据中心路由节点将获取的多个VM的VPNSID添加至与自身的代理VPNSID对应的第二转发表中。
其中,数据中心路由节点可以通过以下两种具体的实现方式来获取与该数据中心路由节点的代理VPNSID对应的多个VPNSID:
第一种实现方式:对于任一VRF,该VRF获取任一VRF连接的多个VM中任一VM的VPNSID对应的多个代理VPNSID。该VRF向各个数据中心路由节点发布第三通告消息,第三通告消息携带该VRF连接的多个VM中任一VM的VPNSID对应的多个代理VPNSID。对于任一数据中心路由节点,该数据中心路由节点接收该VRF发送的第三通告消息,并根据该数据中心路由节点的代理VPNSID,从第三通告消息中获取与该数据中心路由节点的代理VPNSID对应的VM的VPNSID。当该数据中心路由节点接收到全部的VRF发布的第三通告消息之后,便可根据所有的第三通告消息确定出与该数据中心路由节点的代理VPNSID对应的所有VM的VPNSID。
其中,VRF获取任一VRF连接的多个VM中任一VM的VPNSID对应的多个代理VPNSID可以参考步骤503中第一种实现方式,在此不再详细说明。
基于图2所示的通信网络可知,VRF和数据中心路由节点位于同一域内,因此,上述该VRF向各个数据中心路由节点发布第三通告消息是通过内部网关协议(Interior Gateway Protocol,IGP)发布的。
在上述第一种实现方式中,VRF是主动向数据中心路由节点上报该VRF连接的多个VM中任一VM的VPNSID对应配置的多个代理VPNSID的。
第二种实现方式:对于任一数据中心路由节点,该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID由管理人员在该数据中心路由节点上配置。比如,管理人员可以通过命令行或管理系统直接在该数据中心路由节点上配置该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID。
上述步骤503和步骤504的两种实现方式可以组合使用,当通过步骤503中的第一种实现方式和步骤504的第一种实现方式来配置第一转发表和第二转发表时,这种配置方式还可以称为全动态配置方式。在全动态配置方式中,均是由VRF主动上报VM的VPNSID和对应的代理VPNSID,以使CSG和数据中心路由节点能够获取到VM的VPNSID和对应的代理VPNSID,进而配置各自的转发表。
当通过步骤503中的第一种实现方式和步骤504的第二种实现方式来配置第一转发表和第二转发表时,这种配置方式还可以称为半动态配置方式。在半动态配置方式中,VRF主动向CSG上报VM的VPNSID和对应的代理VPNSID,以使CSG配置第一转发表,由管理人员直接在数据中心路由节点配置代理VPNSID和对应的VM的VPNSID,以使数据中心路由节点生成第二转发表。
当通过步骤503中的第二种实现方式和步骤504的第二种实现方式来配置第一转发表和第二转发表时,这种配置方式还可以称为静态配置方式。在静态配置方式中,VRF不主动上报任何信息,因此需要RR从VRF上获取该VRF连接的各个VM的VPNSID,然后根据本地存储的VPNSID和代理VPNSID之间的对应关系,确定各个VM的VPNSID对应的代理VPNSID,进而通告给CSG,由CSG来生成第一转发表。由于VRF不主动上报任何信息,因此,数据中心路由节点只能通过管理员来人工配置与自身的代理VPNSID对应的VM的VPNSID。
通过上述不同的实现方式,提高了配置第一转发表和第二转发表的灵活性。
图6是本申请实施例提供的一种网络设备的结构示意图,该网络设备600可以为上述图1-5所示实施例中通信网络中任一节点,比如可以为CSG、数据中心路由节点、VRF等。该网络设备600可以为交换机,路由器或者其他转发报文的网络设备。在该实施例中,该网络设备600包括:主控板610、接口板630和接口板640。多个接口板的情况下可以包括交换网板(图中未示出),该交换网板用于完成各接口板(接口板也称为线卡或业务板)之间的数据交换。
主控板610用于完成系统管理、设备维护、协议处理等功能。接口板630和640用于提供各种业务接口(例如,POS接口、GE接口、ATM接口等),并实现报文的转发。主控板610上主要有3类功能单元:系统管理控制单元、系统时钟单元和系统维护单元。主控板610、接口板630以及接口板640之间通过系统总线与系统背板相连实现互通。接口板630上包括一个或多个处理器631。处理器631用于对接口板进行控制管理并与主控板上的中央处理器进行通信,以及用于报文的转发处理。接口板630上的存储器632用于存储转发表项,处理器631通过查找存储器632中存储的转发表项进行报文的转发。
所述接口板630包括一个或多个网络接口633用于接收其他设备发送的报文,并根据处理器631的指示发送报文。具体实现过程可以参考图3所示实施例中的301、303、304、306步骤。这里不再逐一赘述。
所述处理器631用于执行图1-5所示实施例中所描述的通信网络中的任一节点的处理步骤和功能,具体可以参看上述图3所示实施例中的302(作为CSG时的处理)或305步骤(作为数据中心路由节点时的处理),图5所示实施例中的501步骤(作为VRF时的处理),502步骤(作为数据中心路由节点时的处理),503步骤(作为CSG时的处理)以及504步骤(作为数据中心路由节点时的处理)。这里不再逐一赘述。
可以理解,如图6所示,本实施例中包括多个接口板,采用分布式的转发机制,这种机制下,接口板640上的操作与所述接口板630的操作基本相似,为了简洁,不再赘述。此外,可以理解的是,图6中的接口板630中的处理器631和/或641可以是专用硬件或芯片,如网络处理器或者专用集成电路(application specific integrated circuit,ASIC)来实现上述功能,这种实现方式即为通常所说的转发面采用专用硬件或芯片处理的方式。采用网络处理器这一专用硬件或芯片的具体实现方式可以参考下面图7所示的实施例。在另外的实施方式中,所述处理器631和/或641也可以采用通用的处理器,如通用的CPU来实现以上描述的功能。
此外,需要说明的是,主控板可能有一块或多块,有多块的时候可以包括主用主控板和备用主控板。接口板可能有一块或多块,该设备的数据处理能力越强,提供的接口板越多。多块接口板的情况下,该多块接口板之间可以通过一块或多块交换网板通信,有多块的时候可以共同实现负荷分担冗余备份。在集中式转发架构下,该设备可以不需要交换网板,接口板承担整个系统的业务数据的处理功能。在分布式转发架构下,该设备包括多块接口板,可以通过交换网板实现多块接口板之间的数据交换,提供大容量的数据交换和处理能力。所以,分布式架构的网络设备的数据接入和处理能力要大于集中式架构的设备。具体采用哪种架构,取决于具体的组网部署场景,此处不做任何限定。
具体的实施例中,存储器632可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,随机存取存储器(random access memory,RAM) 或者可存储信息和指令的其它类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器632可以是独立存在,通过通信总线与处理器631相连接。存储器632也可以和处理器631集成在一起。
其中,存储器632用于存储程序代码,并由处理器631来控制执行,以执行上述实施例所提供的报文转发方法。处理器631用于执行存储器632中存储的程序代码。程序代码中可以包括一个或多个软件模块。这一个或多个软件模块可以为下面图9、图10任一实施例中提供的软件模块。
具体实施例中,所述网络接口633,可以是使用任何收发器一类的装置,用于与其它设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。
图7是本申请实施例提供的另一种网络设备的结构示意图,该网络设备700可以为上述图1-5所示实施例中的通信网络中任一节点,比如可以为CSG、数据中心路由节点、VRF等。该网络设备700可以为交换机,路由器或者其他转发报文的网络设备。在该实施例中,该网络设备700包括:主控板710、接口板730、交换网板720和接口板740。主控板710用于完成系统管理、设备维护、协议处理等功能。交换网板720用于完成各接口板(接口板也称为线卡或业务板)之间的数据交换。接口板730和740用于提供各种业务接口(例如,POS接口、GE接口、ATM接口等),并实现数据包的转发。控制平面由主控板710的各管控单元及接口板730和740上的管控单元等构成。主控板710上主要有3类功能单元:系统管理控制单元、系统时钟单元和系统维护单元。主控板710、接口板730和740,以及交换网板720之间通过系统总线与系统背板相连实现互通。接口板730上的中央处理器731用于对接口板进行控制管理并与主控板上的中央处理器进行通信。接口板730上的转发表项存储器734用于存储转发表项,网络处理器732通过查找转发表项存储器734中存储的转发表项进行报文的转发。
所述接口板730的物理接口卡733用于接收报文。具体实现过程可以参考图3所示实施例中的301、304步骤。这里不再逐一赘述。
所述网络处理器732用于执行图1-5所示实施例中所描述的任一节点的处理步骤和功能,具体可以参看上述图3所示实施例中的302(作为CSG时的处理)或305步骤(作为数据中心路由节点时的处理),图5所示实施例中的501步骤(作为VRF时的处理),502步骤(作为数据中心路由节点时的处理),503步骤(作为CSG时的处理)以及504步骤(作为数据中心路由节点时的处理)。这里不再逐一赘述。
然后,处理之后报文通过所述物理接口卡733向其他设备发送。具体实现过程可以参考图3所示实施例中的303、306步骤。这里不再逐一赘述。
可以理解,如图7所示,本实施例中包括多个接口板,采用分布式的转发机制,这种机制下,接口板740上的操作与所述接口板730的操作基本相似,为了简洁,不再赘述。此外,如上所述,图7中的网络处理器732以及742的功能可以用专用集成电路(application specific integrated circuit,ASIC)替换来实现。
此外,需要说明的是,主控板可能有一块或多块,有多块的时候可以包括主用主控板和备用主控板。接口板可能有一块或多块,该设备的数据处理能力越强,提供的接口板越多。接口板上的物理接口卡也可以有一块或多块。交换网板可能没有,也可能有一块或多块,有多块的时候可以共同实现负荷分担冗余备份。在集中式转发架构下,该设备可以不需要交换网板,接口板承担整个系统的业务数据的处理功能。在分布式转发架构下,该设备可以有至少一块交换网板,通过交换网板实现多块接口板之间的数据交换,提供大容量的数据交换和处理能力。所以,分布式架构的网络设备的数据接入和处理能力要大于集中式架构的设备。具体采用哪种架构,取决于具体的组网部署场景,此处不做任何限定。
图8是本申请实施例提供的一种上述图7所示网络设备中的接口板800的结构示意图,该接口板800所在的网络设备可以为上述图1-5所示实施例中通信网络中任一节点,比如可以为CSG、数据中心路由节点、VRF等。该接口板800可以包括物理接口卡(physical interface card,PIC)830,网络处理器(network processor,NP)810,以及流量管理模块(traffic management)820。
其中,PIC(physical interface card)即物理接口卡,用于实现物理层的对接功能,原始的流量由此进入网络设备的接口板,处理后的报文从该PIC卡发出。
网络处理器NP 810用于实现报文的转发处理。具体而言,上行报文的处理包括:报文入接口的处理,转发表查找(如上述实施例中涉及第一转发表或第二转发表的相关内容);下行报文的处理:转发表查找(如上述实施例中涉及第一转发表或第二转发表的相关内容)等等。
流量管理TM 820,用于实现QoS、线速转发、大容量缓存,队列管理等功能。具体而言,上行流量管理包括:上行Qos处理(如拥塞管理和队列调度等)以及切片处理;下行流量管理包括:组包处理,多播复制,以及下行Qos处理(如拥塞管理和队列调度等)。
可以理解的是,若网络设备有多个接口板800的情况下,多个接口板800之间可以通过交换网840通信。
需要说明的是,图8仅示出了NP内部的示意性处理流程或模块,具体实现中各模块的处理顺序不限于此,而且实际应用中可以根据需要部署其他模块或者处理流程。本申请实施例对此不做限制。
图9是本申请实施例提供的一种CSG的结构示意图。其中,该通信网络还包括多个数据中心路由节点、多个VRF、以及用于执行目标VNF的多个VM,这多个VRF中每个VRF上连接有前述多个VM中的一个或多个。前述多个VM中每个VM配置有一个VPNSID,前述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
如图9所示,该CSG 900包括:
接收模块901,用于接收报文,该报文中携带目标VNF的标识。具体实现方式参考图3实施例中的步骤301。
选择模块902,用于从第一转发表包括的多个代理VPNSID中选择一个代理VPNSID作 为目标代理VPNSID。具体实现方式参考图3实施例中的步骤302。
转发模块903,用于将目标代理VPNSID作为目的地址转发报文,以将报文转发至目标代理VPNSID所指示的数据中心路由节点,用于指示目标代理VPNSID所指示的数据中心路由节点根据第二转发表包括的多个VPNSID转发报文。其中,第一转发表为目标VNF的标识对应的转发表,第一转发表包括的多个代理VPNSID是指与多个VM的VPNSID对应配置的代理VPNSID。第二转发表包括的多个VPNSID是指多个VM的VPNSID中与目标代理VPNSID对应的VM的VPNSID。具体实现方式参考图3实施例中的步骤303。
可选地,CSG还包括添加模块,用于获取与多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至第一转发表中。
可选地,上述添加模块,具体用于:接收多个VRF中任一VRF发送的第一通告消息,该第一通告消息携带该任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,该通信网络还包括RR。此时,上述添加模块,具体用于:接收RR发送的第二通告消息,第二通告消息携带与多个VM的VPNSID对应的代理VPNSID。
第五方面提供的CSG包括的各个模块的技术效果可以参考第一方面提供的报文转发方法,在此不再详细阐述。
在本申请实施例中,为了避免受到CSG的最大负载分担路数的限制,可以为每个数据中心路由节点配置一个代理VPNSID。针对任一VM的VPNSID,该VM的VPNSID对应配置多个代理VPNSID。如此,在CSG的本地转发表中可以采用代理VPNSID来替代原来的VM的VPNSID,以使CSG在负载分担时只需负责将负载分担至各个数据中心路由节点即可,由各个数据中心路由节点来完成实际的负载分担。而数据中心路由节点的最大负载分担路数可以高达128,因此,通过本申请实施例提供的报文转发方法,相当于增大了CSG最终进行负载分担的路数,从而提高了负载分担的效率。
需要说明的是:上述实施例提供的CSG在进行报文转发时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的CSG与报文转发方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图10是本申请实施例提供的一种通信网络中的多个数据中心路由节点中的任一数据中心路由节点的结构示意图。该通信网络还包括CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有多个VM中的一个或多个。其中,多个VM中每个VM配置有一个VPNSID,多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
如图10所示,该数据中心路由节点1000包括:
接收模块1001,用于接收CSG发送的报文,报文携带目标代理VPNSID。具体实现方式参考图3实施例中的步骤304。
选择模块1002,用于在报文携带的目标代理VPNSID为该任一数据中心路由节点的代理VPNSID的情况下,从第二转发表包括的多个VM的VPNSID中选择一个VPNSID。具体实现方式参考图3实施例中的步骤305。
转发模块1003,用于将选择的VPNSID作为目的地址转发报文。其中,第二转发表包括的多个VPNSID是指与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID。具体实现方式参考图3实施例中的步骤306。
可选地,该数据中心路由节点还包括:
添加模块,用于获取与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,将获取的VM的VPNSID添加至第二转发表中。
可选地,前述添加模块,具体用于:接收多个VRF中任一VRF发送的第三通告消息,第三通告消息携带任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID;根据该数据中心路由节点的代理VPNSID,从第三通告消息中获取与该数据中心路由节点的代理VPNSID对应的VM的VPNSID。
可选地,该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID由管理人员在该数据中心路由节点上配置。
在本申请实施例中,为了避免受到CSG的最大负载分担路数的限制,可以为每个数据中心路由节点配置一个代理VPNSID。针对任一VM的VPNSID,该VM的VPNSID对应配置多个代理VPNSID。如此,在CSG的本地转发表中可以采用代理VPNSID来替代原来的VM的VPNSID,以使CSG在负载分担时只需负责将负载分担至各个数据中心路由节点即可,由各个数据中心路由节点来完成实际的负载分担。而数据中心路由节点的最大负载分担路数可以高达128,因此,通过本申请实施例提供的报文转发方法,相当于增大了CSG最终进行负载分担的路数,从而提高了负载分担的效率。
需要说明的是:上述实施例提供的数据中心路由节点在进行报文转发时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的数据中心路由节点与报文转发方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
此外,本申请实施例还提供了一种通信网络中的多个VRF中的任一VRF,该通信网络还包括CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,多个VRF中每个VRF上连接有多个VM中的一个或多个。其中,多个VM中每个VM配置有一个VPNSID,多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
该VRF包括:
获取模块,用于获取该VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
发布模块,用于向CSG发布第一通告消息,第一通告消息携带该VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,发布模块,还用于向多个数据中心路由节点发布第三通告消息,第三通告消息携带该VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
在本申请实施例中,VRF可以主动向CSG上报自身连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID,以便于CSG构建第一转发表,提高了CSG构建第一转发表的效率。
需要说明的是:上述实施例提供的VRF在进行报文转发时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的VRF与报文转发方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
此外,本申请实施例还提供了一种通信网络中的RR。该通信网络还包括CSG、多个数据中心路由节点、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有多个VM中的一个或多个。多个VM中每个VM配置有一个VPNSID,多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID。
该RR包括:
获取模块,用于获取多个VRF中任一VRF连接的多个VM中每个VM的VPNSID;对于获取的任一VM的VPNSID,RR基于本地存储的VPNSID和代理VPNSID之间的对应关系确定该VM的VPNSID对应的代理VPNSID,得到与任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID;
发布模块,用于向CSG发送第二通告消息,第二通告消息携带与多个VRF中每个VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,RR本地存储的VPNSID和代理VPNSID之间的对应关系由管理人员在该RR上配置,从而提高了CSG构建第一转发表的灵活性。
在由RR来向CSG上报多个VM的VPNSID对应的代理VPNSID,由CSG构建第一转发表的情况下,RR需要先获取多个VM中每个VM的VPNSID对应的多个代理VPNSID,然后向CSG发送第二通告消息,以便于CSG构建第一转发表,提高了CSG构建第一转发表的灵活性。
需要说明的是:上述实施例提供的RR在进行报文转发时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的RR与报文转发方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
此外,本申请实施例还提供了一种报文转发系统,该系统包括CSG、多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,该多个VRF中每个VRF上连接有该多个VM中的一个或多个,该多个VM中每个VM配置有一个虚拟私有网络段标识VPNSID,该多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
该CSG,用于获取与该多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至第一转发表中;
该多个数据中心路由节点中任一数据中心路由节点,用于获取与该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,将获取的VM的VPNSID添加至第二转发表中。
可选地,CSG具体用于:接收该多个VRF中任一VRF发送的第一通告消息,该第一通 告消息携带该任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID。
可选地,该系统还包括RR,此时CSG具体用于:接收该RR发送的第二通告消息,该第二通告消息携带与该多个VM的VPNSID对应的代理VPNSID。
可选地,该数据中心路由节点,用于接收该多个VRF中任一VRF发送的第三通告消息,该第三通告消息携带该任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID;根据该数据中心路由节点的代理VPNSID,从该第三通告消息中获取与该数据中心路由节点的代理VPNSID对应的VM的VPNSID。
可选地,该数据中心路由节点的代理VPNSID对应的多个VM的VPNSID由管理人员在该数据中心路由节点上配置。
关于上述报文转发系统中各个节点的功能已经在前述实施例中进行了详细说明,在此不再阐述。
图11是本申请实施例提供的一种网络设备1100的结构示意图。图1至图5实施例中通信网络中任一节点,比如CSG、数据中心路由节点、VRF等均可以通过图11所示的网络设备1100来实现,此时,该网络设备1100可以为交换机,路由器或者其他转发报文的网络设备。另外,图1至图5实施例中的网络控制器同样可以通过图11所示的网络设备1100来实现,此时该网络设备1100的具体功能可以参考前述图1至图5任一实施例中的网络控制器的具体实现方式,在此不再赘述。参见图11,该设备包括至少一个处理器1101,通信总线1102、存储器1103以及至少一个通信接口1104。
处理器1101可以是一个通用中央处理器(central processing unit,CPU)、特定应用集成电路(application-specific integrated circuit,ASIC)或一个或多个用于控制本申请方案程序执行的集成电路。
通信总线1102可包括一通路,在上述组件之间传送信息。
存储器1103可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其它类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器1103可以是独立存在,通过通信总线1102与处理器1101相连接。存储器1103也可以和处理器1101集成在一起。
其中,存储器1103用于存储程序代码,并由处理器1101来控制执行,以执行上述任一实施例所提供的报文转发方法。处理器1101用于执行存储器1103中存储的程序代码。程序代码中可以包括一个或多个软件模块。图1至图5所提供的实施例中的通信网络中的任一节点可以通过处理器1101以及存储器1103中的程序代码中的一个或多个软件模块,来实现上述实施例所描述的相应功能。这一个或多个软件模块可以为图9和图10任一实施例中提供的软件模块。
通信接口1104,使用任何收发器一类的装置,用于与其它设备或通信网络通信,如以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。
在具体实现中,作为一种实施例,网络设备可以包括多个处理器,例如图11中所示的处理器1101和处理器1105。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意结合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如:同轴电缆、光纤、数据用户线(digital subscriber line,DSL))或无线(例如:红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如:软盘、硬盘、磁带)、光介质(例如:数字通用光盘(digital versatile disc,DVD))、或者半导体介质(例如:固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述为本申请提供的实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (21)

  1. 一种报文转发方法,其特征在于,应用于通信网络中的基站业务网关CSG,所述通信网络还包括多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个;所述多个VM中每个VM配置有一个虚拟私有网络段标识VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    所述方法包括:
    所述CSG接收报文,所述报文中携带所述目标VNF的标识;
    所述CSG从第一转发表包括的多个代理VPNSID中选择一个代理VPNSID作为目标代理VPNSID,所述第一转发表为所述目标VNF的标识对应的转发表,所述第一转发表包括的多个代理VPNSID是指与所述多个VM的VPNSID对应配置的代理VPNSID;
    所述CSG将所述目标代理VPNSID作为目的地址转发所述报文,以将所述报文转发至所述目标代理VPNSID所指示的数据中心路由节点,用于指示所述目标代理VPNSID所指示的数据中心路由节点根据第二转发表包括的多个VPNSID转发所述报文,所述第二转发表包括的多个VPNSID是指所述多个VM的VPNSID中与所述目标代理VPNSID对应的VM的VPNSID。
  2. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    所述CSG获取与所述多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至所述第一转发表中。
  3. 如权利要求2所述的方法,其特征在于,所述CSG获取与所述多个VM的VPNSID对应的代理VPNSID,包括:
    所述CSG接收所述多个VRF中任一VRF发送的第一通告消息,所述第一通告消息携带所述任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID。
  4. 如权利要求2所述的方法,其特征在于,所述通信网络还包括路由反射器RR;所述CSG获取与所述多个VM的VPNSID对应的代理VPNSID,包括:
    所述CSG接收所述RR发送的第二通告消息,所述第二通告消息携带与所述多个VM的VPNSID对应的代理VPNSID。
  5. 一种报文转发方法,其特征在于,应用于通信网络中的多个数据中心路由节点中的任一数据中心路由节点,所述通信网络还包括CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个;所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    所述方法包括:
    所述任一数据中心路由节点接收所述CSG发送的报文,所述报文携带目标代理VPNSID;
    在所述报文携带的目标代理VPNSID为所述任一数据中心路由节点的代理VPNSID的情况下,所述任一数据中心路由节点从第二转发表包括的多个VM的VPNSID中选择一个VPNSID,所述第二转发表包括的多个VPNSID是指与所述任一数据中心路由节点的代理VPNSID对应的多个VM的VPNSID;
    所述任一数据中心路由节点将选择的VPNSID作为目的地址转发所述报文。
  6. 如权利要求5所述的方法,其特征在于,所述方法还包括:
    所述任一数据中心路由节点获取与所述任一数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,将获取的VM的VPNSID添加至所述第二转发表中。
  7. 如权利要求6所述的方法,其特征在于,所述任一数据中心路由节点获取与所述任一数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,包括:
    所述任一数据中心路由节点接收所述多个VRF中任一VRF发送的第三通告消息,所述第三通告消息携带所述任一VRF连接的各个VM中每个VM的VPNSID对应的多个代理VPNSID;
    所述任一数据中心路由节点根据所述任一数据中心路由节点的代理VPNSID,从所述第三通告消息中获取与所述任一数据中心路由节点的代理VPNSID对应的VM的VPNSID。
  8. 如权利要求6所述的方法,其特征在于,所述任一数据中心路由节点的代理VPNSID对应的多个VM的VPNSID由管理人员在所述任一数据中心路由节点上配置。
  9. 一种报文转发方法,其特征在于,应用于通信网络中的多个VRF中的任一VRF,所述通信网络还包括CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个;所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    所述方法包括:
    所述任一VRF获取所述任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID;
    所述任一VRF向所述CSG发布第一通告消息,所述第一通告消息携带所述任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
  10. 如权利要求9所述的方法,其特征在于,所述任一VRF获取所述任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID之后,还包括:
    所述任一VRF向所述多个数据中心路由节点发布第三通告消息,所述第三通告消息携带所述任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
  11. 一种报文转发方法,其特征在于,应用于通信网络中的RR,所述通信网络还包括CSG、多个数据中心路由节点、多个VRF、用于执行目标VNF的多个VM,每个VRF上连 接有所述多个VM中的一个或多个;所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    所述方法包括:
    所述RR获取所述多个VRF中任一VRF连接的多个VM中每个VM的VPNSID;
    对于获取的任一VM的VPNSID,所述RR基于本地存储的VPNSID和代理VPNSID之间的对应关系确定所述任一VM的VPNSID对应的代理VPNSID,得到与所述任一VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID;
    所述RR向所述CSG发送第二通告消息,所述第二通告消息携带与所述多个VRF中每个VRF连接的多个VM中每个VM的VPNSID对应的多个代理VPNSID。
  12. 如权利要求11所述的方法,其特征在于,所述VPNSID和代理VPNSID之间的对应关系由管理人员在所述RR上配置。
  13. 一种通信网络中的CSG,所述通信网络还包括多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述CSG包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器用于执行所述存储器中存储的程序以执行权利要求1-4任一项所述的方法。
  14. 一种通信网络中的数据中心路由节点,所述通信网络包括多个数据中心路由节点、CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述多个数据中心路由节点中的任一数据中心路由节点包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器用于执行所述存储器中存储的程序以执行权利要求5-8任一项所述的方法。
  15. 一种通信网络中的VRF,所述通信网络包括多个VRF、CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述多个VRF中的任一VRF包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器用于执行所述存储器中存储的程序以执行权利要求9-10任一项所述的方法。
  16. 一种通信网络中的RR,所述通信网络还包括CSG、多个数据中心路由节点、多个 VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述RR包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器用于执行所述存储器中存储的程序以执行权利要求11-12任一项所述的方法。
  17. 一种芯片,所述芯片设置在通信网络中的CSG中,所述通信网络还包括多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述芯片包括处理器和接口电路;
    所述接口电路用于接收指令并传输至所述处理器;
    所述处理器用于执行权利要求1-4任一项所述的方法。
  18. 一种芯片,所述芯片设置在通信网络包括的多个数据中心路由节点中的任一数据中心路由节点中,所述通信网络还包括CSG、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述芯片包括处理器和接口电路;
    所述接口电路用于接收指令并传输至所述处理器;
    所述处理器用于执行权利要求5-8任一项所述的方法。
  19. 一种芯片,所述芯片设置在通信网络包括的多个VRF中的任一VRF中,所述通信网络还包括CSG、多个数据中心路由节点、用于执行目标VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述芯片包括处理器和接口电路;
    所述接口电路用于接收指令并传输至所述处理器;
    所述处理器用于执行权利要求9-10任一项所述的方法。
  20. 一种芯片,所述芯片设置在通信网络的RR中,所述通信网络还包括CSG、多个数据中心路由节点、多个VRF、用于执行目标VNF的多个VM,每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    其特征在于,所述芯片包括处理器和接口电路;
    所述接口电路用于接收指令并传输至所述处理器;
    所述处理器用于执行权利要求11-12任一项所述的方法。
  21. 一种报文转发系统,其特征在于,所述系统包括CSG、多个数据中心路由节点、多个虚拟路由转发VRF、以及用于执行目标虚拟网络功能VNF的多个VM,所述多个VRF中每个VRF上连接有所述多个VM中的一个或多个,所述多个VM中每个VM配置有一个虚拟私有网络段标识VPNSID,所述多个数据中心路由节点中每个数据中心路由节点配置有一个代理VPNSID;
    所述CSG,用于获取与所述多个VM的VPNSID对应的代理VPNSID,将获取的代理VPNSID添加至第一转发表中;
    所述多个数据中心路由节点中任一数据中心路由节点,用于获取与所述任一数据中心路由节点的代理VPNSID对应的多个VM的VPNSID,将获取的VM的VPNSID添加至第二转发表中。
PCT/CN2020/124463 2019-10-30 2020-10-28 报文转发方法、装置及计算机存储介质 WO2021083228A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911046986.5 2019-10-30
CN201911046986.5A CN112751766B (zh) 2019-10-30 2019-10-30 报文转发方法和系统、相关设备和芯片

Publications (1)

Publication Number Publication Date
WO2021083228A1 true WO2021083228A1 (zh) 2021-05-06

Family

ID=75640813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124463 WO2021083228A1 (zh) 2019-10-30 2020-10-28 报文转发方法、装置及计算机存储介质

Country Status (2)

Country Link
CN (1) CN112751766B (zh)
WO (1) WO2021083228A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334045B (zh) * 2022-08-12 2023-12-19 Maipu Communication Technology Co., Ltd. Packet forwarding method and apparatus, gateway device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106034077A (zh) * 2015-03-18 2016-10-19 Huawei Technologies Co., Ltd. Dynamic route configuration method, apparatus, and system
CN106101023A (zh) * 2016-05-24 2016-11-09 Huawei Technologies Co., Ltd. VPLS packet processing method and device
CN106487695A (zh) * 2015-08-25 2017-03-08 Huawei Technologies Co., Ltd. Data transmission method, virtual network management apparatus, and data transmission system
US20170104679A1 (en) * 2015-10-09 2017-04-13 Futurewei Technologies, Inc. Service Function Bundling for Service Function Chains

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9813488B2 (en) * 2014-06-25 2017-11-07 Comcast Cable Communications, Llc Detecting virtual private network usage
CN107547339B (zh) * 2017-06-14 2020-12-08 New H3C Technologies Co., Ltd. Gateway media access control (MAC) address feedback method and apparatus
CN111901235A (zh) * 2017-12-01 2020-11-06 Huawei Technologies Co., Ltd. Method and apparatus for processing a route, and method and apparatus for data transmission
CN108718278B (zh) * 2018-04-13 2021-04-27 New H3C Technologies Co., Ltd. Packet transmission method and apparatus

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN106034077A (zh) * 2015-03-18 2016-10-19 Huawei Technologies Co., Ltd. Dynamic route configuration method, apparatus, and system
CN106487695A (zh) * 2015-08-25 2017-03-08 Huawei Technologies Co., Ltd. Data transmission method, virtual network management apparatus, and data transmission system
US20170104679A1 (en) * 2015-10-09 2017-04-13 Futurewei Technologies, Inc. Service Function Bundling for Service Function Chains
CN106101023A (zh) * 2016-05-24 2016-11-09 Huawei Technologies Co., Ltd. VPLS packet processing method and device

Also Published As

Publication number Publication date
CN112751766B (zh) 2023-07-11
CN112751766A (zh) 2021-05-04

Similar Documents

Publication Publication Date Title
JP7417825B2 (ja) スライスベースルーティング
US11861419B2 (en) Asynchronous object manager in a network routing environment
US10182496B2 (en) Spanning tree protocol optimization
US10237179B2 (en) Systems and methods of inter data center out-bound traffic management
EP3399703B1 (en) Method for implementing load balancing, apparatus, and network system
US20210289436A1 (en) Data Processing Method, Controller, and Forwarding Device
US20140149549A1 (en) Distributed cluster processing system and packet processing method thereof
EP2892196B1 (en) Method, network node and system for implementing point-to-multipoint multicast
WO2022048418A1 (zh) Packet forwarding method, device, and system
WO2021083228A1 (zh) Packet forwarding method and apparatus, and computer storage medium
JP7127537B2 (ja) トランスポートネットワーク制御装置、通信システム、転送ノードの制御方法及びプログラム
JPWO2019240158A1 (ja) 通信システム及び通信方法
WO2022012287A1 (zh) Route optimization method, physical network device, and computer-readable storage medium
WO2021082568A1 (zh) Service packet forwarding method and apparatus, and computer storage medium
US10320667B2 (en) Notification method and device and acquisition device for MAC address of ESADI
CN113595915A (zh) Packet forwarding method and related device
WO2022037330A1 (zh) Method, apparatus, and network device for transmitting a virtual private network segment identifier (VPN SID)
WO2023050818A1 (zh) Data forwarding method, system, electronic device, and storage medium
US20240056359A1 (en) Automated Scaling Of Network Topologies Using Unique Identifiers
WO2019061520A1 (zh) Path switching method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20882731

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20882731

Country of ref document: EP

Kind code of ref document: A1