CN112751766B - Message forwarding method and system, related equipment and chip

Info

Publication number: CN112751766B
Application number: CN201911046986.5A
Authority: CN (China)
Other versions: CN112751766A (Chinese)
Prior art keywords: VPNSID, data center, VMs, proxy
Legal status: Active
Inventor: Yan Zhaoyang (闫朝阳)
Assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201911046986.5A
PCT filing: PCT/CN2020/124463 (WO2021083228A1)
Publication of CN112751766A (application)
Publication of CN112751766B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 45/74: Address processing for routing
    • H04L 45/24: Multipath
    • H04L 45/38: Flow based routing

Abstract

The application discloses a message forwarding method and system, related equipment and a chip, belonging to the technical field of network function virtualization. In the method, a CSG receives a message and selects one proxy VPNSID from a plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID, so as to forward the message to the data center routing node indicated by the target proxy VPNSID. That is, during load sharing the CSG is responsible only for sharing load across the data center routing nodes, and each data center routing node completes the actual load sharing. Because the number of load-sharing routes on a data center routing node can be as high as 128, the message forwarding method provided in the embodiments of this application effectively increases the number of routes over which the CSG ultimately shares load, improving load-sharing efficiency.

Description

Message forwarding method and system, related equipment and chip
Technical Field
The present disclosure relates to the field of network function virtualization technologies, and in particular, to a method and system for forwarding a message, and related devices and chips.
Background
In the regional data center (RDC) technology of 5G, one virtual network function (VNF) may be deployed on multiple virtual machines (VMs), each of which can independently execute the VNF so as to implement load sharing of the VNF. Therefore, when the base station service gateway (cell site service gateway, CSG) receives a message carrying an identifier of the VNF, the CSG needs to forward the message to one of the multiple virtual machines, and that virtual machine executes the VNF based on the message.
In the related art, for any VNF, the CSG obtains in advance the virtual private network segment identifier (VPNSID) of each of the multiple virtual machines deployed for the VNF, thereby obtaining multiple VPNSIDs, and also obtains a private network route of the VNF, which is used to uniquely identify the VNF. The CSG establishes a correspondence between the multiple VPNSIDs and the private network route of the VNF. When the CSG receives a message carrying the private network route of the VNF, it maps the message to one of the multiple VPNSIDs through a multi-path hash algorithm according to the correspondence, and then forwards the message with that VPNSID as the destination address, so that the message is forwarded to the virtual machine indicated by the VPNSID.
However, in this message forwarding method, because the number of load-sharing paths that the CSG can currently support is 8, the correspondence includes at most 8 VPNSIDs, which limits the efficiency of load sharing.
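To make this limitation concrete, the following is a minimal sketch of the related-art single-level selection, assuming a dictionary-based correspondence and a SHA-256 flow hash; the route name, VPNSID values, and helper names are illustrative and not taken from the patent.

```python
import hashlib

MAX_CSG_PATHS = 8  # the CSG load-sharing limit described above

# Hypothetical correspondence: private network route of a VNF -> VM VPNSIDs.
# However many VMs exist, at most 8 VPNSIDs can take effect.
vnf_route_to_vpnsids = {
    "vnf-route-1": ["A8:1::B100", "A8:1::B101", "A8:2::B100", "A8:2::B101"],
}

def select_vpnsid(vnf_route: str, flow_key: bytes) -> str:
    """Map a flow to one VM VPNSID with a multi-path hash, as in the related art."""
    vpnsids = vnf_route_to_vpnsids[vnf_route][:MAX_CSG_PATHS]
    index = int.from_bytes(hashlib.sha256(flow_key).digest()[:4], "big") % len(vpnsids)
    return vpnsids[index]

print(select_vpnsid("vnf-route-1", b"src=10.0.0.1,dst=10.0.0.2"))
```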
Disclosure of Invention
The application provides a message forwarding method and system, related equipment and a chip, which can improve the load sharing efficiency. The technical scheme is as follows:
In a first aspect, a message forwarding method is provided, where the method is applied to a CSG in a communication network. The communication network further comprises a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each VRF. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
In the method, the CSG receives a message, where the message carries an identifier of the target VNF; the CSG selects one proxy VPNSID from a plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID, and forwards the message with the target proxy VPNSID as the destination address, so that the message reaches the data center routing node indicated by the target proxy VPNSID, which in turn forwards the message according to a plurality of VPNSIDs included in a second forwarding table. The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs. The plurality of VPNSIDs included in the second forwarding table are, among the VPNSIDs of the plurality of VMs, the VPNSIDs of the VMs corresponding to the target proxy VPNSID.
In this embodiment of the present application, to avoid being limited by the maximum load-sharing number of the CSG, one proxy VPNSID may be configured for each data center routing node, so that the VPNSID of any VM corresponds to multiple proxy VPNSIDs. In this way, proxy VPNSIDs can replace the VPNSIDs of the VMs in the local forwarding table of the CSG, so that during load sharing the CSG is responsible only for sharing load across the data center routing nodes, and each data center routing node completes the actual load sharing. Because the number of load-sharing routes on a data center routing node can be as high as 128, the message forwarding method provided in this embodiment effectively increases the number of routes over which the CSG ultimately shares load, improving load-sharing efficiency.
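The two-level load sharing described above can be sketched as follows; the table contents, flow key, and helper names are illustrative assumptions rather than the patent's actual data structures.

```python
import hashlib

def pick(entries, flow_key):
    """ECMP-style hash selection: every entry is equally likely across flows."""
    i = int.from_bytes(hashlib.sha256(flow_key).digest()[:4], "big") % len(entries)
    return entries[i]

# First forwarding table on the CSG: VNF identifier -> proxy VPNSIDs (one per
# data center routing node), replacing the per-VM VPNSIDs of the related art.
first_forwarding_table = {"target-vnf": ["DE::B100", "DF::B100"]}

# Second forwarding table on each data center routing node:
# own proxy VPNSID -> VPNSIDs of the VMs behind it (up to 128 routes).
second_forwarding_table = {
    "DE::B100": ["A8:1::B100", "A8:1::B101"],
    "DF::B100": ["A8:2::B100", "A8:2::B101"],
}

flow = b"flow-5-tuple"
target_proxy = pick(first_forwarding_table["target-vnf"], flow)  # CSG level
vm_vpnsid = pick(second_forwarding_table[target_proxy], flow)    # node level
print(target_proxy, "->", vm_vpnsid)
```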
Optionally, in the method, the CSG acquires the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs and adds the acquired proxy VPNSIDs to the first forwarding table.
In this embodiment of the present application, proxy VPNSIDs replace the VPNSIDs of the VMs in the local forwarding table of the CSG. Therefore, before forwarding a message, the CSG needs to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs so as to construct the first forwarding table provided in this embodiment, improving the efficiency of subsequent load sharing.
Optionally, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs as follows: the CSG receives a first notification message sent by any one of the plurality of VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
In one implementation, the VRF may actively report to the CSG the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF, so that the CSG builds the first forwarding table, improving the efficiency with which the CSG builds the first forwarding table.
Optionally, the communication network further comprises an RR. In this case, the CSG obtains the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs as follows: the CSG receives a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs.
In another implementation, the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs to the CSG, so that the CSG builds the first forwarding table, improving the flexibility with which the CSG builds the first forwarding table.
In a second aspect, a message forwarding method is provided, where the method is applied to any one of a plurality of data center routing nodes in a communication network. The communication network further includes a CSG, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each VRF. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
In the method, any one of the data center routing nodes receives a message sent by the CSG, where the message carries a target proxy VPNSID. When the target proxy VPNSID carried in the message is the proxy VPNSID of this data center routing node, the node selects one VPNSID from the VPNSIDs of a plurality of VMs included in a second forwarding table and forwards the message with the selected VPNSID as the destination address. The plurality of VPNSIDs included in the second forwarding table are the VPNSIDs of the VMs corresponding to the proxy VPNSID of this data center routing node.
In this embodiment of the present application, proxy VPNSIDs replace the VPNSIDs of the VMs in the local forwarding table of the CSG, so that during load sharing the CSG is responsible only for sharing load across the data center routing nodes, and each data center routing node completes the actual load sharing. Therefore, when a data center routing node receives the message, it needs to forward the message to one of the plurality of VMs according to the second forwarding table to realize load sharing. Because the number of load-sharing routes on a data center routing node can be as high as 128, the message forwarding method provided in this embodiment effectively increases the number of routes over which the CSG ultimately shares load, improving load-sharing efficiency.
Optionally, in the method, the data center routing node acquires the VPNSIDs of the plurality of VMs corresponding to its own proxy VPNSID and adds the acquired VPNSIDs to the second forwarding table.
Because in this embodiment the actual load sharing is completed by each data center routing node, the node needs to acquire the VPNSIDs of the plurality of VMs corresponding to its own proxy VPNSID before forwarding messages, so as to construct the second forwarding table provided in this embodiment, improving the efficiency of subsequent load sharing.
Optionally, the data center routing node obtains the VPNSIDs of the plurality of VMs corresponding to its own proxy VPNSID as follows: the node receives a third notification message sent by any one of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; the node then obtains, from the third notification message and according to its own proxy VPNSID, the VPNSIDs of the VMs corresponding to its own proxy VPNSID.
In one implementation, the VRF may actively report to the data center routing node the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF, so that the data center routing node builds the second forwarding table, improving the efficiency with which the node builds the second forwarding table.
Optionally, the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node are configured on the node by an administrator.
In another implementation, the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node may be configured directly in a manual manner, improving the flexibility with which the node builds the second forwarding table.
In a third aspect, a message forwarding method is provided, where the method is applied to any one of a plurality of VRFs in a communication network. The communication network further includes a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each of the plurality of VRFs. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
In the method, for any VRF, the VRF acquires the plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF; the VRF then issues a first notification message to the CSG, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF.
In this embodiment of the present application, the VRF may actively report to the CSG the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF, so that the CSG builds the first forwarding table, improving the efficiency with which the CSG builds the first forwarding table.
Optionally, after the VRF obtains the plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF, the VRF further issues a third notification message to the plurality of data center routing nodes, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each of the plurality of VMs connected to the VRF.
In this embodiment of the present application, the VRF may also actively report to the data center routing nodes the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF, so that the data center routing nodes build the second forwarding table, improving the efficiency with which the nodes build the second forwarding table.
In a fourth aspect, a message forwarding method is provided, where the method is applied to an RR in a communication network. The communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each VRF. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
In the method, the RR acquires the VPNSID of each VM in the plurality of VMs connected to any one of the plurality of VRFs; for the acquired VPNSID of any VM, the RR determines the proxy VPNSIDs corresponding to that VPNSID based on a locally stored correspondence between VPNSIDs and proxy VPNSIDs, thereby obtaining the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF; the RR then sends a second notification message to the CSG, where the second notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to each of the plurality of VRFs.
When the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs to the CSG so that the CSG constructs the first forwarding table, the RR first needs to obtain the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM and then send the second notification message to the CSG. This improves the flexibility with which the CSG constructs the first forwarding table.
Optionally, in the method, the correspondence between VPNSIDs and proxy VPNSIDs locally stored by the RR is configured on the RR by an administrator, which improves the flexibility with which the CSG constructs the first forwarding table.
In a fifth aspect, a CSG in a communication network is provided. The communication network further comprises a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each VRF. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
The CSG includes:
a receiving module, configured to receive a message, where the message carries an identifier of the target VNF;
a selecting module, configured to select one proxy VPNSID from a plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID;
and a forwarding module, configured to forward the message with the target proxy VPNSID as the destination address, so that the message is forwarded to the data center routing node indicated by the target proxy VPNSID, which forwards the message according to a plurality of VPNSIDs included in a second forwarding table. The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs. The plurality of VPNSIDs included in the second forwarding table are, among the VPNSIDs of the plurality of VMs, the VPNSIDs of the VMs corresponding to the target proxy VPNSID.
Optionally, the CSG further includes an adding module, configured to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs and add the obtained proxy VPNSIDs to the first forwarding table.
Optionally, the adding module is specifically configured to receive a first notification message sent by any one of the plurality of VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
Optionally, the communication network further comprises an RR. In this case, the adding module is specifically configured to receive a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs.
Technical effects of each module included in the CSG provided in the fifth aspect may refer to the packet forwarding method provided in the first aspect, which is not described in detail herein.
In a sixth aspect, any one of a plurality of data center routing nodes in a communication network is provided. The communication network further includes a CSG, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each VRF. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
The data center routing node includes:
and the receiving module is used for receiving a message sent by the CSG, wherein the message carries the target agent VPNSID.
And the selection module is used for selecting one VPNSID from the VPNSIDs of a plurality of VMs included in the second forwarding table under the condition that the target agent VPNSID carried by the message is the agent VPNSID of any data center routing node.
And the forwarding module is used for forwarding the message by taking the selected VPNSID as a destination address. The plurality of VPNSIDs included in the second forwarding table refer to VPNSIDs of a plurality of VMs corresponding to the proxy VPNSID of the data center routing node.
Optionally, the data center routing node further comprises:
an adding module, configured to obtain the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node and add the obtained VPNSIDs to the second forwarding table.
Optionally, the adding module is specifically configured to: receive a third notification message sent by any one of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and obtain, from the third notification message and according to the proxy VPNSID of the data center routing node, the VPNSIDs of the VMs corresponding to that proxy VPNSID.
Optionally, the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node are configured on the node by an administrator.
Technical effects of each module included in the data center routing node provided in the sixth aspect may refer to the message forwarding method provided in the second aspect, which is not described in detail herein.
In a seventh aspect, any one of a plurality of VRFs in a communication network is provided. The communication network further includes a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each of the plurality of VRFs. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
The VRF comprises:
an obtaining module, configured to obtain a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in the plurality of VMs connected by the VRF;
and a publishing module, configured to issue a first notification message to the CSG, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in the plurality of VMs connected to the VRF.
Optionally, the publishing module is further configured to issue a third notification message to the plurality of data center routing nodes, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM in the plurality of VMs connected to the VRF.
The technical effects of the respective modules included in the VRF provided in the seventh aspect may refer to the packet forwarding method provided in the third aspect, which is not described in detail herein.
An eighth aspect provides an RR in a communication network. The communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, where one or more of the plurality of VMs are connected to each VRF. Each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID.
The RR includes:
an acquisition module, configured to acquire the VPNSID of each VM in the plurality of VMs connected to any one of the plurality of VRFs, and, for the acquired VPNSID of any VM, determine the proxy VPNSIDs corresponding to that VPNSID based on a locally stored correspondence between VPNSIDs and proxy VPNSIDs, so as to obtain the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF;
and a publishing module, configured to send a second notification message to the CSG, where the second notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to each VRF.
Optionally, the corresponding relationship between the VPNSID locally stored by the RR and the proxy VPNSID is configured on the RR by the administrator, so that flexibility of constructing the first forwarding table by the CSG is improved.
Technical effects of each module included in the RR provided in the eighth aspect may refer to the packet forwarding method provided in the fourth aspect, which is not described in detail herein.
A ninth aspect provides a CSG in a communication network, the communication network further comprising a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs, and a plurality of VMs for performing a target virtual network function VNF, one or more of the plurality of VMs being connected to each of the plurality of VRFs, each VM of the plurality of VMs being configured with one VPNSID, each data center routing node of the plurality of data center routing nodes being configured with one proxy VPNSID;
The CSG includes a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of the above first aspects.
In a tenth aspect, a data center routing node in a communication network is provided, where the communication network includes a plurality of data center routing nodes, a CSG, a plurality of VRFs, and a plurality of VMs for executing target VNFs, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
any one of the plurality of data center routing nodes includes a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of the second aspects described above.
An eleventh aspect provides a VRF in a communication network, the communication network including a plurality of VRFs, a CSG, a plurality of data center routing nodes, a plurality of VMs for executing target VNFs, one or more of the plurality of VMs being connected to each of the plurality of VRFs, each VM of the plurality of VMs being configured with one VPNSID, each data center routing node of the plurality of data center routing nodes being configured with one proxy VPNSID;
Any one of the plurality of VRFs includes a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of the above third aspects.
In a twelfth aspect, there is provided an RR in a communication network, the communication network further including a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
the RR includes a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of the fourth aspects.
A thirteenth aspect provides a chip, the chip being disposed in a CSG in a communication network, the communication network further comprising a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs, and a plurality of VMs for performing a target virtual network function VNF, each of the plurality of VRFs having one or more of the plurality of VMs connected thereto, each of the plurality of VMs configured with one VPNSID, each of the plurality of data center routing nodes configured with one proxy VPNSID;
The chip comprises a processor and an interface circuit;
the interface circuit is used for receiving the instruction and transmitting the instruction to the processor;
the processor is configured to perform the method of any of the first aspects above.
A fourteenth aspect provides a chip, the chip being disposed in any one of a plurality of data center routing nodes included in a communication network, the communication network further including a CSG, a plurality of VRFs, a plurality of VMs for executing target VNFs, one or more of the plurality of VMs being connected to each VRF, each VM of the plurality of VMs being configured with a VPNSID, each data center routing node of the plurality of data center routing nodes being configured with a proxy VPNSID;
the chip comprises a processor and an interface circuit;
the interface circuit is used for receiving the instruction and transmitting the instruction to the processor;
the processor is configured to perform the method of any one of the second aspects above.
A fifteenth aspect provides a chip disposed in any one of a plurality of VRFs included in a communication network, the communication network further including a CSG, a plurality of data center routing nodes, a plurality of VMs for executing target VNFs, one or more of the plurality of VMs being connected to each of the plurality of VRFs, each of the plurality of VMs being configured with a VPNSID, each of the plurality of data center routing nodes being configured with an agent VPNSID;
The chip comprises a processor and an interface circuit;
the interface circuit is used for receiving the instruction and transmitting the instruction to the processor;
the processor is configured to perform the method of any one of the third aspects above.
In a sixteenth aspect, a chip is provided, where the chip is disposed in an RR of a communication network, the communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing target VNFs, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with an agent VPNSID;
the chip comprises a processor and an interface circuit;
the interface circuit is used for receiving the instruction and transmitting the instruction to the processor;
the processor is configured to perform the method of any one of the fourth aspects above.
A seventeenth aspect provides a message forwarding system, the system comprising a CSG, a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs, and a plurality of VMs for performing a target virtual network function VNF, one or more of the plurality of VMs being connected to each of the plurality of VRFs, each VM of the plurality of VMs being configured with a virtual private network segment identifier VPNSID, each data center routing node of the plurality of data center routing nodes being configured with a proxy VPNSID;
the CSG is configured to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the multiple VMs and add the obtained proxy VPNSIDs to a first forwarding table;
any one of the plurality of data center routing nodes is configured to obtain the VPNSIDs of the plurality of VMs corresponding to its own proxy VPNSID and add the obtained VPNSIDs to a second forwarding table.
Optionally, the CSG is specifically configured to receive a first notification message sent by any one of the plurality of VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
Optionally, the CSG is specifically configured to receive a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs.
Optionally, the data center routing node is configured to receive a third notification message sent by any one of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF, and to obtain, from the third notification message and according to its own proxy VPNSID, the VPNSIDs of the VMs corresponding to that proxy VPNSID.
Optionally, VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
The technical effects of each node in the foregoing message forwarding system may refer to the technical effects of the message forwarding methods provided in the first to fourth aspects, which are not described herein again.
Drawings
Fig. 1 is a schematic architecture diagram of a communication network according to an embodiment of the present application;
fig. 2 is a schematic architecture diagram of another communication network provided in an embodiment of the present application;
fig. 3 is a flowchart of a message forwarding method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a packet forwarding flow according to an embodiment of the present application;
fig. 5 is a flowchart of a method for configuring a first forwarding table and a second forwarding table according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another network device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an interface board in the network device shown in fig. 7 according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a CSG according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a routing node of a data center according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another network device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference herein to "a plurality" means two or more. In the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, to clearly describe the technical solutions of the embodiments of the present application, words such as "first" and "second" are used to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will appreciate that words such as "first" and "second" do not limit the quantity or order of execution, nor do they require the items so described to be different.
Before explaining the message forwarding method provided by the embodiment of the application, the communication network related to the embodiment of the application is explained.
Fig. 1 is a schematic architecture diagram of a communication network according to an embodiment of the present application. As shown in fig. 1, the communication network 100 includes a plurality of CSGs, a plurality of provider edge (PE) devices, a plurality of data center routing nodes, a plurality of data centers, and a plurality of virtual routing and forwarding (VRF) instances.
The data center may be an RDC, a central data center (CDC), or an edge data center (EDC); fig. 1 takes an RDC as an example of the data center. The data center routing node may be a data center gateway (DCGW) deployed between a PE and a data center, or a DC spine router deployed between a data center and a VRF, which is not specifically limited in the embodiments of the present application. Fig. 1 takes a DCGW deployed between a PE and a data center as an example of the data center routing node.
As shown in fig. 1, any CSG communicates with any DCGW through multiple PEs in the backbone network, any DCGW communicates with any RDC, and any RDC communicates with the connected VRFs, with one or more VMs connected to each VRF (fig. 1 illustrates one VM connected to each VRF). Multiple VMs for executing the same VNF exist and may be connected to different VRFs. As shown in fig. 1, there are three VMs for executing the VNF shown in fig. 1, which are connected to the first three VRFs from top to bottom in fig. 1, respectively.
To enable accurate load sharing for a certain VNF, an End.DX-type SID, also referred to as a VPNSID, is configured for each of the VMs used to execute the VNF. That is, in this embodiment, each VPNSID uniquely identifies one VM. In this way, the forwarding table of the CSG in the related art includes the multiple VPNSIDs corresponding to the identifier of the VNF, so that the CSG forwards a message to the VM indicated by one of the VPNSIDs according to the forwarding table.
As shown in fig. 1, the current CSG supports at most 8 paths of load sharing; that is, when the CSG receives a message, it forwards the message to one of at most 8 VMs for processing based on a multi-path hash algorithm, which seriously limits the efficiency of load sharing. The embodiments of the present application provide a message forwarding method in this scenario to improve load-sharing efficiency.
In addition, the VNF shown in fig. 1 may be an access and mobility management function (AMF), a session management function (SMF), a user plane function (UPF), or the like.
In addition, any of the VRFs described above is connected to a VM through a designated access circuit (AC) Layer 3 interface or subinterface, which is not described in detail herein.
It should be noted that the number of the respective devices shown in fig. 1 is merely for illustration, and does not constitute a limitation on the architecture of the communication network provided in the embodiment of the present application.
For convenience of the following description, the communication network shown in fig. 1 is simplified, and the simplified communication network is shown in fig. 2; the subsequent message forwarding method is illustrated using the communication network shown in fig. 2. As shown in fig. 2, the communication network 200 includes a CSG, a plurality of data center routing nodes (fig. 2 shows two, data center routing node 1 and data center routing node 2), a plurality of VRFs (fig. 2 shows two, VRF1 and VRF2), and a plurality of VMs for executing the target VNF, where one or more of the plurality of VMs are connected to each VRF (fig. 2 shows two VMs connected to each VRF).
Further, as shown in fig. 2, the network is divided into a plurality of domains, each domain including a set of hosts and a set of routers that are managed in a unified manner by one controller. The CSG, data center routing node 1, and data center routing node 2 in fig. 2 are located in one domain, while data center routing node 1, data center routing node 2, VRF1, and VRF2 are located in another domain. A route reflector (RR) is also deployed in each domain, labeled RR1 and RR2 in fig. 2. The route reflector in each domain serves the following purpose: any routing device in the domain can communicate with other routing devices through the route reflector without directly establishing a network connection between the two routing devices, thereby reducing the consumption of network resources.
The function of each node in the communication network shown in fig. 2 will be described in detail in the following embodiments, and will not be described in detail here.
The following describes the message forwarding method provided in the embodiments of the present application by taking the communication network shown in fig. 2 as an example; for other node deployments in the communication network shown in fig. 1, message forwarding may be implemented with reference to the following embodiments.
Fig. 3 is a flowchart of a message forwarding method provided in an embodiment of the present application. As shown in fig. 3, the method comprises the steps of:
step 301: and the CSG receives a message, wherein the message carries the identifier of the target VNF.
In this embodiment of the present application, to avoid being limited by the maximum load-sharing number of the CSG, one proxy VPNSID may be configured for each data center routing node, and multiple proxy VPNSIDs may be configured corresponding to the VPNSID of any VM. Therefore, proxy VPNSIDs can replace the VPNSIDs of the VMs in the local forwarding table of the CSG, so that during load sharing the CSG is responsible only for sharing load across the data center routing nodes, and each data center routing node completes the actual load sharing; the number of load-sharing routes on a data center routing node can be as high as 128.
Therefore, the CSG stores a first forwarding table corresponding to the identifier of the target VNF, where the first forwarding table includes a plurality of proxy VPNSIDs, so that the CSG forwards the message through steps 302 and 303 described below. The plurality of proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs for executing the target VNF. The configuration process of the first forwarding table is described in the embodiments below and is not described here.
Fig. 4 is a schematic diagram of message forwarding according to an embodiment of the present application. As shown in fig. 4, the first forwarding table stored on the CSG includes two proxy VPNSIDs, namely DE::B100 and DF::B100. DE::B100 is the proxy VPNSID of data center routing node 1, and DF::B100 is the proxy VPNSID of data center routing node 2.
For the VPNSID of any VM, multiple proxy VPNSIDs are configured corresponding to that VPNSID. As shown in fig. 4, two corresponding proxy VPNSIDs, DE::B100 and DF::B100, are configured for the VPNSID of the first VM from top to bottom (whose VPNSID is A8:1::B100). The same two proxy VPNSIDs, DE::B100 and DF::B100, are also configured for the VPNSID of the second VM from top to bottom (whose VPNSID is A8:1::B101). Likewise, the two corresponding proxy VPNSIDs may be configured for the third and fourth VMs from top to bottom. That is, for the VPNSID of each VM shown in fig. 4, the two corresponding proxy VPNSIDs DE::B100 and DF::B100 are configured.
After the corresponding proxy VPNSIDs are configured for the VPNSIDs of the respective VMs in the above manner, the first forwarding table includes two proxy VPNSIDs (DE::B100 and DF::B100), which are all the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs executing the target VNF. The process of configuring the proxy VPNSIDs corresponding to the VPNSID of each VM and the specific implementation of generating the first forwarding table are described in detail in the embodiments below and are not described here.
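As a rough illustration of how the first forwarding table of fig. 4 could be derived, the sketch below deduplicates the proxy VPNSIDs configured for each VM's VPNSID; the VPNSIDs of the third and fourth VMs are not given in fig. 4, so the A8:2:: values are assumptions.

```python
# Every VM VPNSID maps to the same set of proxy VPNSIDs, so the first
# forwarding table only needs the deduplicated proxy set per VNF identifier.
vpnsid_to_proxies = {
    "A8:1::B100": ["DE::B100", "DF::B100"],
    "A8:1::B101": ["DE::B100", "DF::B100"],
    "A8:2::B100": ["DE::B100", "DF::B100"],  # assumed third VM
    "A8:2::B101": ["DE::B100", "DF::B100"],  # assumed fourth VM
}

proxies = sorted({p for plist in vpnsid_to_proxies.values() for p in plist})
first_forwarding_table = {"target-vnf": proxies}
print(first_forwarding_table)  # {'target-vnf': ['DE::B100', 'DF::B100']}
```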
Step 302: the CSG selects one proxy VPNSID from among a plurality of proxy VPNSIDs included in the first forwarding table as a target proxy VPNSID.
In a specific implementation, the CSG may select one proxy VPNSID from the plurality of proxy VPNSIDs included in the first forwarding table through a multi-path hash algorithm. The multi-path hash algorithm may be an equal-cost multi-path (ECMP) algorithm, in which case the probabilities of the proxy VPNSIDs in the first forwarding table being selected are the same, so that the CSG evenly shares received messages across the data center routing nodes. For example, in the forwarding flow shown in fig. 4, the proxy VPNSID selected by the CSG based on the multi-path hash algorithm is DE::B100, indicating that the message needs to be forwarded to data center routing node 1.
It will be appreciated that, in other implementations, a different hash algorithm may be used so that the probabilities of the proxy VPNSIDs in the first forwarding table being selected differ; the specific hash algorithm may be determined according to the load balancing policy.
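The sketch below illustrates one possible realization of this selection step: an ECMP-style hash that keeps packets of one flow on the same proxy VPNSID, plus a weighted variant standing in for a non-uniform load balancing policy. All names and the weighting scheme are illustrative assumptions.

```python
import hashlib
import itertools

def ecmp_select(proxies, flow_key: bytes) -> str:
    """Equal-cost selection: each proxy VPNSID is equally likely across flows,
    while packets of one flow always hash to the same proxy VPNSID."""
    h = int.from_bytes(hashlib.sha256(flow_key).digest()[:4], "big")
    return proxies[h % len(proxies)]

def weighted_select(weighted_proxies, flow_key: bytes) -> str:
    """Unequal sharing: expand each proxy by its weight before hashing."""
    expanded = list(itertools.chain.from_iterable(
        [proxy] * weight for proxy, weight in weighted_proxies))
    return ecmp_select(expanded, flow_key)

print(ecmp_select(["DE::B100", "DF::B100"], b"flow-a"))
print(weighted_select([("DE::B100", 3), ("DF::B100", 1)], b"flow-a"))
```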
Step 303: the CSG forwards the message with the target proxy VPNSID as the destination address.
In this embodiment of the present application, since proxy VPNSIDs replace the VPNSIDs used in the related art, the CSG, after selecting one proxy VPNSID from the first forwarding table as the target proxy VPNSID, may use the target proxy VPNSID as the destination address of the message to forward it.
For example, in the forwarding flow shown in fig. 4, the CSG sets the destination address of the message (labeled DA in fig. 4) to the proxy VPNSID DE::B100, while the source address (labeled SA in fig. 4) is the CSG itself. In addition, as shown in fig. 4, the message further includes a payload. The step labeled (1) in fig. 4 illustrates this process.
Through steps 301 to 303 described above, the CSG may forward the message to the data center routing node indicated by the selected target proxy VPNSID. A second forwarding table is configured on the data center routing node indicated by the target proxy VPNSID; the second forwarding table includes a plurality of VPNSIDs according to which that node forwards the message, and these VPNSIDs are, among the VPNSIDs of the VMs, the VPNSIDs corresponding to the target proxy VPNSID. Thus, for any data center routing node, if that node is the one indicated by the target proxy VPNSID selected by the CSG, the received message may be processed through the following steps 304 to 306. The configuration process of the second forwarding table is described in the embodiments below and is not described here.
Step 304: for any data center routing node, the node receives a message sent by the CSG, where the message carries a target proxy VPNSID.
Because any data center routing node in the network may receive a message sent by the CSG, each data center routing node, upon receiving such a message, needs to determine whether it should process the message itself. In a specific implementation, since the destination address carried in the message is the target proxy VPNSID, the data center routing node compares the target proxy VPNSID carried in the message with its own configured proxy VPNSID. If they are inconsistent, the message is to be processed by another data center routing node, and the node forwards the message toward that other node. If they are consistent, the message is to be processed by this node, and the node continues to forward the message through steps 305 and 306 below.
For example, in the forwarding flow shown in fig. 4, when data center routing node 1 receives the message, the target proxy VPNSID carried in the message, DE::B100, matches its own proxy VPNSID, so data center routing node 1 continues to forward the message through steps 305 and 306 below.
When data center routing node 2 receives the message, the target proxy VPNSID carried in the message is inconsistent with its own proxy VPNSID, so data center routing node 2 forwards the message toward the data center routing node indicated by the target proxy VPNSID.
Step 305: when the target proxy VPNSID carried in the message is the proxy VPNSID of this data center routing node, the node selects one VPNSID from the VPNSIDs of the plurality of VMs included in the second forwarding table, where the VPNSIDs included in the second forwarding table are the VPNSIDs of the VMs corresponding to the proxy VPNSID of this node.
Since the data center routing node locally stores the second forwarding table corresponding to its own proxy VPNSID, and the second forwarding table includes the VPNSIDs of the plurality of VMs corresponding to that proxy VPNSID, when the target proxy VPNSID carried in the message is the node's own proxy VPNSID, the node may directly select one VPNSID from the second forwarding table through a multi-path hash algorithm, so as to forward the message through step 306 below.
The multi-path hash algorithm may also be an ECMP algorithm, in which case the probabilities of the VPNSIDs of the VMs in the second forwarding table being selected are the same, so that the data center routing node evenly shares received messages across the VMs. It will be appreciated that, in another implementation, a different hash algorithm may be used so that the probabilities of the VPNSIDs in the second forwarding table being selected differ; the specific hash algorithm may be determined according to the load balancing policy.
For example, in the forwarding flow shown in fig. 4, assume that the second forwarding table of data center routing node 1 includes two VPNSIDs, namely A8:1::B100 and A8:1::B101; data center routing node 1 can then select one of the two VPNSIDs through the ECMP algorithm.
Step 306: the data center routing node forwards the message with the selected VPNSID as a destination address.
Since the message eventually needs to be processed by a VM, the data center routing node, after selecting one VPNSID from the second forwarding table, forwards the message with the selected VPNSID as the destination address, so that the VM indicated by the selected VPNSID processes the message.
For example, in the forwarding flow shown in fig. 4, if the VPNSID selected by data center routing node 1 from the second forwarding table is A8:1::B100, then, as shown in fig. 4, data center routing node 1 forwards the message with A8:1::B100 as the destination address (labeled DA in fig. 4). If the selected VPNSID is A8:1::B101, data center routing node 1 forwards the message with A8:1::B101 as the destination address. It should be noted that the two steps labeled (2) in fig. 4 illustrate this process and are alternatives to each other (an "or" relationship).
In addition, since each VM is attached to a VRF, when the data center routing node forwards the message with the selected VPNSID as the destination address, the VRF to which the VM indicated by the selected VPNSID is attached receives the message first and then forwards it to that VM. The step labeled (3) in fig. 4 illustrates this process.
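Steps 304 to 306 on the data center routing node side can be summarized in the following sketch; the message representation, forwarding stub, and hash choice are assumptions made for illustration.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Message:
    sa: str        # source address (SA in fig. 4)
    da: str        # destination address (DA in fig. 4)
    payload: bytes

OWN_PROXY_VPNSID = "DE::B100"  # configured on data center routing node 1
second_forwarding_table = {"DE::B100": ["A8:1::B100", "A8:1::B101"]}

def ecmp_select(vpnsids, flow_key: bytes) -> str:
    h = int.from_bytes(hashlib.sha256(flow_key).digest()[:4], "big")
    return vpnsids[h % len(vpnsids)]

def forward_toward(address: str, msg: Message) -> None:
    print(f"forwarding message to {address}")  # stand-in for the data plane

def on_receive(msg: Message) -> None:
    if msg.da != OWN_PROXY_VPNSID:
        # Step 304: the message belongs to another data center routing node.
        forward_toward(msg.da, msg)
        return
    # Steps 305 and 306: pick one VM VPNSID and rewrite the destination address.
    msg.da = ecmp_select(second_forwarding_table[OWN_PROXY_VPNSID], msg.payload)
    forward_toward(msg.da, msg)  # the VRF then delivers it to the VM (step (3))

on_receive(Message(sa="CSG", da="DE::B100", payload=b"flow-a"))
```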
In the message forwarding process shown in fig. 3, a first forwarding table needs to be configured on the CSG, and a second forwarding table needs to be configured on each data center routing node. The specific functions of the first and second forwarding tables have been explained in the foregoing embodiments; next, their configuration process is explained.
Fig. 5 is a flowchart of a method for configuring a first forwarding table and a second forwarding table according to an embodiment of the present application. As shown in fig. 5, the method comprises the following steps:
step 501: for any VRF, the VRF obtains a VPNSID configured for any VM of the plurality of VMs to which the VRF is connected.
The VPNSID configured for any VM of the multiple VMs connected to the VRF may be configured by a network controller, or may be configured directly on the VRF by an administrator, which is not specifically limited in this application. If a network controller performs the configuration, then after configuring the VPNSID of any one of the VMs connected to the VRF, the network controller delivers the configured VPNSID to the VRF, so that the VRF obtains the VPNSID configured for that VM.
In a specific implementation, the network controller or administrator configures the VPNSID of each connected VM according to the location identifier (Locator) of the VRF. For example, in the communication network shown in fig. 4, the Locator of VRF1 is A8:1::/64; as shown in fig. 4, the network controller or administrator may configure two VPNSIDs for the two VMs connected to VRF1, where the VPNSID configured for the first VM from top to bottom in fig. 4 is A8:1::B100 and the VPNSID configured for the second VM from top to bottom is A8:1::B101.
Step 502: for any data center routing node, the data center routing node obtains the proxy VPNSID configured for the data center routing node.
The proxy VPNSID for the data center routing node configured on the data center routing node may be configured by a network controller, or may be configured on the data center routing node by a manager, which is not specifically limited in this application. If the network controller configures the proxy VPNSID, the network controller issues the configured proxy VPNSID to the data center routing node after configuring the proxy VPNSID of the data center routing node, so that the data center routing node obtains the proxy VPNSID configured for the data center routing node.
In a specific implementation, the network controller or administrator configures the proxy VPNSID for a data center routing node according to the location identifier (Locator) of that node. For example, for the communication network shown in fig. 4, the location identifier of data center routing node 1 is DE::/64; as shown in fig. 4, the network controller or administrator may configure the proxy VPNSID DE::B100 for data center routing node 1. In the same manner, based on the location identifier DF::/64 of data center routing node 2, the proxy VPNSID configured for data center routing node 2 is DF::B100, as shown in fig. 4.
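As an illustration of the locator-based allocation in steps 501 and 502, the sketch below derives a SID by appending a function identifier to a locator prefix. The helper name and the string handling are assumptions consistent with the examples above (A8:1:1::/64 yielding A8:1:1::B100, DE::/64 yielding DE::B100), not a prescribed algorithm.

    def allocate_sid(locator_prefix, function_id):
        # locator_prefix like "A8:1:1::/64"; function_id like "B100"
        prefix = locator_prefix.split("/")[0].rstrip(":")
        return f"{prefix}::{function_id}"

    # VPNSIDs for the two VMs behind VRF1 (step 501):
    print(allocate_sid("A8:1:1::/64", "B100"))  # A8:1:1::B100
    print(allocate_sid("A8:1:1::/64", "B101"))  # A8:1:1::B101
    # Proxy VPNSIDs for the data center routing nodes (step 502):
    print(allocate_sid("DE::/64", "B100"))      # DE::B100
    print(allocate_sid("DF::/64", "B100"))      # DF::B100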
Step 503: the CSG acquires the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs for executing the target VNF, and adds the acquired proxy VPNSIDs to the first forwarding table.
In the embodiments of the present application, the CSG may obtain the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs for executing the target VNF through the following two specific implementations:
The first implementation: for any VRF, the VRF acquires the plurality of proxy VPNSIDs configured corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF. The VRF publishes a first notification message to the CSG, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF. The CSG receives the first notification message sent by each VRF and, according to these messages, obtains the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs.
For any VM, the plurality of proxy VPNSIDs configured for the VPNSID of the VM may be configured by the network controller, or may be configured directly on the VRF by an administrator; this is not specifically limited in this application. If the network controller performs the configuration, then after configuring the plurality of proxy VPNSIDs corresponding to the VPNSID of the VM, the network controller delivers them to the VRF, so that the VRF obtains the plurality of proxy VPNSIDs configured for the VPNSID of the VM.
As shown in fig. 2, the VRFs and the CSG are located in different domains, so any VRF may publish the first notification message to the CSG via MP-BGP EVPN, that is, the Multiprotocol Border Gateway Protocol (MP-BGP) carrying Ethernet Virtual Private Network (EVPN) routes.
In the first implementation, the VRF actively reports to the CSG the plurality of proxy VPNSIDs configured corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF.
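A minimal sketch of how the CSG could build the first forwarding table from such first notification messages follows. The message layout (a VNF identifier plus a map from each VM VPNSID to its proxy VPNSIDs) is an assumption for illustration; the patent only states which fields the message carries.

    from collections import defaultdict

    first_forwarding_table = defaultdict(set)  # VNF identifier -> set of proxy VPNSIDs

    def on_first_notification(vnf_id, vpnsid_to_proxies):
        # vpnsid_to_proxies: {vm_vpnsid: [proxy_vpnsid, ...], ...} from one VRF
        for proxies in vpnsid_to_proxies.values():
            first_forwarding_table[vnf_id].update(proxies)

    # VRF1 advertises its two VMs, each mapped to both proxy VPNSIDs:
    on_first_notification("vnf-1", {
        "A8:1:1::B100": ["DE::B100", "DF::B100"],
        "A8:1:1::B101": ["DE::B100", "DF::B100"],
    })
    print(sorted(first_forwarding_table["vnf-1"]))  # ['DE::B100', 'DF::B100']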
The second implementation: a correspondence between VPNSIDs and proxy VPNSIDs is stored in advance on the RR in the communication network; the correspondence includes the VPNSIDs of the plurality of VMs and the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM. The construction of this correspondence is described in detail below and is not repeated here. For any VRF, the RR acquires the VPNSID of each VM connected to the VRF, and for any acquired VPNSID, the RR can determine the plurality of proxy VPNSIDs corresponding to it based on the stored correspondence. After acquiring the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to each VRF, the RR sends a second notification message to the CSG, where the second notification message carries these proxy VPNSIDs. The CSG receives the second notification message sent by the RR and obtains the plurality of proxy VPNSIDs from it.
The RR here is specifically RR2 in fig. 4. In addition, the second notification message sent by the RR to the CSG is also published via MP-BGP EVPN.
In addition, for any VRF, the RR may obtain the VPNSID of each VM connected to the VRF as follows: the RR sends a VPNSID acquisition request to the VRF, and the request instructs the VRF to send the VPNSID of each VM connected to the VRF to the RR. That is, in the second implementation, the VRF sends the VPNSIDs of its connected VMs to the RR passively, in response to the request.
In addition, in the second implementation, the correspondence between VPNSIDs and proxy VPNSIDs stored locally on the RR may be configured directly on the RR by the administrator. In a specific implementation, the administrator may configure on the RR, through a management system or a command line, the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to a VRF, so that the RR obtains these proxy VPNSIDs and thereby constructs the correspondence between VPNSIDs and proxy VPNSIDs.
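The RR-based flow can be sketched as follows; the class, the stub VRF, and the callback interface are illustrative assumptions. The patent only specifies that the RR stores the correspondence, obtains the VM VPNSIDs from each VRF, resolves them, and sends the second notification message to the CSG.

    class RouteReflector:
        def __init__(self, vpnsid_to_proxies):
            # Configured on the RR by the administrator, e.g. via a command line.
            self.vpnsid_to_proxies = vpnsid_to_proxies

        def collect_and_notify(self, vrfs, notify_csg):
            resolved = {}
            for vrf in vrfs:
                # The RR requests the VM VPNSIDs from each VRF (passive reporting).
                for vpnsid in vrf.connected_vm_vpnsids():
                    resolved[vpnsid] = self.vpnsid_to_proxies.get(vpnsid, [])
            notify_csg(resolved)  # the "second notification message"

    class Vrf:  # stub standing in for a real VRF
        def __init__(self, vpnsids):
            self._vpnsids = vpnsids
        def connected_vm_vpnsids(self):
            return self._vpnsids

    rr = RouteReflector({"A8:1:1::B100": ["DE::B100", "DF::B100"]})
    rr.collect_and_notify([Vrf(["A8:1:1::B100"])], lambda m: print(m))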
Step 504: for any data center routing node, the data center routing node acquires the VPNSIDs of the plurality of VMs corresponding to its own proxy VPNSID, and adds the acquired VPNSIDs to the second forwarding table corresponding to its proxy VPNSID.
The data center routing node may acquire the plurality of VPNSIDs corresponding to its proxy VPNSID through the following two specific implementations:
The first implementation: for any VRF, the VRF acquires the plurality of proxy VPNSIDs configured corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF. The VRF publishes a third notification message to each data center routing node, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF. For any data center routing node, the node receives the third notification message sent by the VRF and, according to its own proxy VPNSID, obtains from the message the VPNSIDs of the VMs corresponding to that proxy VPNSID. After receiving the third notification messages published by all the VRFs, the node can determine, from all of these messages, the VPNSIDs of all the VMs corresponding to its proxy VPNSID.
For how the VRF acquires the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to it, refer to the first implementation in step 503; details are not repeated here.
Based on the communication network shown in fig. 2, the VRFs and the data center routing nodes are located in the same domain; therefore, the VRF publishes the third notification message to each data center routing node through an Interior Gateway Protocol (IGP).
In the first implementation, the VRF actively reports to the data center routing nodes the plurality of proxy VPNSIDs configured corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF.
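A minimal sketch of the node-side table construction follows: the node keeps only those VM VPNSIDs whose advertised proxy-VPNSID list contains its own proxy VPNSID. The shape of the third notification message is an illustrative assumption.

    def build_second_forwarding_table(own_proxy_vpnsid, third_notifications):
        table = []
        for vpnsid_to_proxies in third_notifications:  # one map per VRF
            for vm_vpnsid, proxies in vpnsid_to_proxies.items():
                if own_proxy_vpnsid in proxies:
                    table.append(vm_vpnsid)
        return table

    notifications = [{"A8:1:1::B100": ["DE::B100", "DF::B100"],
                      "A8:1:1::B101": ["DE::B100", "DF::B100"]}]
    print(build_second_forwarding_table("DE::B100", notifications))
    # ['A8:1:1::B100', 'A8:1:1::B101']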
The second implementation: for any data center routing node, the VPNSIDs of the plurality of VMs corresponding to the node's proxy VPNSID are configured on the node by an administrator. For example, the administrator may configure them directly on the data center routing node through a command line or a management system.
The implementations of step 503 and step 504 described above may be used in combination. When the first forwarding table and the second forwarding table are configured through the first implementation in step 503 and the first implementation in step 504, the configuration may be referred to as fully dynamic configuration. In the fully dynamic mode, the VRF actively reports the VPNSIDs of the VMs and the corresponding proxy VPNSIDs, so that the CSG and the data center routing nodes can obtain them and configure their respective forwarding tables.
When the first forwarding table and the second forwarding table are configured through the first implementation in step 503 and the second implementation in step 504, the configuration may be referred to as semi-dynamic configuration. In the semi-dynamic mode, the VRF actively reports the VPNSIDs of the VMs and the corresponding proxy VPNSIDs to the CSG so that the CSG configures the first forwarding table, while the administrator directly configures, on each data center routing node, its proxy VPNSID and the VPNSIDs of the corresponding VMs, so that the node generates the second forwarding table.
When the first forwarding table and the second forwarding table are configured through the second implementation in step 503 and the second implementation in step 504, the configuration may be referred to as static configuration. In the static mode, the VRF does not actively report any information. The RR therefore needs to acquire the VPNSIDs of the VMs connected to each VRF from the VRFs, determine the proxy VPNSIDs corresponding to the VPNSID of each VM according to the locally stored correspondence between VPNSIDs and proxy VPNSIDs, and then notify the CSG, which generates the first forwarding table. Likewise, because the VRF does not actively report any information, the VPNSIDs of the VMs corresponding to each data center routing node's proxy VPNSID can only be configured manually on the node by the administrator.
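For reference, the three modes can be restated as pairs of (step 503 implementation, step 504 implementation); the summary below is purely illustrative, not part of the patent.

    # Configuration modes, as (first forwarding table source, second forwarding table source).
    CONFIG_MODES = {
        "fully dynamic": ("VRF advertises proxy VPNSIDs to the CSG",
                          "VRF advertises proxy VPNSIDs to the routing nodes"),
        "semi-dynamic":  ("VRF advertises proxy VPNSIDs to the CSG",
                          "administrator configures the routing nodes"),
        "static":        ("RR resolves the correspondence and notifies the CSG",
                          "administrator configures the routing nodes"),
    }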
Through the different implementation manners, the flexibility of configuring the first forwarding table and the second forwarding table is improved.
Fig. 6 is a schematic structural diagram of a network device according to an embodiment of the present application. The network device 600 may be any node in the communication network in the embodiments shown in fig. 1 to 5, for example, a CSG, a data center routing node, or a VRF. The network device 600 may be a switch, a router, or another network device that forwards messages. In this embodiment, the network device 600 includes a main control board 610, an interface board 630, and an interface board 640. When there are multiple interface boards, a switching network board (not shown) may be included to exchange data between the interface boards (also called line cards or service boards).
The main control board 610 performs functions such as system management, device maintenance, and protocol processing. The interface boards 630 and 640 provide various service interfaces (for example, POS, GE, and ATM interfaces) and forward messages. The main control board 610 mainly has three functional units: a system management control unit, a system clock unit, and a system maintenance unit. The main control board 610, the interface board 630, and the interface board 640 are connected to the system backplane through the system bus for intercommunication. The interface board 630 includes one or more processors 631, which control and manage the interface board, communicate with the central processor on the main control board, and forward messages. The memory 632 on the interface board 630 stores forwarding table entries, and the processor 631 forwards messages by looking up the forwarding table entries stored in the memory 632.
The interface board 630 includes one or more network interfaces 633 for receiving messages sent by other devices and sending messages according to instructions of the processor 631. For specific implementation, refer to steps 301, 303, 304, and 306 in the embodiment shown in fig. 3; details are not repeated here.
The processor 631 is configured to perform the processing steps and functions of any node in the communication network described in the embodiments shown in fig. 1 to 5. Specifically, refer to step 302 (processing when serving as a CSG) or step 305 (processing when serving as a data center routing node) in the embodiment shown in fig. 3, and step 501 (processing when serving as a VRF), step 502 (processing when serving as a data center routing node), step 503 (processing when serving as a CSG), and step 504 (processing when serving as a data center routing node) in the embodiment shown in fig. 5; details are not repeated here.
It can be understood that, as shown in fig. 6, this embodiment includes multiple interface boards and adopts a distributed forwarding mechanism, under which the operations on the interface board 640 are substantially similar to those on the interface board 630; for brevity, details are not repeated. It can further be understood that the processors 631 and/or 641 on the interface boards in fig. 6 may be dedicated hardware or chips, such as a network processor or an application-specific integrated circuit (ASIC), to implement the functions described above; that is, the so-called forwarding plane may be processed by dedicated hardware or chips. For a specific implementation in which the dedicated hardware or chip is a network processor, refer to the embodiment shown in fig. 7 below. In other embodiments, the processors 631 and/or 641 may be general-purpose processors, such as general-purpose CPUs, to implement the functions described above.
In addition, it should be noted that there may be one or more main control boards; when there are multiple, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the device, the more interface boards are provided. When there are multiple interface boards, they can communicate through one or more switching network boards, and when there are multiple switching network boards, load sharing and redundancy backup can be implemented jointly. In a centralized forwarding architecture, the device may not need a switching network board, and a processor handles the service data of the whole system. In a distributed forwarding architecture, the device includes multiple interface boards, and data exchange among them is implemented through the switching network board, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with a distributed architecture is greater than that of a device with a centralized architecture. Which architecture is adopted depends on the specific networking deployment scenario and is not limited here.
In a specific embodiment, the memory 632 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 632 may exist independently and be connected to the processor 631 through a communication bus, or may be integrated with the processor 631.
The memory 632 is configured to store program code, and execution is controlled by the processor 631 to perform the message forwarding method provided in the foregoing embodiments. The processor 631 is configured to execute the program code stored in the memory 632. The program code may include one or more software modules, which may be the software modules provided in any of the embodiments of fig. 9 and fig. 10 below.
In a specific embodiment, the network interface 633 may be any transceiver-type device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
Fig. 7 is a schematic structural diagram of another network device according to an embodiment of the present application. The network device 700 may be any node in the communication network in the embodiments shown in fig. 1 to 5, for example, a CSG, a data center routing node, or a VRF. The network device 700 may be a switch, a router, or another network device that forwards messages. In this embodiment, the network device 700 includes a main control board 710, an interface board 730, a switching network board 720, and an interface board 740. The main control board 710 performs functions such as system management, device maintenance, and protocol processing. The switching network board 720 exchanges data between the interface boards (also called line cards or service boards). The interface boards 730 and 740 provide various service interfaces (for example, POS, GE, and ATM interfaces) and forward messages. The control plane is formed by the control units of the main control board 710 and the control units on the interface boards 730 and 740. The main control board 710 mainly has three functional units: a system management control unit, a system clock unit, and a system maintenance unit. The main control board 710, the interface boards 730 and 740, and the switching network board 720 are connected to the system backplane through the system bus for intercommunication. The central processor 731 on the interface board 730 controls and manages the interface board and communicates with the central processor on the main control board. The forwarding table entry memory 734 on the interface board 730 stores forwarding table entries, and the network processor 732 forwards messages by looking up the forwarding table entries stored in the forwarding table entry memory 734.
The physical interface card 733 of the interface board 730 is used for receiving messages. For specific implementation, refer to steps 301 and 304 in the embodiment shown in fig. 3; details are not repeated here.
The network processor 732 is configured to perform the processing steps and functions of any node described in the embodiments shown in fig. 1 to 5. Specifically, refer to step 302 (processing when serving as a CSG) or step 305 (processing when serving as a data center routing node) in the embodiment shown in fig. 3, and step 501 (processing when serving as a VRF), step 502 (processing when serving as a data center routing node), step 503 (processing when serving as a CSG), and step 504 (processing when serving as a data center routing node) in the embodiment shown in fig. 5; details are not repeated here.
The processed message is then sent to other devices via the physical interface card 733. For specific implementation, refer to steps 303 and 306 in the embodiment shown in fig. 3; details are not repeated here.
It can be understood that, as shown in fig. 7, this embodiment includes multiple interface boards and adopts a distributed forwarding mechanism, under which the operations on the interface board 740 are substantially similar to those on the interface board 730; for brevity, details are not repeated. Further, as described above, the functions of the network processors 732 and 742 in fig. 7 may instead be implemented by an application-specific integrated circuit (ASIC).
In addition, it should be noted that there may be one or more main control boards; when there are multiple, they may include an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the device, the more interface boards are provided. There may also be one or more physical interface cards on an interface board. There may be no switching network board, or one or more switching network boards; when there are multiple, load sharing and redundancy backup can be implemented jointly. In a centralized forwarding architecture, the device may not need a switching network board, and a processor handles the service data of the whole system. In a distributed forwarding architecture, the device can have at least one switching network board, through which data exchange among multiple interface boards is implemented, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with a distributed architecture is greater than that of a device with a centralized architecture. Which architecture is adopted depends on the specific networking deployment scenario and is not limited here.
Fig. 8 is a schematic structural diagram of the interface board 800 in the network device shown in fig. 7. The network device where the interface board 800 is located may be any node in the communication network in the embodiments shown in fig. 1 to 5, for example, a CSG, a data center routing node, or a VRF. The interface board 800 may include a physical interface card (PIC) 830, a network processor (NP) 810, and a traffic management (TM) module 820.
PIC: the physical interface card implements the physical-layer interconnection function; raw traffic enters the interface board of the network device through it, and processed messages are sent out from the PIC card.
The network processor NP 810 is configured to implement message forwarding processing. Specifically, uplink message processing includes ingress-interface processing and forwarding table lookup (for example, the content relating to the first forwarding table or the second forwarding table in the foregoing embodiments), and downlink message processing includes forwarding table lookup (likewise relating to the first forwarding table or the second forwarding table in the foregoing embodiments) and so on.
The traffic management module TM 820 is configured to implement QoS, line-speed forwarding, large-capacity buffering, queue management, and so on. Specifically, uplink traffic management includes uplink QoS processing (such as congestion management and queue scheduling) and fragmentation processing, and downlink traffic management includes packet assembly, multicast replication, and downlink QoS processing (such as congestion management and queue scheduling).
It will be appreciated that if the network device has multiple interface boards 800, the multiple interface boards 800 may communicate via the switching network 840.
It should be noted that fig. 8 only shows a schematic processing flow or schematic modules inside the NP; the processing order of the modules in a specific implementation is not limited thereto, and other modules or processing flows may be deployed in practical applications as required. This is not limited in the embodiments of the present application.
Fig. 9 is a schematic structural diagram of a CSG according to an embodiment of the present application. The communication network further includes a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID.
As shown in fig. 9, the CSG 900 includes:
a receiving module 901, configured to receive a message, where the message carries the identifier of the target VNF; for a detailed description, refer to step 301 in the embodiment of fig. 3;
a selecting module 902, configured to select one proxy VPNSID from the plurality of proxy VPNSIDs included in the first forwarding table as the target proxy VPNSID; for a detailed description, refer to step 302 in the embodiment of fig. 3; and
a forwarding module 903, configured to forward the message with the target proxy VPNSID as the destination address, so as to forward the message to the data center routing node indicated by the target proxy VPNSID and to instruct that node to forward the message according to the plurality of VPNSIDs included in the second forwarding table. The first forwarding table is the forwarding table corresponding to the identifier of the target VNF, and the plurality of proxy VPNSIDs included in the first forwarding table are the proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs. The plurality of VPNSIDs included in the second forwarding table are, among the VPNSIDs of the plurality of VMs, the VPNSIDs of the VMs corresponding to the target proxy VPNSID. For a detailed description, refer to step 303 in the embodiment of fig. 3.
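Taken together, the three modules implement the CSG-side half of the load sharing scheme. The following minimal Python sketch is not part of the patent; the per-flow hash selection policy and all field names are illustrative assumptions.

    import hashlib

    def csg_forward(message, first_forwarding_table):
        # Look up the forwarding table by the target VNF identifier, then pick
        # one proxy VPNSID per flow so that a flow always takes the same path.
        proxies = sorted(first_forwarding_table[message["vnf_id"]])
        flow_key = (message["src"] + str(message.get("flow_label", 0))).encode()
        index = int.from_bytes(hashlib.sha256(flow_key).digest()[:4], "big") % len(proxies)
        message["dst"] = proxies[index]  # the target proxy VPNSID becomes the DA
        return message

    table = {"vnf-1": {"DE::B100", "DF::B100"}}
    msg = {"vnf_id": "vnf-1", "src": "2001:db8::1", "flow_label": 7}
    print(csg_forward(msg, table)["dst"])  # DE::B100 or DF::B100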
Optionally, the CSG further includes an adding module configured to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs and add them to the first forwarding table.
Optionally, the adding module is specifically configured to receive a first notification message sent by any VRF of the plurality of VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
Optionally, the communication network further includes an RR. In this case, the adding module is specifically configured to receive a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs.
For the technical effects of the modules included in the CSG, refer to the message forwarding method provided in the foregoing embodiments; details are not repeated here.
In the embodiments of the present application, to avoid being limited by the maximum load sharing number of the CSG, one proxy VPNSID may be configured for each data center routing node, and the VPNSID of each VM is correspondingly configured with multiple proxy VPNSIDs. In this way, the proxy VPNSIDs can replace the VPNSIDs of the VMs in the local forwarding table of the CSG, so that during load sharing the CSG only needs to distribute load to the data center routing nodes, and the data center routing nodes complete the actual load sharing. The maximum number of load sharing routes of a data center routing node can be as high as 128, so the message forwarding method provided by the embodiments of the present application is equivalent to increasing the number of routes over which the CSG ultimately shares load, which improves load sharing efficiency.
It should be noted that when the CSG provided in the foregoing embodiment forwards a message, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the CSG provided in the foregoing embodiment and the message forwarding method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
Fig. 10 is a schematic structural diagram of any one of the plurality of data center routing nodes in a communication network according to an embodiment of the present application. The communication network further includes a CSG, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID.
As shown in fig. 10, the data center routing node 1000 includes:
a receiving module 1001, configured to receive a message sent by the CSG, where the message carries a target proxy VPNSID; for a detailed description, refer to step 304 in the embodiment of fig. 3;
a selecting module 1002, configured to select one VPNSID from the VPNSIDs of the plurality of VMs included in the second forwarding table when the target proxy VPNSID carried in the message is the proxy VPNSID of the any data center routing node; for a detailed description, refer to step 305 in the embodiment of fig. 3; and
a forwarding module 1003, configured to forward the message with the selected VPNSID as the destination address, where the plurality of VPNSIDs included in the second forwarding table are the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node. For a detailed description, refer to step 306 in the embodiment of fig. 3.
Optionally, the data center routing node further comprises:
an adding module, configured to obtain the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node and add the obtained VPNSIDs to the second forwarding table.
Optionally, the adding module is specifically configured to: receive a third notification message sent by any VRF of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF; and obtain, from the third notification message according to the proxy VPNSID of the data center routing node, the VPNSIDs of the VMs corresponding to that proxy VPNSID.
Optionally, VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
In the embodiments of the present application, to avoid being limited by the maximum load sharing number of the CSG, one proxy VPNSID may be configured for each data center routing node, and the VPNSID of each VM is correspondingly configured with multiple proxy VPNSIDs. In this way, the proxy VPNSIDs can replace the VPNSIDs of the VMs in the local forwarding table of the CSG, so that during load sharing the CSG only needs to distribute load to the data center routing nodes, and the data center routing nodes complete the actual load sharing. The maximum number of load sharing routes of a data center routing node can be as high as 128, so the message forwarding method provided by the embodiments of the present application is equivalent to increasing the number of routes over which the CSG ultimately shares load, which improves load sharing efficiency.
It should be noted that when the data center routing node provided in the foregoing embodiment forwards a message, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the data center routing node provided in the foregoing embodiment and the message forwarding method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
In addition, an embodiment of the present application further provides any one of the plurality of VRFs in the communication network. The communication network further includes a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing the target VNF, where each of the plurality of VRFs has one or more of the plurality of VMs connected to it. Each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID.
The VRF comprises:
an acquiring module, configured to acquire the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF; and
a publishing module, configured to publish a first notification message to the CSG, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF.
Optionally, the publishing module is further configured to publish a third notification message to the plurality of data center routing nodes, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF.
In the embodiments of the present application, the VRF may actively report to the CSG the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the VRF, so that the CSG builds the first forwarding table; this improves the efficiency with which the CSG builds the first forwarding table.
It should be noted that when the VRF provided in the foregoing embodiment forwards a message, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the VRF provided in the foregoing embodiment and the message forwarding method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
In addition, an embodiment of the present application further provides the RR in the communication network. The communication network further includes a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing the target VNF, where each VRF has one or more of the plurality of VMs connected to it. Each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID.
The RR includes:
an acquiring module, configured to acquire the VPNSID of each VM among the plurality of VMs connected to any VRF of the plurality of VRFs, and, for the acquired VPNSID of any VM, determine the proxy VPNSIDs corresponding to that VPNSID based on the locally stored correspondence between VPNSIDs and proxy VPNSIDs, so as to obtain the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the VRF; and
a publishing module, configured to send a second notification message to the CSG, where the second notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to each VRF.
Optionally, the correspondence between the VPNSIDs locally stored on the RR and the proxy VPNSIDs is configured on the RR by the administrator, which improves the flexibility of constructing the first forwarding table for the CSG.
When the RR reports the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs to the CSG and the CSG constructs the first forwarding table, the RR first needs to obtain the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM and then send the second notification message to the CSG, so that the CSG constructs the first forwarding table; this improves the flexibility with which the CSG constructs the first forwarding table.
It should be noted that when the RR provided in the foregoing embodiment forwards a message, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the RR provided in the foregoing embodiment and the message forwarding method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
In addition, an embodiment of the present application further provides a message forwarding system, including a CSG, a plurality of data center routing nodes, a plurality of virtual routing forwarding (VRF) instances, and a plurality of VMs for executing a target virtual network function (VNF), where each of the plurality of VRFs has one or more of the plurality of VMs connected to it, each VM of the plurality of VMs is configured with a virtual private network segment identifier (VPNSID), and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
the CSG is configured to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs and add the obtained proxy VPNSIDs to the first forwarding table; and
any one of the plurality of data center routing nodes is configured to obtain the VPNSIDs of the plurality of VMs corresponding to its own proxy VPNSID and add the obtained VPNSIDs to the second forwarding table.
Optionally, the CSG is specifically configured to receive a first notification message sent by any VRF of the plurality of VRFs, where the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF.
Optionally, the CSG is specifically configured to receive a second notification message sent by the RR, where the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs.
Optionally, the data center routing node is configured to receive a third notification message sent by any VRF of the plurality of VRFs, where the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to that VRF, and to obtain, from the third notification message according to its own proxy VPNSID, the VPNSIDs of the VMs corresponding to that proxy VPNSID.
Optionally, the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the data center routing node are configured on the data center routing node by an administrator.
The functions of the nodes in the foregoing packet forwarding system have been described in detail in the foregoing embodiments, and will not be described herein.
Fig. 11 is a schematic structural diagram of a network device 1100 according to an embodiment of the present application. Any node in the communication network in the embodiments of fig. 1 to 5, such as a CSG, a data center routing node, or a VRF, may be implemented by the network device 1100 shown in fig. 11; the network device 1100 may be a switch, a router, or another network device for forwarding messages. In addition, the network controller in the embodiments of fig. 1 to 5 may also be implemented by the network device 1100 shown in fig. 11; for the specific functions of the network device 1100, refer to the specific implementation of the network controller in any of the embodiments of fig. 1 to 5, which are not repeated here. Referring to fig. 11, the device includes at least one processor 1101, a communication bus 1102, a memory 1103, and at least one communication interface 1104.
The processor 1101 may be a general-purpose central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
Communication bus 1102 may include a path to transfer information between the aforementioned components.
The memory 1103 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 1103 may exist independently and be connected to the processor 1101 through the communication bus 1102, or may be integrated with the processor 1101.
The memory 1103 is configured to store program code, and execution is controlled by the processor 1101 to perform the message forwarding method provided in any of the foregoing embodiments. The processor 1101 is configured to execute the program code stored in the memory 1103. The program code may include one or more software modules. Any node in the communication network in the embodiments provided in fig. 1 to 5 may implement its functions through one or more software modules in the program code in the processor 1101 and the memory 1103. The one or more software modules may be the software modules provided in any of the embodiments of fig. 9 and fig. 10.
The communication interface 1104 may be any transceiver-type device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
In a particular implementation, as one embodiment, a network device may include multiple processors, such as processor 1101 and processor 1105 shown in FIG. 11. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The above embodiments may be implemented entirely or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be entirely or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced entirely or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing embodiments are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (21)

1. A message forwarding method, applied to a base station service gateway (CSG) in a communication network, wherein the communication network further comprises a plurality of data center routing nodes, a plurality of virtual routing forwarding (VRF) instances, and a plurality of virtual machines (VMs) for executing a target virtual network function (VNF), and each VRF is connected to one or more of the plurality of VMs; each VM of the plurality of VMs is configured with a virtual private network segment identifier (VPNSID), and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
The method comprises the following steps:
the CSG receives a message, wherein the message carries the identifier of the target VNF;
the CSG selects one proxy VPNSID from a plurality of proxy VPNSIDs included in a first forwarding table as a target proxy VPNSID, wherein the first forwarding table is a forwarding table corresponding to the identifier of the target VNF, the plurality of proxy VPNSIDs included in the first forwarding table are proxy VPNSIDs configured corresponding to the VPNSIDs of the plurality of VMs, and the VPNSID of each VM is correspondingly configured with a plurality of proxy VPNSIDs; and
the CSG forwards the message with the target proxy VPNSID as a destination address, so as to forward the message to the data center routing node indicated by the target proxy VPNSID and to instruct the data center routing node indicated by the target proxy VPNSID to forward the message according to a plurality of VPNSIDs included in a second forwarding table, wherein the plurality of VPNSIDs included in the second forwarding table are the VPNSIDs of the VMs corresponding to the target proxy VPNSID.
2. The method of claim 1, wherein the method further comprises:
the CSG acquires proxy VPNSIDs corresponding to the VPNSIDs of the VMs, and adds the acquired proxy VPNSIDs to the first forwarding table.
3. The method of claim 2, wherein the CSG acquiring the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs comprises:
the CSG receives a first notification message sent by any VRF of the plurality of VRFs, wherein the first notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the any VRF.
4. The method of claim 2, wherein the communication network further comprises a route reflector (RR), and the CSG acquiring the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs comprises:
the CSG receives a second notification message sent by the RR, wherein the second notification message carries the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs.
5. A message forwarding method, applied to any one of a plurality of data center routing nodes in a communication network, wherein the communication network further comprises a CSG, a plurality of VRFs, and a plurality of VMs for executing a target VNF, and each VRF is connected to one or more of the plurality of VMs; each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
The method comprises the following steps:
the any data center routing node receives a message sent by the CSG, wherein the message carries a target proxy VPNSID;
when the target proxy VPNSID carried in the message is the proxy VPNSID of the any data center routing node, the any data center routing node selects one VPNSID from the VPNSIDs of a plurality of VMs included in a second forwarding table, wherein the plurality of VPNSIDs included in the second forwarding table are the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the any data center routing node, and the VPNSID of each VM is correspondingly configured with a plurality of proxy VPNSIDs; and
the any data center routing node forwards the message with the selected VPNSID as a destination address.
6. The method of claim 5, wherein the method further comprises:
and the any data center routing node acquires the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the any data center routing node, and adds the acquired VPNSIDs of the VMs to the second forwarding table.
7. The method of claim 6, wherein the any data center routing node acquiring the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the any data center routing node comprises:
the any data center routing node receives a third notification message sent by any VRF of the plurality of VRFs, wherein the third notification message carries a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM connected to the any VRF; and
the any data center routing node acquires, from the third notification message according to the proxy VPNSID of the any data center routing node, the VPNSIDs of the VMs corresponding to the proxy VPNSID of the any data center routing node.
8. The method of claim 6, wherein the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of the any data center routing node are configured on the any data center routing node by an administrator.
9. A message forwarding method, applied to any one of a plurality of VRFs in a communication network, wherein the communication network further comprises a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, and each VRF of the plurality of VRFs is connected to one or more of the plurality of VMs; each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
The method comprises the following steps:
the any VRF acquires a plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the any VRF; and
the any VRF publishes a first notification message to the CSG, wherein the first notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the any VRF, so that when forwarding a message carrying the identifier of the target VNF, the CSG selects one proxy VPNSID from the plurality of proxy VPNSIDs corresponding to the VPNSIDs of the VMs connected to the any VRF as a target proxy VPNSID and forwards the message with the target proxy VPNSID as a destination address.
10. The method of claim 9, wherein after the any VRF acquires the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the any VRF, the method further comprises:
the any VRF publishes a third notification message to the plurality of data center routing nodes, wherein the third notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to the any VRF.
11. A message forwarding method, applied to an RR in a communication network, wherein the communication network further comprises a CSG, a plurality of data center routing nodes, a plurality of VRFs, and a plurality of VMs for executing a target VNF, and each VRF is connected to one or more of the plurality of VMs; each VM of the plurality of VMs is configured with a VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
The method comprises the following steps:
the RR acquires the VPNSID of each VM in a plurality of VMs connected by any VRF in the plurality of VRFs;
for the obtained VPNSID of any VM, the RR determines an agent VPNSID corresponding to the VPNSID of the any VM based on a correspondence between the locally stored VPNSID and the agent VPNSID, so as to obtain a plurality of agent VPNSIDs corresponding to the VPNSID of each of a plurality of VMs connected to the any VRF;
the RR sends a second notification message to the CSG, wherein the second notification message carries the plurality of proxy VPNSIDs corresponding to the VPNSID of each VM among the plurality of VMs connected to each VRF of the plurality of VRFs, so that when forwarding a message carrying the identifier of the target VNF, the CSG selects one proxy VPNSID from the plurality of proxy VPNSIDs corresponding to the VPNSIDs of the VMs connected to the any VRF as a target proxy VPNSID and forwards the message with the target proxy VPNSID as a destination address.
12. The method of claim 11, wherein the correspondence between the VPNSIDs and the proxy VPNSIDs is configured on the RR by an administrator.
13. A CSG in a communication network, wherein the communication network further comprises a plurality of data center routing nodes, a plurality of virtual routing forwarding (VRF) instances, and a plurality of VMs for executing a target virtual network function (VNF), each of the plurality of VRFs has one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID;
Wherein the CSG includes a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of claims 1-4.
14. A data center routing node in a communication network, wherein the communication network comprises a plurality of data center routing nodes, a CSG, a plurality of VRFs, and a plurality of VMs for executing a target VNF, each VRF has one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID;
wherein any one of the plurality of data center routing nodes comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of claims 5-8.
15. A VRF in a communication network, wherein the communication network comprises a plurality of VRFs, a CSG, a plurality of data center routing nodes, and a plurality of VMs for executing a target VNF, each of the plurality of VRFs has one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs is configured with one VPNSID, and each data center routing node of the plurality of data center routing nodes is configured with one proxy VPNSID;
Wherein any one of the plurality of VRFs comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of claims 9-10.
16. An RR in a communication network, the communication network further comprising a CSG, a plurality of data center routing nodes, a plurality of VRFs, a plurality of VMs for executing target VNFs, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
wherein the RR comprises a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory to perform the method of any one of claims 11-12.
17. A chip disposed in a CSG in a communication network, the communication network further comprising a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs, and a plurality of VMs for executing a target virtual network function VNF, each of the plurality of VRFs having one or more of the plurality of VMs connected thereto, each of the plurality of VMs configured with a VPNSID, and each of the plurality of data center routing nodes configured with a proxy VPNSID;
the chip is characterized by comprising a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any of claims 1-4.
18. A chip disposed in any one of a plurality of data center routing nodes included in a communication network, the communication network further including a CSG, a plurality of VRFs, a plurality of VMs for executing target VNFs, one or more of the plurality of VMs connected to each VRF, each VM of the plurality of VMs configured with a VPNSID, each data center routing node of the plurality of data center routing nodes configured with a proxy VPNSID;
the chip is characterized by comprising a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any of claims 5-8.
19. A chip disposed in any one of a plurality of VRFs included in a communication network, the communication network further including a CSG, a plurality of data center routing nodes, a plurality of VMs for executing target VNFs, one or more of the plurality of VMs being connected to each of the plurality of VRFs, each of the plurality of VMs being configured with a VPNSID, each of the plurality of data center routing nodes being configured with a proxy VPNSID;
the chip is characterized by comprising a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any of claims 9-10.
20. A chip disposed in an RR of a communication network, the communication network further comprising a CSG, a plurality of data center routing nodes, a plurality of VRFs, a plurality of VMs for executing target VNFs, each VRF having one or more of the plurality of VMs connected thereto, each VM of the plurality of VMs configured with one VPNSID, each data center routing node of the plurality of data center routing nodes configured with one proxy VPNSID;
the chip is characterized by comprising a processor and an interface circuit;
the interface circuit is configured to receive instructions and transmit the instructions to the processor;
the processor is configured to perform the method of any of claims 11-12.
21. A message forwarding system, characterized by comprising a CSG, a plurality of data center routing nodes, a plurality of virtual routing forwarding VRFs and a plurality of VMs for executing a target virtual network function VNF, wherein one or more of the plurality of VMs are connected to each of the plurality of VRFs, each VM of the plurality of VMs is configured with a Virtual Private Network Segment Identifier (VPNSID), and each data center routing node of the plurality of data center routing nodes is configured with a proxy VPNSID;
the CSG is configured to obtain the proxy VPNSIDs corresponding to the VPNSIDs of the plurality of VMs, and add the obtained proxy VPNSIDs to a first forwarding table;
any one of the plurality of data center routing nodes is configured to obtain the VPNSIDs of the plurality of VMs corresponding to the proxy VPNSID of that data center routing node, and add the obtained VPNSIDs to a second forwarding table.
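Claim 21's two tables compose into two-stage load sharing: the CSG spreads flows over data center routing nodes using the first forwarding table, and each selected node spreads its share over VMs using its second forwarding table, so the CSG only needs to choose among nodes while each node completes the finer-grained sharing across its VMs. The end-to-end sketch below uses illustrative identifiers and assumes a per-flow hash at both stages; neither the table contents nor the hash policy are taken from the patent.

```python
import zlib

# First forwarding table on the CSG: per-VRF lists of proxy VPNSIDs
# (all identifiers illustrative).
first_forwarding_table = {"vrf-red": ["A1::1", "A2::1"]}

# Second forwarding tables, keyed by the owning node's proxy VPNSID:
# the VM VPNSIDs each data center routing node load-shares across.
second_forwarding_tables = {
    "A1::1": ["A1::B100", "A1::B101"],
    "A2::1": ["A2::B200", "A2::B201", "A2::B202"],
}

def pick(candidates: list[str], key: bytes) -> str:
    """Stable per-flow choice among candidates via a CRC32 hash."""
    return candidates[zlib.crc32(key) % len(candidates)]

def forward(vrf_id: str, flow_key: bytes) -> tuple[str, str]:
    # Stage 1 (CSG): select the target proxy VPNSID, i.e. which data
    # center routing node receives the flow.
    proxy = pick(first_forwarding_table[vrf_id], flow_key)
    # Stage 2 (that node): select the serving VM's VPNSID. A stage tag
    # is mixed into the key so the two hashes are not correlated.
    vm = pick(second_forwarding_tables[proxy], flow_key + b"|vm")
    return proxy, vm

print(forward("vrf-red", b"10.0.0.7->10.0.1.9:80"))
```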
CN201911046986.5A 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip Active CN112751766B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911046986.5A CN112751766B (en) 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip
PCT/CN2020/124463 WO2021083228A1 (en) 2019-10-30 2020-10-28 Message forwarding method, device, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046986.5A CN112751766B (en) 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip

Publications (2)

Publication Number Publication Date
CN112751766A CN112751766A (en) 2021-05-04
CN112751766B true CN112751766B (en) 2023-07-11

Family

ID=75640813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046986.5A Active CN112751766B (en) 2019-10-30 2019-10-30 Message forwarding method and system, related equipment and chip

Country Status (2)

Country Link
CN (1) CN112751766B (en)
WO (1) WO2021083228A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334045B * 2022-08-12 2023-12-19 Maipu Communication Technology Co., Ltd. Message forwarding method, device, gateway equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9813488B2 (en) * 2014-06-25 2017-11-07 Comcast Cable Communications, Llc Detecting virtual private network usage
CN106034077B * 2015-03-18 2019-06-28 Huawei Technologies Co., Ltd. Dynamic route configuration method, apparatus and system
CN106487695B * 2015-08-25 2019-10-01 Huawei Technologies Co., Ltd. Data transmission method, virtual network managing device and data transmission system
US9729441B2 * 2015-10-09 2017-08-08 Futurewei Technologies, Inc. Service function bundling for service function chains
CN106101023B * 2016-05-24 2019-06-28 Huawei Technologies Co., Ltd. VPLS message processing method and equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107547339A * 2017-06-14 2018-01-05 New H3C Technologies Co., Ltd. Gateway media access control (MAC) address feedback method and device
CN109873760A * 2017-12-01 2019-06-11 Huawei Technologies Co., Ltd. Method and apparatus for processing a route, and method and apparatus for data transmission
CN108718278A * 2018-04-13 2018-10-30 New H3C Technologies Co., Ltd. Message transmission method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improving BGP to Achieve Load Balancing in Large and Complex IP Networks; Xu Jianfeng et al.; Telecommunications Science; 2004-10-15 (No. 10); full text *
A High-Performance Load Balancing Mechanism for Network Function Virtualization; Wang Yuwei et al.; Journal of Computer Research and Development; 2018-04-15 (No. 04); full text *

Also Published As

Publication number Publication date
WO2021083228A1 (en) 2021-05-06
CN112751766A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
JP7417825B2 (en) slice-based routing
US9531643B2 (en) Extending virtual station interface discovery protocol (VDP) and VDP-like protocols for dual-homed deployments in data center environments
US9655232B2 (en) Spanning tree protocol (STP) optimization techniques
US10237179B2 (en) Systems and methods of inter data center out-bound traffic management
US20210289436A1 (en) Data Processing Method, Controller, and Forwarding Device
EP3399703B1 (en) Method for implementing load balancing, apparatus, and network system
CN111510378A (en) EVPN message processing method, device and system
CN109660442B (en) Method and device for multicast replication in Overlay network
EP3197107A1 (en) Message transmission method and apparatus
IL230406A (en) Method and cloud computing system for implementing a 3g packet core in a cloud computer with openflow data and control planes
US11663052B2 (en) Adaptive application assignment to distributed cloud resources
EP4037265A1 (en) Packet forwarding method, apparatus, storage medium, and system
CN113992569B (en) Multipath service convergence method, device and storage medium in SDN network
US20150146571A1 (en) Method, device and system for controlling network path
US20160205033A1 (en) Pool element status information synchronization method, pool register, and pool element
EP3989512A1 (en) Method for controlling traffic forwarding, device, and system
WO2022048418A1 (en) Method, device and system for forwarding message
CN112751766B (en) Message forwarding method and system, related equipment and chip
WO2019240158A1 (en) Communication system and communication method
WO2022166465A1 (en) Message processing method and related apparatus
WO2022012287A1 (en) Route optimization method, physical network device and computer-readable storage medium
EP4040745A1 (en) Service packet forwarding method, device, and computer storage medium
CN116074236A (en) Message forwarding method and device
CN113595915A (en) Method for forwarding message and related equipment
JP7273130B2 (en) Communication method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant