CN117675559A - Multi-data center cross-domain intercommunication multi-cloud service arrangement method, device and equipment - Google Patents


Info

Publication number
CN117675559A
Authority
CN
China
Prior art keywords
vpc
network
transit
data centers
tenant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311504907.7A
Other languages
Chinese (zh)
Inventor
李伟刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Technologies Co Ltd
Original Assignee
New H3C Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Technologies Co Ltd filed Critical New H3C Technologies Co Ltd
Priority to CN202311504907.7A priority Critical patent/CN117675559A/en
Publication of CN117675559A publication Critical patent/CN117675559A/en
Pending legal-status Critical Current

Abstract

The invention provides a multi-data center cross-domain intercommunication multi-cloud service arrangement method, device and equipment, which solve the technical problem of low efficiency in orchestrating multi-cloud services across data centers. According to the invention, a transit router is introduced as an intermediary virtual router that connects to the virtual routers of each site's tenants, and the VPC interworking service is uniformly orchestrated by a super controller, so that layer-3 mutual access of cross-data-center services among multiple DCs is realized through the transit router. Layer-2 interworking of tenant Networks across data centers is realized under the control of the super controller by introducing a Transit Network that connects the tenant networks requiring L2 interworking in different DCs.

Description

Multi-data center cross-domain intercommunication multi-cloud service arrangement method, device and equipment
Technical Field
The present invention relates to the field of communications and cloud computing technologies, and in particular, to a method, an apparatus, and a device for arranging multi-data center cross-domain intercommunication multi-cloud service.
Background
In a cloud data center, various resources are pooled through virtualization, decoupling services from the physical network; SDN technology then provides on-demand self-service and automated deployment of the service network, supporting multi-tenancy, elastic scaling, and rapid deployment.
Virtual private cloud (Virtual Private Cloud, VPC) is a technology for creating isolated private networks in public cloud environments. A VPC allows users to create a virtually isolated network environment on the cloud service provider's infrastructure, in which they can deploy and manage various cloud resources such as virtual machines, storage and databases. The VPC provides a series of network configuration options, including a custom IP address range, subnet partitioning, routing tables and access control policies, so the user can flexibly configure the network according to their own needs. Through a VPC, users can build sandboxed environments, isolate applications and data, and enable secure communications. A VPC can also be connected to the user's on-premises network via dedicated VPN connections or direct connect links, forming a hybrid cloud environment between public cloud and private data centers and thus providing a flexible and secure cloud computing solution.
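As an illustrative sketch only (not part of the patented system), the VPC configuration options just described, a custom IP range, subnet partitioning and a route table, can be modeled minimally as follows; all class and field names are assumptions for illustration:

```python
import ipaddress
from dataclasses import dataclass, field

@dataclass
class Vpc:
    """Minimal illustrative model of a tenant VPC; names are hypothetical."""
    name: str
    cidr: str                                    # custom IP address range, e.g. "10.0.0.0/16"
    subnets: dict = field(default_factory=dict)  # subnet name -> CIDR
    routes: list = field(default_factory=list)   # (destination CIDR, next hop)

    def add_subnet(self, name: str, cidr: str) -> None:
        # a subnet must be carved out of the VPC's own address range
        if not ipaddress.ip_network(cidr).subnet_of(ipaddress.ip_network(self.cidr)):
            raise ValueError(f"{cidr} is not inside VPC range {self.cidr}")
        self.subnets[name] = cidr

vpc = Vpc(name="web", cidr="10.0.0.0/16")
vpc.add_subnet("frontend", "10.0.1.0/24")
vpc.add_subnet("backend", "10.0.2.0/24")
```

The containment check mirrors the isolation property of a VPC: every subnet lives inside the VPC's address range, and anything outside it is rejected.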
In a multi-data center scenario, the service requirements of cross-data center deployment, cross-data center intercommunication among different service systems, cross-data center intercommunication automatic deployment and the like need to be solved.
Customers may need to deploy across data centers (DCs): for example, a customer may allocate a separate VPC for a large website, and this VPC may span multiple data centers, so traffic inside the VPC must interwork across DCs while still being subject to routing and firewall isolation.
Customers may also divide different services into different VPCs, which may be deployed in different DCs. If those services need to interwork, the VPCs may require layer-3 (L3) interworking across DCs (interworking between VPCs is generally L3; if L2 interworking is required, the interworking virtual machines (VMs) typically need to be placed in the same VPC).
Automated deployment through SDN mainly involves two steps: first, orchestrating a cross-DC virtualized network; second, instantiating it in each DC. In current data center interconnect (Data Center Interconnect, DCI) schemes, a set of controllers is usually deployed in each DC, with each controller managing its own DC area. When service interworking is needed between DC areas, the administrators of each DC cooperate to deploy L2/L3 DCI interconnection services. This can achieve point-to-point interworking between tenant routers, but when interworking is required among multiple tenant routers or subnet segments, such administrator-coordinated deployment suffers from low efficiency, difficult management and high cost in resource data planning, service configuration, and operation and maintenance.
Disclosure of Invention
In view of the above, the invention provides a multi-data center cross-domain intercommunication multi-cloud service arrangement method, device and equipment, which are used for solving the technical problem of low cross-data center multi-cloud service arrangement efficiency.
Based on one aspect of the embodiment of the invention, the invention provides a multi-data center cross-domain intercommunication multi-cloud service arrangement method, which is applied to a super controller, wherein the super controller manages and controls software defined network SDN controllers in a plurality of data centers, and the method comprises the following steps:
the super controller creates a transmission network architecture (Transit Fabric) for multi-cloud service orchestration, issues orchestration configuration based on a transit private cloud (Transit VPC) network model, realizes cross-data-center interworking of tenant virtual private clouds (VPCs), and uniformly orchestrates multi-tenant VPC interworking services in the Transit Fabric.
Further, the method for uniformly arranging the multi-tenant VPC intercommunication service in the Transit Fabric comprises the following steps:
the super controller issues orchestration configuration to the plurality of data centers through the interfaces between it and the controllers in those data centers, creating a Transit VPC in each data center;
the super controller issues orchestration configuration to the plurality of data centers, so that the tenant VPCs with interworking requirements establish connections with the Transit VPCs;
the super controller issues route configuration to the plurality of data centers, establishes routes among the Transit VPCs of the data centers and between the tenant VPCs and the Transit VPC within each data center, and thereby controls and realizes cross-data-center service traffic interworking among the tenant VPCs.
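The three steps above can be sketched in pseudocode-style Python; every class and method name here is an illustrative assumption, not the actual controller API:

```python
class DcControllerStub:
    """Stand-in for a per-DC SDN controller's northbound interface (hypothetical)."""
    def __init__(self):
        self.vpcs, self.links = [], []
    def create_vpc(self, name): self.vpcs.append(name)
    def connect(self, vpc, transit): self.links.append((vpc, transit))

class SuperController:
    """Illustrative sketch of the three orchestration steps."""
    def __init__(self, dc_controllers):
        self.dc_controllers = dc_controllers  # DC name -> controller stub
        self.routes = []                      # (source VPC, destination VPC) entries

    def create_transit_vpcs(self):
        # Step 1: create a Transit VPC in every data center via its controller
        for name, ctrl in self.dc_controllers.items():
            ctrl.create_vpc(f"transit-{name}")

    def attach_tenant_vpc(self, dc, tenant_vpc):
        # Step 2: connect a tenant VPC needing interworking to the local Transit VPC
        self.dc_controllers[dc].connect(tenant_vpc, f"transit-{dc}")

    def install_routes(self):
        # Step 3: routes between the Transit VPCs of different data centers
        dcs = list(self.dc_controllers)
        for a in dcs:
            for b in dcs:
                if a != b:
                    self.routes.append((f"transit-{a}", f"transit-{b}"))

suc = SuperController({"dc1": DcControllerStub(), "dc2": DcControllerStub()})
suc.create_transit_vpcs()
suc.attach_tenant_vpc("dc1", "tenant-vpc-1")
suc.attach_tenant_vpc("dc2", "tenant-vpc-2")
suc.install_routes()
```

The key design point the sketch captures is that the super controller never touches devices directly: it only calls each DC controller's interface, which matches the layered model described in the text.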
Further, the method further comprises:
when different VPC services are deployed in different data centers, the service orchestration method for realizing L3-layer interworking of different tenant VPCs across data centers comprises the following steps:
configuring an east-west Transit Router in the super controller, and creating a virtual routing instance of the Transit Router, namely a Transit VRF, on the egress data center interconnect (DCI) edge device of each data center;
establishing a route association between the tenant virtual routing instance, namely the tenant VRF, and the Transit VRF on the DCI edge device of each data center, thereby realizing layer-3 interconnection of different tenant VPCs across the data centers.
Further, according to different usage scenarios and service orchestration modes, the modes of tenant VPC interworking across data centers at layer 3 (L3) include: full VPC interworking; on-demand interworking with the source network segment specified and the destination network segment unspecified; and on-demand interworking with both the source and destination network segments specified.
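The three modes can be pictured as progressively narrowing which prefix pairs are allowed to interwork. The sketch below is illustrative only; the function name and prefix values are assumptions, and `None` stands for "not specified":

```python
def interworking_routes(vpc_a_segments, vpc_b_segments, src=None, dst=None):
    """Return (source, destination) prefix pairs allowed to interwork.

    Illustrative mapping to the three modes in the text:
      full interworking:               src=None, dst=None
      source specified, dest not:      src=[...], dst=None
      source and destination specified: src=[...], dst=[...]
    """
    src = src if src is not None else vpc_a_segments  # unspecified -> all segments
    dst = dst if dst is not None else vpc_b_segments
    return [(s, d) for s in src for d in dst]

a = ["10.1.1.0/24", "10.1.2.0/24"]   # made-up tenant VPC-A segments
b = ["10.2.1.0/24", "10.2.2.0/24"]   # made-up tenant VPC-B segments
```

Full interworking over these segments yields four prefix pairs; pinning the source halves that, and pinning both ends leaves a single pair, which is exactly the on-demand narrowing the modes describe.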
Further, the method further comprises:
when the same VPC service is deployed in different data centers, the service orchestration method for realizing L2-layer interworking of the same tenant VPC across data centers comprises the following steps:
creating a Transit Network in the Transit Fabric on the super controller, and associating the Transit Network with the tenant Networks requiring layer-2 interworking;
the method comprises the steps that a local private network virtual switching instance VSI and an interworking VSI configuration are issued on export DCI edge equipment of each data center, and local VXLAN on the DCI edge equipment is mapped to the same VXLAN;
after the message encapsulated with the local private Network VSI and sent by the same tenant VPC of each data center reaches the DCI edge equipment, the local private Network VSI mapping in the message is converted into the interworking VSI, the message is sent to the opposite-end DCI edge equipment through the data center interconnection VXLAN tunnel in the Transit Network, and the opposite-end DCI edge equipment converts the interworking VSI mapping in the message into the opposite-end private Network VSI, so that the L2 layer interworking of the same tenant VPC and Network segment in each data is realized.
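The VSI mapping step amounts to a VNI translation at each DCI edge device: local VNI to shared interworking VNI at the ingress edge, and back to the peer's local VNI at the egress edge. A minimal sketch, with all VNI numbers being made-up examples:

```python
# Each DC maps its local private-network VXLAN ID to a shared interworking
# VXLAN ID; the peer DC maps it back. Numbers are illustrative only.
LOCAL_TO_INTERWORK = {"dc1": {10010: 20000}, "dc2": {10020: 20000}}
INTERWORK_TO_LOCAL = {
    dc: {iw: local for local, iw in m.items()}
    for dc, m in LOCAL_TO_INTERWORK.items()
}

def forward_l2_frame(src_dc, dst_dc, local_vni):
    # local private-network VSI -> interworking VSI at the ingress DCI edge
    interwork_vni = LOCAL_TO_INTERWORK[src_dc][local_vni]
    # carried across the DCI VXLAN tunnel, then mapped back at the egress edge
    return INTERWORK_TO_LOCAL[dst_dc][interwork_vni]
```

Because both sides map onto the same interworking VNI (20000 here), the two data centers need not agree on each other's local VNI allocation, which is what makes the per-DC configurations independent.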
Based on another aspect of the embodiments of the present invention, the present invention further provides a multi-data center cross-domain interworking multi-cloud service orchestration device. The device is applied to a super controller, and the super controller manages and controls software-defined network (SDN) controllers in a plurality of data centers. The device creates a transmission network architecture (Transit Fabric) for multi-cloud service orchestration, issues orchestration configuration based on a transit private cloud (Transit VPC) network model, realizes cross-data-center interworking of tenant virtual private clouds (VPCs), and uniformly orchestrates multi-tenant VPC interworking services in the Transit Fabric.
The device provided by the invention can be realized in a mode of software, hardware or a combination of software and hardware. When implemented as a software module, the program code of the software module is loaded into a storage medium of the device, and the program code in the storage medium is read and executed by a processor.
Further, the apparatus comprises:
the transit VPC creation module is used for issuing orchestration configuration to the plurality of data centers through interfaces with the controllers in those data centers, creating a Transit VPC in each data center;
the VPC association module is used for issuing orchestration configuration to the plurality of data centers through interfaces with the controllers in those data centers, so that the tenant VPCs with interworking requirements establish connections with the Transit VPCs; and for issuing route configuration to the data centers, establishing routes among the Transit VPCs and between the tenant VPCs and the Transit VPC within each data center, thereby controlling and realizing cross-data-center service traffic interworking among the tenant VPCs.
Further, the apparatus comprises:
the three-layer interworking orchestration module is used for orchestrating interworking services of different tenant VPCs across data centers at layer 3 when different VPC services are deployed in different data centers; it configures an east-west Transit Router and creates a virtual routing instance of the Transit Router, namely a Transit VRF, on the egress data center interconnect (DCI) edge device of each data center; and establishes a route association between the tenant virtual routing instance (tenant VRF) and the Transit VRF on the DCI edge device of each data center, thereby realizing layer-3 interconnection of different tenant VPCs across the data centers;
the two-layer interworking orchestration module is used for orchestrating L2-layer interworking of the same tenant VPC across data centers when the same VPC service is deployed in different data centers; it creates a Transit Network in the Transit Fabric on the super controller and associates the Transit Network with the tenant Networks requiring layer-2 interworking; issues local private-network virtual switching instance (VSI) and interworking VSI configurations on the egress DCI edge device of each data center, mapping the local VXLANs on the DCI edge devices to the same interworking VXLAN; and, after a packet encapsulated with the local private-network VSI and sent by the same tenant VPC in a data center reaches the DCI edge device, maps the local private-network VSI in the packet to the interworking VSI, sends the packet to the peer DCI edge device through the data center interconnect VXLAN tunnel in the Transit Network, where the peer DCI edge device maps the interworking VSI back to its own private-network VSI, thereby realizing L2-layer interworking of the same tenant VPC and network segment across the data centers.
Based on another aspect of the embodiment of the present invention, the present invention further provides an electronic device, where the electronic device includes a processor, a communication interface, a storage medium, and a communication bus, where the processor, the communication interface, and the storage medium complete communication with each other through the communication bus;
a storage medium storing a computer program;
and the processor is used for implementing the multi-data center cross-domain intercommunication multi-cloud service arrangement method provided by the invention when executing the computer program stored on the storage medium.
In order to realize cross-data-center interworking between tenant VPCs (virtual private clouds), the invention introduces a transit router (Transit Router), and the VPC interworking service is uniformly orchestrated by the super controller. The Transit Router serves as an intermediary virtual router connected to the virtual routers of each site's tenants: the Transit Router is instantiated on the data center interconnect edge devices of multiple DC sites, the tenant Router in each DC's tenant VPC is attached to the Transit Router, and automated layer-3 mutual access of cross-data-center services among the DCs is realized through the Transit Router.
In order to realize layer-2 interworking of tenant Networks across data centers, the invention introduces a Transit Network. The Transit Network connects the tenant Networks requiring L2 interworking in different DCs, realizing automated layer-2 mutual access of cross-data-center services.
According to the invention, a super controller is deployed at the orchestration layer; through the super controller, cross-data-center network service orchestration is realized, enabling automated deployment and provisioning of service mutual access across multiple data centers.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a hierarchical structure of a network supporting multi-data center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a transmission network architecture supporting multi-data center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of L3 layer interconnection logic of multi-data center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention;
fig. 4 is a schematic diagram of L2 layer interconnection logic of multi-data center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing the method for arranging multi-data center cross-domain intercommunication multi-cloud service according to an embodiment of the present invention.
Detailed Description
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the embodiments of the invention. As used in the embodiments of the invention, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one item of information, entity or step from another, not to describe a particular sequence or order. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the embodiments. Furthermore, the word "if" as used may be interpreted as "when" or "upon" or "in response to determining". The term "and/or" in the present invention merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, A and B together, or B alone, where A and B may be singular or plural. Also, in the description of the present invention, unless otherwise indicated, "a plurality" means two or more. "At least one of" and similar expressions mean any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be single or plural.
Fig. 1 is a schematic diagram of a hierarchical structure of a network supporting multi-data center cross-domain interworking multi-cloud service orchestration according to one embodiment of the present invention. The invention provides a business arrangement network which is divided into four layers, namely an arrangement layer, a control layer, a network layer and a forwarding layer.
Orchestration layer: the top layer is the orchestration layer, in which a Super Controller is deployed; cross-DC service orchestration and operation and maintenance are realized through the Super Controller. Southbound, the super controller provides interfaces for interfacing with the SDN controllers in different data centers, shielding the differences among DC controllers from different vendors.
The super controller is a centralized controller for managing and controlling the whole SDN network. It is the top-level controller in a cross-DC SDN network architecture, responsible for coordinating and managing the work of multiple DC controllers and providing high-level network management and control functions. The super controller can centrally manage and coordinate multiple DC controllers, organizing them into a unified network control plane. It is responsible for distributing and coordinating network configuration, policies, and routing information to ensure overall network operation and performance optimization. The super controller can analyze and process network information and events from different controllers, making decisions and issuing instructions according to the requirements and policies of the network. It may manage across multiple SDN domains, organizing them into a centralized network control plane, which allows a network administrator to uniformly manage and monitor SDN networks scattered across different geographic locations or cloud environments through one interface. Through the super controller, the SDN network achieves centralized, flexible and extensible management and control: it provides advanced network programming, network topology management, policy management, security management, and cross-domain management functions to support automated, intelligent and efficient operation of the SDN network.
In the embodiment of the invention, the super controller (deployed in the host machine as a component, also called the SUC component) realizes cross-data-center interworking of tenant virtual private clouds (VPCs) through the transit private cloud (Transit VPC) network model. The super controller manages the SDN controllers in each data center (deployed in host machines as components, also called DC control components), and issues Transit VPC services by calling the northbound interfaces of the DC control components.
The Transit VPC network model is an architecture model that enables centralized network management and connection of multiple VPCs in a cloud environment. It enables connection and traffic management to other VPCs by deploying network virtual devices (such as virtual routers or virtual firewalls) in the central VPC. In conventional multi-VPC architectures, each VPC typically needs to establish a direct connection with the other VPCs, which can lead to network management and configuration complications, as the number of VPCs increases, so does the complexity of network interconnections. The Transit VPC model aggregates the interconnections of all VPCs into a central VPC, forming a centralized network architecture.
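The scaling benefit of the hub model is easy to quantify: a full mesh of n VPCs needs n(n-1)/2 pairwise connections, while a Transit VPC hub needs only n spoke connections. A back-of-the-envelope sketch (illustrative, not part of the patent):

```python
def full_mesh_links(n):
    # every VPC peers directly with every other VPC
    return n * (n - 1) // 2

def transit_hub_links(n):
    # every VPC connects once to the central Transit VPC
    return n

# interconnect count as the VPC count grows
for n in (3, 10, 50):
    print(f"{n} VPCs: full mesh {full_mesh_links(n)}, transit hub {transit_hub_links(n)}")
```

At 50 VPCs the full mesh already needs 1225 interconnections against 50 for the hub, which is the "complexity of network interconnections" growth the paragraph refers to.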
Control layer: the control layer consists of multiple SDN controllers, namely DC control components, located in different data centers. The DC control components are responsible for orchestrating and issuing services within their data center sites, including configurations for tenants, virtual routers, subnets and the like. For example, the first DC control component is deployed in the first DC and is responsible for service orchestration, configuration issuance, and network control in that DC.
The DC control component interfaces with the super controller at the orchestration layer and with the VMM (Virtual Machine Manager) to complete automatic onboarding of linked compute and network services. The SDN controller (DC control component) within each data center is responsible for intra-domain network service orchestration, and the VMM is responsible for lifecycle management of the virtual machines.
Network layer: the network layer comprises a physical network and a logical network, each data center, namely the physical network inside the DC, can be networked by adopting a Spine-Leaf architecture, and a data center interconnection GateWay (Data Center Interconnect GateWay, DCI GW) device in the network layer is responsible for inter-service interworking among the DCs. DCI technology is a technology for connecting and expanding a data center in combination with SDN technology, VPN technology, optical fiber communication technology, and the like. DCI interconnection allows high-speed and reliable network connection to be established between different data centers, and data sharing, application intercommunication and resource expansion are realized. Compared with the traditional data center interconnection technology, the method can meet the requirements of high bandwidth, low delay, reliability and safety among data centers.
The logic network is a virtual network which is connected with the virtual machine and is constructed on the basis of service according to the needs through network virtualization and virtual extensible local area network (Virtual eXtensible Local Area Network, VXLAN) technology, and the automatic distribution of the network is realized through a model defined by a super controller and a controller in a data center, so that the mapping from the logic network to the physical network is completed.
Forwarding layer: in the forwarding layer, the invention adopts VXLAN technology to create VXLAN tunnels inside each DC and also between the DC edge devices (DCI-ED), forming three-segment VXLAN forwarding. As in Fig. 1, there is a VXLAN tunnel between a Leaf node inside the data center and the data center interconnect edge device DCI-ED, and another VXLAN tunnel established between the two DCI-EDs. On the control plane, BGP EVPN is adopted as the control plane of VXLAN; BGP EVPN route re-origination is configured on the DCI GW, so that a VXLAN packet received from one data center is first decapsulated and then re-encapsulated before being sent to the other data center. This realizes three-segment VXLAN forwarding of data packets across data centers and ensures communication between virtual machines (VMs) in different data centers.
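The three-segment forwarding path described above can be sketched as a chain of tunnel segments, with decapsulation and re-encapsulation at each DCI edge device. Device names below are made-up examples:

```python
# Illustrative three-segment VXLAN path for a packet travelling from a VM
# behind a Leaf in DC1 to a VM behind a Leaf in DC2. Each boundary between
# segments is where a DCI edge device decapsulates and re-encapsulates.
SEGMENTS = [
    ("leaf-dc1", "dci-ed-dc1"),    # segment 1: VXLAN tunnel inside DC1
    ("dci-ed-dc1", "dci-ed-dc2"),  # segment 2: VXLAN tunnel between DCI edges
    ("dci-ed-dc2", "leaf-dc2"),    # segment 3: VXLAN tunnel inside DC2
]

def hops(segments):
    """Flatten contiguous tunnel segments into the ordered endpoint list."""
    path = [segments[0][0]]
    for src, dst in segments:
        assert src == path[-1], "segments must be contiguous"
        path.append(dst)
    return path
```

The contiguity assertion encodes the constraint that each segment must terminate exactly where the next one begins, i.e. on a DCI edge device that performs the re-encapsulation.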
Fig. 2 is a schematic diagram of a transmission network architecture supporting multi-data center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention. A transport network architecture (Transit Fabric) is a centralized network architecture that is typically implemented using SDN technology and Virtual Private Networks (VPN) to programmatically configure and manage networks, providing flexibility and scalability for use in large-scale enterprise networks or inter-networking across geographic locations.
In the embodiment of the invention, in order to realize multi-data center cross-domain interworking multi-cloud service orchestration, a Transit Fabric for multi-cloud service orchestration needs to be created on the Super Controller, and the data center interconnect edge device DCI ED in each site (usually deployed on a gateway device, equivalent to the DCI GW) is managed within the Transit Fabric, so that cross-site traffic mutual access is realized through the DCI EDs. The Transit Fabric is a logical fabric formed by the group of cross-site DCI devices. The Transit VPC is realized on top of the Transit Fabric architecture, and its configuration is issued through the interfaces between the Super Controller and the SDN controllers in each data center, and from those SDN controllers down to the DCI edge devices in each data center.
To achieve cross-DC interworking between tenant VPCs, the present invention also introduces a transit router (Transit Router); the VPC interworking service is uniformly orchestrated through the super controller. The Transit Router is a virtual router with transit interconnection capability, serving as an intermediary virtual router connected to the virtual routers of each site's tenants. The Transit Router is instantiated on the DCI ED edge devices of multiple DC sites, the tenant Router in each DC's tenant VPC is attached to the Transit Router, and layer-3 mutual access of cross-data-center multi-cloud services is realized among the DCs through the Transit Router. The invention realizes cross-site L3 interconnection of multi-tenant VPCs by instantiating the Transit Router in the Transit Fabric.
Through the Super Controller, the invention uniformly orchestrates the multi-tenant VPC interworking services in the created Transit Fabric, realizing cross-data-center interworking of tenant VPCs. The implementation steps are as follows:
s1, a super controller issues arrangement configuration to a plurality of data centers through interfaces between the super controller and controllers in the plurality of data centers, and a Transit VPC is respectively established in the plurality of data centers;
transit VPC is a special VPC that serves to connect different VPCs and data centers. Each Transit VPC has a control connection relation with the Super Controller, and the routing between the VPCs is realized through the Super Controller.
S2, the super controller issues arrangement configuration to the plurality of data centers through interfaces between the super controller and controllers in the plurality of data centers, so that a tenant VPC with intercommunication requirements in the plurality of data centers establishes connection with a Transit VPC.
S3, the super controller issues route configuration to the plurality of data centers, establishes routes among the Transit VPCs of the data centers and between the tenant VPCs and the Transit VPC within each data center, and thereby controls and realizes cross-data-center service traffic interworking among the tenant VPCs;
and creating a Transit Router in the Transit VPC, and establishing a routing relationship between the tenant Router and the Transit Router in the data center. The Super Controller is responsible for monitoring and deciding the forwarding paths of the traffic and issuing the routing information to the relevant Transit VPC and tenant VPC.
In addition, a security policy can be set through the Super Controller to protect traffic security between the inter-DC traffic and the tenant VPC. Security control measures such as access control policies, traffic auditing, intrusion detection, etc. may be configured to secure communications across DCs.
Through the above steps, cross-data-center interworking between tenant VPCs is realized by the Super Controller component and the Transit VPC model. The Super Controller is responsible for route control and traffic forwarding decisions, and provides security management functions to ensure smooth and secure traffic between tenant VPCs. The method allows a tenant VPC to extend across a plurality of data centers while providing advanced network functions and flexibility.
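The S1-S3 flow above can be sketched as follows. This is a minimal illustration only: the class and method names (`SuperController`, `DCController`, `create_vpc`, …) are hypothetical and do not correspond to any actual product API.

```python
class DCController:
    """Stands in for the SDN controller inside one data center (S1-S3 target)."""
    def __init__(self, site):
        self.site = site
        self.vpcs = {}      # vpc name -> set of connected peer VPC names
        self.routes = []    # (source, destination) route entries

    def create_vpc(self, name):
        self.vpcs.setdefault(name, set())

    def connect(self, tenant_vpc, transit_vpc):
        self.vpcs[tenant_vpc].add(transit_vpc)
        self.vpcs[transit_vpc].add(tenant_vpc)

    def add_route(self, src, dst):
        self.routes.append((src, dst))


class SuperController:
    """Drives the per-DC controllers over their northbound interfaces."""
    def __init__(self, controllers):
        self.controllers = controllers

    def orchestrate(self, tenant_vpcs):
        # S1: create one Transit VPC in each data center.
        for c in self.controllers:
            c.create_vpc("transit-vpc")
        # S2: connect each tenant VPC needing interworking to its local Transit VPC.
        for c, tenant in zip(self.controllers, tenant_vpcs):
            c.create_vpc(tenant)
            c.connect(tenant, "transit-vpc")
        # S3: routes between Transit VPCs of different sites,
        # and tenant<->transit routes inside each site.
        for c, tenant in zip(self.controllers, tenant_vpcs):
            c.add_route(tenant, "transit-vpc")
            for other in self.controllers:
                if other is not c:
                    c.add_route("transit-vpc", f"transit-vpc@{other.site}")


dc1, dc2 = DCController("site1"), DCController("site2")
SuperController([dc1, dc2]).orchestrate(["tenant-vpc-1", "tenant-vpc-2"])
```

After orchestration, each controller holds a tenant-to-transit route plus a cross-site transit-to-transit route, mirroring the hub-and-spoke model in the text.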
[ East-west L3 multi-cloud service interworking scheme ]
Fig. 3 is a schematic diagram of the L3 interconnection logic of multi-data-center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention. Where different VPC services are deployed in different DCs, there is sometimes a need for L3 interworking across data centers. Taking tenant VPC interworking between two sites as an example, this embodiment describes how to implement L3 interworking between two tenant VPCs located at different sites.
A Transit Router is configured in the Transit Fabric created by the Super Controller, and the configured tenant Routers are interconnected through the Transit Router. The Transit Router corresponds to the Transit VPC, and a tenant Router corresponds to a tenant VPC. A Transit VRF, i.e. the virtual routing instance of the Transit Router, is created in the DCI GW, which is the egress DCI edge device of each data center (site 1 and site 2), and the same Transit VRF configuration is issued to both. A routing association between the tenant virtual routing instance, i.e. the tenant VRF, and the Transit VRF is then established on the DCI edge device of each data center (on the DCI edge device in site 1, between the tenant VRF1, the virtual routing instance of tenant Router1, and the Transit VRF; on the DCI edge device in site 2, between the tenant VRF2, the virtual routing instance of tenant Router2, and the Transit VRF), thereby realizing L3 interconnection of the different tenant VPCs in the data centers (site 1 and site 2). Transit VRF traffic between different DCs is interconnected through route advertisement via BGP EVPN.
S31, an east-west Transit Router is configured in the super controller, so that a plurality of data centers can be interconnected through the Transit Router;
The Transit Router is a centrally located router that connects all VPCs; each VPC is attached to the Transit VPC through a Virtual Interface (VIF) on the Transit VPC, with each VIF corresponding to one VPC. When traffic from one VPC needs to access another VPC, the Transit Router routes the traffic through the Transit VPC to the target VPC via the BGP protocol.
Specifically, an L3 Segment identifier (L3 Segment ID), an ingress Route Target (RT), an egress RT and the Transit Router used for the L3 interworking mapping of the multiple data centers can be configured on the Transit Router. The Transit Router is finally instantiated on the egress DCI GWs of site 1 and site 2 as a transit virtual router, realizing route interworking with the tenant virtual routers in the tenant VPCs of site 1 and site 2 respectively.
The L3 Segment ID is an identifier used in a Software Defined Network (SDN) environment to uniquely identify an L3 network or subnet. It plays a key role in logical isolation and routing control for interconnections across data centers.
Ingress RTs and egress RTs are a method of implementing route isolation and traffic control in a Virtual Private Cloud (VPC). An RT applies a routing policy to a particular network resource (e.g. a subnet, a virtual machine, etc.); it defines which external routes (learned via BGP or similar protocols) the resource can receive, and the corresponding traffic transmission paths. The ingress RT associates external routes with a particular resource so that it can receive traffic from a particular network or source. The egress RT applies a routing policy that propagates internal routing information to other network resources; it determines which routes the network resource provides (i.e. which routes other network resources may import) and specifies how these routes are exported to other networks. By using ingress and egress RTs, route control and traffic management in the VPC can be achieved, providing flexible traffic isolation and routing functions.
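The ingress/egress RT matching described above can be sketched as follows: a route is imported into a VRF only when one of the sender's egress (export) RTs intersects the receiver's ingress (import) RTs. The `Vrf` class and the RT values are illustrative assumptions, not a real controller API.

```python
class Vrf:
    """Toy VRF with BGP-EVPN-style import/export route targets."""
    def __init__(self, name, import_rts, export_rts):
        self.name = name
        self.import_rts = set(import_rts)
        self.export_rts = set(export_rts)
        self.rib = []  # routes accepted by this VRF

def advertise(src: Vrf, dst: Vrf, prefix: str):
    """Import the route only if an egress RT of src matches an ingress RT of dst."""
    if src.export_rts & dst.import_rts:
        dst.rib.append((prefix, src.name))

# Transit VRF imports what tenant VRFs export, and vice versa.
transit = Vrf("transit-vrf", import_rts={"100:1"}, export_rts={"100:2"})
tenant1 = Vrf("tenant-vrf1", import_rts={"100:2"}, export_rts={"100:1"})

advertise(tenant1, transit, "10.1.0.0/24")  # accepted: export 100:1 matches import 100:1
advertise(transit, tenant1, "10.2.0.0/24")  # accepted: export 100:2 matches import 100:2
```

This mirrors the requirement in the text that the ingress RT on one side match the egress RT on the other for routes to be correctly received and advertised.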
S32, configuration is issued so that the tenant Routers in each data center are connected to the Transit Router, realizing route interworking between the tenant Routers and the Transit Router;
According to different usage scenarios and service orchestration modes, cross-data-center L3 interworking of tenant VPCs is divided into three modes: a VPC full-interworking mode, a VPC on-demand interworking mode in which the source network segment is specified but the destination network segment is not, and a VPC on-demand interworking mode in which both the source and destination network segments are specified, which can meet user demands in different scenarios.
Mode 1, VPC full interworking (no source or destination segments specified): the Transit Router and the tenant Router import each other's RTs; the Transit Router advertises routes to the tenant Router and receives the BGP routes corresponding to the RT values under the tenant Router, while the tenant Router advertises routes to the Transit Router and receives the BGP routes corresponding to the RT values under the Transit Router.
Mode 2, VPC interworking on demand (source network segment specified but destination network segment not specified): the Transit Router does not advertise routes to the tenant Router; instead, a static route whose next hop points to the peer virtual private network instance (vpn-instance) is configured under the Transit Router to steer traffic toward the tenant Router. The tenant Router advertises routes to the Transit Router and receives the BGP routes corresponding to the RT values under the Transit Router.
Mode 3, VPC interworking on demand (source and destination segments specified): no routes are advertised in either direction; traffic forwarding is realized by configuring bidirectional static routes (with the next hop pointing to the peer's vpn-instance) under both Routers.
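The three modes reduce to a choice of route-distribution mechanism per direction, which can be tabulated as below. The action strings are illustrative descriptions, not device CLI.

```python
def interworking_config(mode: int) -> list:
    """Return the route-orchestration actions for the three L3 interworking modes."""
    if mode == 1:  # full interworking: mutual RT import, BGP routes both ways
        return ["transit->tenant: advertise BGP routes matching tenant RT",
                "tenant->transit: advertise BGP routes matching transit RT"]
    if mode == 2:  # on demand, source segment specified: static route on transit side only
        return ["transit->tenant: static route, next hop = peer vpn-instance",
                "tenant->transit: advertise BGP routes matching transit RT"]
    if mode == 3:  # on demand, source and destination specified: static routes both ways
        return ["transit->tenant: static route, next hop = peer vpn-instance",
                "tenant->transit: static route, next hop = peer vpn-instance"]
    raise ValueError("mode must be 1, 2 or 3")
```

The design intuition: the more precisely the user pins down source and destination segments, the less dynamic route advertisement is needed, until (Mode 3) forwarding is driven entirely by statically configured next hops.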
A VXLAN-DCI tunnel is established between the DCI GWs, and the tunnel adopts the VXLAN encapsulation format. Each DCI GW also establishes a VXLAN tunnel with the VTEPs in its own data center. After receiving a packet from a VXLAN tunnel or a VXLAN-DCI tunnel, the DCI GW removes the VXLAN encapsulation, re-encapsulates the packet according to the destination IP address, and forwards it to the VXLAN-DCI tunnel or the VXLAN tunnel respectively, thereby realizing interworking across data centers.
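The decapsulate-lookup-re-encapsulate step on the DCI GW can be sketched as a small function. The packet and route representations are made-up illustrations of the logic, not an actual data-plane implementation.

```python
def dci_gw_forward(packet: dict, routes: dict) -> dict:
    """Model the DCI GW: strip VXLAN encapsulation, look up the destination IP,
    and re-encapsulate onto the other tunnel type (intra-DC VXLAN <-> inter-DC VXLAN-DCI)."""
    inner = dict(packet["inner"])          # decapsulate: drop the outer VXLAN header
    next_hop = routes[inner["dst_ip"]]     # forwarding decision by destination IP
    # A packet that arrived on the intra-DC VXLAN side leaves on the VXLAN-DCI side,
    # and vice versa.
    out_type = "vxlan-dci" if packet["tunnel"] == "vxlan" else "vxlan"
    return {"tunnel": out_type, "next_hop": next_hop, "inner": inner}
```

For example, a packet arriving from a local VTEP over the VXLAN tunnel and destined for a remote DC is re-encapsulated onto the VXLAN-DCI tunnel toward the peer DCI GW.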
[ East-west L2 multi-cloud service interworking scheme ]
Fig. 4 is a schematic diagram of the L2 interconnection logic of multi-data-center cross-domain interworking multi-cloud service orchestration according to an embodiment of the present invention. Where the same VPC service is deployed in different data centers, there is sometimes a requirement for L2 interworking across DCs. When cross-DC L2 interworking services are orchestrated through the super controller, the embodiment of the invention introduces a Transit Network, and realizes automatic L2 interconnection of services across data centers by connecting the Transit Network with the tenant Networks requiring L2 interworking in different DCs.
The Transit Network is a network architecture used in cloud computing or Software Defined Network (SDN) environments. It is a centralized architecture that interconnects multiple Virtual Private Clouds (VPCs); by centrally managing the routes in the Transit Network, unified management and control of the whole network can be realized. The Transit Network can connect a plurality of VPCs to form a logically extended network, so that different cloud resources, virtual machines, containers and the like can be easily interconnected with secure, highly available data flow. The Transit Network can also provide security isolation and boundary protection, restricting data traffic and access policies across VPCs, and offers an extensible architecture that facilitates adding, deleting or changing VPCs without affecting other connected networks.
To realize cross-data-center L2 interworking of the same VPC service, a Transit Network needs to be created in the Transit Fabric on the super controller and associated with the tenant Networks requiring L2 interworking. The multiple interconnected sites need to be configured with the same mapping Segment ID, and the ingress RT of the local interconnection must match the egress RT of the peer interconnection, ensuring that the host routes and MAC entries of virtual machines can be correctly received and advertised.
The tenant Network needs to be configured with the virtual link layer network and the local VXLAN information of the interworking network in its own DC. After the tenant Network is connected to the Transit Network, the local private network VSI and interworking VSI configurations can be issued to the egress DCI edge device of the corresponding data center. Once the mapped remote VXLAN is specified, the mapping of the local VXLAN to the remote VXLAN is realized, and the local VXLANs on the DCI edge devices of different data centers are mapped to the same VXLAN.
When receiving a MAC/IP advertisement route sent by a virtual tunnel endpoint (VTEP) inside the data center, the local egress DCI device learns it into the local VXLAN, and replaces the VXLAN carried in the route with the mapped remote VXLAN before advertising the MAC/IP route to other edge devices (EDs).
When the egress DCI device of the remote DC receives the mapped MAC/IP advertisement route in the remote VXLAN, it learns the route into its own local VXLAN.
As shown in the example of Fig. 4, after a packet sent from the tenant VPC in site 1 and encapsulated with the local private network VSI reaches the DCI edge device (DCI GW & Border), the local private network VSI in the packet is mapped to the interworking VSI, and the packet is sent to the peer DCI edge device through the data center interconnect VXLAN tunnel in the Transit Network; the peer DCI edge device then maps the interworking VSI in the packet to its own private network VSI, thereby realizing L2 interworking of the same tenant VPC and network segment between site 1 and site 2. A Virtual Switch Instance (VSI) is a virtual network entity that logically emulates the functionality of a physical switch and provides isolation and communication for a Virtual Local Area Network (VLAN).
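The two-stage VNI translation in Fig. 4 can be sketched as follows: each site maps its local VXLAN segment to a shared interworking segment on its DCI edge device, so two different local segments are stitched into one L2 domain. The VNI numbers and table structure are made-up examples, not values from the source.

```python
# Hypothetical per-site mapping issued by the super controller to the DCI edge devices.
SITE_MAP = {
    "site1": {"local_vni": 10010, "interworking_vni": 20000},
    "site2": {"local_vni": 10020, "interworking_vni": 20000},
}

def cross_dc_l2(frame_vni: int, src_site: str, dst_site: str) -> int:
    """Translate local VNI -> interworking VNI (on the source DCI edge)
    -> remote local VNI (on the destination DCI edge)."""
    assert frame_vni == SITE_MAP[src_site]["local_vni"], "frame not in the mapped tenant segment"
    wire_vni = SITE_MAP[src_site]["interworking_vni"]      # rewrite at source DCI edge
    assert wire_vni == SITE_MAP[dst_site]["interworking_vni"], "sites must share the interworking segment"
    return SITE_MAP[dst_site]["local_vni"]                 # rewrite at destination DCI edge
```

Because both sites share interworking VNI 20000, a frame leaving site 1's segment 10010 is delivered into site 2's segment 10020, which is exactly the "local VSI → interworking VSI → peer local VSI" conversion described above.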
Fig. 5 is a schematic structural diagram of an electronic device for implementing a multi-data center cross-domain interworking multi-cloud service orchestration method according to an embodiment of the present invention, where the device 500 includes: a processor 510 such as a Central Processing Unit (CPU), a communication bus 520, a communication interface 540, and a memory 530. Wherein the processor 510 and the memory 530 may communicate with each other via a communication bus 520. The memory 530 stores a computer program that, when executed by the processor 510, implements the multi-data center cross-domain interworking multi-cloud service orchestration method provided by the present invention.
Memory refers to a device that stores computer programs and/or data on some storage medium; it may be volatile memory (VM) or non-volatile memory (NVM). Volatile memory is internal memory that exchanges data directly with the processor; it can be read and written at any time, is fast, and serves as the storage medium for the temporary data of the operating system and other running programs. It may be synchronous dynamic random access memory (Synchronous Dynamic Random Access Memory, SDRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), or the like. Non-volatile memory uses a persistent storage medium, has a large capacity and can store data permanently; it may be a storage class memory (Storage Class Memory, SCM), a solid state disk (SSD), NAND flash, a magnetic disk, or the like. SCM is a general term for new storage media positioned between memory and flash, a composite storage technology combining persistent storage characteristics with memory-like access characteristics; its access speed is slower than that of DRAM but faster than that of an SSD.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It should be appreciated that embodiments of the invention may be implemented or realized by computer hardware, a combination of hardware and software, or by computer instructions stored in non-transitory (or referred to as non-persistent) memory. The method may be implemented in a computer program using standard programming techniques, including a non-transitory storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose. Furthermore, the operations of the processes described in the present invention may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described herein may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, collectively executing on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. The invention also includes the computer itself when programmed according to the methods and techniques of the present invention.
The foregoing is merely exemplary of the present invention and is not intended to limit the present invention. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A multi-data center cross-domain interworking multi-cloud service orchestration method, wherein the method is applied to a super controller, and the super controller manages and controls software defined network (SDN) controllers in a plurality of data centers, the method comprising:
creating, by the super controller, a transit network architecture (Transit Fabric) for multi-cloud service orchestration, issuing orchestration configuration based on a transit virtual private cloud (Transit VPC) network model, realizing cross-data-center interworking of tenant virtual private clouds (VPCs), and uniformly orchestrating multi-tenant VPC interworking services in the Transit Fabric.
2. The method according to claim 1, wherein uniformly orchestrating the multi-tenant VPC interworking services in the Transit Fabric comprises:
the super controller issues orchestration configuration to the plurality of data centers through interfaces between the super controller and the controllers in the plurality of data centers, and creates a Transit VPC in each of the plurality of data centers;
issuing orchestration configuration to the plurality of data centers, so that the tenant VPCs with interworking requirements in the plurality of data centers establish connections with the Transit VPCs;
the super controller issues routing configuration to the plurality of data centers, establishes routes between the Transit VPCs of the data centers and routes between the tenant VPCs and the Transit VPCs within each data center, and thereby controls and realizes cross-data-center service traffic interworking of the tenant VPCs.
3. The method according to claim 1, wherein the method further comprises:
in the case where different VPC services are deployed in different data centers, the service orchestration method for realizing cross-data-center L3 interworking of different tenant VPCs comprises:
configuring an east-west Transit Router in the super controller, and creating a virtual routing instance of the Transit Router, i.e. a Transit VRF, in the egress data center interconnect (DCI) edge device of each data center;
establishing a routing association between the tenant virtual routing instance, i.e. the tenant VRF, and the Transit VRF on the DCI edge device of each data center, thereby realizing L3 interconnection of the different tenant VPCs in the data centers.
4. The method according to claim 3, wherein
according to different usage scenarios and service orchestration modes, the modes of cross-data-center L3 interworking of tenant VPCs comprise: a VPC full-interworking mode, a VPC on-demand interworking mode in which the source network segment is specified but the destination network segment is not, and a VPC on-demand interworking mode in which both the source and destination network segments are specified.
5. The method according to claim 3, wherein the method further comprises:
in the case where the same VPC service is deployed in different data centers, the service orchestration method for realizing cross-data-center L2 interworking of the same tenant VPC comprises:
creating a Transit Network in the Transit Fabric on the super controller, and associating the Transit Network with the tenant Networks requiring L2 interworking;
issuing local private network virtual switch instance (VSI) and interworking VSI configurations to the egress DCI edge device of each data center, and mapping the local VXLANs on the DCI edge devices to the same VXLAN;
after a packet sent by the same tenant VPC of each data center and encapsulated with the local private network VSI reaches the DCI edge device, mapping the local private network VSI in the packet to the interworking VSI, sending the packet to the peer DCI edge device through the data center interconnect VXLAN tunnel in the Transit Network, and mapping, by the peer DCI edge device, the interworking VSI in the packet to the peer private network VSI, thereby realizing L2 interworking of the same tenant VPC and network segment in each data center.
6. A multi-data center cross-domain interworking multi-cloud service orchestration apparatus, wherein the apparatus is applied to a super controller, the super controller manages and controls software defined network (SDN) controllers in a plurality of data centers, and the apparatus, by creating a transit network architecture (Transit Fabric) for multi-cloud service orchestration, issues orchestration configuration based on a transit virtual private cloud (Transit VPC) network model, realizes cross-data-center interworking of tenant virtual private clouds (VPCs), and uniformly orchestrates multi-tenant VPC interworking services in the Transit Fabric.
7. The apparatus of claim 6, wherein the apparatus comprises:
a Transit VPC creation module, configured to issue orchestration configuration to the plurality of data centers through interfaces with the controllers in the plurality of data centers, and to create a Transit VPC in each of the plurality of data centers;
a VPC association module, configured to issue orchestration configuration to the plurality of data centers through interfaces with the controllers in the plurality of data centers, so that the tenant VPCs with interworking requirements in the plurality of data centers establish connections with the Transit VPCs; and to issue routing configuration to the plurality of data centers, establishing routes between the Transit VPCs of the data centers and routes between the tenant VPCs and the Transit VPCs within each data center, thereby controlling and realizing cross-data-center service traffic interworking of the tenant VPCs.
8. The apparatus of claim 6, wherein the apparatus comprises:
an L3 interworking orchestration module, configured to orchestrate interworking services of different tenant VPCs across data center L3 in the case where different VPC services are deployed in different data centers, including: configuring an east-west Transit Router, and creating a virtual routing instance of the Transit Router, i.e. a Transit VRF, in the egress data center interconnect (DCI) edge device of each data center; and establishing a routing association between the tenant virtual routing instance, i.e. the tenant VRF, and the Transit VRF on the DCI edge device of each data center, thereby realizing L3 interconnection of the different tenant VPCs in the data centers;
an L2 interworking orchestration module, configured to orchestrate cross-data-center L2 interworking of the same tenant VPC in the case where the same VPC service is deployed in different data centers, including: creating a Transit Network in the Transit Fabric on the super controller, and associating the Transit Network with the tenant Networks requiring L2 interworking; issuing local private network virtual switch instance (VSI) and interworking VSI configurations to the egress DCI edge device of each data center, and mapping the local VXLANs on the DCI edge devices to the same VXLAN; and after a packet sent by the same tenant VPC of each data center and encapsulated with the local private network VSI reaches the DCI edge device, mapping the local private network VSI in the packet to the interworking VSI, sending the packet to the peer DCI edge device through the data center interconnect VXLAN tunnel in the Transit Network, and mapping, by the peer DCI edge device, the interworking VSI in the packet to the peer private network VSI, thereby realizing L2 interworking of the same tenant VPC and network segment in each data center.
9. An electronic device, comprising a processor, a communication interface, a storage medium and a communication bus, wherein the processor, the communication interface and the storage medium communicate with each other through the communication bus;
a storage medium storing a computer program;
a processor for implementing the method of any of claims 1-5 when executing a computer program stored on a storage medium.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1 to 5.
CN202311504907.7A 2023-11-13 2023-11-13 Multi-data center cross-domain intercommunication multi-cloud service arrangement method, device and equipment Pending CN117675559A (en)

Publications (1)

Publication Number Publication Date
CN117675559A 2024-03-08
