WO2024093315A1 - Management method for a multi-resource pool network, cloud management platform, and device - Google Patents

Management method for a multi-resource pool network, cloud management platform, and device

Info

Publication number
WO2024093315A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource pool
network
cloud
resource
management platform
Application number
PCT/CN2023/104303
Other languages
English (en)
French (fr)
Inventor
朱娜
姚博
田应军
申思
Original Assignee
华为云计算技术有限公司
Application filed by 华为云计算技术有限公司
Publication of WO2024093315A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0893 Assignment of logical groups to network elements
    • H04L 41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Definitions

  • the present application relates to the field of computer technology, and in particular to a management method, a cloud management platform and a device for a multi-resource pool network.
  • the hybrid multi-cloud multi-pool architecture usually contains two or more resource pools, which are used to run the business together.
  • the business requires multiple resource pools to share business data, that is, multiple resource pools need to build an interconnected network.
  • due to the natural technical isolation between multiple resource pools, for example because the resource pools come from different suppliers and use different network models and communication technologies, configuring the interconnected network between any multiple resource pools is currently very complex and inefficient, which has always been a pain point in building and managing heterogeneous systems. How to solve the complex network management problem in the hybrid multi-cloud multi-pool architecture is an issue that needs to be solved urgently.
  • the present application provides a management method, a cloud management platform and a device for a multi-resource pool network, which are used to provide users with a unified management tool for a hybrid multi-cloud multi-pool network, thereby reducing the difficulty for users of managing the hybrid multi-cloud multi-pool network.
  • the present application provides a management method for a multi-resource pool network, which is applied to a cloud management platform.
  • the cloud management platform can provide a network intercommunication service for tenants. The cloud management platform obtains the service configuration information configured by the tenant for the network intercommunication service, where the service configuration information includes one or more of the following: a network identifier (such as a segment), a terminal node (such as an endpoint) identifier, and a terminal node type. The network identifier indicates the identifier of the tenant's global network, the global network is a network composed of at least two resource pools, and a network connection can be established between any two of the at least two resource pools so as to achieve intercommunication across resource pools. The at least two resource pools may come from multiple service providers (or cloud resource providers); for example, one of the at least two resource pools comes from cloud vendor A and another comes from cloud vendor B. Each resource pool generally includes multiple computing nodes.
  • the cloud management platform provides a network intercommunication service
  • the tenant can configure the service configuration information of the network intercommunication service according to the multi-resource-pool networking required by the business in the actual application, such as configuring a terminal node identifier for identifying each resource pool actually used, and a terminal node type representing the type of each resource pool.
  • the cloud management platform creates the terminal node corresponding to each resource pool in turn according to the terminal node type, thereby constructing a global network that can represent the tenant's multiple resource pools, where the multiple resource pools can come from multiple service providers and are no longer limited to resource pools within the same service provider. This provides tenants with a unified management method for hybrid multi-cloud multi-pool networks and reduces the difficulty of managing such networks.
  • any two terminal nodes are interconnected by default, or the connectivity status between the terminal nodes is provided to the tenant for configuration.
  • the service configuration information also includes one or more of the following: a terminal node pair, and the connectivity status of the terminal node pair; wherein the terminal node pair includes two terminal nodes, and the connectivity status of the terminal node pair includes allowing connectivity and/or prohibiting connectivity.
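  • As an illustration only, the service configuration information described above might be organized as in the following Python sketch; the class and field names (ServiceConfig, segment_id, endpoints, endpoint_pairs, connectivity) are hypothetical stand-ins for the items listed in this application, not an actual interface of the cloud management platform.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Endpoint:
    endpoint_id: str       # terminal node identifier
    endpoint_type: str     # terminal node type, e.g. "homogeneous_cloud",
                           # "heterogeneous_cloud", "virtualized", "traditional"

@dataclass
class EndpointPair:
    endpoint_a: str
    endpoint_b: str
    connectivity: str      # "allow" or "prohibit"

@dataclass
class ServiceConfig:
    segment_id: str                                   # network identifier (segment)
    endpoints: List[Endpoint] = field(default_factory=list)
    endpoint_pairs: List[EndpointPair] = field(default_factory=list)

# Example: four resource pools, with pool 1 and pool 3 prohibited from communicating.
cfg = ServiceConfig(
    segment_id="segment-business-1",
    endpoints=[Endpoint(f"endpoint{i}", "homogeneous_cloud") for i in range(1, 5)],
    endpoint_pairs=[EndpointPair("endpoint1", "endpoint3", "prohibit")],
)
print(cfg.segment_id, len(cfg.endpoints))
```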
  • the service configuration information further includes routing rules of network segments and terminal nodes included in at least one resource pool.
  • the tenant is provided with the flexibility to configure the routing rules of each network segment and terminal node in the resource pool.
  • the service configuration information further includes a local security policy and an inter-domain security policy.
  • the flexibility of configuring traffic security policies for communication between multiple resource pools is provided to tenants, thereby improving the traffic security of communication within a resource pool and between resource pools.
  • the method also includes: obtaining one or more of the following configured by the tenant on the cloud management platform: the type of the resource pool, location information of the resource pool, virtual private cloud (VPC) information of the resource pool, subnet information of the resource pool, information on the interface through which the inter-domain gateway accesses the resource pool, and virtual local area network (VLAN) information.
  • the cloud management platform is also used to manage a cloud service system, which includes a global controller and at least two local controllers, one local controller corresponding to one of the at least two resource pools; the method also includes: the global controller obtains the service configuration information from the cloud management platform; the global controller sends the service configuration information to each local controller.
  • the cloud management platform maps the service configuration information configured by the tenant to each resource pool through the cloud service system.
  • the tenant does not need to worry about the underlying network implementation, which simplifies the management difficulty of the hybrid multi-cloud and multi-pool network for the tenant.
  • the method further includes: the local controller calls a first application programming interface (API) of the in-site controller in the corresponding resource pool to send routing information to the in-site controller.
  • a unified, standard first API is provided, and the local controller can exchange routing information with the in-site controller in any type of resource pool based on the first API to meet various business needs of tenants.
  • the routing information includes part or all of the following: a next hop, the type of the next hop, the VNI used for tunnel encapsulation by the network virtualization technology VxLAN, the outer destination Internet Protocol (IP) address of the VxLAN tunnel encapsulation, and the outer destination MAC address of the VxLAN tunnel encapsulation.
  • the routing information can be used for VxLAN message encapsulation in the computing nodes in the resource pool, so that the computing nodes and the inter-domain gateway can be directly connected in one hop, achieving the optimal data plane path.
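  • As a minimal sketch of what such routing information might look like when passed through the first API, consider the following; the push_routes helper and the key names are assumptions made for illustration and are not the actual API defined in this application.

```python
# A hypothetical routing entry carrying the fields listed above: next hop, next-hop
# type, the VNI used for VxLAN tunnel encapsulation, and the outer destination IP
# and MAC addresses of the VxLAN tunnel encapsulation.
route_entry = {
    "destination": "10.1.2.0/24",          # destination network segment (illustrative)
    "next_hop": "192.168.0.10",            # e.g. the inter-domain gateway
    "next_hop_type": "inter_domain_gateway",
    "vni": 5010,                           # VxLAN network identifier of the tunnel
    "outer_dst_ip": "192.168.0.10",        # outer destination IP of the VxLAN packet
    "outer_dst_mac": "00:11:22:33:44:55",  # outer destination MAC of the VxLAN packet
}

def push_routes(site_controller_url: str, routes: list) -> None:
    """Hypothetical wrapper around the first API: the local controller sends routing
    information to the in-site controller of the corresponding resource pool."""
    # A real system would make an authenticated HTTP/RPC call; printing keeps the
    # sketch self-contained.
    print(f"POST {site_controller_url}/routes payload={routes}")

push_routes("https://site-controller.example", [route_entry])
```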
  • the method further includes: the local controller calls a second application programming interface (API) of the in-site controller in the corresponding resource pool to send a subscription request to the in-site controller, wherein the subscription request is used to request subscription to resource change events in the resource pool.
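  • The subscription described above might be expressed as in the following sketch; the event names are taken from the resource-change examples mentioned later in this application (virtual machine migration, IP migration, IP addition and removal), and the subscribe_resource_events helper is a hypothetical stand-in for the second API.

```python
def subscribe_resource_events(site_controller_url: str, callback_url: str) -> dict:
    """Hypothetical wrapper around the second API: the local controller asks the
    in-site controller to report resource change events in the resource pool."""
    subscription = {
        "events": [
            "vm_migration",   # virtual machine migration
            "ip_migration",   # IP migration
            "ip_added",       # IP address added
            "ip_removed",     # IP address removed
        ],
        "callback": callback_url,  # where the in-site controller reports changes
    }
    print(f"POST {site_controller_url}/subscriptions payload={subscription}")
    return subscription

subscribe_resource_events("https://site-controller.example",
                          "https://local-controller.example/events")
```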
  • types of resource pools include: homogeneous cloud, heterogeneous cloud, virtualized resource pool, and traditional resource pool.
  • the present application also provides a cloud management platform, which has the functions corresponding to the cloud management platform in the method example of the first aspect above.
  • the beneficial effects can be found in the description of the first aspect and will not be repeated here.
  • the functions can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the structure of the device includes an acquisition module and a creation module.
  • the first acquisition module and the second acquisition module can also be the same module, and the first determination module and the second determination module can be the same module. These modules can perform the functions corresponding to the cloud management platform in the method example of the first aspect above. Please refer to the detailed description in the method example for details, which will not be repeated here.
  • the present application also provides a computing device cluster, which includes at least one computing device, and the at least one computing device has the corresponding functions of implementing the cloud management platform in the method example of the first aspect above.
  • the beneficial effects can be found in the description of the first aspect and will not be repeated here.
  • the structure of each computing device includes a processor and a memory, and the processor is configured to support the computing device to execute part or all of the corresponding functions of the cloud management platform in the method of the first aspect above.
  • the memory is coupled to the processor and stores the program instructions and data necessary for the computing device.
  • the structure of the computing device also includes a communication interface for communicating with other devices.
  • the present application also provides a computer-readable storage medium in which instructions are stored; when the instructions are run on a computer, the computer is caused to execute the method in the above-mentioned first aspect and the various possible designs of the first aspect.
  • the present application also provides a computer program product comprising instructions, which, when executed on a computer, enables the computer to execute the method in the above-mentioned first aspect and various possible designs of the first aspect.
  • the present application also provides a computer chip, which is connected to a memory, and the chip is used to read and execute a software program stored in the memory, and to execute the methods in the above-mentioned first aspect and various possible implementation methods of the first aspect.
  • FIG. 1 is a schematic diagram of a VxLAN network model architecture.
  • FIG. 2 is a schematic diagram of a data center architecture.
  • FIG. 3 is a schematic diagram of a network model.
  • FIG. 4 is a second schematic diagram of the architecture of a network model.
  • FIG. 5 is a third schematic diagram of the architecture of a network model.
  • FIG. 6 is a schematic diagram of a possible system architecture provided in an embodiment of the present application.
  • FIG. 7A is a schematic diagram of a possible network intercommunication service provided in an embodiment of the present application.
  • FIG. 7B is a schematic diagram of the structure of another possible network intercommunication service provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of various routing strategy configuration methods provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the architecture of a forwarding node provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another possible system architecture provided in an embodiment of the present application.
  • FIG. 11 is a flow chart of a method for implementing multi-resource pool communication provided in an embodiment of the present application.
  • FIG. 12 is one of the schematic diagrams of a user interface provided in an embodiment of the present application.
  • FIG. 13 is a second schematic diagram of a user interface provided in an embodiment of the present application.
  • FIG. 14 is a third schematic diagram of a user interface provided in an embodiment of the present application.
  • FIG. 15 is a fourth schematic diagram of a user interface provided in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a data plane architecture of a multi-resource pool interconnected network provided in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of the structure of a computing device provided in an embodiment of the present application.
  • FIG. 18 is a schematic diagram of the structure of a computing device provided in an embodiment of the present application.
  • FIG. 19 is a schematic diagram of the structure of a computing device cluster provided in an embodiment of the present application.
  • FIG. 20 is a schematic diagram of the structure of another computing device cluster provided in an embodiment of the present application.
  • Overlay refers to building a logical network on top of a physical network.
  • An overlay network is a logical network built on an underlay network; the underlay network is the underlying physical foundation of the overlay network.
  • Overlay networks have various network protocols and standards, such as virtual extensible local area network (VxLAN) and generic routing encapsulation (GRE). Among them, VxLAN is currently a common protocol standard for overlay networks.
  • VLAN is a network isolation technology that logically divides a physical LAN into multiple broadcast domains, where LAN refers to a local area network.
  • VLAN technology divides a large physical Layer 2 domain into many small logical Layer 2 domains, which are called VLANs. Devices in the same VLAN can communicate at Layer 2, and different VLANs are isolated at Layer 2.
  • a physical LAN can be divided into multiple VLANs, and all devices in the same VLAN are in the same broadcast domain, and broadcasts cannot be transmitted across VLANs.
  • VLAN is a network isolation technology that can logically divide a data center's physical LAN into multiple VLANs.
  • VLANs are distinguished by VLAN numbers.
  • the standard defines that the address bits of VLAN numbers are only 12 bits, that is, the range of available VLAN numbers is 1 to 4094, which can meet the needs of traditional data centers.
  • the number of virtual machines in data centers has increased by orders of magnitude compared to the original physical machines.
  • the isolation capability of VLAN is obviously not enough for public clouds or other large-scale virtualized cloud computing services with tens of thousands or even more tenants. Therefore, VxLAN came into being.
  • VxLAN is an extension of VLAN. It uses network virtualization technology to virtualize multiple Layer 2 networks on a set of physical network devices. Specifically, VxLAN uses tunnel technology to establish a Layer 2 Ethernet network tunnel based on the Layer 3 network, thereby realizing cross-regional Layer 2 interconnection. In other words, VxLAN can create virtual Layer 2 subnets or segments across the physical Layer 3 network. Each Layer 2 subnet has a unique VxLAN network identifier (VNI) that segments the traffic. Among them, the length of VNI is 24 bits, and it supports a maximum of more than 16 million virtual networks, which can meet the ultra-multi-tenant multi-instance scenarios of clouds and other large virtualized networks.
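  • To make the identifier-space comparison above concrete, the short sketch below contrasts the 12-bit VLAN ID with the 24-bit VNI and packs a VxLAN header; this is a generic illustration of the VxLAN header format defined in RFC 7348, not code from this application.

```python
import struct

VLAN_ID_BITS = 12    # usable VLAN IDs: 1 to 4094
VXLAN_VNI_BITS = 24  # roughly 16 million virtual networks

print(2 ** VLAN_ID_BITS - 2)   # 4094
print(2 ** VXLAN_VNI_BITS)     # 16777216

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VxLAN header: flags byte with the I bit set, 24-bit VNI,
    and a reserved low byte."""
    if not 0 <= vni < 2 ** VXLAN_VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # the I flag indicates that the VNI field is valid
    return struct.pack("!B3xI", flags, vni << 8)

print(vxlan_header(5010).hex())
```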
  • Refer to FIG. 1 for a schematic diagram of a VxLAN network model, which includes network devices 101, 102, and 200, and hosts 1-8.
  • the network device can be an independent network device, such as a switch, a router, a gateway, etc., or it can be a server where a virtual machine is located. Different network devices may have different functions.
  • network devices 101 and 102 are collectively referred to as layer-2 network devices, and network device 200 is referred to as a layer-3 network device.
  • Figure 1 takes the layer-2 network devices being switches and the layer-3 network device being a router as an example, and this application does not limit this.
  • a layer 2 network device can access one or more hosts, that is, establish a connection with one or more hosts to form a local area network, which can be a LAN or VLAN, which can be understood as a subnet or broadcast domain.
  • the host can be a server or a computing instance running in the server, such as a virtual machine, container, etc.
  • the layer 2 network device can provide a data path for any two hosts in the local area network to achieve communication between any two hosts in the local area network.
  • For example, taking host 1 as the source host and host 4 as the destination host, network device 101 is used to receive the message sent by host 1 and forward the message to host 4.
  • the layer 3 network device can connect multiple local area networks to achieve communication between hosts in different local area networks.
  • the larger network formed by the multiple local area networks can also be called a layer 3 network.
  • VxLAN technology can create multiple virtual Layer 2 networks on a three-layer network architecture by establishing VxLAN tunnels, such as establishing a VxLAN tunnel between two Layer 2 network devices.
  • the Layer 2 network device can also be called a VxLAN tunnel endpoint (VTEP) device (VTEP for short), including the starting point (also called the source VTEP) or the end point (also called the destination VTEP) of the VxLAN tunnel.
  • different virtual Layer 2 networks are identified by VNI. It can be understood that a VNI represents a tenant, and the IP address within the same VNI is unique, that is, the IP addresses of hosts with the same VNI are different, while hosts belonging to different VNIs can have the same IP address. Multiple hosts connected to a network device can have different VNIs.
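  • The uniqueness rule above can be pictured as addresses being keyed by (VNI, IP): the same IP address may appear under different VNIs (different tenants), but must be unique within one VNI. The following table is purely illustrative.

```python
# Hypothetical host table keyed by (vni, ip).
host_table = {}

def register_host(vni: int, ip: str, host: str) -> None:
    key = (vni, ip)
    if key in host_table:
        raise ValueError(f"IP {ip} is already used inside VNI {vni}")
    host_table[key] = host

register_host(100, "10.0.0.5", "host 1")  # one tenant, VNI 100
register_host(200, "10.0.0.5", "host 5")  # another tenant reuses 10.0.0.5 under VNI 200
print(host_table)
```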
  • a VxLAN tunnel refers to a virtual channel established between two network devices for transmitting VxLAN messages.
  • a resource pool is a configuration mechanism and a logical abstraction for flexible resource management, used to partition host resources.
  • a resource pool includes one or more hosts, or a resource pool can also be divided according to computing instances, which include virtual machines, containers, etc.
  • a resource pool includes multiple virtual machines.
  • A and/or B can represent: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • "one or more of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • one or more items of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, c can be single or plural.
  • Figure 2 is a schematic diagram of the architecture of a data center.
  • the physical architecture of the data center includes one or more traditional virtual machine resource pools and one or more traditional physical machine resource pools.
  • a traditional virtual machine resource pool includes multiple virtual machines
  • a traditional physical machine resource pool includes multiple physical servers.
  • Virtual machines and physical servers can be collectively referred to as computing resources, which are used to run enterprise services.
  • Layer 2 and Layer 3 forwarding of east-west traffic (referring to horizontal traffic, such as traffic between virtual machines in the virtual machine resource pool and physical servers in the physical machine resource pool) in the physical network of the data center is completed by the spine/leaf nodes.
  • the leaf node can be a switch responsible for forwarding layer 2 network traffic
  • the spine node can be a router responsible for forwarding layer 3 network traffic.
  • the security protection of east-west traffic is completed by the firewall attached to the border leaf.
  • north-south traffic refers to vertical traffic, such as Internet users accessing virtual machines in the virtual machine resource pool through the Internet, and virtual machines in the virtual machine resource pool sending feedback messages to Internet users.
  • security devices such as firewalls/WAF are also deployed in the egress access area to ensure the security of services in the data center.
  • When enterprise businesses go to the cloud, that is, when enterprise customers use cloud computing resources of cloud providers to build their own clouds locally, the cloud is integrated into the global network of the enterprise customer's data center as an independent resource pool and becomes part of the data center. Enterprise businesses are deployed across the various resource pools, so the cloud inevitably faces the demand for cross-resource-pool network interconnection. Take the office application of a certain enterprise customer as an example: the office application is deployed in different resource pools, and when back-end data sharing is required, the enterprise customer's demand on the global network of the data center is to support an interconnected network across resource pools and across cloud dedicated lines (direct connect, DC).
  • FIG 3 is a schematic diagram of a data plane architecture for intercommunication across resource pools.
  • the cloud system includes two clouds, each cloud includes multiple computing nodes (such as physical servers), and multiple cloud servers (such as virtual machines) can be created on each computing node.
  • the cloud server is used to process tenants' business.
  • the inter-domain gateway cluster uses x86 servers to achieve inter-domain communication, and the cloud uses an overlay architecture, such as a VxLAN architecture.
  • Each computing node can use vSwitch (software) for vtep encapsulation, and the computing node and the inter-domain gateway cluster are directly connected in one hop.
  • Since the inter-domain gateway is implemented using x86 servers, there is a high cost problem, and the computing power is entirely provided by the CPUs of the x86 servers, so the CPUs easily become a bottleneck.
  • When the traffic pressure is high, the delay of inter-domain communication also increases and stability deteriorates.
  • the architecture adopted by this solution can only be applied to homogeneous clouds, not heterogeneous clouds, and does not support traditional virtual machine resources.
  • homogeneous cloud refers to multiple clouds from the same cloud provider. For example, when the two clouds in Figure 3 are from the same cloud provider, the two clouds are homogeneous clouds. On the contrary, if the two clouds are from different cloud providers, the two clouds are heterogeneous clouds.
  • FIG4 is another schematic diagram of a data plane architecture for intercommunication across resource pools.
  • In this solution, vtep encapsulation is performed by hardware devices such as TOR-NVE, and the inter-domain gateway is also implemented in hardware. Both the inter-domain gateway and the TOR-NVE are automatically configured through an SDN controller (not shown in FIG. 4), which can achieve optimal paths between computing nodes and inter-domain gateways.
  • This solution is usually adopted by network equipment manufacturers with hardware SDN controllers.
  • the SDN controller is used to uniformly manage the devices in the resource pool (such as TOR-NVE) and the inter-domain gateway.
  • However, this solution does not support using vSwitch as an overlay solution within the domain.
  • the architecture adopted by this solution can only be applied to homogeneous clouds, but not heterogeneous clouds.
  • FIG. 5 is a schematic diagram of another data plane architecture for interoperability across resource pools.
  • This solution relies on the dedicated line capabilities provided by the resource pool to the outside world, supports third parties to access the dedicated line gateway in the resource pool through the inter-domain gateway, and multiple clouds can access the same third-party inter-domain gateway cluster to achieve interoperability across resource pools.
  • the cloud uses an overlay architecture, and the domain can use vSwitch for vtep encapsulation.
  • the inter-domain gateways achieve interoperability through the dedicated line gateway domain network.
  • This solution supports both homogeneous and heterogeneous clouds and has stronger compatibility.
  • this solution relies on the dedicated line capabilities provided within the domain, which has complex configuration, long data plane paths, and high latency.
  • an embodiment of the present application provides a general network interoperability service for hybrid multi-cloud and multi-pool deployment.
  • the network interoperability service can support multi-cloud at the platform as a service (PaaS) level and automatically connect the network at the infrastructure as a service (IaaS) level.
  • Tenants can achieve unified configuration, management, and operation and maintenance of multi-resource-pool networks by leasing the service.
  • Fig. 6 is a schematic diagram of a possible application scenario provided by an embodiment of the present application.
  • a cloud vendor provides cloud services, such as cloud services including but not limited to network intercommunication services, cloud computing services, etc.
  • the system supporting network intercommunication services includes a cloud management platform 100 and a cloud service system 200.
  • the functions of the cloud management platform 100 include: providing access interfaces (such as user interfaces or APIs).
  • Tenants (such as the above-mentioned enterprise customers or other users who need to build interconnected multiple resource pools) access the cloud management platform 100 through these interfaces. After the cloud management platform 100 successfully authenticates the tenant's cloud account and password, the tenant can further pay to select and purchase cloud services and/or cloud resources on the cloud management platform 100. After the purchase is successful, the cloud management platform 100 notifies the cloud resources to provide services for the tenant.
  • The cloud services and/or cloud resources are, for example, virtual machines.
  • the tenant can select the specifications (memory, processor, and disk) and the quantity of virtual machines on the cloud management platform 100. After the tenant pays successfully, the cloud management platform 100 notifies the cloud computing nodes to create virtual machines of the corresponding quantity and specifications and opens the remote desktops of these virtual machines.
  • the cloud management platform 100 provides the tenant with the connection account and password of each remote desktop, so that the tenant can remotely log in to the virtual machine through the account and password.
  • cloud services and/or cloud resources can also be, for example, various cloud services such as containers, bare metal servers, elastic IPs (EIPs), and the embodiments of this application do not limit the type of cloud services.
  • Tenants can form a resource pool by purchasing the cloud resources of the cloud vendor to run the tenant's business. Of course, in this application, tenants can also rent cloud resources from other cloud vendors to build a resource pool, and this application does not make specific limitations on this.
  • the cloud service is a network interconnection service.
  • the tenant can purchase the network interconnection service on the cloud management platform 100 and configure the service configuration information included in the network interconnection service.
  • the service configuration information obtained after the configuration is completed can be used to express the tenant's demand for communication between multiple resource pools.
  • the tenant's communication demand may include which resource pools the tenant's business is deployed on, which resource pools need to be interconnected (i.e., establish a network connection), etc.
  • the tenant's business is deployed in the four resource pools shown in Figure 6, and any two resource pools in the four resource pools need to be interconnected, or some of the four resource pools are interconnected, such as resource pool 1 and resource pool 2 need to be interconnected, resource pool 2 and resource pool 3 need to be interconnected, and resource pool 1 and resource pool 3 are not interconnected.
  • an embodiment of the present application provides a network interoperability model.
  • the network interoperability service in this embodiment is implemented based on the network interoperability model. Please refer to Figures 7A and 7B for understanding.
  • the network interoperability service includes the network interoperability model and various strategies applied to the network interoperability model, such as topology strategy, security strategy, and routing strategy.
  • FIG7A exemplarily shows a schematic diagram of a network intercommunication model provided by this embodiment.
  • the network intercommunication model includes network segments (segments) and endpoints (also called terminal nodes).
  • Segment is an object that provides global routing interconnection and can be used to represent the global network of a communication system composed of multiple resource pools.
  • Multiple endpoints can be added to a segment, and each endpoint represents a resource pool. The "endpoint" here can also be understood as a connection: one end of the endpoint is connected to the segment, and the other end is connected to the network segments of different types of resource pools.
  • Tenants can configure the corresponding resource pool information by adding endpoints, where the resource pool information includes the type of resource pool, subnet segment, VPC and other information.
  • the network interconnection model also includes the connection between endpoints.
  • the resource pools represented by two endpoints with a connection are interconnected, and the resource pools represented by two endpoints without a connection are not interconnected.
  • any two endpoints on the segment are naturally connected, which does not require tenant configuration. In other words, after the tenant configures the resource pool to which the endpoint is connected, the resource pools represented by any two endpoints on the segment are interconnected by default.
  • the endpoints on the segment are all connected or all unconnected by default, and whether there is a connection between the two endpoints on the segment can be specified by the tenant.
  • the cloud management platform 100 provides topology strategies for tenants to specify whether there can be a connection between the resource pools connected to the two endpoints. It can be seen that the endpoints on the network interconnection model are independent of the type of resource pool and can be understood as an abstraction of multiple types of resource pools. Therefore, a network between resource pools of any type can be constructed based on the network interconnection model.
  • the topology policy is used to flexibly set the connectivity between any two endpoints under the segment, that is, whether the two endpoints can communicate with each other. If the two endpoints have connectivity, it means they can communicate with each other, and if they do not have connectivity, it means they cannot communicate with each other.
  • the tenant can add 4 endpoints on the segment, namely endpoint 1-endpoint 4, and endpoint 1-endpoint 4 represent resource pool 1-resource pool 4 respectively.
  • the tenant adds endpoint pairs such as endpoint 1 and endpoint 3 in the topology policy, and the connectivity is set to prohibited, indicating that resource pool 1 and resource pool 3 cannot communicate with each other.
  • the specific configuration method will be introduced below and will not be repeated here.
  • this embodiment also provides a routing strategy applied to the network intercommunication model.
  • the routing strategy means that each segment has a routing table, which may include multiple table items. Each table item includes a destination network segment and a next hop. Tenants can configure iterative routing through the routing table and flexibly specify the next hop to access a certain destination network segment.
  • the routing table of the segment includes: item 1, where the destination network segment is subnet 1 and the next hop of subnet 1 is endpoint 1, indicating that subnet 1 is connected to endpoint 1; and item 2, where the destination network segment is subnet 2 and the next hop of subnet 2 is endpoint 2, indicating that subnet 2 is connected to endpoint 2.
  • Assume that a tenant wants to add a subnet of resource pool 3 (assuming it is subnet 3) to the global network. This can be done in either of the following two ways.
  • One method is, as shown in (a) of Figure 8, the tenant adds a new endpoint on the segment, such as endpoint3, and the endpoint3 is connected to the subnet 3.
  • Another method is that the tenant directly configures the routing table of the segment without adding an endpoint to the segment.
  • the subnet 3 can be connected to an existing endpoint of the segment, such as endpoint1 or endpoint2. Accordingly, the tenant only needs to add item 3 to the routing table of the segment: the destination network segment is subnet 3, and the next hop of subnet 3 is endpoint1.
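  • A small sketch of the segment routing table described above, including the second method of reaching subnet 3 through the existing endpoint1; the subnet addresses and the lookup helper are illustrative assumptions.

```python
import ipaddress
from typing import Optional

# Segment routing table: each item maps a destination network segment to a next hop.
segment_routes = {
    "10.0.1.0/24": "endpoint1",  # item 1: subnet 1 is reached via endpoint 1
    "10.0.2.0/24": "endpoint2",  # item 2: subnet 2 is reached via endpoint 2
}

# Method 2 from the text: add subnet 3 without creating a new endpoint, by setting
# its next hop to the existing endpoint1.
segment_routes["10.0.3.0/24"] = "endpoint1"

def next_hop(ip: str) -> Optional[str]:
    """Return the endpoint whose destination network segment contains the given IP."""
    addr = ipaddress.ip_address(ip)
    for segment, endpoint in segment_routes.items():
        if addr in ipaddress.ip_network(segment):
            return endpoint
    return None

print(next_hop("10.0.3.7"))  # -> endpoint1
```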
  • this embodiment also provides a security policy applied to the network intercommunication model, and the security policy can be used to configure the degree of traffic protection between connected resource pools to ensure the security of traffic between resource pools.
  • the security policy in this embodiment includes a proximal security policy and an inter-domain security policy.
  • the proximal security policy is used to protect the traffic security of computing nodes in the resource pool.
  • the inter-domain security policy is used to protect the traffic security between domains.
  • the inter-domain security policy is applied to a security model provided by this embodiment, and the architecture of this security model will be introduced below.
  • the network interconnection model is an abstract expression of the global network composed of multiple resource pools, which is unrelated to the network implementation of a single resource pool. That is, the network interconnection service can shield the differences in network models within different resource pools, allowing tenants to achieve unified management of hybrid multi-cloud and multi-pool networks.
  • After the tenant has completed the configuration of the service configuration information and has paid successfully, the cloud management platform 100 notifies the cloud service system 200 to provide services to the tenant.
  • the cloud service system 200 manages multiple resource pools according to the service configuration information.
  • the management scope includes establishing network connections between multiple resource pools according to the service configuration information, and mapping the various policies configured by the tenant to the network model of the resource pool, which will eventually take effect in the resource pool.
  • the cloud service system 200 ensures that there are network connections between the multiple resource pools that meet the tenant's demands and operate in accordance with the policies configured by the tenant.
  • subsequent tenants can also modify the service configuration information according to new communication demands, thereby adjusting the network connection status between multiple resource pools.
  • the following introduces the network implementation architecture of multiple resource pools interconnected in conjunction with the cloud service system 200 shown in FIG. 6 .
  • the cloud service system 200 includes a global controller 201 and a plurality of local controllers (eg, local controllers 211 - 214 in FIG. 6 ).
  • the global controller 201 is responsible for global resource processing and configuration distribution. For example, the global controller 201 obtains service configuration information from the cloud management platform 100 and sends the service configuration information (or network configuration information related to the service configuration information) to the local controllers 211-214. Afterwards, the local controllers 211-214 facilitate the interconnection between the tenant's multiple resource pools based on the service configuration information/network configuration information.
  • the local controllers 211-214 are used to communicate with the global controller 201, such as receiving service configuration information/network configuration information sent by the global controller.
  • the service configuration information/network configuration information is used to establish a network connection between resource pools. For example, one or more tenant configuration policies included in the service configuration information/network configuration information are mapped to the resource pool.
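  • The fan-out from the global controller to the local controllers can be pictured with the sketch below; the controller addresses and the send_config helper are illustrative assumptions, not actual interfaces of the cloud service system 200.

```python
# Hypothetical addresses of the local controllers, one per resource pool (1:1 mapping).
local_controllers = {
    "resource_pool_1": "https://local-controller-211.example",
    "resource_pool_2": "https://local-controller-212.example",
    "resource_pool_3": "https://local-controller-213.example",
    "resource_pool_4": "https://local-controller-214.example",
}

def send_config(controller_url: str, service_config: dict) -> None:
    """Stand-in for the global controller pushing service/network configuration
    to one local controller."""
    print(f"PUT {controller_url}/config payload={service_config}")

def distribute(service_config: dict) -> None:
    # The global controller obtains the service configuration information from the
    # cloud management platform and delivers it to every local controller; each
    # local controller then maps it onto its own resource pool.
    for url in local_controllers.values():
        send_config(url, service_config)

distribute({"segment": "segment-business-1", "endpoints": ["endpoint1", "endpoint2"]})
```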
  • the relationship between the local controller and the resource pool is 1:1, that is, one resource pool is assigned one local controller, or one local controller is responsible for managing one resource pool.
  • local controller 211 is used to manage resource pool 1
  • local controller 212 is used to manage resource pool 2
  • local controller 213 is used to manage resource pool 3
  • local controller 214 is used to manage resource pool 4.
  • the relationship between the local controller and the resource pool is n:1, where n is a positive integer, that is, one resource pool can be assigned multiple local controllers, and multiple local controllers are hot standby for each other, or multiple local controllers jointly manage sites.
  • the types of resource pools applicable to this embodiment include cloud types and non-cloud types.
  • Cloud types include but are not limited to private clouds, public clouds, hybrid clouds, and edge clouds
  • non-cloud types include but are not limited to virtualized resource pools, traditional resource pools, and the like.
  • This embodiment of the application does not limit the types of multiple resource pools for the same tenant.
  • For example, resource pool 1 in Figure 6 can be a private cloud, resource pool 2 a public cloud, resource pool 3 a virtualized resource pool, and resource pool 4 a traditional resource pool.
  • Different types of resource pools may have different architectures. First, take a private cloud as an example to introduce the architecture of the cloud-type resource pool in this embodiment.
  • the private cloud includes a site controller 301 , an inter-domain gateway 311 , a forwarding node 321 , and multiple computing instances.
  • Computing instances, including but not limited to virtual machines, containers, and bare metal servers, are used to run tenants' businesses. Optionally, multiple computing instances can be created on a physical computing node through virtualization technology. The computing node can be a server, a desktop computer, or the like.
  • the computing instances included in the private cloud can be leased by the tenant from any cloud vendor, such as from the cloud vendor that provides the network intercommunication service, or from other cloud vendors, and this application has no limitation on this.
  • When the private cloud and the cloud management platform 100 are provided by the same cloud vendor, the private cloud can be called a homogeneous cloud.
  • When the private cloud and the cloud management platform 100 are provided by different cloud vendors, the private cloud is a heterogeneous cloud. That is, the multiple resource pools in the embodiments of the present application support both homogeneous clouds and heterogeneous clouds.
  • For example, the private cloud in Figure 6 is a homogeneous cloud and the public cloud in Figure 6 is a heterogeneous cloud; alternatively, the private cloud and the public cloud in Figure 6 are both homogeneous clouds, or both are heterogeneous clouds, and so on.
  • the forwarding node 321 is used to forward messages, such as receiving messages sent by the computing instance and forwarding them to the next hop, or forwarding messages to the computing instance.
  • the forwarding node is used to forward messages between computing instances in the same subnet, such as forwarding a message from computing instance 1 in a subnet of resource pool 1 to computing instance 2 in the same subnet.
  • This embodiment also supports the use of an overlay architecture in the resource pool, such as a VxLAN network.
  • When the forwarding node 321 is used as a vtep endpoint, it is specifically used to perform VxLAN encapsulation on the message of the computing node and send the encapsulated message to another vtep endpoint (such as the inter-domain gateway 311).
  • the forwarding node can be software, such as vSwitch, which can be deployed in a computing node.
  • the computing node is the forwarding node of the multiple computing instances.
  • the forwarding node can also be hardware, such as TOR-NVE and any forwarding device with network communication and data processing functions to meet the needs of multiple scenarios.
  • the inter-domain gateway 311 is used to realize communication across resource pools, that is, to forward the message of the computing instance in the resource pool where the inter-domain gateway 311 is located to the inter-domain gateway in another resource pool, such as the inter-domain gateway 312.
  • Taking resource pool 2 as an example, in one scenario, resource pool 1 and resource pool 2 both adopt an overlay architecture, and a VxLAN tunnel is established between the inter-domain gateway 311 and the inter-domain gateway 312.
  • the inter-domain gateway 311 serves as a vtep endpoint, which is specifically used to perform VxLAN encapsulation on the message from the computing instance in resource pool 1, and forward the encapsulated VxLAN message to another vtep endpoint, such as the inter-domain gateway 312 in resource pool 2, so as to realize traffic intercommunication across resource pools.
  • the inter-domain gateway in this embodiment can be a hardware gateway or a software gateway. If there are networking conditions, a hardware gateway can be deployed.
  • the hardware inter-domain gateway can support large-scale and high-performance communication networks to meet the demands of enterprise customers for equipment reuse and high performance. If there are no networking conditions, a software gateway can be deployed on a computing node.
  • the local controller 211 has a standard southbound interface to support docking with hardware devices from different manufacturers, such as inter-domain gateways. Therefore, the inter-domain gateway 311 in this embodiment can be a device of a cloud vendor that provides network intercommunication services (which can be called a one-party inter-domain gateway), or it can also be a device of other vendors (which can be called a three-party inter-domain gateway).
  • the cloud service system 200 also includes an inter-domain gateway (such as at least one of the inter-domain gateways 311-314 in Figure 6).
  • the in-site controller 301 is used to manage routing and resource changes in the private cloud, such as virtual machine migration, IP migration, and IP addition and removal. It is worth noting that the in-site controller 301 is the control plane device originally present in the private cloud, which is used to send routing information to the forwarding node and manage resource changes in the VxLAN network in the resource pool. It should also be noted that, in order to be compatible with the original hardware architecture of existing resource pools, not every resource pool necessarily deploys an in-site controller; as shown in Figure 6, there is no in-site controller in the traditional resource pool.
  • The above uses the private cloud as an example to introduce the architecture of cloud-type resource pools.
  • Unlike cloud-type resource pools, virtualized resource pools do not adopt the overlay architecture and instead use a pure VLAN network. Therefore, there are no resource changes such as virtual machine migration, and the inter-domain gateway only needs to provide VLAN access capabilities.
  • the traditional resource pool does not deploy a site controller, so there is no automatic control.
  • the local controller controls the inter-domain gateway and does not need to interact with the resource pool.
  • the local controller can adopt different management strategies, which will be explained in detail below.
  • the cloud service system 200 can be built by the cloud vendor after learning the tenant's communication demands, specifically to meet those demands and to provide the network intercommunication service for the tenant; it does not exist beforehand.
  • the cloud service system 200 can be automatically created, such as the cloud management platform 100 notifying the computing nodes in the cloud to create (such as installing) the global controller 201 and multiple local controllers, or the cloud service system 200 can also be manually created by the cloud vendor's personnel, without specific limitation.
  • the global controller 201 and the local controllers 211-214 can be distributed software systems, or the global controller 201 and the local controllers 211-214 can also be distributed hardware systems that implement the above-mentioned software functions.
  • the global controller 201 can be installed in a computing node used to run the cloud management platform 100, or in an independent computing node outside the cloud management platform 100.
  • the local controller can be deployed close to the global controller side, or deployed in a resource pool.
  • the local controller is usually deployed in the corresponding resource pool, such as deployed in a computing node in the resource pool, or in a server dedicated to running the local controller, without specific limitation.
  • the global layer and the local layer in the cloud service system 200 are separated in management and control, and the local controller 211 controls the resource pool nearby, shielding the global controller 201 from the differences between different resource pools.
  • the network interconnection service provided in this embodiment is a unified tool for tenants to build and manage the interconnection networks of multiple resource pools. Tenants only need to configure the network interconnection service on the cloud management platform 100 to express their communication demands.
  • the cloud management platform 100 maps the tenants' communication demands to multiple resource pools through the cloud service system 200, and finally takes effect in each resource pool, thereby realizing the automatic connection of multiple resource pool networks.
  • the network interconnection service supports any type of resource pool of the tenant, meeting the needs of various scenarios, and the tenant does not need to worry about the problems at the underlying network implementation level.
  • the tenant can continue to use the network interconnection service to uniformly manage and operate the interconnection networks of the multiple resource pools, making it easier, more convenient, and more efficient for tenants to create and manage hybrid multi-cloud multi-pool interconnection networks, solving the complex network management problems of enterprise customers.
  • FIG. 6 only shows a small number of devices.
  • the inter-domain gateway in FIG. 6 can be a single inter-domain gateway, or it can be replaced by an inter-domain gateway cluster, etc.
  • the architecture shown in FIG. 6 is only a possible example, but ordinary technicians in this field should understand that the system architecture of actual applications can also include more, fewer or different components than those shown in the figure, and the components shown can be combined or divided in any way, and this application does not make specific restrictions on this.
  • this embodiment also provides an architectural schematic diagram of a security model.
  • FIG. 10 is based on FIG. 6.
  • the cloud service system 200 also includes a security gateway deployed in each resource pool of the tenant.
  • the security gateway can be a hardware gateway or a software gateway, which is used to secure the traffic between domains (between resource pools and resource pools), such as filtering the messages received by the inter-domain gateway to improve the security of the traffic between domains.
  • the above describes the relevant contents of providing network interconnection services for tenants.
  • the following describes a specific implementation process of building a multi-resource pool interconnection network through the network interconnection service.
  • FIG. 11 exemplarily shows a method flow diagram of a management method for a multi-resource pool network provided by an embodiment of the present application. To keep it simple, FIG. 11 only shows the information interaction process within one resource pool. As shown in FIG. 11, the method may include the following steps:
  • Step 1101: The cloud management platform 100 obtains the service configuration information input or selected by the tenant on the cloud management platform 100.
  • the cloud management platform 100 may provide a user interface or an API as the access interface for tenants to configure service parameters. These two configuration methods are described in detail below.
  • Configuration method 1: configuration through the user interface.
  • the cloud management platform 100 may provide a console user interface for tenants to perform configuration.
  • FIG12 is a schematic diagram of a console user interface (hereinafter referred to as user interface) provided in the present embodiment.
  • tenants can configure parameters for the relevant attributes of the user interface according to their own communication requirements between multiple resource pools.
  • the attributes of the network intercommunication service provided by the cloud vendor include but are not limited to network intercommunication model, topology strategy, routing strategy, security strategy, etc.
  • the user interface may present the relevant attribute configuration items of the network intercommunication service.
  • the relevant configuration items of the network intercommunication model are displayed on the right side of FIG12 for the tenant to select or input to complete the configuration of the relevant parameters of the network intercommunication service.
  • the service configuration information includes relevant attribute configuration items of the network intercommunication service provided by the user interface and parameters input or selected by the tenant for each attribute configuration item.
  • the infrastructure configuration items of the network interconnection model may include but are not limited to: segment configuration items, and/or, endpoint configuration items.
  • the segment configuration items include but are not limited to one or more of the following: segment name, segment description information, VxLAN network identifier (VxLAN Network Identifier, vni) for VxLAN encapsulation of cross-resource pool interconnection traffic (not shown in FIG12), etc.
  • the name of the segment and the description of the segment are edited by the tenant for easy viewing by the tenant.
  • For example, the name of the segment is "Business 1 Network", and the description is "Business 1 Network includes 4 resource pools from cloud vendor A and cloud vendor B".
  • the VNI for VxLAN encapsulation of cross-cloud intercommunication traffic can be automatically allocated by the cloud management platform 100, such as randomly selecting an idle VNI from the VNI pool, without the need for users to fill in.
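  • The automatic allocation mentioned above might look like the following minimal sketch, in which an idle VNI is picked from a pool; the pool range and helper name are assumptions made only for illustration.

```python
import random

# Hypothetical pool of VNIs reserved for cross-resource-pool interconnection traffic.
vni_pool = set(range(5000, 6000))
allocated = set()

def allocate_vni() -> int:
    """Randomly pick an idle VNI from the pool, roughly as the cloud management
    platform might do when the tenant does not fill in this configuration item."""
    free = sorted(vni_pool - allocated)
    if not free:
        raise RuntimeError("no idle VNI available")
    vni = random.choice(free)
    allocated.add(vni)
    return vni

print(allocate_vni())
```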
  • Resource pool types include cloud type and non-cloud type.
  • Non-cloud type can also be divided into traditional type and virtualized type, which are introduced as follows:
  • When the resource pool to which the endpoint is connected is of the cloud type, the resource pool is a VPC subnet on the cloud, where the cloud includes but is not limited to a public cloud, private cloud, hybrid cloud, edge cloud, and the like.
  • the configuration items of cloud-type endpoints include but are not limited to one or more of the following:
  • Cloud location information: also called a cloud identifier, which is used to uniquely identify a resource pool.
  • Specifically, the cloud location information includes the information required for the global controller to communicate with the local controller in the cloud and the information required for the local controller in the cloud to communicate with the in-site controller in the cloud, such as the address information (for example, an IP address) and the account information required for authentication.
  • VPC information on the cloud: used by the local controller in the cloud to send routing information to the site controller.
  • VPC subnet information on the cloud: used by the local controller in the cloud to send routing information to the inter-domain gateway in the cloud.
  • the resource pool to which the endpoint is connected is of non-cloud type, specifically, a traditional resource pool type in the non-cloud type.
  • the configuration items of the traditional endpoint include but are not limited to one or more of the following:
  • Resource pool location information: also called a resource pool identifier, used to uniquely identify a resource pool. Specifically, it includes the information required for the global controller to communicate with the local controller in the resource pool, such as the address information the global controller needs to reach the local controller in the resource pool (for example, the IP address of the corresponding device) and the account information used for authentication.
  • Subnet information in the resource pool: used by the local controller in the resource pool to send routing information to the inter-domain gateway in the resource pool.
  • Interface information and VLAN information of the inter-domain gateway connected to the resource pool: used by the local controller in the resource pool to send routing information to the inter-domain gateway in the resource pool.
  • the configuration items of the virtualized endpoint include but are not limited to one or more of the following:
  • Resource pool location information: the address information (such as an IP address) and the account information required for authentication that the local controller in the resource pool uses to reach the in-site controller in the resource pool.
  • Subnet information in the resource pool: used by the local controller in the resource pool to send routing information to the inter-domain gateway in the resource pool.
  • Interface information and VLAN information of the inter-domain gateway connected to the resource pool: used by the local controller in the resource pool to send routing information to the inter-domain gateway in the resource pool.
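To make the endpoint configuration items above more concrete, here is a minimal Python sketch (hypothetical field names, for illustration only) of how the three endpoint types and their configuration items could be represented.

    # Sketch: endpoint configuration items for cloud, traditional and virtualized
    # resource pools (hypothetical structure, for illustration only).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Endpoint:
        name: str
        pool_type: str                   # "cloud", "traditional" or "virtualized"
        location: dict                   # address/account info to reach the pool's controller(s)
        vpc_id: Optional[str] = None     # cloud type only: VPC on the cloud
        subnets: List[str] = field(default_factory=list)  # subnets / VPC subnets in the pool
        gateway_interface: Optional[str] = None            # traditional/virtualized: gateway interface
        vlan_id: Optional[int] = None                       # traditional/virtualized: VLAN towards the gateway

    cloud_ep = Endpoint(
        name="endpoint1", pool_type="cloud",
        location={"controller_ip": "10.0.0.10", "account": "tenant-a"},
        vpc_id="vpc-1", subnets=["1.1.0.0/24"])

    legacy_ep = Endpoint(
        name="endpoint4", pool_type="traditional",
        location={"controller_ip": "10.0.3.10", "account": "tenant-a"},
        subnets=["4.4.0.0/24"], gateway_interface="10GE1/0/1", vlan_id=100)

    print(cloud_ep.pool_type, legacy_ep.vlan_id)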
  • the topology strategy is used to flexibly set the connectivity between any two endpoints under a segment.
  • the topology strategy includes but is not limited to one or more of the following information:
  • Endpoint pair: specifies two endpoints.
  • Connectivity: the tenant can specify the connectivity between the two endpoints on the segment; allowing connectivity means the resource pools corresponding to the two endpoints can communicate with each other, while prohibiting connectivity means the resource pools corresponding to the two endpoints have no connection and cannot communicate.
  • FIG13 is a user interface diagram of a topology strategy provided in an embodiment of the present application.
  • The tenant's communication requirement for the multiple resource pools may be that any two resource pools in the multiple resource pools are interconnected.
  • Alternatively, the tenant's communication requirement may be that some of the multiple resource pools are interconnected while others are not, for example, resource pool 1 and resource pool 2 in FIG6 are interconnected while resource pool 2 and resource pool 3 are not interconnected.
  • To support this, the cloud management platform 100 provides topology policies for the tenant to specify whether the resource pools connected to two endpoints may have a connection.
  • a tenant can add one or more endpoint pairs. Assuming that endpoint 1 and endpoint 3 are selected or input in the configuration item of one of the endpoint pairs, and the connectivity option is selected to allow, it means that endpoint 1 and endpoint 3 are interconnected, that is, resource pool 1 connected to endpoint 1 and resource pool 3 connected to endpoint 3 are allowed to be connected or have a connection. For another example, if another endpoint pair is configured as endpoint 2 and endpoint 3, and the connectivity option is prohibited, it means that endpoint 2 and endpoint 3 are not interconnected, that is, resource pool 2 connected to endpoint 2 and resource pool 3 connected to endpoint 3 are not allowed to be connected or have a connection.
  • Assuming the endpoints on a segment are connected by default, the tenant only needs to add an endpoint pair for two endpoints that are not allowed to be connected and set the connectivity of that endpoint pair to prohibited, without repeatedly adding endpoint pairs that are connected by default. In this way, when only a few endpoint pairs are not allowed to be connected, the tenant only needs to configure a small number of topology policies, which simplifies the configuration process and saves the tenant time. Conversely, if the endpoints on a segment are not connected by default, the tenant only needs to add an endpoint pair for two endpoints that are allowed to be connected and set the connectivity of that endpoint pair to allowed, without repeatedly setting endpoint pairs that are not connected by default. Optionally, whether the endpoints on a segment are connected or not connected by default can itself be configured.
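As an illustration of the default-connectivity behaviour just described, the following minimal Python sketch (hypothetical, not the claimed implementation) evaluates whether two endpoints on a segment may communicate, given a default policy and a small set of explicitly configured endpoint pairs.

    # Sketch: evaluating connectivity between two endpoints under a topology policy.
    def connected(ep_a, ep_b, pairs, default_allow=True):
        """pairs maps a frozenset of two endpoint names to "allow" or "prohibit"."""
        rule = pairs.get(frozenset((ep_a, ep_b)))
        if rule is None:                 # no explicit endpoint pair configured
            return default_allow         # fall back to the segment's default behaviour
        return rule == "allow"

    topology_policy = {
        frozenset(("endpoint1", "endpoint3")): "allow",
        frozenset(("endpoint2", "endpoint3")): "prohibit",
    }

    print(connected("endpoint1", "endpoint3", topology_policy))   # True
    print(connected("endpoint2", "endpoint3", topology_policy))   # False
    print(connected("endpoint1", "endpoint2", topology_policy))   # True (default)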
  • Figure 14 is a user interface diagram of the routing strategy provided in an embodiment of the present application.
  • the routing strategy includes a routing table of the segment in the network interoperability model.
  • One or more routing table items can be flexibly added to the routing table.
  • Each routing table item includes a network segment and the next hop of the network segment.
  • Routing table items can be automatically generated according to the endpoints configured by the tenant. For example, after the tenant configures the endpoints in Figure 12 and clicks the OK button, the cloud management platform 100 creates the routing table of the segment based on the network interconnection model configured by the tenant. Routing table items can also be manually added or modified by the tenant, as described in the introduction of Figure 8 above, which is not repeated here.
  • the routing table items shown in Figure 14 are only examples, not all the routing table items of a segment.
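The following minimal Python sketch (hypothetical names) illustrates how routing table items of the kind shown in Figure 14 could be generated from the configured endpoints, with each entry mapping a destination network segment to its next-hop endpoint.

    # Sketch: building a segment routing table from configured endpoints.
    def build_routing_table(endpoints):
        """endpoints maps an endpoint name to the list of subnets behind it."""
        table = []
        for ep_name, subnets in endpoints.items():
            for subnet in subnets:
                table.append({"destination": subnet, "next_hop": ep_name})
        return table

    endpoints = {
        "endpoint1": ["1.1.0.0/24"],
        "endpoint2": ["2.2.0.0/24"],
    }
    for entry in build_routing_table(endpoints):
        print(entry)   # e.g. {'destination': '1.1.0.0/24', 'next_hop': 'endpoint1'}

    # Tenants may also add or modify entries manually, e.g. an iterated route:
    manual_entry = {"destination": "3.3.0.0/16", "next_hop": "endpoint3"}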
  • the security policy can be used to configure the degree of traffic protection between connected resource pools to ensure traffic security between resource pools.
  • the security policy in this embodiment includes a proximal security policy and an inter-domain security policy.
  • the proximal security policy is used to protect the traffic security of computing nodes in the resource pool.
  • the inter-domain security policy is used to protect the traffic security between domains.
  • Figure 15 is a schematic diagram of a user interface of a security policy provided in an embodiment of the present application. As shown in Figure 15, after adding a security policy in the user interface, the tenant can select whether the type of the security policy is a proximal security policy or an inter-domain security policy.
  • the local security policy includes but is not limited to one or more of the following:
  • Instance: indicates the location where the proximal security policy takes effect, such as the IP address of a computing node within a site (such as a resource pool).
  • Message identification: may include the five-tuple information of the message, such as the source IP address, destination IP address, source port number, destination port number and protocol number. Behavior: includes allow and discard.
  • the tenant may input or select computing nodes/computing instances in resource pool 1 in the instance configuration item.
  • the configuration process may include that the tenant first selects resource pool 1 in the candidate list of the instance configuration item.
  • the user interface may further display information such as IP addresses of computing nodes/computing instances included in resource pool 1 for the user to continue selecting.
  • the tenant may also directly input the IP address of the computing node/computing instance in the instance configuration item.
  • the message identifier includes fields corresponding to the five-tuple information.
  • the tenant can selectively configure some or all of the fields. For example, enter the IP address of computing instance 2 in resource pool 1 (such as 1.1.0.1) in the source IP address, and select discard in the behavior configuration item. The remaining fields can be left unconfigured. Assume that the tenant selects computing instance 1 in resource pool 1 in the instance configuration item (assuming the IP address is 1.1.0.0), and computing instance 1 is located on computing node 1. In this case, when computing node 1 receives a message with a source IP address of 1.1.0.1, that is, a message from computing instance 2, computing node 1 discards the message. If the behavior configuration item is allowed, it will not be discarded, and computing node 1 will process the message normally, such as forwarding the message to computing instance 1.
  • the proximal security policy can be executed independently of the security model, that is, the proximal security policy can be deployed in the system shown in Figure 6.
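To illustrate the proximal security policy example above, the following minimal Python sketch (hypothetical, simplified five-tuple matching) shows how a computing node could apply a discard rule to incoming packets.

    # Sketch: applying a proximal security rule on a computing node.
    def apply_proximal_rules(packet, rules):
        """Each rule holds optional five-tuple fields plus a behaviour ('allow'/'discard')."""
        for rule in rules:
            match = all(
                rule.get(key) is None or rule[key] == packet.get(key)
                for key in ("src_ip", "dst_ip", "src_port", "dst_port", "protocol")
            )
            if match:
                return rule["behaviour"]
        return "allow"                      # default: process the packet normally

    rules = [{"src_ip": "1.1.0.1", "dst_ip": None, "src_port": None,
              "dst_port": None, "protocol": None, "behaviour": "discard"}]

    pkt = {"src_ip": "1.1.0.1", "dst_ip": "1.1.0.0", "src_port": 4242,
           "dst_port": 80, "protocol": "tcp"}
    print(apply_proximal_rules(pkt, rules))  # discard: packet from computing instance 2 is dropped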
  • Inter-zone security policies include but are not limited to one or more of the following:
  • Network segment: indicates the location where the inter-domain security policy takes effect, such as a segment.
  • When the effective location is a segment, the actual effective locations include the security gateways in some or all of the resource pools included in the segment.
  • Message identification: may include the five-tuple information of the message, such as the source IP address, destination IP address, source port number, destination port number and protocol number. Actions: include allowing forwarding and discarding.
  • the tenant configures an inter-domain security policy, in which the security rules include the source IP address being the IP address of computing instance 1 in resource pool 1, and the destination IP address being the IP address of computing instance 3 in resource pool 2.
  • the inter-domain gateway will send the message to the security gateway for filtering.
  • the security gateway in resource pool 1 or the security gateway in resource pool 2 receives a message whose source IP address is the IP address of computing instance 1 and the destination IP address is the IP address of computing instance 3, the message will be discarded.
  • Configuration method 2: configuration through an API.
  • the cloud management platform 100 can also provide an API for tenants to configure. Tenants can configure service parameters similar to those included in the above user interface according to their own communication requirements between multiple resource pools.
  • the cloud management platform 100 can display the API format on a web page provided by the Internet.
  • the API format may include multiple fields and the usage of each field.
  • For example, the API format includes: "segment_name":, //segment name. Here, "segment_name": is the field, and the prompt after // describes the usage of the field.
  • After seeing the API format presented on the web page, the tenant fills in the corresponding parameters according to the API format, for example, filling in the network of business 1 after "segment_name":, that is, "segment_name":"network of business 1", indicating that the name of the segment is the network of business 1.
  • the API format includes fields corresponding to all service parameters of the network intercommunication service, and the tenant enters the parameters corresponding to each field in turn.
  • the tenant can send the API with the input parameters to the cloud management platform 100 in a template via the Internet.
  • the cloud management platform 100 detects the parameters corresponding to different fields in the API, thereby obtaining the tenant's requirements for different fields of the API. Therefore, in this embodiment, the service configuration information includes the API fields and the parameters input by the tenant.
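The following minimal Python sketch (hypothetical fields other than "segment_name", which appears above) illustrates the kind of API request body a tenant might submit and how the cloud management platform could extract the parameter for each field.

    # Sketch: a tenant-filled API template and simple field extraction (hypothetical).
    import json

    request_body = json.dumps({
        "segment_name": "network of business 1",          # segment name
        "endpoints": [                                     # hypothetical field
            {"name": "endpoint1", "type": "cloud"},
            {"name": "endpoint4", "type": "traditional"},
        ],
    })

    def parse_service_configuration(body):
        # The platform reads each field to learn the tenant's requirement for that field.
        config = json.loads(body)
        return {field: value for field, value in config.items()}

    service_configuration = parse_service_configuration(request_body)
    print(service_configuration["segment_name"])           # network of business 1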
  • the service configuration information may only include the basic attribute configuration items of the network intercommunication model and the corresponding part or all parameters.
  • the service configuration information may also include the configuration items included in the topology strategy of the network intercommunication model and the corresponding part or all parameters.
  • the service configuration information may also include the configuration items included in the security policy of the network intercommunication model and the corresponding part or all parameters, etc.
  • the cloud management platform 100 can map the service configuration information to each corresponding resource pool of the tenant through the cloud service system 200, and finally take effect in each resource pool to realize automatic connection of the IaaS layer network (see the following steps).
  • Step 1102 the cloud management platform 100 sends the service configuration information to the global controller 201 .
  • After receiving the service configuration information, the global controller 201 executes step 1103.
  • the global controller 201 may also write the service configuration information into a persistent storage medium such as a hard disk in the global layer to persistently store the service configuration information of the tenant.
  • the service configuration information may be deleted to release storage space in the global layer.
  • Step 1103 the global controller 201 sends the service configuration information/network configuration information (referred to as first configuration information) to the local controller corresponding to each resource pool in the multiple resource pools.
  • the specific implementation process of step 1103 may include: the global controller 201 calls the API of each local controller to send the service configuration information to the local controller corresponding to each resource pool.
  • each resource pool refers to the multiple resource pools of the tenant indicated by the service configuration information.
  • the local controller corresponding to the resource pool refers to the local controller assigned to manage the resource pool, such as the local controller 211 corresponding to resource pool 1, the local controller 212 corresponding to resource pool 2, the local controller 213 corresponding to resource pool 3, etc. in Figure 6.
  • the correspondence between the resource pool and the local controller can be pre-set in the global controller 201.
  • the correspondence between the multiple resource pools and the local controllers can be pre-set in the global controller 201 after the cloud service system 200 is created.
  • the global controller 201 can determine the local controllers corresponding to each resource pool of the tenant based on the corresponding relationship, thereby sending service configuration information to each local controller.
  • the global controller 201 calls the APIs of the local controllers 211, 212, 213 and 214 respectively to send the service configuration information to the local controllers 211, 212, 213 and 214 respectively.
  • each local controller receives the same service configuration information.
  • the global controller may also determine the network configuration information based on the service configuration information, and the network configuration information includes part of the service configuration information.
  • In this case, the global controller sends the network configuration information to each local controller. The global controller 201 may send the network configuration information to the local controllers in any of the following ways:
  • the network configuration information sent by the global controller 201 to each local controller is the same.
  • the global controller 201 sends the same network configuration information to local controllers 211, 212, 213, and 214, respectively. That is, the network configuration information received by the local controllers corresponding to each resource pool is the same.
  • the network configuration information may be part of the service configuration information, for example, the network configuration information does not include the description information of the segment, or lacks information not required by the local controller.
  • the network configuration information sent by the global controller 201 to each local controller is completely different or not completely the same.
  • the global controller 201 can send only the information related to the resource pool (i.e., the network configuration information) in the service configuration information to the local controller corresponding to the resource pool.
  • the global controller 201 can send only the proximal security policy related to resource pool 1 to the local controller corresponding to resource pool 1, and does not need to send it to the local controllers corresponding to other resource pools, so as to save network bandwidth.
  • the global controller 201 can generate or split the network configuration information corresponding to each resource pool based on the network service information.
  • The network configuration information corresponding to each resource pool includes the information required by the local controller corresponding to the resource pool to manage the resources in the resource pool, such as, but not limited to, at least one of the following: the configuration items and parameters of the network intercommunication model, the topology policy related to the endpoint connected to the resource pool, routing table items, and the security policy.
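As an illustration of this splitting, the following minimal Python sketch (hypothetical structure, not the claimed implementation) shows how service configuration information might be split into per-resource-pool network configuration information so that, for example, a proximal security policy for resource pool 1 is not sent to other pools.

    # Sketch: splitting service configuration info into per-pool network config (hypothetical).
    def split_per_pool(service_config):
        per_pool = {}
        for ep in service_config["endpoints"]:
            pool = ep["pool"]
            per_pool[pool] = {
                "segment": service_config["segment_name"],
                "endpoint": ep,
                # only topology/security items that mention this pool's endpoint
                "topology": [p for p in service_config["topology"] if ep["name"] in p["pair"]],
                "proximal_security": [r for r in service_config["proximal_security"]
                                      if r["pool"] == pool],
            }
        return per_pool

    service_config = {
        "segment_name": "Business 1 Network",
        "endpoints": [{"name": "endpoint1", "pool": "pool1"},
                      {"name": "endpoint2", "pool": "pool2"}],
        "topology": [{"pair": ("endpoint1", "endpoint2"), "state": "allow"}],
        "proximal_security": [{"pool": "pool1", "behaviour": "discard", "src_ip": "1.1.0.1"}],
    }
    print(split_per_pool(service_config)["pool2"]["proximal_security"])   # [] - not sent to pool 2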
  • After the local controller receives the service configuration information/network configuration information, the local controller can optionally store it in a persistent storage medium within the local layer. Optionally, when the network intercommunication service subscribed to by the tenant expires and is confirmed not to be renewed, the stored information is deleted to release storage space in the local layer.
  • Step 1104 The local controller sends network configuration information (referred to as second configuration information) to the inter-domain gateway in the resource pool.
  • the service configuration information/network configuration information sent by the global controller to the local controller is referred to as first configuration information
  • the network configuration information sent by the local controller to the inter-domain gateway is referred to as second configuration information.
  • Taking resource pool 1 as an example, the local controller 211 generates second configuration information according to the first configuration information, and sends the second configuration information to the inter-domain gateway 311.
  • the second configuration information may include indication information for instructing the inter-domain gateway 311 to establish a VxLAN tunnel with the forwarding node 321.
  • the inter-domain gateway 311 establishes a VxLAN tunnel with the forwarding node 321 based on the second configuration information.
  • the second configuration information may also include indication information for establishing a network connection between the inter-domain gateways. For example, if the topology policy in the first configuration information indicates that resource pool 1 and resource pool 2 are interconnected, then the second configuration information includes indication information for instructing the inter-domain gateway 311 to establish a VxLAN tunnel with the inter-domain gateway 312.
  • the indication information also includes information required for establishing the VxLAN tunnel, such as the outer IP address of the inter-domain gateway 312 for VxLAN tunnel encapsulation, the VNI encapsulated by the VxLAN tunnel, etc.
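The following minimal Python sketch (hypothetical names) illustrates the kind of indication information the second configuration information might carry for the inter-domain gateway 311: one entry for the intra-pool tunnel towards the forwarding node, and one entry per interconnected peer gateway.

    # Sketch: tunnel indication entries derived from the first configuration info (hypothetical).
    def build_tunnel_indications(local_gw, forwarding_node_ip, peers):
        indications = [{
            "tunnel": "intra-pool", "local_vtep": local_gw,
            "remote_vtep": forwarding_node_ip,        # VxLAN tunnel to the forwarding node
        }]
        for peer in peers:                            # one tunnel per interconnected pool
            indications.append({
                "tunnel": "inter-domain", "local_vtep": local_gw,
                "remote_vtep": peer["outer_ip"],      # outer IP of the peer inter-domain gateway
                "vni": peer["shared_vni"],            # VNI shared by the two resource pools
            })
        return indications

    peers = [{"name": "inter-domain gateway 312", "outer_ip": "192.0.2.12", "shared_vni": 5012}]
    for ind in build_tunnel_indications("192.0.2.11", "10.1.0.21", peers):
        print(ind)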
  • Step 1105 The local controller in the resource pool sends routing information to the controller in the site in the resource pool.
  • The specific process of the local controller 211 executing step 1105 may include: the local controller 211 calls the API (referred to as the first API) of the in-site controller 301 in the resource pool for sending routing information, generates the routing information, and sends the routing information to the in-site controller 301.
  • the first API is a standard API defined in this embodiment for sending routing information.
  • the on-site controller 301 in the resource pool installs and runs a plug-in (such as a plugin) or a software program that can implement the first API.
  • the plug-in or software program can be installed in the on-site controller 301 when building the cloud service system 200, or the local controller 211 notifies the on-site controller 301 to download and install the plug-in or software program, or the local controller 211 sends the plug-in or software program to the on-site controller 301 and instructs the on-site controller 301 to install it.
  • the specific implementation method is not limited.
  • the format of the first API includes but is not limited to one or more of the following:
  • Next hop type: indicates the type of data-plane encapsulation for the next hop, such as VxLAN, GRE, or another overlay protocol.
  • vnid: the VNI used for VxLAN encapsulation of packets, filled in as part of the next-hop information.
  • Every two interconnected resource pools form a VxLAN network that shares one VNI. For example, if resource pool 1 and resource pool 2 are interconnected, resource pool 1 and resource pool 2 share a VNI; if resource pool 1 and resource pool 3 are interconnected, resource pool 1 and resource pool 3 share a VNI.
  • remote_ip: the outer destination IP address of the tunnel encapsulation. For example, when the routing information is used by the forwarding node 321 to encapsulate a packet sent by a computing instance in resource pool 1 into a VxLAN packet and send it to the inter-domain gateway 311 in resource pool 1, remote_ip is the IP address of the inter-domain gateway 311.
  • Router MAC address: the inner destination MAC address of the tunnel encapsulation, for example, the MAC address of the inter-domain gateway 311 acting as the VTEP endpoint.
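Putting the first-API fields above together, the following minimal Python sketch (hypothetical call names; only the fields come from the list above) shows routing information a local controller could generate for the in-site controller so that forwarding nodes reach the inter-domain gateway in one hop.

    # Sketch: routing information in the (hypothetical) first-API field layout.
    def build_route(next_hop_type, vnid, remote_ip, router_mac):
        return {
            "next_hop_type": next_hop_type,   # data-plane encapsulation, e.g. VxLAN or GRE
            "vnid": vnid,                     # VNI used when VxLAN-encapsulating packets
            "remote_ip": remote_ip,           # outer destination IP: the inter-domain gateway
            "router_mac": router_mac,         # inner destination MAC of the VTEP endpoint
        }

    # Route pushed to the in-site controller of resource pool 1: traffic leaving the pool
    # is VxLAN-encapsulated by the forwarding node and sent straight to gateway 311.
    route = build_route("VxLAN", 5001, "192.0.2.11", "00:11:22:33:44:55")
    print(route)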
  • the local controller 211 requests the in-site controller 301 to call the first API.
  • the in-site controller 301 provides the first API to the local controller 211.
  • the local controller generates routing information based on the received network configuration information and the format of the first API.
  • the local controller sends the routing information to the in-site controller 301 (see step 1105).
  • The local controllers that execute step 1105 may be some or all of the local controllers corresponding to the multiple resource pools indicated by the service configuration information. For example, since no in-site controller is deployed in the traditional resource pool, the local controller 214 does not need to execute step 1105.
  • Step 1106 The controller in the site sends the routing information to the forwarding node in the resource pool.
  • The manner in which the in-site controller sends the routing information to each forwarding node is not described in detail here.
  • After the forwarding node 321 obtains the routing information, it can encapsulate a packet (such as an Ethernet packet) sent by a computing instance into a VxLAN packet according to the routing information, and send the VxLAN packet to the inter-domain gateway 311 through the VxLAN tunnel. It can be seen that the traffic does not need to pass through an additional gateway between the forwarding node and the inter-domain gateway in the resource pool.
  • the forwarding node is a computing node, the computing node and the inter-domain gateway can be directly connected in one hop to achieve the optimal path.
  • the Ethernet message is a message sent by computing instance 1 in resource pool 1 to computing instance 3 in other resource pools such as resource pool 2.
  • After receiving the VxLAN packet sent by the forwarding node 321, the inter-domain gateway 311 re-encapsulates the VxLAN packet, for example, modifying the outer destination IP of the VxLAN packet to the IP address of the inter-domain gateway 312 and modifying the VNI to the VNI shared by resource pool 1 and resource pool 2, which is not described in detail here.
  • the re-encapsulated VxLAN message is sent to inter-domain gateway 312 through the VxLAN tunnel between inter-domain gateway 311 and inter-domain gateway 312.
  • Figure 16 exemplifies the transmission path of the message.
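The following minimal Python sketch (hypothetical packet representation) illustrates the re-encapsulation step described above: the inter-domain gateway rewrites the outer destination IP to the peer gateway and the VNI to the value shared by the two resource pools.

    # Sketch: re-encapsulating a VxLAN packet at the inter-domain gateway (hypothetical).
    def reencapsulate(vxlan_packet, peer_gateway_ip, shared_vni):
        out = dict(vxlan_packet)             # copy; the inner payload is left untouched
        out["outer_dst_ip"] = peer_gateway_ip
        out["vni"] = shared_vni
        return out

    packet_from_forwarding_node = {
        "outer_dst_ip": "192.0.2.11",        # inter-domain gateway 311
        "vni": 5001,                          # intra-pool VNI
        "inner": "Ethernet frame from computing instance 1 to computing instance 3",
    }
    forwarded = reencapsulate(packet_from_forwarding_node, "192.0.2.12", 5012)
    print(forwarded["outer_dst_ip"], forwarded["vni"])   # 192.0.2.12 5012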
  • step 1104 and step 1105-step 1106 are two independent processes. There is no strict timing limitation between the two independent processes. For example, step 1104 may be executed first, and then step 1105-step 1106, or step 1105-step 1106 may be executed first and then step 1104, and so on.
  • this embodiment designs the following steps for a cloud-type resource pool:
  • Step 1107 The local controller in the resource pool sends an event subscription request to the controller in the site in the resource pool.
  • the local controller 211 calls the API (referred to as the second API) used by the in-site controller 301 in the resource pool to issue event subscriptions, generates a subscription request, and sends the subscription request to the in-site controller 301 .
  • the second API is a standard API for issuing event subscriptions defined in this embodiment.
  • the controller 301 in the site of the cloud installs and runs a plug-in (such as a plugin) or software program that can implement the second API. See the relevant description of the first API mentioned above, which will not be repeated here.
  • the plug-in that implements the first API and the plug-in that implements the second API may be the same or different, and there is no specific limitation. For convenience, it is usually implemented by the same plug-in or software program, and the following is an introduction using the plugin implementation as an example.
  • the cloud in this embodiment uses a VxLAN network
  • resource change events such as virtual machine migration, IP addition and subtraction, etc., which may affect the change of routing table entries, may occur. Therefore, in this embodiment, the local controller 211 in the cloud calls the second API in the cloud to send a routing subscription request to the site controller 301 in the cloud. When there is a routing change in the cloud, the site controller 301 reports the resource change event to the local controller.
  • the format of the second API includes but is not limited to one or more of the following:
  • the CIDR list includes the subscribed address range, that is, all IP addresses or routing addresses within the address range are subscribed. In other words, if there is a change in the IP address or routing address within the range, the controller 301 in the site needs to report the resource change event to the local controller 211.
  • Subscriber IP address: that is, the IP address of the local controller 211, used by the in-site controller 301 to send resource change events to the local controller 211 when such events occur in the cloud.
  • Virtual private router ID in the cloud: the ID of the virtual private router where the above CIDR list is located.
  • IP address for authentication: used for authentication when the in-site controller 301 notifies the subscriber after a resource change event occurs in the cloud.
  • Authentication user name: the user name used for authentication.
  • Authentication password: the account password used for authentication.
  • Tenant ID: tenant information within the cloud, such as a tenant ID, used to uniquely identify a tenant.
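The following minimal Python sketch (hypothetical field names following the list above) shows a routing subscription request a local controller could build for the in-site controller using the second API.

    # Sketch: a routing-subscription request in the (hypothetical) second-API layout.
    def build_subscription(cidrs, subscriber_ip, vpc_router_id, auth, tenant_id):
        return {
            "cidr_list": cidrs,               # address ranges whose IP/route changes are subscribed
            "subscriber_ip": subscriber_ip,   # local controller address to notify on changes
            "vpc_router_id": vpc_router_id,   # virtual private router holding the CIDR list
            "auth": auth,                     # IP address, user name and password for authentication
            "tenant_id": tenant_id,           # uniquely identifies the tenant within the cloud
        }

    subscription = build_subscription(
        cidrs=["1.1.0.0/24"],
        subscriber_ip="10.0.0.10",            # local controller 211
        vpc_router_id="vrouter-1",
        auth={"ip": "10.0.0.10", "user": "lc211", "password": "***"},
        tenant_id="tenant-a",
    )
    print(subscription["cidr_list"])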
  • the local controller 211 requests the in-site controller 301 to call the second API.
  • the in-site controller 301 provides the second API to the local controller 211.
  • the local controller 211 generates a subscription request in the format of the second API based on the received service configuration information/first configuration information, and sends the subscription request to the in-site controller 301.
  • Optionally, the in-site controller 301 may first authenticate the local controller 211 based on the authentication information carried in the subscription request, and perform the subsequent steps only after the authentication succeeds.
  • Step 1108 When a resource change event occurs in the cloud, the controller in the site sends a notification message to the local controller in the cloud.
  • the controller 301 in the site calls an API (ie, a third API) for updating routing table entries provided by the local controller 211 in the cloud, and generates a notification message indicating a resource change event.
  • the third API is a standard northbound API defined in this embodiment, and is used by the controller 301 in the site to report a notification message indicating a resource change event.
  • the format of the third API includes but is not limited to one or more of the following:
  • vpc_id: the virtual private router ID in the cloud.
  • ip_address: the IP address in the VPC in the cloud, which may be an IPv4 address or an IPv6 address. This address can be the IP address of the object where the resource change occurred, or a newly added IP address in the cloud; it can be a network segment or the IP address of a specific host (such as a computing instance or computing node).
  • router_mac: the inner destination MAC address used by the intra-cloud inter-domain gateway for VxLAN encapsulation.
  • vtep_ip: the outer destination IP address used by the intra-cloud inter-domain gateway for VxLAN encapsulation.
  • For example, when the computing node where a computing instance (such as a virtual machine) is located changes, for example, computing instance 1 is migrated from computing node 1 to computing node 2 as shown on the left side of FIG. 9:
  • ip_address is the IP address of computing instance 1, indicating that the object where the resource change occurred is computing instance 1.
  • router_mac is the MAC address of computing node 2.
  • vtep_ip is the IP address of computing node 2 acting as the VTEP endpoint.
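Continuing the migration example, the following minimal Python sketch (hypothetical field names following the third-API list above) shows the notification an in-site controller could report, and how the local controller might use it to update the next hop for computing instance 1.

    # Sketch: a resource-change notification and the resulting route update (hypothetical).
    notification = {
        "vpc_id": "vrouter-1",
        "ip_address": "1.1.0.0",              # computing instance 1, the migrated object
        "router_mac": "66:77:88:99:aa:bb",    # MAC of computing node 2 (new location)
        "vtep_ip": "10.1.0.22",               # computing node 2 acting as the VTEP endpoint
    }

    def update_route(routing_table, note):
        # Point the changed address at its new VTEP so the gateway forwards to computing node 2.
        routing_table[note["ip_address"]] = {
            "vtep_ip": note["vtep_ip"], "router_mac": note["router_mac"]}
        return routing_table

    routes = {"1.1.0.0": {"vtep_ip": "10.1.0.21", "router_mac": "00:aa:bb:cc:dd:ee"}}
    print(update_route(routes, notification)["1.1.0.0"]["vtep_ip"])   # 10.1.0.22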
  • the controller 301 in the site within the cloud calls the third API of the local controller 211 in the cloud, and generates a notification message indicating the resource change event according to the format of the third API based on the resource change event.
  • Step 1108 the controller 301 in the site within the cloud sends the notification message to the local controller 211 in the cloud.
  • Step 1109 the local controller 211 in the cloud updates the routing information according to the notification message.
  • Updating the routing information includes, for example, modifying the next hop information of the object where the resource change event occurs in the routing information of the inter-domain gateway 311 to the relevant information indicated in the notification message according to the notification message, for example, modifying the next hop of computing instance 1 to the IP address of computing node 2, so that when the inter-domain gateway 311 receives the message sent to computing instance 1, it will forward the message to computing node 2, and then computing node 2 will forward the message to computing instance 1.
  • This embodiment does not limit how to update the routing information specifically.
  • the updated routing information can enable the computing instances in the cloud to communicate with the objects where the resource change event occurs.
  • the local controller 211 in the cloud sends the updated routing information to the inter-domain gateway, and the inter-domain gateway performs VxLAN encapsulation and VxLAN message transmission based on the updated routing information.
  • the cloud management platform 100 sends the security policy to the global controller corresponding to the tenant, and the global controller sends the security policy to the local controller in the corresponding resource pool, and the local controller maps the security policy to the resource pool.
  • the local controller sends the proximal security policy in the security policy to the site controller in the resource pool, and the site controller uses the existing security capabilities in the resource pool to implement the proximal security policy to ensure the security of the traffic in the pool.
  • The local controller sends the inter-domain security policy in the security policy to the security gateway in the resource pool, and the security gateway implements the inter-domain security policy to ensure the security of inter-domain traffic.
  • the above design can implement traffic protection of different granularities through the security rule configuration, meeting the security and flexibility of traffic interconnection across multiple resource pools.
  • The above uses resource pool 1 as an example to describe the information interaction process within a cloud-type resource pool. Similar operations are also performed in other cloud-type resource pools such as resource pool 2, and are not repeated here.
  • For a virtualized resource pool, since a pure VLAN network is used, no resource change events occur. Therefore, steps 1108-1109 do not need to be performed in the virtualized resource pool.
  • A traditional resource pool does not deploy an in-site controller and has no automated control capability. Therefore, for a traditional resource pool, the local controller only needs to control the inter-domain gateway to access the VLAN network in step 1104 and does not need to interact with the resource pool; that is, steps 1105-1109 do not need to be performed in the traditional resource pool.
  • In summary, this embodiment provides a general solution that supports automatic network interconnection between resource pools of different vendors, supports access to resource pools of various types such as cloud types (including homogeneous clouds and heterogeneous clouds), virtualized resource pools and traditional resource pools, supports the use of hardware devices or software for VxLAN encapsulation within resource pools, and provides the optimal traffic path on the data plane.
  • The inter-domain gateways and security gateways can be implemented in hardware or software to meet the needs of various scenarios.
  • Tenants configure service configuration information on the cloud management platform, and the cloud management platform maps the service configuration information to each resource pool to manage the network and security of the tenant's multiple resource pools.
  • the embodiment of the present application also provides a cloud management platform, which is used to execute the method executed by the cloud management platform in the method embodiment of Figure 11.
  • the cloud management platform 1700 includes an acquisition module 1701 and a determination module 1702; specifically, in the cloud management platform 1700, each module is connected through a communication path.
  • Acquisition module 1701 is used to obtain service configuration information configured by the tenant on the cloud management platform, and the service configuration information includes one or more of the following: network identifier, terminal node identifier, and terminal node type; wherein the network identifier is used to indicate the identifier of the network including the at least two resource pools for establishing a network connection, each terminal node corresponds to a resource pool, and the resource pool corresponds to multiple service providers, each resource pool includes multiple computing nodes, and the multiple computing nodes are used to run the tenant's business.
  • the terminal node type indicates the type of resource pool corresponding to the terminal node; please refer to the description of step 1102 for details, which will not be repeated here.
  • the determination module 1702 is used to create corresponding terminal nodes for the at least two resource pools according to the terminal node type. For details, please refer to the description of step 1102, which will not be repeated here.
  • the service configuration information further includes one or more of the following:
  • the terminal node pair includes two terminal nodes, and the connection status of the terminal node pair includes allowing connection and/or prohibiting connection.
  • the service configuration information further includes a routing rule between a network segment included in the at least one resource pool and the terminal node.
  • the service configuration information further includes a proximal security policy and an inter-domain security policy.
  • the acquisition module 1701 is also used to obtain one or more of the following configured by the tenant on the cloud management platform: the type of the resource pool, the location information of the resource pool, the private network VPC information of the resource pool, the subnet information of the resource pool, the interface information of the inter-domain gateway within the resource pool accessing the resource pool, and the virtual local area network VLAN information.
  • types of resource pools include: homogeneous cloud, heterogeneous cloud, virtualized resource pool, and traditional resource pool.
  • the following takes the determination module 1702 in the cloud management platform 1700 as an example to introduce the implementation of the determination module 1702.
  • the implementation of the acquisition module 1701 can refer to the implementation of the determination module 1702.
  • the determination module 1702 may be an application or a code block running on a computer device.
  • the computer device may be at least one of a physical host, a virtual machine, a container, and other computing devices.
  • the above-mentioned computer device may be one or more.
  • the determination module 1702 may be an application running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers used to run the application may be distributed in the same availability zone (AZ) or in different AZs. The multiple hosts/virtual machines/containers used to run the application may be distributed in the same region or in different regions. Typically, a region may include multiple AZs.
  • multiple hosts/virtual machines/containers used to run the application can be distributed in the same virtual private cloud (VPC) or in multiple VPCs.
  • a region can include multiple VPCs, and a VPC can include multiple AZs.
  • the determination module 1702 may include at least one computing device, such as a server, etc.
  • the determination module 1702 may also be a device implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL) or any combination thereof.
  • the multiple computing devices included in the determination module 1702 may be distributed in the same AZ or in different AZs.
  • the multiple computing devices included in the determination module 1702 may be distributed in the same region or in different regions.
  • the multiple computing devices included in the determination module 1702 may be distributed in the same VPC or in multiple VPCs.
  • the multiple computing devices may be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs.
  • the division of modules in the embodiments of the present application is schematic and is only a logical function division. There may be other division methods in actual implementation.
  • the functional modules in the embodiments of the present application can be integrated into one module, or each module can exist physically separately, or two or more modules can be integrated into one module.
  • the first acquisition module and the second acquisition module are integrated into one module, or the first acquisition module and the second acquisition module are the same module.
  • Similarly, the first determination module and the second determination module may be integrated into one module, or the first determination module and the second determination module may be the same module.
  • the above integrated unit can be implemented in the form of hardware or in the form of software functional units.
  • the present application also provides a computing device 1800.
  • the computing device 1800 includes: a bus 1802, a processor 1804, a memory 1806, and a communication interface 1808.
  • the processor 1804, the memory 1806, and the communication interface 1808 communicate with each other through the bus 1802.
  • the computing device 1800 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors and memories in the computing device 1800.
  • the bus 1802 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus may be divided into an address bus, a data bus, a control bus, etc.
  • In FIG. 18, the bus is represented by only one line, but this does not mean that there is only one bus or only one type of bus.
  • the bus 1802 may include a path for transmitting information between various components of the computing device 1800 (e.g., the memory 1806, the processor 1804, and the communication interface 1808).
  • Processor 1804 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
  • the memory 1806 may include a volatile memory, such as a random access memory (RAM).
  • the processor 1804 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 1806 stores executable program codes, and the processor 1804 executes the executable program codes to respectively implement the functions of the aforementioned acquisition module 1701 and determination module 1702, thereby implementing the management method for a multi-resource pool network. That is, the memory 1806 stores instructions for the cloud management platform 1700 to execute the management method for a multi-resource pool network provided in this application.
  • the communication interface 1808 uses a transceiver module such as, but not limited to, a network interface card or a transceiver to implement communication between the computing device 1800 and other devices or communication networks.
  • the embodiment of the present application also provides a computing device cluster.
  • the computing device cluster includes at least one computing device.
  • the computing device may be a server.
  • the computing device may also be a terminal device such as a desktop computer, a laptop computer, or a smart phone.
  • the computing device cluster includes at least one computing device 1800.
  • the memory 1806 in one or more computing devices 1800 in the computing device cluster may store the same instructions for executing the resource allocation method.
  • the memory 1806 of one or more computing devices 1800 in the computing device cluster may also store some instructions for executing the resource allocation method.
  • the combination of one or more computing devices 1800 may jointly execute the instructions for executing the resource allocation method.
  • the memory 1806 in different computing devices 1800 in the computing device cluster can store different instructions, which are respectively used to execute part of the functions of the computing device. That is, the instructions stored in the memory 1806 in different computing devices 1800 can implement the functions of one or more modules in the acquisition module 1701 and the determination module 1702.
  • one or more computing devices in the computing device cluster can be connected via a network.
  • the network can be a wide area network or a local area network, etc.
  • Figure 20 shows a possible implementation. As shown in Figure 20, two computing devices 1800A and 1800B are connected via a network. Specifically, the network is connected via a communication interface in each computing device.
  • the memory 1806 in the computing device 1800A stores instructions for executing the functions of the acquisition module 1701.
  • the memory 1806 in the computing device 1800B stores instructions for executing the functions of the determination module 1702.
  • the functionality of the computing device 1800A shown in FIG20 may also be performed by multiple computing devices 1800.
  • the functionality of the computing device 1800B may also be performed by multiple computing devices 1800.
  • the embodiment of the present application also provides another computing device cluster.
  • the connection relationship between the computing devices in the computing device cluster can be similar to the connection mode of the computing device cluster described in Figures 19 and 20.
  • the difference is that the memory 1806 in one or more computing devices 1800 in the computing device cluster can store the same instructions for executing the resource management method.
  • the memory 1806 of one or more computing devices 1800 in the computing device cluster may also store partial instructions for executing the resource management method.
  • the combination of one or more computing devices 1800 may jointly execute instructions for executing the resource management method.
  • the embodiment of the present application also provides a computer program product including instructions.
  • the computer program product may be software or a program product including instructions that can be run on a computing device or stored in any available medium.
  • the at least one computing device executes the resource management method.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium can be any available medium that can be stored by a computing device or a data storage device such as a data center containing one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state hard disk).
  • the computer-readable storage medium includes instructions that instruct the computing device to execute the resource management method.
  • the computer-executable instructions in the embodiments of the present application may also be referred to as application code, which is not specifically limited in the embodiments of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means.
  • the computer-readable storage medium may be any available medium that a computer can access or a data storage device such as a server or data center that includes one or more available media integrated.
  • the available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)), etc.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment in combination with software and hardware. Moreover, the present application may adopt the form of a computer program product implemented in one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) that contain computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A management method for a multi-resource-pool network, a cloud management platform, and an apparatus. The method is applied to a cloud management platform. The cloud management platform obtains service configuration information configured by a tenant on the cloud management platform, the service configuration information including one or more of the following: a network identifier, terminal node identifiers, and terminal node types. The network identifier indicates the identifier of a network that includes at least two resource pools between which network connections are to be established, each terminal node corresponds to one resource pool, the resource pools correspond to multiple service providers, each resource pool includes multiple computing nodes, and the multiple computing nodes are used to run the tenant's services; the terminal node type indicates the type of the resource pool corresponding to the terminal node. The cloud management platform creates corresponding terminal nodes for the at least two resource pools according to the terminal node types, thereby providing the tenant with a unified management tool for a hybrid multi-cloud, multi-pool network and reducing the difficulty of managing such a network.

Description

A management method for a multi-resource-pool network, a cloud management platform, and an apparatus
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202211352006.6, filed with the Chinese Patent Office on October 31, 2022 and entitled "A Cloud Communication System", which is incorporated herein by reference in its entirety; this application also claims priority to Chinese Patent Application No. 202310484280.7, filed with the Chinese Patent Office on April 28, 2023 and entitled "A Management Method and System for a Multi-Resource-Pool Network", which is incorporated herein by reference in its entirety.
技术领域
本申请涉及计算机技术领域,尤其涉及一种针对多资源池网络的管理方法、云管理平台及装置。
背景技术
随着云计算技术的发展以及企业业务的增长,企业可能使用多个云供应商提供统一的计算/存储服务,以提升云基础设施能力、控制成本。未来,混合多云多池架构是支撑企业业务的主流形态。
混合多云多池架构通常包含两个及以上的资源池,这些资源池用于共同运行业务。通常,业务要求多个资源池业务数据共享,也即多个资源池需要构建互通网络。然而,由于多个资源池之间天然的技术隔离,如该多个资源池来自不同的供应商,采用不同的网络模型和通信技术,目前实现任意多个资源池之间互通网络的配置十分复杂、低效,这是这也一直是构建及管理异构系统的痛点。如何解决混合多云多池架构中复杂的网络管理问题是目前亟待解决的问题。
Summary
This application provides a management method for a multi-resource-pool network, a cloud management platform and an apparatus, which are used to provide users with a unified management tool for a hybrid multi-cloud, multi-pool network and to reduce the difficulty of managing such a network.
According to a first aspect, this application provides a management method for a multi-resource-pool network. The method is applied to a cloud management platform. In the method, the cloud management platform can provide a network intercommunication service for a tenant. The cloud management platform obtains service configuration information configured by the tenant for the network intercommunication service, the service configuration information including one or more of the following: a network identifier (such as a segment), terminal node (such as endpoint) identifiers, and terminal node types. The network identifier indicates the identifier of the tenant's global network, the global network being a network composed of at least two resource pools; a network connection can be established between any two of the at least two resource pools to achieve cross-resource-pool intercommunication. The at least two resource pools may come from multiple service providers (or cloud resource providers), for example one of the at least two resource pools comes from cloud vendor A and another resource pool comes from cloud vendor B. Each resource pool usually includes multiple computing nodes, and the multiple computing nodes are used to run the tenant's services. A terminal node represents one resource pool in the tenant's global network; correspondingly, a terminal node identifier uniquely identifies a resource pool, and the terminal node type indicates the type of the resource pool identified by the terminal node identifier. After obtaining the service configuration information, the cloud management platform creates corresponding terminal nodes for the at least two resource pools according to the terminal node types.
With the above method, the cloud management platform provides a network intercommunication service. The tenant can configure the service configuration information of the network intercommunication service according to the multi-resource-pool network actually required by its services, for example configuring a terminal node identifier that identifies each resource pool in the actual multi-resource-pool deployment and a terminal node type that indicates the type of each resource pool. The cloud management platform creates the terminal node corresponding to each resource pool in turn according to the terminal node types, thereby building a global network that represents the tenant's multiple resource pools. The multiple resource pools may come from multiple service providers and are no longer limited to resource pools within a single service provider, which provides the tenant with a unified way to manage a hybrid multi-cloud, multi-pool network and reduces the difficulty of managing it.
In a possible implementation, any two terminal nodes intercommunicate by default; alternatively, the tenant is allowed to configure the connection states between terminal nodes. Correspondingly, the service configuration information further includes one or more of the following: terminal node pairs and connection states of the terminal node pairs, where a terminal node pair includes two terminal nodes, and the connection state of a terminal node pair includes allowing connection and/or prohibiting connection.
With the above method, based on the terminal node pairs and their connection states, whether any two of the tenant's multiple resource pools are allowed or prohibited from being connected can be set, giving the tenant flexibility in configuring resource pool intercommunication.
In a possible implementation, the service configuration information further includes routing rules between network segments included in at least one resource pool and the terminal nodes.
With the above method, the tenant is given the flexibility to configure the routing rules between the network segments within a resource pool and the terminal nodes.
In a possible implementation, the service configuration information further includes a proximal security policy and an inter-domain security policy.
With the above method, the tenant is given the flexibility to configure traffic security policies for multi-resource-pool intercommunication, improving the traffic security of communication within and between resource pools.
In a possible implementation, the method further includes: obtaining one or more of the following configured by the tenant on the cloud management platform: the type of a resource pool, the location information of the resource pool, the private network (VPC) information of the resource pool, the subnet information of the resource pool, the information of the interface through which the inter-domain gateway within the resource pool accesses the resource pool, and virtual local area network (VLAN) information.
In a possible implementation, the cloud management platform is further used to manage a cloud service system, the cloud service system including a global controller and at least two local controllers, one local controller corresponding to one of the at least two resource pools. The method further includes: the global controller obtains the service configuration information from the cloud management platform; and the global controller sends the service configuration information to each local controller.
With the above method, the cloud management platform maps the service configuration information configured by the tenant to each resource pool through the cloud service system; the tenant does not need to be concerned with the underlying network implementation, which reduces the difficulty of managing the hybrid multi-cloud, multi-pool network.
In a possible implementation, the method further includes: a local controller calls a first application programming interface (API) of the in-site controller in the corresponding resource pool to send routing information to the in-site controller.
With the above method, a standard, unified first API is provided, based on which a local controller can exchange routing information with the in-site controller of a resource pool of any type, satisfying a variety of tenant service requirements.
In a possible implementation, the routing information includes some or all of the following:
the type of the next hop, the vnid of the network-virtualization (VxLAN) tunnel encapsulation, the outer destination Internet Protocol (IP) address of the VxLAN tunnel encapsulation, and the outer destination local area network (MAC) address of the VxLAN tunnel encapsulation.
With the above method, the routing information can be used by the computing nodes in a resource pool to perform VxLAN packet encapsulation, so that a computing node and the inter-domain gateway are one hop apart, achieving an optimal data-plane path.
In a possible implementation, the method further includes: the local controller calls a second application programming interface (API) of the in-site controller in the corresponding resource pool to send a subscription request to the in-site controller, the subscription request being used to subscribe to resource change events within the resource pool.
In a possible implementation, the types of resource pools include: homogeneous cloud, heterogeneous cloud, virtualized resource pool, and traditional resource pool.
With the above method, the tenant's networking requirements for different types of resource pools are satisfied, providing a unified way of managing hybrid multi-cloud, multi-pool networks of various types.
According to a second aspect, this application further provides a cloud management platform. The cloud management platform has the corresponding functions of the cloud management platform in the method examples of the first aspect; for the beneficial effects, refer to the description of the first aspect, which is not repeated here. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions. In a possible design, the structure of the apparatus includes an acquisition module and a creation module. In a possible design, the first acquisition module and the second acquisition module may be the same module, and the first determination module and the second determination module may be the same module. These modules can perform the corresponding functions of the cloud management platform in the method examples of the first aspect; for details, refer to the detailed description in the method examples, which is not repeated here.
According to a third aspect, this application further provides a computing device cluster. The computing device cluster includes at least one computing device, and the at least one computing device has the corresponding functions of the cloud management platform in the method examples of the first aspect; for the beneficial effects, refer to the description of the first aspect, which is not repeated here. The structure of each computing device includes a processor and a memory, and the processor is configured to support the computing device in performing some or all of the corresponding functions of the cloud management platform in the method of the first aspect. The memory is coupled to the processor and stores the program instructions and data necessary for the computing device. The structure of the computing device further includes a communication interface for communicating with other devices.
According to a fourth aspect, this application further provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the methods of the first aspect and of each possible design of the first aspect.
According to a fifth aspect, this application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the methods of the first aspect and of each possible design of the first aspect.
According to a sixth aspect, this application further provides a computer chip. The chip is connected to a memory and is configured to read and execute a software program stored in the memory to perform the methods of the first aspect and of each possible implementation of the first aspect.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a VxLAN network model architecture;
FIG. 2 is a schematic diagram of the architecture of a data center;
FIG. 3 is a first schematic diagram of the architecture of a network model;
FIG. 4 is a second schematic diagram of the architecture of a network model;
FIG. 5 is a third schematic diagram of the architecture of a network model;
FIG. 6 is a schematic diagram of a possible system architecture provided in an embodiment of this application;
FIG. 7A is a schematic structural diagram of a possible network intercommunication service provided in an embodiment of this application;
FIG. 7B is a schematic structural diagram of another possible network intercommunication service provided in an embodiment of this application;
FIG. 8 is a schematic diagram of several routing policy configuration methods provided in an embodiment of this application;
FIG. 9 is a schematic diagram of the architecture of a forwarding node provided in an embodiment of this application;
FIG. 10 is a schematic diagram of another possible system architecture provided in an embodiment of this application;
FIG. 11 is a schematic flowchart of a method for implementing multi-resource-pool communication provided in an embodiment of this application;
FIG. 12 is a first schematic diagram of a user interface provided in an embodiment of this application;
FIG. 13 is a second schematic diagram of a user interface provided in an embodiment of this application;
FIG. 14 is a third schematic diagram of a user interface provided in an embodiment of this application;
FIG. 15 is a fourth schematic diagram of a user interface provided in an embodiment of this application;
FIG. 16 is a schematic diagram of the data-plane architecture of a multi-resource-pool intercommunication network provided in an embodiment of this application;
FIG. 17 is a schematic structural diagram of a computing apparatus provided in an embodiment of this application;
FIG. 18 is a schematic structural diagram of a computing device provided in an embodiment of this application;
FIG. 19 is a schematic structural diagram of a computing device cluster provided in an embodiment of this application;
FIG. 20 is a schematic structural diagram of another computing device cluster provided in an embodiment of this application.
Detailed Description of Embodiments
To better understand the solutions of the embodiments of this application, some related terms and concepts that may be involved in the embodiments of this application are first introduced below.
1. Overlay network (overlay)
Overlay refers to building a logical network on top of a physical network. An overlay network is a logical network built on an underlay network, and the underlay network is the underlying physical basis of the overlay network. Overlay networks use various network protocols and standards, including virtual extensible local area network (VxLAN), generic routing encapsulation (GRE), and so on. Among them, VxLAN is currently a common protocol standard for overlay networks.
2. Virtual local area network (VLAN)
VLAN is a network isolation technology that logically divides one physical LAN into multiple broadcast domains, where LAN refers to a local area network. Specifically, VLAN technology divides a large physical layer-2 domain into many small logical layer-2 domains, and such a logical layer-2 domain is called a VLAN. Devices within the same VLAN can perform layer-2 communication, while different VLANs are isolated at layer 2. In other words, one physical local area network can be divided into multiple VLANs; all devices in the same VLAN are in the same broadcast domain, and broadcasts cannot propagate across VLANs.
In general, VLAN is a network isolation technology that can logically divide the physical local area network of a data center into multiple VLANs, which are distinguished by VLAN numbers. The standard defines only 12 bits for the VLAN number, that is, the available VLAN numbers range from 1 to 4094, which can meet the needs of traditional data centers. However, with the development of cloud computing technology, the number of virtual machines in a data center has grown by orders of magnitude compared with the original physical machines. For scenarios such as public clouds or other large virtualized cloud computing services, which easily involve tens of thousands of tenants or more, the isolation capability of VLAN is clearly insufficient, and VxLAN therefore emerged.
3. VxLAN
VxLAN is an extension of VLAN that virtualizes multiple layer-2 networks on one set of physical network devices through network virtualization technology. Specifically, VxLAN uses tunneling technology to build layer-2 Ethernet tunnels on top of a layer-3 network, thereby achieving cross-region layer-2 interconnection. In other words, VxLAN can create virtual layer-2 subnets or segments that span a physical layer-3 network, and each layer-2 subnet is uniquely identified by a VxLAN network identifier (VNI) that segments the traffic. The VNI is 24 bits long and supports more than 16 million virtual networks, which can satisfy the multi-tenant, multi-instance scenarios of clouds and other large virtualized networks.
FIG. 1 is a schematic diagram of a VxLAN network model. The network model includes network devices 101, 102 and 200 and hosts 1-8. A network device may be an independent network device, such as a switch, a router or a gateway, or may be a server where virtual machines are located. Different network devices may have different functions. For ease of description, network device 101 and network device 102 are collectively referred to as layer-2 network devices below, and network device 200 is referred to as a layer-3 network device. It should be noted that FIG. 1 is illustrated with the layer-2 network devices being switches and the layer-3 network device being a router as an example, which is not limited in this application.
At the functional level, a layer-2 network device can access one or more hosts, that is, establish connections with one or more hosts, to form an area network. The area network may be a LAN or a VLAN and can be understood as a subnet or broadcast domain. A host may be a server or a computing instance running in a server, such as a virtual machine or a container. The layer-2 network device can provide a data path for any two hosts in the local area network, to achieve communication between any two hosts in the local area network. As shown in FIG. 1, host 1 (the source host) sends a packet to host 4 (the destination host), and network device 101 receives the packet sent by host 1 and forwards the packet to host 4. A layer-3 network device can connect multiple local area networks to achieve communication between hosts in different local area networks. The larger network formed by the multiple local area networks may also be called a layer-3 network.
VxLAN technology can create multiple virtual layer-2 networks on a layer-3 network architecture by establishing VxLAN tunnels, for example establishing a VxLAN tunnel between two layer-2 network devices. In this case, a layer-2 network device may also be called a VxLAN tunnel endpoint (VTEP) device (VTEP for short), including the starting point (also called the source VTEP) or the end point (also called the destination VTEP) of the VxLAN tunnel. Different virtual layer-2 networks are identified by VNIs. It can be understood that one VNI represents one tenant; IP addresses within the same VNI are unique, that is, hosts with the same VNI have different IP addresses, while hosts belonging to different VNIs may have the same IP address. Multiple hosts accessed under one network device may have different VNIs. A VxLAN tunnel is a virtual channel established between two network devices for transmitting VxLAN packets.
4. Resource pool
A resource pool is a configuration mechanism and a logical abstraction for flexibly managing resources, used to partition host resources. In other words, one resource pool includes one or more hosts; alternatively, a resource pool may also be divided by computing instances, where computing instances include virtual machines, containers and the like, for example one resource pool includes multiple virtual machines.
5. In the embodiments of this application, the term "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the cases where A exists alone, both A and B exist, and B exists alone, where A and B may be singular or plural. "One or more of the following" and similar expressions refer to any combination of these items, including any combination of a single item or multiple items. For example, one or more of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b and c may be single or multiple.
图2是一种数据中心的架构示意图。该数据中心的物理架构包括一个或多个传统虚拟机资源池和一个或多个传统物理机资源池,通常,一个传统虚拟机资源池包括多个虚拟机,一个传统物理机资源池包括多个物理服务器。虚拟机和物理服务器可以统称为计算资源,这些计算资源用于运行企业业务。
企业业务运行在传统数据中心时,网络互通和网络安全由数据中心的硬件设备提供,如图2所示,该数据中心的物理组网中的东西向流量(指横向流量,如虚拟机资源池内的虚拟机与物理机资源池内的物理服务器之间交互的流量)的二、三层转发由spine/leaf节点完成。其中,leaf节点可以是负责二层网络流量转发的交换机,spine节点可以是负责三层网络流量转发的路由器等。东西向流量的安全防护由旁挂在borderleaf的防火墙完成。南北向流量(指竖向流量,如互联网用户通过互联网访问虚拟机资源池内的虚拟机,虚拟机资源池内的虚拟机向互联网用户发送反馈消息等)的转发集中到数据中心出口接入区完成,出口接入区的设备除了出口路由器做流量转发之外,还会部署一些安全设备如防火墙/WAF等来保证数据中心内的业务安全。
当企业业务上云,即企业客户使用云供应商的云计算资源在本地自建云后,从整个数据中心组网来看,云是以一个独立的资源池集成到企业客户数据中心的全局网络中的,是数据中心的一部分。而企业业务是部署在各个资源池上的,因此,云一定会面临跨资源池网络互通的诉求,以某个企业客户的办公应用为例,办公应用分散部署在不同的资源池上,当需要后端数据共享时,企业客户对数据中心的全局网络的诉求就是支持跨资源池跨云专线(direct connect,DC)的互通网络。
如下介绍现有的几种跨资源池互通的网络模型:
图3为一种跨资源池互通的数据面架构示意图,如图3所示,该云系统包括两朵云,每朵云包括多个计算节点(如物理服务器),每个计算节点上可创建多个云服务器(如虚拟机),云服务器用于处理租户的业务。其中,域间网关集群采用x86服务器实现域间互通,云内采用overlay架构,如VxLAN架构,每个计算节点内可采用vSwitch(软件)做vtep封装,计算节点和域间网关集群一跳直达。
该方案中,由于域间网关是采用x86服务器实现的,存在成本高的问题,并且完全由x86服务器的CPU提供算力,导致CPU容易成为瓶颈,在流量压力大的时候,域间通信的时延也会变大,稳定性也随之变差。并且,该方案采用的架构只能应用于同构云,不支持异构云,并且也不支持传统虚拟机资源池和传统物理机资源池。其中,同构云是指多朵云为同一个云供应商,比如图3中的两朵云均为同一个云供应商时,两朵云为同构云,反之,若两朵云来自不同的云供应商则两朵云为异构云。
图4为又一种跨资源池互通的数据面架构示意图,如图4所示,云内使用硬件设备(如TOR-NVE)做overlay封装,域间网关也采用硬件实现,域间网关和TOR-NVE都是通过SDN控制器(图4未示出)来自动下发配置,能够实现计算节点和域间网关的路径最优。
该方案通常由拥有硬件SDN控制器的网络设备厂家采用,通过SDN控制器统一管理资源池内的设备(如TOR-NVE)和域间网关,但不支持域内采用当前主流的vSwitch做overlay的方案,并且该方案采用的架构也只能应用于同构云,而不支持异构云。
图5为另一种跨资源池互通的数据面架构示意图,该方案依赖资源池对外提供的专线能力,支持第三方通过域间网关接入资源池内的专线网关,多朵云可接入同一个第三方的域间网关集群实现跨资源池互通。如图5所示,该架构中,云内采用overlay架构,域内可采用vSwitch做vtep封装,域间网关通过专线网关接入域内网络实现互通。
该方案既支持同构云也支持异构云,兼容性更强,然而该方案依赖域内提供的专线能力,而专线配置复杂,并且数据面路径长,时延大。
可见,当前多云多池网络差异大,配置复杂、低效,不同资源池内的网络功能类似,但网络模型抽象各有不同,多站点网络形成孤岛,无法集中统一管控。如果没有统一的管理工具,网络管理员需要人工把不同类型的资源池的网络错综复杂的拼凑在一起,配置难度和管理难度是非常大的。企业客户迫切希望解决跨资源池网络的统一配置、统一管理、统一运维的问题,然而,目前没有一个厂商可提供针对混合多云多池部署的统一的管理工具,能够解决企业客户复杂的网络管理问题。
鉴于此,本申请实施例提供了一种通用的针对混合多云多池部署的网络互通服务,该网络互通服务可实现在平台即服务(platform as a service,PaaS)层面支持多云,基础设施即服务(infrastructure as a service,IaaS)层面网络自动打通,租户通过租赁该服务实现对多资源池网络的统一配置、管理及运维。
下面将结合具体的附图,对本申请实施例中的技术方案进行详细的介绍。
图6为本申请实施例提供的一种可能的应用场景示意图。该应用场景中,云厂商提供云服务,例如云服务包括但不限于网络互通服务、云计算服务等。其中,支持网络互通服务的系统包括云管理平台100和云服务系统200。
云管理平台100的功能包括:提供访问接口(如界面或API)。租户(如上述企业客户或其他具有构建多个资源池互通需求的用户)可操作客户端远程接入访问接口在云管理平台100注册云账号和密码,并登录云管理平台100,云管理平台100对云账号和密码鉴权成功后,租户可进一步在云管理平台100付费选择并购买云服务和/或云上资源,在购买成功后,云管理平台100通知云上资源为租户提供服务。
举例而言,云服务和/或云上资源例如为一个虚拟机,租户可在云管理平台100选择虚拟机的规格(内存、处理器和磁盘)及数量,云管理平台100在租户付费成功后,通知云上计算节点创建具有对应数量、对应规格的虚拟机,并开放这些虚拟机的远程桌面,云管理平台100将每个远程桌面的连接账号和密码提供给租户,使得租户可通过账号密码远程登录虚拟机。应注意,云服务和/或云上资源也可以例如为容器、裸金属服务器、弹性IP(EIP)等各种云服务,本申请实施例对云服务类型不作限定。租户可通过购买该云厂商的云上资源来组建资源池,以运行租户的业务。当然,本申请中,租户也可以租赁其他云厂商的云上资源来构建资源池,对此本申请不做具体限定。
再举例而言,云服务例如是网络互通服务,租户具有构建多个资源池互通的诉求时,可在云管理平台100购买网络互通服务并配置网络互通服务所包括的服务配置信息,配置完成后所得的服务配置信息可用于表示租户对多个资源池之间的通信诉求,例如,租户的通信诉求可包括租户的业务部署在哪些资源池上,其中的哪些资源池之间需要互通(即建立网络连接)等。具体例如是租户的业务部署在图6所示的4个资源池中,该4个资源池中的任意两个资源池之间均需要互通,或者是,该4个资源池中部分资源池互通,如资源池1和资源池2需要互通,资源池2和资源池3需要互通,资源池1和资源池3不互通等。
具体的,本申请实施例提供一种网络互通模型,本实施例中的网络互通服务基于该网络互通模型实施,请结合图7A和图7B理解,具体的,网络互通服务包括网络互通模型以及应用于网络互通模型的各种策略,如拓扑策略、安全策略和路由策略等。
图7A示例性示出本实施例提供的一种网络互通模型的示意图。如图7A所示,该网络互通模型包括网络分段(segment)和端点(endpoint)(也可以称为终端节点)。其中,Segment是一种提供全局路由互通的对象,可用于表示由多个资源池组成的通信系统的全局网络。segment上可添加多个端点,每个端点表示一个资源池,此处的"端点"也可以理解为一种连接,端点的一端连接segment,另一端连接不同类型的资源池的网段。租户可通过添加端点来配置相应的资源池的信息,其中,资源池的信息包括资源池的类型、子网网段、VPC等信息。
该网络互通模型还包括端点与端点之间的连接,具有连接的两个端点所表示的资源池互通,不具有连接的两个端点所表示的资源池不互通,在一种可选的设计中,segment上的任意两个端点之间天然具有连接,这无需租户配置,换言之,在租户配置了端点所连接的资源池之后,该Segment上任意两个端点所表示的资源池默认互通。在另一种可选的设计中,segment上的endpoint之间默认全部有连接关系或全部没有连接关系,该Segment上的两个端点之间是否具有连接可由租户进行指定。如云管理平台100提供拓扑策略等供租户指定两个endpoint所连接的资源池之间是否可以具有连接。可见,该网络互通模型上的端点与资源池的类型无关,可以理解为多种类型的资源池的抽象,因此,基于该网络互通模型可构建任意类型的资源池之间的网络。
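为便于理解segment、endpoint以及端点间连接三者的关系,下面给出一段示意性的Python数据结构草图,其中的类名与字段名均为便于说明而假设,并非本申请定义的接口或数据模型。

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str            # 端点名称
    pool_type: str       # 所连接资源池的类型,如"云类型"、"虚拟化"、"传统"
    pool_info: dict      # 资源池信息,如子网网段、VPC等

@dataclass
class Segment:
    name: str                                        # segment名称,如"业务1网络"
    endpoints: dict = field(default_factory=dict)    # 端点名 -> Endpoint
    connections: set = field(default_factory=set)    # 具有连接的端点对集合

    def add_endpoint(self, ep: Endpoint, default_connect: bool = True):
        """在segment上添加端点;默认与已有端点两两互通,对应"天然具有连接"的设计。"""
        if default_connect:
            for other in self.endpoints:
                self.connections.add(frozenset({ep.name, other}))
        self.endpoints[ep.name] = ep
```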
具体的,拓扑策略用于灵活设置segment下任意两个endpoint之间的连通性,即两个端点之间是否可以互通。如两个端点具有连通性则表示可以互通,不具有连通性则表示不可以互通。例如,以添加图6所示的资源池1-4为例,租户可在该segment上添加4个端点,分别为端点1-端点4,端点1-端点4分别表示资源池1-资源池4,租户在拓扑策略添加端点对如端点1和端点3,连通性设置为禁止,表示资源池1和资源池3不互通。下文会对具体的配置方式进行介绍,此处不做赘述。
在一种实施例中,本实施例还提供一种应用于网络互通模型的路由策略,具体的,路由策略是指,每个segment有一张路由表,该路由表可包括多个表项,每个表项包括目的网段和下一跳,租户可通过路由表配置迭代路由,灵活指定访问某个目的网段的下一跳。
举例来说,假设资源池1的网段为子网1,资源池2的网段为子网2,该segment的路由表包括:表项1:目的网段为子网1以及子网1的下一跳为endpoint1,表示子网1与endpoint1连接。表项2:目的网段为子网2以及子网2的下一跳为endpoint2,表示子网2与endpoint2连接。
当租户想在全局网络中增加资源池3的子网(假设为子网3)时,存在两种配置方式,一种方式为,参见图8的(a)所示,租户在该segment上添加新的端点,如endpoint3,该endpoint3连接该子网3。另一种方式为,租户直接配置该segment的路由表,而不需要在segment上添加endpoint,比如,参见图8的(b)所示,该子网3可被连接到该segment已有的一个端点上,如endpoint1或endpoint2上,相应的,租户只需在该segment的路由表中添加表项3:目标网段为子网3,以及子网3的下一跳endpoint1。
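结合上述两种配置方式,下面给出一段示意性的Python代码,演示segment路由表"目的网段→下一跳endpoint"的查找过程;其中的网段取值与端点名称仅为举例(假设子网1-3分别为10.1.0.0/24、10.2.0.0/24、10.3.0.0/24),并非本申请限定的实现。

```python
import ipaddress

# segment的路由表:目的网段 -> 下一跳endpoint(假设采用图8(b)的方式,子网3迭代到endpoint1)
route_table = {
    "10.1.0.0/24": "endpoint1",   # 子网1
    "10.2.0.0/24": "endpoint2",   # 子网2
    "10.3.0.0/24": "endpoint1",   # 子网3无需新增端点,下一跳迭代到endpoint1
}

def lookup_next_hop(dst_ip):
    """按最长前缀匹配返回目的IP对应的下一跳端点,无匹配时返回None。"""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for cidr, endpoint in route_table.items():
        net = ipaddress.ip_network(cidr)
        if dst in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, endpoint)
    return best[1] if best else None

print(lookup_next_hop("10.3.0.8"))   # 输出 endpoint1
```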
在一种实施例中,本实施例还提供一种应用于网络互通模型的安全策略,安全策略可用于配置具有连接的资源池之间的流量防护程度,以保证资源池之间的流量安全。具体的,本实施例中的安全策略包括近端安全策略和域间安全策略。其中,近端安全策略,用于防护资源池内计算节点的流量安全。域间安全策略,用于防护域间的流量安全。其中,域间安全策略应用于本实施例提供的一种安全模型中,下文会对此安全模型的架构进行介绍。
可见,该网络互通模型是多个资源池组成的全局网络的抽象表达,与单独的资源池的网络实现无关,也即网络互通服务可屏蔽不同资源池内的网络模型差异,供租户实现混合多云多池网络的统一管理。
在租户对服务配置信息配置完成且付费成功后,云管理平台100通知云服务系统200为租户提供服务,如云服务系统200根据服务配置信息对多个资源池进行管理,管理范畴包括根据服务配置信息建立多个资源池之间的网络连接,以及将租户配置的各项策略映射至资源池的网络模型中,最终在资源池生效。在为租户提供服务期间,云服务系统200保证该多个资源池之间具有满足租户诉求的网络连接,并按照租户配置的策略运行。当然,后续租户也可以根据新的通信诉求来修改服务配置信息,从而调整多个资源池之间的网络连接状态。
如下结合图6所示的云服务系统200介绍多资源池互通的网络实现层面的架构。
返回图6,云服务系统200包括全局控制器201和多个本地控制器(如图6中的本地控制器211-214)。
全局控制器201,负责全局的资源处理和配置下发,如全局控制器201从云管理平台100获取服务配置信息,并向本地控制器211-214发送服务配置信息(或与服务配置信息相关的网络配置信息)。之后,由本地控制器211-214根据服务配置信息/网络配置信息促成该租户的多个资源池之间的互通。
本地控制器211-214,用于与全局控制器201通信,如接收全局控制器发送的服务配置信息/网络配置信息,并根据服务配置信息/网络配置信息建立跨资源池之间的网络连接。又如,将服务配置信息/网络配置信息中所包括的、租户配置的一项或多项策略映射至资源池内。
在一种示例中,本地控制器与资源池的关系为1:1,即一个资源池被分配一个本地控制器,或者说一个本地控制器负责管理一个资源池。如图6中,本地控制器211用于管理资源池1,本地控制器212用于管理资源池2,本地控制器213用于管理资源池3,本地控制器214用于管理资源池4。在另一种示例中,本地控制器与资源池的关系为n:1,n取正整数,即一个资源池可被分配多个本地控制器,多个本地控制器互为热备,或者,多个本地控制器共同管理站点。为便于说明,如下以一个资源池被分配一个本地控制器为例进行介绍。
接下来对资源池进行介绍。
本实施例适用的资源池的类型包括云类型和非云类型。云类型包括但不限于私有云、公有云、混合云和边缘云等,非云类型包括但不限于虚拟化资源池、传统资源池等。本申请实施例对同一租户的多个资源池的类型没有限定。例如,图6中的资源池1可以是私有云,资源池2为公有云,资源池3为虚拟化资源池,资源池4为传统资源池。不同类型的资源池可能具有不同的架构,首先以私有云为例来介绍本实施例内云类型的资源池的架构。
如图6所示,私有云内包括站点内控制器301、域间网关311、转发节点321和多个计算实例。
其中,计算实例,用于运行租户的业务,包括但不限于虚拟机、容器、裸金属服务器等,通过虚拟化技术可以在一台物理计算节点上创建多个计算实例。计算节点可以是服务器、台式计算机等设备。
该私有云所包括的计算实例可以是租户从任何云厂商处租赁的,如可以是从提供该网络互通服务的云厂商处租赁,或者是从其他云厂商处租赁的,本申请对此没有限定。当该私有云和云管理平台100由同一个云厂商提供时,该私有云可称为同构云。当该私有云和云管理平台100由不同云厂商提供时,该私有云为异构云。也即本申请实施例中的多个资源池既支持同构云也支持异构云。比如,图6中的私有云为同构云,图6中的公有云为异构云,或者,图6中的私有云和公有云均为同构云,或均为异构云,等等。
转发节点321,用于转发报文,如接收计算实例发送的报文并转发至下一跳,或者向计算实例转发报文。具体的,转发节点用于对同一子网内的计算实例之间的报文进行转发,如将资源池1的同一子网内的计算实例1的报文转发至该子网内的计算实例2。本实施例还支持资源池内采用overlay架构,如VxLAN网络,转发节点321作为vtep端点时,具体用于对计算节点的报文进行VxLAN封装,并将封装后的报文发送至另一个vtep端点(如域间网关311)。
如图9所示,本申请中,转发节点可以是软件,如vSwitch,vSwitch可部署于计算节点内,当一个计算节点创建有多个计算实例时,该计算节点即为该多个计算实例的转发节点。或者,转发节点还可以是硬件,如TOR-NVE等具有网络通信功能及数据处理功能的任何转发设备,满足多场景需求。
域间网关311,用于实现跨资源池的通信,即将域间网关311所在资源池内的计算实例的报文转发至另一个资源池内的域间网关,如域间网关312。以资源池2为例,在一种场景中,资源池1和资源池2采用overlay架构,域间网关311和域间网关312之间建立VxLAN隧道,域间网关311作为vtep端点,具体用于对来自资源池1内的计算实例的报文进行VxLAN封装,并将封装后的VxLAN报文转发给另一个vtep端点,如资源池2中的域间网关312,从而实现跨资源池的流量互通。
具体的,本实施例中的域间网关可以是硬件网关也可以是软件网关,有组网条件的,可以部署硬件网关,硬件域间网关能够支持大规模和高性能的通信网络,满足企业客户设备利旧、高性能的诉求。无组网条件的,可以采用软件网关部署在计算节点上。可选的,本地控制器211具有标准的南向接口,支持对接不同厂商的硬件设备,例如域间网关等。因此,本实施例内的域间网关311可以是提供网络互通服务的云厂商的设备(可称为一方域间网关),或者还可以是其他厂商的设备(可称为三方域间网关)。当域间网关为一方域间网关时,相应的,云服务系统200还包括域间网关(如图6中的域间网关311-314中的至少一个)。
站点内控制器301,用于管理私有资源池内的路由以及资源变更,如虚拟机的迁移、IP迁移、IP增减等。值得注意的是,站点内控制器301为私有云内原始存在的管控面设备,用于向转发节点下发路由信息,管理资源池内VxLAN网络中的资源变更等。应注意,为了兼容现有资源池原有的硬件架构,站点内控制器并非是所有资源池都必须部署的,如图6所示,传统资源池内没有站点内控制器。
上文以私有云为例介绍了云类型的资源池的架构,与云类型的资源池不同的是,虚拟化资源池内没有采用overlay架构,属于纯VLAN网络,因此,不存在虚拟机迁移等资源变更,且域间网关只需要提供VLAN接入能力即可。传统资源池内未部署站点内控制器,因此无自动化控制,本地控制器控制域间网关即可,不需要和资源池交互。针对不同类型的资源池,本地控制器可采用不同的管理策略,下文会对此进行详细说明。
值得注意的是,云服务系统200可以是云厂商在获知到租户的通信诉求后,专门为满足租户的通信诉求而搭建的,用于为该租户提供网络互通服务,并非原始存在。其中,云服务系统200可以是被自动创建的,如云管理平台100通知云内计算节点创建(如安装)全局控制器201和多个本地控制器,或者,云服务系统200也可以是云厂商人员人工手动创建的,具体不做限定。
具体的,全局控制器201和本地控制器211-214可以是分布式软件系统,或者,全局控制器201和本地控制器211-214还可以是实现上述软件功能的分布式硬件系统。以分布式软件系统为例,其中,全局控制器201可安装于用于运行云管理平台100的计算节点中,或云管理平台100之外独立的计算节点中。本地控制器可以部署于靠近全局控制器侧,或者部署于资源池内。考虑到为了节省资源池内通信的网络带宽,通常将本地控制器部署于对应的资源池内,如部署在资源池内的一个计算节点中,或专用于运行本地控制器的服务器中,具体不做限定。从而云服务系统200中的global层和local层管控分离,本地控制器211就近管控资源池,对全局控制器201屏蔽不同资源池的差异。
本实施例提供的网络互通服务是租户构建及管理多个资源池互通网络的统一工具,租户只需要在云管理平台100配置网络互通服务来表达通信诉求,云管理平台100通过云服务系统200将租户的通信诉求映射至多个资源池,最后在各个资源池生效,从而实现多个资源池网络自动打通。并且,该网络互通服务支持租户的任意类型的资源池,满足多种场景需求,租户不需要关心底层网络实现层面的问题。后续,租户可继续通过该网络互通服务对该多个资源池的互通网络进行统一管理及运维,使租户对混合多云多池互通网络的创建、管理等方式更加简单、便捷、高效,解决了企业客户复杂的网络管理问题。
需要说明的是,为保持简洁,图6仅示出少量的设备,如实际应用中,图6中的域间网关可以是一个单独的域间网关,也可以替换为域间网关集群等。另外需要说明的是,图6所示的架构仅是一种可能的示例,但是本领域的普通技术人员应该理解,实际应用的系统架构还可以包括比图示更多、更少或不同的组件,且所示出的组件可以按任意方式进行组合或划分,本申请对此不做具体限定。如图10所示,本实施例还提供一种安全模型的架构示意图,图10在图6的基础上,云服务系统200还包括在租户的各资源池内部署的安全网关,安全网关可以是硬件网关也可以是软件网关,用于对域间(资源池与资源池之间)流量进行安全防护,如用于对域间网关接收到的报文进行过滤,提高域间流量的安全性。
上文介绍了为租户提供网络互通服务的相关内容,下面对通过网络互通服务构建多资源池互通网络的具体实施例过程进行介绍。
如下以应用于图6或图10所示的系统为例,对本申请实施例提供的针对多资源池网络的管理方法进行介绍。图11示例性示出本申请实施例提供的一种针对多资源池网络的管理方法的方法流程示意图。为保持简洁,图11仅示出一个资源池内的信息交互流程,如图11所示,该方法可包括如下步骤:
步骤1101,云管理平台100获取租户在云管理平台100输入或选择的服务配置信息。
如前所述,云管理平台100可提供界面或API等访问接口供租户配置服务参数,如下对这两种配置方式进行详细介绍。
配置方式一:通过用户界面进行配置。
云管理平台100可提供控制台(console)用户界面以供租户进行配置。
图12为本实施例提供的一种console用户界面(简称用户界面)的示意图,如图12所示,租户可根据自身对多个资源池之间的通信需求在该用户界面的相关属性配置参数。示例性的,参见图12的左侧菜单栏,该云厂商提供的网络互通服务的属性包括但不限于网络互通模型、拓扑策略、路由策略、安全策略等。租户选择上述任一个具体的属性后,该用户界面可呈现该网络互通服务的相关属性配置项,如图12所示,租户选择网络互通模型后,图12右侧显示该网络互通模型的相关配置项,以供租户进行选择或输入,完成该项网络互通服务的相关参数的配置。
因此,服务配置信息包括用户界面提供的网络互通服务的相关属性配置项及租户针对每个属性配置项输入或选择的参数。
如下分别对上述属性及相关配置项进行介绍:
一、网络互通模型的基础架构。
继续参见图12,网络互通模型的基础架构配置项可包括但不限于:segment配置项,和/或,endpoint配置项。其中,segment配置项包括但不限于下列中的一项或多项:segment的名称、segment的描述信息、用于跨资源池互通流量做VxLAN封装的VxLAN网络标识(VxLAN Network Identifier,vni)(图12未示出)等。
其中,segment的名称和segment的描述信息是租户编辑的,方便租户查看,比如segment的名称为业务1网络,描述信息为业务1网络包括云厂商A和云厂商B的4个资源池。当然,此处仅是一个示例,其名称和描述信息还可以是其他内容。跨云互通流量做VxLAN封装的vni,可以是云管理平台100自动分配的,如从vni池中随机选择一个空闲的vni,不需要用户填写。
endpoint,不同的资源池类型具有不同的endpoint配置项。其中,资源池类型包括云类型和非云类型,其中,非云类型还可以划分为传统类型和虚拟化类型,如下分别进行介绍:
(1)云类型:
表示该endpoint所连接的资源池是云上的VPC子网,其中云包括但不限于公有云、私有云、混合云、边缘云等。云类型的endpoint配置项包括但不限于下列中的一项或多项:
1,云位置信息:也可以称为云标识,用于唯一标识一个资源池。具体的,云位置信息包括全局控制器与该云内的本地控制器通信所需的信息,以及,该云内的本地控制器与该云内的站点内控制器通信所需的信息,用于全局控制器下发给该云内的本地控制器,该云内的本地控制器下发给该云内的站点内控制器所需的地址信息(如IP地址)和账号信息。
2,云上VPC信息:用于该云内的本地控制器给站点内控制器下发路由信息等。
3,云上VPC子网信息:用于该云内的本地控制器向该云内的域间网关下发路由信息等。
(2)传统类型:
表示该endpoint所连接的资源池的类型为非云类型,具体为非云类型中的传统资源池类型,传统类型的endpoint配置项包括但不限于下列中的一项或多项:
1,资源池位置信息:也可以称为资源池标识,用于唯一标识一个资源池。具体的,资源池位置信息包括全局控制器与该资源池内的本地控制器通信所需的信息,如全局控制器下发给资源池内的本地控制器所需的地址信息(如包括站点内控制器等设备的IP地址)和用于鉴权使用的账号信息。
2,资源池内的子网信息:用于该资源池内的本地控制器向该资源池内的域间网关下发路由信息等。
3,域间网关上连接资源池的接口信息和VLAN信息:用于该资源池内的本地控制器向该资源池内的域间网关下发路由信息等。
(3)虚拟化类型:
表示该endpoint所连接的资源池的类型为虚拟化资源池,虚拟化类型的endpoint配置项包括但不限于下列中的一项或多项:
1,资源池位置信息:用于资源池内的本地控制器下发给该资源池内的站点内控制器所需的地址信息(如包括IP地址)和用于鉴权使用的账号信息。
2,资源池内的子网信息:用于该资源池内的本地控制器向该资源池内的域间网关下发路由信息等。
3,域间网关上连接资源池的接口信息和VLAN信息:用于该资源池内的本地控制器向该资源池内的域间网关下发路由信息等。
在输入或选择完成上述配置项的信息后,租户点击确定按键,云管理平台在该segment上创建各资源池对应的端点。
二、拓扑策略。
如前所述,拓扑策略用于灵活设置segment下任意两个endpoint之间的连通性。示例性的,拓扑策略包括但不限于下列信息中的一项或多项:
端点对:指定两个端点。
连通性:允许还是禁止。
基于拓扑策略,租户可指定segment上哪两个端点之间的连通性,允许连通的两个端点对应的资源池互通,禁止连通的两个端点对应的资源池不互通,即不具有连接,无法通信。
举例来说,图13为本申请实施例提供的一种拓扑策略的用户界面示意图,本实施例中,租户对多个资源池的通信需求可以是多个资源池中的任意两个资源池互通,或者,租户对多个资源池的通信需求为多个资源池中的部分资源池互通,部分资源池之间不可互通。比如,图6中资源池1和资源池2互通,而资源池2和资源池3不互通,等等。相应的,云管理平台100提供拓扑策略等供租户指定两个endpoint所连接的资源池之间是否可以具有连接。
相应的,如图13所示,租户可添加一个或多个端点对,假设在其中一个端点对的配置项选择或输入端点1和端点3,在连通性选项选择允许,则表示端点1和端点3互通,也即端点1连接的资源池1和端点3连接的资源池3之间允许连接或者说具有连接。又例如,另一个端点对被配置为端点2和端点3,连通性选项为禁止,则表示端点2和端点3不互通,也即端点2连接的资源池2和端点3连接的资源池3之间不允许连接或者说不具有连接。
在一种可选的实施方式中,若segment上的端点之间默认具有连接时,租户只需要添加不允许连接的两个端点的端点对,并将该端点对的连通性设置为禁止,不需要重复添加已默认具有连接的端点对。如此,当不允许连接的端点对较少时,租户只需要配置少量的拓扑策略即可,简化租户的配置流程,节约用户的时间。反之,若segment上的端点之间默认不具有连接时,租户只需要添加允许连接的两个端点的端点对,并将该端点对的连通性设置为允许,不需要对已默认不具有连接的端点对进行重复设置。可选的,segment上的端点之间默认具有连接或不具有连接也可以进行设置。
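下面用一段示意性的Python代码说明上述两种默认行为下的连通性判断逻辑,其中的端点名称与策略内容仅为举例,并非本申请限定的实现。

```python
# 拓扑策略:端点对 -> 连通性("允许"/"禁止"),此处仅示例一条禁止规则
topology_policy = {
    frozenset({"endpoint1", "endpoint3"}): "禁止",
}

def is_connected(ep_a, ep_b, default_connected=True):
    """判断segment上两个端点是否互通:优先看显式策略,否则取默认连通性。"""
    rule = topology_policy.get(frozenset({ep_a, ep_b}))
    if rule is not None:
        return rule == "允许"
    return default_connected

print(is_connected("endpoint1", "endpoint3"))   # False:被显式禁止
print(is_connected("endpoint1", "endpoint2"))   # True:默认具有连接
```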
三、路由策略。
图14为本申请实施例提供的路由策略的用户界面示意图,该路由策略包括网络互通模型中segment的路由表,该路由表中可灵活添加一条或多条路由表项,每条路由表项包括网段和该网段的下一跳。
值得注意的是,上述路由表项可根据租户配置的端点自动生成,例如,租户在图12配置完端点后,点击确定按键,云管理平台100即基于该租户配置的网络互通模型创建segment的路由表;另一方面,路由表项也可以由租户手动添加或修改,如参见前述图8的相关介绍,此处不再赘述。另外,图14所示的路由表项仅为示例,并非是一个segment的全部路由表项。
四、安全策略。
安全策略,可用于配置具有连接的资源池之间的流量防护程度,以保证资源池之间的流量安全。示例性的,本实施例中的安全策略包括近端安全策略和域间安全策略。其中,近端安全策略,用于防护资源池内计算节点的流量安全。域间安全策略,用于防护域间的流量安全。
图15为本申请实施例提供的一种安全策略的用户界面示意图,如图15所示,租户可在该用户界面中添加一条安全策略后,选择安全策略的类型为近端安全策略还是域间安全策略。
其中,近端安全策略包括但不限于下列中的一项或多项:
(1)Instance:表示近端安全策略生效的位置,如站点(如资源池)内的某个计算节点的ip地址。
(2)安全规则:匹配的报文标识和行为。报文标识可包括报文的五元组信息,如源ip地址,目的ip地址,源端口号(port),目的端口号(port)和协议号。行为包括允许和丢弃。
举例而言,继续参见图15,租户可在instance配置项输入或选择:资源池1内的计算节点/计算实例等。例如,该配置过程可包括,租户首先在instance配置项的候选列表选择资源池1,在选中资源池1后,用户界面可进一步显示资源池1内包括的计算节点/计算实例的信息如IP地址等供用户继续选择。或者,租户还可在instance配置项直接输入计算节点/计算实例的IP地址。
在安全规则配置项中,报文标识包括五元组信息分别对应的字段,租户可选择性配置其中的部分或全部字段,比如,在源ip地址输入资源池1内的计算实例2的ip地址(如1.1.0.1),并在行为配置项选择丢弃,其余字段可不配置。假设租户在instance配置项选择资源池1内的计算实例1(假设IP地址为1.1.0.0),且计算实例1位于计算节点1上,这种情况下,当计算节点1接收到源ip地址为1.1.0.1的报文即接收到来自计算实例2的报文时,计算节点1将该报文丢弃。若该行为配置项为允许时,则不会丢弃,计算节点1正常处理该报文,如将该报文转发至计算实例1。
应注意,上述仅是一个例子,租户可以仅设置五元组信息中的一个或多个字段,也可以设置全部字段,本申请对此不做限定。另外需要说明的是,由于近端安全策略不依赖安全网关,因此近端安全策略可不依赖安全模型执行,即在图6所示的系统内可以部署近端安全策略。
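下面给出一段示意性的Python代码,说明按五元组匹配近端安全策略并执行相应行为的思路;其中的字段名(如src_ip)与取值均为便于说明而假设,并非本申请定义的配置项。

```python
# 一条示意性的近端安全策略:仅配置源IP字段,行为为丢弃
rule = {"src_ip": "1.1.0.1", "action": "丢弃"}

def apply_rule(packet, rule):
    """仅当规则中配置的字段全部与报文匹配时,返回规则的行为;否则默认允许。"""
    for key in ("src_ip", "dst_ip", "src_port", "dst_port", "protocol"):
        if key in rule and rule[key] != packet.get(key):
            return "允许"
    return rule["action"]

# 来自计算实例2(源IP 1.1.0.1)的报文命中规则,被丢弃
pkt = {"src_ip": "1.1.0.1", "dst_ip": "1.1.0.0", "protocol": "tcp"}
print(apply_rule(pkt, rule))   # 输出 丢弃
```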
域间安全策略包括但不限于下列中的一项或多项:
(1)网络分段:表示域间安全生效的位置,如segment,当生效位置为segment时,实际生效位置包括该segment所包括的部分或全部资源池内的安全网关。
(2)安全规则:匹配的报文标识和行为。类似的,报文标识可包括报文的五元组信息,如源ip地址,目的ip,源port,目的port和协议号。行为包括允许转发和丢弃。
举例来说,继续参见图15,租户配置了一条域间安全策略,其中的安全规则包括源ip地址为资源池1内的计算实例1的ip地址,目的ip地址为资源池2内的计算实例3的ip地址,应理解,域间网关在接收到报文后会将报文发送给安全网关进行过滤,当资源池1内的安全网关或者资源池2内的安全网关接收到源ip地址为计算实例1的ip地址,且目的ip为计算实例3的ip地址的报文时,将该报文丢弃。
需要说明的是,上文只是示例性列举了几种参数,本申请实施例中的网络互通服务所包括的参数并不限定于此,实际应用中,可包括更多或更少的参数,任何与管理跨资源池通信相关的功能所包含的参数均适用于本实施例。
配置方式二:通过API进行配置。
除提供用户界面之外,云管理平台100还可以提供API供租户进行配置,租户可根据自身对多个资源池之间的通信需求配置与上述用户界面所包括的类似的服务参数。示例性的,云管理平台100可在互联网提供的网页上显示API格式,该API格式可包括多个字段,以及每个字段的用法,例如,该API格式包括:"segment_name":,//segment的名称。其中,"segment_name":,为字段,//后面的相关提示为字段的用法。租户在看到网页呈现的API格式之后,根据API格式填入相应的参数,例如在"segment_name":后面填入业务1的网络,即"segment_name":"业务1的网络",表示该segment的名称为业务1的网络。应理解,API格式包括该网络互通服务的全部服务参数对应的字段,租户依次输入每个字段对应的参数。
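为直观说明这种以模板方式填写API参数的形式,下面给出一个示意性的Python字典并将其序列化为JSON;除"segment_name"外,其余字段名与取值均为便于说明而假设,并非本申请定义的实际API字段。

```python
import json

# 一个示意性的服务配置模板(字段名除"segment_name"外均为假设)
service_config = {
    "segment_name": "业务1的网络",
    "endpoints": [
        {"name": "endpoint1", "type": "云类型", "vpc": "vpc-01", "subnet": "10.1.0.0/24"},
        {"name": "endpoint2", "type": "传统资源池", "subnet": "10.2.0.0/24", "vlan": 100},
    ],
    "topology_policy": [
        {"endpoint_pair": ["endpoint1", "endpoint3"], "connectivity": "禁止"},
    ],
}

# 租户可将填写好的模板序列化后通过互联网提交给云管理平台
payload = json.dumps(service_config, ensure_ascii=False, indent=2)
```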
租户可将上述输入了参数的API以模板方式通过互联网发送至云管理平台100,云管理平台100检测API中不同字段对应的参数,从而获取租户针对API不同字段对应的需求。因此,在本实施例中,服务配置信息包括API字段和租户输入的参数。
需要说明的是,不论是用户界面还是API,在网络互通服务中,除基础属性之外,其余各配置项并非是必须的配置项,或者,租户并不一定对所有的参数进行配置。比如,服务配置信息可仅包括网络互通模型的基础属性配置项及对应的部分或全部参数。在此基础上,服务配置信息还可以包括网络互通模型的拓扑策略所包括的配置项及对应的部分或全部参数。或,服务配置信息还可以包括网络互通模型的安全策略所包括的配置项及对应的部分或全部参数等。
在配置完成服务配置信息后,云管理平台100可通过云服务系统200将服务配置信息映射到租户的各个对应的资源池中,最后在各个资源池生效,实现IaaS层面网络自动打通(参见下列步骤)。
步骤1102,云管理平台100将该服务配置信息发送给全局控制器201。
全局控制器201在接收到服务配置信息后,全局控制器201执行步骤1103,可选的,全局控制器201还可以将该服务配置信息写入global层内的如硬盘等持久化存储介质中,以持久化存储该租户的服务配置信息。可选的,当该租户订购的网络互通服务到期且确认不续费后,可将该服务配置信息删除,以释放global层的存储空间。
步骤1103,全局控制器201将服务配置信息/网络配置信息(记为第一配置信息)发送给多个资源池中的每个资源池对应的本地控制器。
步骤1103的具体实施流程可包括:全局控制器201调用各本地控制器的API,以将服务配置信息发送给各个资源池对应的本地控制器。此处各个资源池是指服务配置信息所指示的租户的多个资源池。资源池对应的本地控制器是指被分配用于管理该资源池的本地控制器,如图6中,资源池1对应的本地控制器211,资源池2对应的本地控制器212,资源池3对应的本地控制器213,等。其中,资源池与本地控制器的对应关系可以是预置于全局控制器201中,继续以图6所示的多个资源池为例,该多个资源池与本地控制器的对应关系可以是在云服务系统200被创建完成之后,预置于全局控制器201中的。如此,全局控制器201可根据该对应关系确定该租户的各个资源池对应的本地控制器,从而向各本地控制器发送服务配置信息。具体的,在执行步骤1103时,全局控制器201分别调用本地控制器211、212、213和214的API,以将服务配置信息分别发送给本地控制器211、212、213和214。此时,各本地控制器接收到的是相同的服务配置信息。
在一种替代步骤1103的方案中,全局控制器还可以基于服务配置信息确定网络配置信息,网络配置信息包括服务配置信息中的部分信息。全局控制器向各本地控制器发送网络配置信息。即便如此,全局控制器201向各个本地控制器发送网络配置信息的方式仍包括多种:
在一种实施方式中,全局控制器201发送给每个本地控制器的网络配置信息均相同,如在图6中,全局控制器201将相同的网络配置信息分别发送给本地控制器211、212、213和214。也即各资源池对应的本地控制器接收到的网络配置信息相同。其中,网络配置信息可以是服务配置信息中的部分信息,比如,网络配置信息不包括segment的描述信息,或缺少本地控制器不需要的信息。
在另一种实施方式中,全局控制器201发送给各个本地控制器的网络配置信息完全不同或不完全相同,比如,以一个资源池为例,全局控制器201可将服务配置信息中仅与该资源池相关的信息(即网络配置信息)发送给该资源池对应的本地控制器即可,例如全局控制器201将只与资源池1相关的近端安全策略发送给资源池1对应的本地控制器即可,不需要发送给其他资源池对应的本地控制器,以节省网络带宽。在这种情况下,全局控制器201可基于网络服务信息生成或拆分出各资源池对应的网络配置信息,每个资源池对应的网络配置信息包括该资源池对应的本地控制器管理该资源池内资源所需的信息,如包括但不限于:网络互通模型的配置项及参数、该资源池所连接的endpoint相关的拓扑策略、路由表项、安全策略等至少一项。
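下面用一段示意性的Python代码说明"按资源池拆分网络配置信息"这一思路,即从服务配置信息中仅保留与某个endpoint相关的内容;其中沿用前文示例中的字段名,均为假设,并非本申请限定的实现。

```python
def split_config_per_pool(service_config):
    """按endpoint(资源池)拆分服务配置信息,得到每个资源池对应的网络配置信息。"""
    per_pool = {}
    for ep in service_config.get("endpoints", []):
        name = ep["name"]
        per_pool[name] = {
            "endpoint": ep,
            # 仅保留与该endpoint相关的拓扑策略
            "topology_policy": [
                rule for rule in service_config.get("topology_policy", [])
                if name in rule["endpoint_pair"]
            ],
            # 仅保留下一跳为该endpoint的路由表项(假设下一跳以端点名表示)
            "routes": [
                r for r in service_config.get("routes", [])
                if r.get("next_hop") == name
            ],
        }
    return per_pool
```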
本地控制器接收到服务配置信息/网络配置信息后,可选的,本地控制器可以将该服务配置信息/网络配置信息存储在local层内的持久性存储介质中,可选的,当该租户订购的网络互通服务到期且确认不续费后,将该服务配置信息删除,以释放local层的存储空间。
步骤1104,本地控制器向该资源池内的域间网关发送网络配置信息(记为第二配置信息)。
为便于区分,如下将全局控制器向本地控制器发送的服务配置信息/网络配置信息称为第一配置信息,将本地控制器发送给域间网关的网络配置信息称为第二配置信息。
以资源池1为例,本地控制器211根据第一配置信息生成第二配置信息,并向域间网关311发送第二配置信息。
具体的,该第二配置信息可包括指示域间网关311与转发节点321建立VxLAN隧道的指示信息。域间网关311基于该第二配置信息与转发节点321建立VxLAN隧道。可选的,该第二配置信息还可以包括建立域间网关之间的网络连接的指示信息,比如,第一配置信息中的拓扑策略指示资源池1和资源池2互通,则该第二配置信息包括用于指示域间网关311与域间网关312建立VxLAN隧道的指示信息,示例性的,该指示信息还包括用于建立该VxLAN隧道所需的信息,如域间网关312做VxLAN隧道封装外层IP地址、VxLAN隧道封装的VNI等。
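以资源池1为例,下面给出一个示意性的Python字典来表示第二配置信息中可能携带的隧道建立指示;其中的IP地址、VNI等取值均为假设,并非本申请限定的格式。

```python
# 第二配置信息的示意结构:指示域间网关311建立两条VxLAN隧道
second_config = {
    "gateway": "域间网关311",
    "tunnels": [
        {   # 与资源池1内转发节点321之间的VxLAN隧道
            "peer": "转发节点321",
            "remote_ip": "192.168.1.21",   # 转发节点321作为vtep端点的IP(假设)
            "vni": 5001,
        },
        {   # 与资源池2内域间网关312之间的VxLAN隧道(资源池1、2互通时下发)
            "peer": "域间网关312",
            "remote_ip": "203.0.113.12",   # 域间网关312做VxLAN封装的外层IP(假设)
            "vni": 6001,                    # 资源池1和资源池2共用的VNI(假设)
        },
    ],
}
```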
步骤1105,资源池内的本地控制器向该资源池内的站点内控制器发送路由信息。
为统一、简化资源池的管理方式,本申请实施例提供了几种API,下文将分别对这几种API进行详细介绍。
如下以本地控制器211为例,本地控制器211执行步骤1105的具体流程可包括:本地控制器211调用该资源池内的站点内控制器301用于下发路由信息的API(记为第一API),生成路由信息,并将路由信息发送至站点内控制器301。
该第一API为本实施例定义的用于下发路由信息的标准API,在具体的实施方式中,该资源池内的站点内控制器301安装并运行可实现该第一API的插件(如plugin)或软件程序,该插件或软件程序可以是在搭建云服务系统200时安装于该站点内控制器301的,或者是本地控制器211通知站点内控制器301下载并安装该插件或软件程序,或本地控制器211将该插件或软件程序发送给站点内控制器301,并指示站点内控制器301进行安装,具体实施方式不做限定。
示例性的,该第一API的格式包括但不限于下列中的一项或多项:
(1)下一跳类型:表示数据面封装的下一跳类型,如可以是VxLAN或者gre,或其他overlay协议。
(2)用于对报文进行VxLAN封装的vnid:下一跳信息中填写vni。本实施例中,每两个资源池组成的VxLAN网络共用一个vni,比如,资源池1和资源池2互通,则资源池1和资源池2共用一个vni;资源池1和资源池3互通,则资源池1和资源池3共用一个vni。
(3)remote_ip:隧道封装的外层目的ip地址,如资源池1内转发节点321和域间网关311之间建立VxLAN隧道,该路由信息用于转发节点321将资源池1内的计算实例发送的报文封装为VxLAN报文后发送至资源池1内的域间网关311,此时,remote_ip为域间网关311的ip地址。
(4)router mac地址:隧道封装的内层目的mac,如上述示例中,域间网关311作为vtep端点的mac。
本地控制器211向站点内控制器301请求调用第一API,站点内控制器301将第一API提供至本地控制器211,本地控制器根据接收到的网络配置信息以及第一API的格式生成路由信息,本地控制器将该路由信息发送至站点内控制器301(参见步骤1105)。
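按照上述第一API的格式,下面给出一条示意性的路由信息(以Python字典表示);其中的字段采用英文命名仅为便于说明,IP地址、mac地址等取值均为假设,并非本申请限定的格式。

```python
# 本地控制器211按第一API格式生成并下发给站点内控制器301的一条路由信息(示意)
route_info = {
    "next_hop_type": "vxlan",           # 下一跳类型:数据面封装采用VxLAN
    "vnid": 5001,                        # 对报文做VxLAN封装所用的vni
    "remote_ip": "192.168.1.11",         # 隧道封装的外层目的IP,即域间网关311的IP(假设)
    "router_mac": "fa:16:3e:00:00:11",   # 隧道封装的内层目的mac,即域间网关311作为vtep端点的mac(假设)
}
```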
应注意,执行步骤1105的本地控制器可能是多个资源池对应的本地控制器中的部分或全部本地控制器。比如,由于传统资源池中未部署站点内控制器,因此,本地控制器214不需要执行步骤1105。
步骤1106,站点内控制器将路由信息发送至资源池内的转发节点。
结合图9理解,同一个资源池内可能存在多个转发节点(如计算节点),此处为保持简洁,仅示出了对一个转发节点的路由信息下发过程,实际上,本地控制器可针对每个转发节点下发路由信息,此处不再赘述。
转发节点321获得该路由信息之后,便可以根据该路由信息将计算实例发送的报文(如以太网报文)封装为VxLan报文,并通过VxLan隧道将该VxLan报文发送至域间网关311。可以看出,资源池内转发节点和域间网关中间不需要经过额外网关,当转发节点为计算节点时,可以实现计算节点和域间网关一跳直达,实现路径最优。
在一种可能的情况中,该以太网报文为资源池1内的计算实例1向其他资源池如资源池2内的计算实例3发送的报文,则域间网关311在接收到转发节点321发送的VxLan报文之后,再对该VxLan报文进行再次封装,如修改该VxLan报文的外层目的IP为域间网关312的IP地址,VNI修改为资源池1和资源池2共享的VNI等,此处不再赘述。之后,将再次封装后的VxLAN报文通过域间网关311与域间网关312之间的VxLAN隧道发送至域间网关312,图16示例性示出了该报文的传输路径。同理,与非云类型的资源池如虚拟化资源池之间的报文传输方式也是如此,此处不再赘述。应理解,当资源池1或segment被配置了安全策略时,图16所示的报文为经过转发节点或安全网关过滤后的报文,参见前述的说明,此处不再赘述。
需要说明的是,步骤1104与步骤1105-步骤1106是两个独立的过程,这两个独立的过程之间没有严格的时序限定,比如,可以是先执行步骤1104,再执行步骤1105-步骤1106,或者,先执行步骤1105-步骤1106再执行步骤1104,等等。
有鉴于云类型的资源池可能会发生资源变更,本实施例针对云类型的资源池设计了如下步骤:
步骤1107,资源池内的本地控制器向该资源池内的站点内控制器发送事件订阅请求。
本地控制器211调用该资源池内的站点内控制器301用于下发事件订阅的API(记为第二API),生成订阅请求,并将订阅请求发送至站点内控制器301。
该第二API为本实施例定义的标准的用于下发事件订阅的API,在具体的实施方式中,该云内的站点内控制器301安装并运行可实现该第二API的插件(如plugin)或软件程序,参见前述第一API的相关说明,此处不再赘述,需要说明的是,实现第一API和实现第二API的插件可以是同一个,也可以不是,具体不做限定。为了便捷通常由同一个插件或软件程序实现,如下以plugin实现为例进行介绍。
考虑到本实施例中云内采用VxLAN网络,可能发生虚拟机迁移、IP增减等会影响到路由表项变更的资源变化事件。因此,本实施例中,云内的本地控制器211调用该云内的第二API,向云内的站点内控制器301下发路由订阅请求,当云内存在路由变更时,站点内控制器301向本地控制器上报资源变化事件。
示例性的,该第二API的格式包括但不限于下列中的一项或多项:
(1)cidr列表:cidr列表包括被订阅的地址范围,即在该地址范围内的IP地址或路由地址都被订阅,换句话说,该范围内的IP地址或路由地址出现变化,站点内控制器301均需要将资源变化事件上报给本地控制器211。
(2)订阅者IP地址:即本地控制器211的IP地址,用于在云内发生资源变化事件时,站点内控制器301将资源变化事件发送给该本地控制器211。
(3)云内虚拟私有路由器id:上述cidr列表所在的虚拟私有路由器id。
(4)鉴权的ip地址:用于在云内发生资源变化事件时,站点内控制器301通知订阅者时鉴权用的ip地址。
(5)鉴权用户名:鉴权的用户名。
(6)鉴权密码:鉴权的账号密码。
(7)租户id:云内租户信息,如租户标识,用于唯一标识一个租户。
本地控制器211向站点内控制器301请求调用第二API,站点内控制器301将第二API提供至本地控制器211,本地控制器211根据接收到的服务配置信息/第一配置信息按照第二API的格式生成订阅请求,并将该订阅请求发送至站点内控制器301。
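按照上述第二API的格式,下面给出一个示意性的订阅请求(以Python字典表示);字段名与取值均为举例,并非本申请限定的格式。

```python
# 本地控制器211按第二API格式生成并下发给站点内控制器301的订阅请求(示意)
subscribe_request = {
    "cidr_list": ["10.1.0.0/24"],    # 被订阅的地址范围
    "subscriber_ip": "192.168.1.2",   # 订阅者(本地控制器211)的IP地址
    "vpc_router_id": "router-01",     # 云内虚拟私有路由器id
    "auth_ip": "192.168.1.2",         # 上报事件时鉴权用的ip地址
    "auth_user": "lc-211",            # 鉴权用户名
    "auth_password": "******",        # 鉴权密码
    "tenant_id": "tenant-abc",        # 云内租户标识
}
```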
可选的,站点内控制器301在接收到订阅请求后,可首先基于订阅请求中携带的鉴权信息对本地控制器211进行鉴权,鉴权通过后再执行后续的步骤1108。
步骤1108,当云内发生资源变化事件时,站点内控制器向该云内的本地控制器发送通知消息。
具体的,站点内控制器301调用该云内的本地控制器211提供的用于路由表项更新的API(即第三API),生成指示资源变化事件的通知消息。
该第三API为本实施例定义的标准的北向API,用于站点内控制器301上报指示资源变化事件的通知消息。
示例性的,该第三API的格式包括但不限于下列中的一项或多项:
(1)vpc_id:云内虚拟私有路由器id。
(2)ip_address:云内VPC下的ip地址,包括ipv4地址或者ipv6地址,该地址可以是发生资源变化的对象的IP地址,或云内新增的IP地址等,可以是网段,也可以是具体的主机(如计算实例或计算节点)的ip地址。
(3)router_mac:用于该云内域间网关做VxLAN封装时所使用的内层目的mac地址。
(4)vtep_ip:用于该云内域间网关做VxLAN封装时所使用的外层目的ip地址。
比如,当计算实例(如虚拟机)迁移后,其所在的计算节点发生变化,例如结合图9左侧所示,计算实例1由计算节点1迁移至计算节点2,此时,ip_address为该计算实例1的IP地址,指示发生资源变化的对象为计算实例1。router_mac为计算节点2的mac地址。vtep_ip为计算节点2作为vtep端点的IP地址。
该云内站点内控制器301调用该云内的本地控制器211的第三API,并根据资源变更事件按照第三API的格式生成指示该资源变更事件的通知消息。
之后,该云内站点内控制器301将该通知消息发送给该云内的本地控制器211。
步骤1109,该云内的本地控制器211根据通知消息更新路由信息。
更新路由信息包括如根据通知消息将该域间网关311的路由信息中发生资源变更事件的对象的下一跳的信息修改为通知消息中指示的相关信息,比如,将计算实例1的下一跳修改为计算节点2的IP地址,这样当域间网关311接收到发送给计算实例1的报文后,会将该报文转发至计算节点2,再由计算节点2将报文转发给计算实例1。具体如何更新路由信息本实施例对此不做限定,总之,更新后的路由信息可以实现云内的计算实例与发生资源变化事件的对象进行通信。
后续,该云内的本地控制器211将更新后的路由信息发送至域间网关,域间网关基于该更新后的路由信息做VxLAN封装及VxLAN报文传输。
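下面用一段示意性的Python代码说明本地控制器根据第三API格式的通知消息更新路由信息的过程;其中的字段名、IP地址与mac地址均为假设,并非本申请限定的实现。

```python
# 路由信息:计算实例IP -> (vtep外层目的IP, 内层目的mac),初始指向计算节点1
routes = {"1.1.0.0": {"vtep_ip": "192.168.1.31", "router_mac": "fa:16:3e:00:00:31"}}

def on_resource_change(notify):
    """根据第三API格式的通知消息,更新发生资源变化的对象的下一跳信息。"""
    routes[notify["ip_address"]] = {
        "vtep_ip": notify["vtep_ip"],        # 迁移后计算节点作为vtep端点的IP
        "router_mac": notify["router_mac"],  # 迁移后计算节点的mac
    }
    # 此处省略:将更新后的路由信息下发给域间网关311

# 计算实例1(1.1.0.0)由计算节点1迁移至计算节点2时,站点内控制器上报的通知消息(示意)
on_resource_change({
    "vpc_id": "router-01",
    "ip_address": "1.1.0.0",
    "router_mac": "fa:16:3e:00:00:32",
    "vtep_ip": "192.168.1.32",
})
```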
进一步,当租户配置了安全策略后,云管理平台100将该安全策略发送至该租户对应的全局控制器,全局控制器将安全策略下发至对应资源池内的本地控制器,本地控制器将该安全策略映射至资源池内,如本地控制器将该安全策略内的近端安全策略发送至资源池内的站点内控制器,站点内控制器使用资源池内已有的安全能力实施该近端安全策略,保障池内流量安全。又如,本地控制器将该安全策略内的域间安全策略发送至该资源池内的安全网关,安全网关实施该域间安全策略,保证域间流量安全。
上述设计,通过该安全规则配置可实现不同粒度的流量防护,满足跨多个资源池互通的流量安全性和灵活性。
应注意,上述以资源池1为例来描述云类型的资源池内的信息交互流程,其他云类型的资源池如资源池2内也执行上述类似的操作,此处不再一一赘述。另外,由于虚拟化资源池内采用纯VLAN网络,不存在资源变更,因此,虚拟化资源池内不需要执行上述步骤1108-步骤1109。而传统资源池内未部署站点内控制器,无自动化控制,因此传统资源池内的本地控制器在步骤1104控制域间网关接入VLAN网络即可,不需要和资源池交互。即传统资源池内不需要执行上述步骤1105-步骤1109。
综上,本实施例提供了一种通用的支持不同厂商资源池之间的网络自动互通的方案,支持各类型如云类型(包括同构云、异构云)、虚拟化资源池、传统资源池等资源池接入,并且支持资源池内采用硬件设备或软件做VxLAN封装,同时数据面提供最优的流量路径。另外,域间网关、安全网关支持硬件或软件,满足多种场景需求。租户在云管理平台配置服务配置信息,云管理平台将服务配置信息映射至各资源池,实现对租户的多个资源池的网络、安全等的管理,通过该方案能够实现租户对多资源池互通网络的统一部署、统一管理和统一运维,降低租户使用多云多资源池系统的复杂度。
基于与方法实施例同一发明构思,本申请实施例还提供了一种云管理平台,该云管理平台用于执行上述图11的方法实施例中云管理平台执行的方法。如图17所示,云管理平台1700包括获取模块1701、确定模块1702;具体地,在云管理平台1700中,各模块之间通过通信通路建立连接。
获取模块1701,用于获取租户在所述云管理平台配置的服务配置信息,所述服务配置信息包括下述一项或多项:网络标识、终端节点标识、终端节点类型;其中,所述网络标识用于指示包括建立网络连接的所述至少两个资源池的网络的标识,每一终端节点对应一个资源池,所述资源池对应多个服务提供方,每一资源池包括多个计算节点,所述多个计算节点用于运行所述租户的业务,所述终端节点类型表示所述终端节点对应的资源池的类型;具体可参见步骤1102的描述,此处不再赘述。
确定模块1702,用于根据所述终端节点类型,为所述至少两个资源池创建对应的终端节点,具体可参见步骤1102的描述,此处不再赘述。
在一种可能的实现方式中,所述服务配置信息还包括下述一项或多项:
终端节点对、所述终端节点对的连通状态;
其中,所述终端节点对包括两个终端节点,所述终端节点对的连通状态包括允许连通和/或禁止连通。
在一种可能的实现方式中,所述服务配置信息还包括所述至少一个资源池包含的网段与所述终端节点的路由规则。
在一种可能的实现方式中,所述服务配置信息还包括近端安全策略和域间安全策略。
在一种可能的实现方式中,获取模块1701还用于获取所述租户在所述云管理平台配置的下列一项或多项:所述资源池的类型、所述资源池的位置信息、所述资源池的专用网络VPC信息、所述资源池的子网信息、所述资源池内域间网关接入所述资源池的接口的信息和虚拟局域网VLAN信息。
在一种可能的实现方式中,资源池的类型包括:同构云、异构云、虚拟化资源池、传统资源池。
示例性的,接下来以云管理平台1700中的确定模块1702为例,介绍确定模块1702的实现方式。类似的,获取模块1701的实现方式可以参考确定模块1702的实现方式。
当通过软件实现时,确定模块1702可以是运行在计算机设备上的应用程序或代码块。其中,计算机设备可以是物理主机、虚拟机、容器等计算设备中的至少一种。进一步地,上述计算机设备可以是一台或者多台。例如,确定模块1702可以是运行在多个主机/虚拟机/容器上的应用程序。需要说明的是,用于运行该应用程序的多个主机/虚拟机/容器可以分布在相同的可用区(availability zone,AZ)中,也可以分布在不同的AZ中。用于运行该应用程序的多个主机/虚拟机/容器可以分布在相同的区域(region)中,也可以分布在不同的region中。其中,通常一个region可以包括多个AZ。
同样,用于运行该应用程序的多个主机/虚拟机/容器可以分布在同一个虚拟私有云(virtual private cloud,VPC)中,也可以分布在多个VPC中。其中,通常一个region可以包括多个VPC,而一个VPC中可以包括多个AZ。
当通过硬件实现时,确定模块1702中可以包括至少一个计算设备,如服务器等。或者,确定模块1702也可以是利用专用集成电路(application-specific integrated circuit,ASIC)实现、或可编程逻辑器件(programmable logic device,PLD)实现的设备等。其中,上述PLD可以是复杂程序逻辑器件(complex programmable logical device,CPLD)、现场可编程门阵列(field-programmable gate array,FPGA)、通用阵列逻辑(generic array logic,GAL)或其任意组合实现。
确定模块1702包括的多个计算设备可以分布在相同的AZ中,也可以分布在不同的AZ中。确定模块1702包括的多个计算设备可以分布在相同的region中,也可以分布在不同的region中。同样,确定模块1702包括的多个计算设备可以分布在同一个VPC中,也可以分布在多个VPC中。其中,所述多个计算设备可以是服务器、ASIC、PLD、CPLD、FPGA和GAL等计算设备的任意组合。
需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。在本申请的实施例中的各功能模块可以集成在一个模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中,比如,第一获取模块和第二获取模块集成在一个模块中,或者第一获取模块和第二获取模块为同一个模块。类似的,第一确定模块和第二确定模块集成在一个模块中,或者第一确定模块和第二确定模块为同一个模块。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
本申请还提供一种计算设备1800。如图18所示,计算设备1800包括:总线1802、处理器1804、存储器1806和通信接口1808。处理器1804、存储器1806和通信接口1808之间通过总线1802通信。计算设备1800可以是服务器或终端设备。应理解,本申请不限定计算设备1800中的处理器、存储器的个数。
总线1802可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图18中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。总线1802可包括在计算设备1800各个部件(例如,存储器1806、处理器1804、通信接口1808)之间传送信息的通路。
处理器1804可以包括中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)、微处理器(micro processor,MP)或者数字信号处理器(digital signal processor,DSP)等处理器中的任意一种或多种。
存储器1806可以包括易失性存储器(volatile memory),例如随机存取存储器(random access memory,RAM)。存储器1806还可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM),快闪存储器,机械硬盘(hard disk drive,HDD)或固态硬盘(solid state drive,SSD)。
存储器1806中存储有可执行的程序代码,处理器1804执行该可执行的程序代码以分别实现前述获取模块1701、确定模块1702的功能,从而实现针对多资源池网络的管理方法。也即,存储器1806上存有云管理平台1700用于执行本申请提供的针对多资源池网络的管理方法的指令。
通信接口1808使用例如但不限于网络接口卡、收发器一类的收发模块,来实现计算设备1800与其他设备或通信网络之间的通信。
本申请实施例还提供了一种计算设备集群。该计算设备集群包括至少一台计算设备。该计算设备可以是服务器。在一些实施例中,计算设备也可以是台式机、笔记本电脑或者智能手机等终端设备。
如图19所示,该计算设备集群包括至少一个计算设备1800。计算设备集群中的一个或多个计算设备1800中的存储器1806中可以存有相同的用于执行针对多资源池网络的管理方法的指令。
在一些可能的实现方式中,该计算设备集群中的一个或多个计算设备1800的存储器1806中也可以分别存有用于执行针对多资源池网络的管理方法的部分指令。换言之,一个或多个计算设备1800的组合可以共同执行用于执行针对多资源池网络的管理方法的指令。
需要说明的是,计算设备集群中的不同的计算设备1800中的存储器1806可以存储不同的指令,分别用于执行计算装置的部分功能。也即,不同的计算设备1800中的存储器1806存储的指令可以实现获取模块1701、确定模块1702中的一个或多个模块的功能。
在一些可能的实现方式中,计算设备集群中的一个或多个计算设备可以通过网络连接。其中,所述网络可以是广域网或局域网等等。图20示出了一种可能的实现方式。如图20所示,两个计算设备1800A和1800B之间通过网络进行连接。具体地,通过各个计算设备中的通信接口与所述网络进行连接。在这一类可能的实现方式中,计算设备1800A中的存储器1806中存有执行获取模块1701的功能的指令。同时,计算设备1800B中的存储器1806中存有执行确定模块1702的功能的指令。
应理解,图20中示出的计算设备1800A的功能也可以由多个计算设备1800完成。同样,计算设备1800B的功能也可以由多个计算设备1800完成。
本申请实施例还提供了另一种计算设备集群。该计算设备集群中各计算设备之间的连接关系可以类似的参考图19和图20所述计算设备集群的连接方式。不同的是,该计算设备集群中的一个或多个计算设备1800中的存储器1806中可以存有相同的用于执行资源管理方法的指令。
在一些可能的实现方式中,该计算设备集群中的一个或多个计算设备1800的存储器1806中也可以分别存有用于执行资源管理方法的部分指令。换言之,一个或多个计算设备1800的组合可以共同执行用于执行资源管理方法的指令。
本申请实施例还提供了一种包含指令的计算机程序产品。所述计算机程序产品可以是包含指令的,能够运行在计算设备上或被储存在任何可用介质中的软件或程序产品。当所述计算机程序产品在至少一个计算设备上运行时,使得至少一个计算设备执行资源管理方法。
本申请实施例还提供了一种计算机可读存储介质。所述计算机可读存储介质可以是计算设备能够存储的任何可用介质或者是包含一个或多个可用介质的数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘)等。该计算机可读存储介质包括指令,所述指令指示计算设备执行资源管理方法。
可选的,本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包括一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的保护范围。

Claims (19)

  1. 一种针对多资源池网络的管理方法,其特征在于,应用于云管理平台,所述云管理平台用于管理多个资源池,所述方法包括:
    所述云管理平台获取租户在所述云管理平台配置的服务配置信息,所述服务配置信息包括下述一项或多项:
    网络标识、终端节点标识、终端节点类型;
    其中,所述网络标识用于指示包括建立网络连接的所述至少两个资源池的网络的标识,每一终端节点对应一个资源池,所述资源池对应多个服务提供方,每一资源池包括多个计算节点,所述多个计算节点用于运行所述租户的业务,每个终端节点标识用于标识所述至少一个资源池的其中一个资源池,所述终端节点类型表示所述终端节点标识所指示的资源池的类型;
    所述云管理平台根据所述终端节点类型,为所述至少两个资源池创建对应的终端节点。
  2. 如权利要求1所述的方法,其特征在于,所述服务配置信息还包括下述一项或多项:
    终端节点对、所述终端节点对的连通状态;
    其中,所述终端节点对包括两个终端节点,所述终端节点对的连通状态包括允许连通和/或禁止连通。
  3. 如权利要求1或2所述的方法,其特征在于,所述服务配置信息还包括所述至少一个资源池包含的网段与所述终端节点的路由规则。
  4. 如权利要求1至3中任一所述的方法,其特征在于,所述服务配置信息还包括近端安全策略和域间安全策略。
  5. 如权利要求1至4中任一所述的方法,其特征在于,所述方法还包括:
    获取所述租户在所述云管理平台配置的下列一项或多项:
    所述资源池的类型、所述资源池的位置信息、所述资源池的专用网络VPC信息、所述资源池的子网信息、所述资源池内域间网关接入所述资源池的接口的信息和虚拟局域网VLAN信息。
  6. 如权利要求2所述的方法,其特征在于,所述云管理平台还用于管理云服务系统,所述云服务系统包括全局控制器和至少两个本地控制器,一个本地控制器与所述至少两个资源池的其中一个资源池相对应;
    所述方法还包括:
    所述全局控制器从所述云管理平台获取所述服务配置信息;
    所述全局控制器向每个本地控制器发送所述服务配置信息。
  7. 如权利要求6所述的方法,其特征在于,所述方法还包括:
    所述本地控制器调用对应的资源池内的站点内控制器的第一应用程序编程接口API向所述站点内控制器发送路由信息。
  8. 如权利要求7所述的方法,其特征在于,所述路由信息包括下列中的部分或全部:
    下一跳的类型、网络虚拟化技术VxLAN隧道封装的vnid、所述VxLAN隧道封装的外层目的互联网协议IP地址、所述VxLAN隧道封装的外层目的局域网地址mac地址。
  9. 如权利要求6-8任一项所述的方法,其特征在于,所述方法还包括:
    所述本地控制器调用对应的资源池内的站点内控制器的第二应用程序编程接口API向所述站点内控制器发送订阅请求,所述订阅请求用于请求订阅所述资源池内的资源变更事件。
  10. 如权利要求9所述的方法,其特征在于,所述方法还包括:
    所述资源池内的站点内控制器调用所述资源池对应的本地控制器的第三API向所述本地控制器发送通知消息,所述通知消息指示所述资源池内的资源变更事件。
  11. 如权利要求1-10任一项所述的方法,其特征在于,所述资源池的类型包括:同构云、异构云、虚拟化资源池、传统资源池。
  12. 一种云管理平台,其特征在于,所述云管理平台包括:
    获取模块,用于获取租户在所述云管理平台配置的服务配置信息,所述服务配置信息包括下述一项或多项:
    网络标识、终端节点标识、终端节点类型;
    其中,所述网络标识用于指示包括建立网络连接的所述至少两个资源池的网络的标识,每一终端节点对应一个资源池,所述资源池对应多个服务提供方,每一资源池包括多个计算节点,所述多个计算节点用于运行所述租户的业务,所述终端节点类型表示所述终端节点对应的资源池的类型;
    创建模块,用于根据所述终端节点类型,为所述至少两个资源池创建对应的终端节点。
  13. 如权利要求12所述的云管理平台,其特征在于,所述服务配置信息还包括下述一项或多项:
    终端节点对、所述终端节点对的连通状态;
    其中,所述终端节点对包括两个终端节点,所述终端节点对的连通状态包括允许连通和/或禁止连通。
  14. 如权利要求12或13所述的云管理平台,其特征在于,所述服务配置信息还包括所述至少一个资源池包含的网段与所述终端节点的路由规则。
  15. 如权利要求12至14中任一所述的云管理平台,其特征在于,所述服务配置信息还包括近端安全策略和域间安全策略。
  16. 如权利要求12至15中任一所述的云管理平台,其特征在于,所述获取模块,还用于获取所述租户在所述云管理平台配置的下列一项或多项:
    所述资源池的类型、所述资源池的位置信息、所述资源池的专用网络VPC信息、所述资源池的子网信息、所述资源池内域间网关接入所述资源池的接口的信息和虚拟局域网VLAN信息。
  17. 一种计算设备集群,其特征在于,包括至少一个计算设备,每个计算设备包括处理器和存储器;
    所述至少一个计算设备的处理器用于执行所述至少一个计算设备的存储器中存储的指令,以使得所述计算设备集群执行如权利要求1至11任一项所述的方法。
  18. 一种包含指令的计算机程序产品,其特征在于,当所述指令被计算设备集群运行时,使得所述计算设备集群执行如权利要求1至11任一项所述的方法。
  19. 一种计算机可读存储介质,其特征在于,包括计算机程序指令,当所述计算机程序指令由计算设备集群执行时,所述计算设备集群执行如权利要求1至11任一项所述的方法。
PCT/CN2023/104303 2022-10-31 2023-06-29 一种针对多资源池网络的管理方法、云管理平台及装置 WO2024093315A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202211352006.6 2022-10-31
CN202211352006 2022-10-31
CN202310484280.7A CN117997734A (zh) 2022-10-31 2023-04-28 一种针对多资源池网络的管理方法及系统
CN202310484280.7 2023-04-28

Publications (1)

Publication Number Publication Date
WO2024093315A1 true WO2024093315A1 (zh) 2024-05-10

Family

ID=90887661

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/104303 WO2024093315A1 (zh) 2022-10-31 2023-06-29 一种针对多资源池网络的管理方法、云管理平台及装置

Country Status (2)

Country Link
CN (1) CN117997734A (zh)
WO (1) WO2024093315A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062248A (zh) * 2017-12-08 2018-05-22 华胜信泰信息产业发展有限公司 异构虚拟化平台的资源管理方法、系统、设备及存储介质
US20180152392A1 (en) * 2015-07-10 2018-05-31 Hewlett Packard Enterprise Development Lp Hybrid cloud management
CN109347676A (zh) * 2018-11-02 2019-02-15 杭州云霁科技有限公司 一种异构、一体化的混合云资源管理平台
CN112637304A (zh) * 2020-12-16 2021-04-09 北京中电普华信息技术有限公司 一种跨云资源处理系统和资源管理方法
CN114024886A (zh) * 2021-10-25 2022-02-08 济南浪潮数据技术有限公司 跨资源池的网络互通方法、电子设备及可读存储介质
US20220174096A1 (en) * 2019-06-11 2022-06-02 Net-Thunder, Llc Automatically Deployed Information Technology (IT) System and Method with Enhanced Security


Also Published As

Publication number Publication date
CN117997734A (zh) 2024-05-07

Similar Documents

Publication Publication Date Title
US11973686B1 (en) Virtual performance hub
CN111756612B (zh) 用于虚拟化计算基础设施的方法和系统
US11470001B2 (en) Multi-account gateway
US11588683B2 (en) Stitching enterprise virtual private networks (VPNs) with cloud virtual private clouds (VPCs)
US11405427B2 (en) Multi-domain policy orchestration model
US10708125B1 (en) Gateway configuration using a network manager
KR101714279B1 (ko) 폴리시 기반 데이터센터 네트워크 자동화를 제공하는 시스템 및 방법
EP3152865B1 (en) Provisioning and managing slices of a consumer premises equipment device
US9294351B2 (en) Dynamic policy based interface configuration for virtualized environments
US20170302535A1 (en) Secure cloud fabric to connect subnets in different network domains
CA2856086C (en) Virtual network interface objects
WO2020106453A2 (en) Extending center cluster membership to additional compute resources
US20120294192A1 (en) Method and apparatus of connectivity discovery between network switch and server based on vlan identifiers
US20150229641A1 (en) Migration of a security policy of a virtual machine
US11469998B2 (en) Data center tenant network isolation using logical router interconnects for virtual network route leaking
WO2014166247A1 (zh) 一种虚拟网络管理的实现方法和系统
US20130297752A1 (en) Provisioning network segments based on tenant identity
US11902245B2 (en) Per-namespace IP address management method for container networks
WO2022028092A1 (zh) 一种vnf实例化的方法和装置
WO2021147358A1 (zh) 一种网络接口的建立方法、装置及系统
US8117321B2 (en) Network connection management using connection profiles
WO2024093315A1 (zh) 一种针对多资源池网络的管理方法、云管理平台及装置
WO2023133797A1 (en) Per-namespace ip address management method for container networks
CN106506238A (zh) 一种网元管理方法及系统
WO2022193897A1 (zh) 一种业务的部署方法、装置及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23884254

Country of ref document: EP

Kind code of ref document: A1