WO2014128948A1 - Method for managing a tenant network configuration in a mixed environment of virtual servers and non-virtual servers - Google Patents
Method for managing a tenant network configuration in a mixed environment of virtual servers and non-virtual servers
- Publication number
- WO2014128948A1 PCT/JP2013/054655
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- physical
- instance
- network
- server
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4641—Virtual LANs, VLANs, e.g. virtual private networks [VPN]
-
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- The present invention generally relates to computer systems, and more specifically to a method and apparatus for managing the configuration of resources, including the network, in a computer system in which virtual servers and physical servers are mixed.
- Server virtualization technology has become widespread, and it has become common to consolidate multiple virtual servers used to build enterprise information systems onto a single piece of hardware (a single physical server).
- With virtualization, the physical resources (CPU, memory, etc.) of a physical server, which were previously tied to that server one-to-one, are divided into multiple server resources, and a virtual server operates independently on each server resource, so the physical resources can be used effectively.
- Because the amount of physical resources allocated to a virtual server can be changed flexibly, and a virtual server can be moved to another physical server (a "virtual server host", that is, a physical server equipped with a virtualization function that can operate a plurality of virtual servers), resources can be allocated according to the demand for the service provided by the application on the virtual server.
- On the other hand, because a virtual server shares the resources provided by a single virtual server host, its performance is affected by the other virtual servers on the same virtual server host.
- In contrast, there are physical servers that do not run a hypervisor, that is, software having the function of operating a plurality of virtual servers on a single physical server.
- Such a physical server occupies the physical resources of a single hardware device (physical server) by itself, so all of its processing performance can be used and it can operate stably without being affected by other servers.
- These physical servers are called non-virtual servers or bare metal servers. As described above, a non-virtual server has an advantage in performance but lacks flexibility in system construction compared with a virtual server host capable of operating a plurality of virtual servers.
- A tenant associates the resources and service menus provided by the cloud with a specific user group or organization. Multiple tenants can share the same cloud platform, increasing the usage efficiency of the platform as a whole. A mechanism for protecting security is therefore indispensable, so that one tenant's resources cannot be accessed illegitimately by other tenants. In a general cloud system, security is ensured for each tenant by user authentication and network division.
- A management device for setting a network policy is arranged on the network and controls permission or denial of communication between servers according to the tenant, the user, and the use of the virtual server. Such a network configuration management device must be able to be created and changed flexibly according to the demands of tenants and virtual servers, and is realized as a virtual server called a network appliance.
- A system is needed that operates stably without being affected by the operating state of business systems running in other tenants.
- Stable operation is generally realized by load distribution using online migration of virtual servers, priority control of communication for each virtual server, and the like.
- Patent Document 1 discloses a router configuration method and system for distributing communication loads on a network. With this system, a plurality of network paths can be used in parallel, and network resources can be used effectively.
- Patent Document 2 discloses a method for efficiently managing a configuration in a multi-tenant environment.
- An object of the present invention is to build an information system in which a non-virtual server and a virtual server are operated by the same tenant, independence in security and performance is ensured, and performance and cost are optimized according to user requirements.
- The management computer is connected to a first physical server on which a plurality of virtual instances (virtual servers) and a virtual switch that controls the network between the virtual instances operate, to a second physical server on which a physical instance operates, and to a physical switch that controls the network between the first physical server and the second physical server.
- The management computer holds virtual switch management information indicating the correspondence between each of the plurality of virtual instances and the internal network to which that virtual instance is connected, and physical switch management information indicating the correspondence between the physical instance and the internal network to which the physical instance is connected.
- When the management computer receives a first instance creation request for creating a first virtual instance connected to the same internal network as the physical instance, it creates the first virtual instance on the first physical server.
- The management computer then refers to the physical switch management information, identifies the first internal network to which the physical instance is connected, and sets the virtual switch and the physical switch so that the first virtual instance is connected to the first internal network.
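As an illustration only (not part of the patent disclosure), the management flow described above can be sketched in Python as follows. The table layouts, host names, and VLAN numbers are all assumptions; the "internal network" is represented simply as a VLAN ID.

```python
# Minimal sketch of the claimed behavior: on a request to create a
# virtual instance on the same internal network as an existing physical
# instance, look up the internal network (here a VLAN ID) in the
# physical switch management information, then record the same network
# for the new virtual instance on the virtual switch side.
# All names and table layouts are illustrative assumptions.

physical_switch_info = {"bm-host-1": 101}   # physical instance -> internal network (VLAN ID)
virtual_switch_info = {}                    # virtual instance -> internal network (VLAN ID)

def create_virtual_instance(name, peer_physical_instance):
    # identify the first internal network from the physical switch information
    vlan = physical_switch_info[peer_physical_instance]
    # set the virtual switch so the new instance joins that network
    virtual_switch_info[name] = vlan
    return vlan
```

In this simplification, "setting the switches" is just recording the VLAN ID; a real implementation would push the setting to both the virtual switch and the physical switch port.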
- According to the present invention, a plurality of tenants can be operated on the same physical hardware while each user is provided with a tenant whose security is ensured. It is also possible to construct an information system that optimizes performance and cost according to the user's requirements while operating non-virtual servers and virtual servers in the same tenant and ensuring independence in security and performance.
- By operating non-virtual servers in the same tenant as virtual servers, it becomes possible, while ensuring security from other tenants, to selectively consolidate the many business systems with low performance requirements onto a small number of physical devices through server virtualization, and to operate processes with high performance requirements stably on non-virtual servers, according to the processing requests of the moment.
- For example, resources can be increased and decreased flexibly using virtual servers at the start of a service whose demand is unpredictable, the service can be operated stably on non-virtual servers once demand has settled, and, as the transition to the next system approaches, workloads can be concentrated onto virtual servers again to improve resource utilization efficiency.
- The overall configuration of a computer system in an embodiment of the present invention.
- The physical configuration of the computer system in the embodiment of the present invention.
- The logical configuration of the computer system in the embodiment of the present invention.
- The network configuration of the computer system in a first embodiment of the present invention.
- A VLAN ID management table in the first embodiment of the present invention.
- A VLAN ID management table in the first embodiment of the present invention.
- The processing flow in the first embodiment of the present invention.
- The network configuration of the computer system in a second embodiment of the present invention.
- The concept of network route setting by a DHCP server in the second embodiment of the present invention.
- A network address management table in the second embodiment of the present invention.
- Details of the management computer in the second embodiment of the present invention.
- The processing flow in the second embodiment of the present invention.
- The element management table group in the second embodiment of the present invention.
- The relationship of the management tables related to VLAN ID management in the second embodiment of the present invention.
- The concept of network route setting by OS image management in a third embodiment of the present invention.
- The concept of a load distribution method for external access in a fourth embodiment of the present invention.
- FIG. 1 shows an overview of a computer system in this embodiment.
- A user who receives a service from an application on a server uses a client computer 70.
- One or more client computers 70 are physically connected to communicate with one or more physical servers 10 and 20 via a WAN (Wide Area Network) 302 and a LAN (Local Area Network) 300.
- In the following, the LAN 300 (for services) to which the physical server 10 is connected is distinguished from the WAN 302 to which the client computer 70 is connected.
- The former is referred to as the internal network and the latter as the external network.
- A physical gateway 500 is interposed at the boundary between the internal network and the external network, and controls communication by performing various processes on the communication data flowing through it. Details of the configuration and functions of the gateway will be described later.
- FIG. 1 shows a simple configuration, but the gateway may be multi-staged as necessary, and the WAN 302 may instead be another LAN. Furthermore, the WAN or LAN may be physically or logically divided into a plurality of networks.
- The management interfaces of the management computer and of the other devices are connected to each other via the management LAN 301.
- FIG. 2 shows a more detailed physical configuration and logical configuration of each device connected to the internal network.
- The internal network 300 in FIG. 1 is represented as the network 61 in FIG.
- At least one first physical server 10, one second physical server 20, one storage device 100, and one management computer 200 are physically connected to the network 61.
- One or more gateways 500 are connected at the boundary between the network 61 and the network 66.
- The network 66 in FIG. 2 corresponds to the external network 302 in FIG. 1.
- The first physical server 10 includes a CPU 11, a memory 12, a Fibre Channel interface (FC IF) 15, and an Ethernet (registered trademark) interface (Ether IF) 16. At least an OS 13a is stored in the memory 12, and processing resources are provided to the application 13b operating on the physical server 10 through the arithmetic processing of the CPU 11.
- The physical server 10 may be referred to as a non-virtual server or a bare metal server in the sense that no virtualization program runs on it and the OS 13a operates directly on the physical server 10.
- The FC IF 15 is for communicating with other devices via the network 51 and is used mainly for connecting storage resources.
- A communication standard other than Fibre Channel may be used as long as it provides interconnection for the same purpose, and a plurality of interfaces may be provided, or the interface may be logically divided into a plurality, depending on the application.
- The Ether IF 16 is for communicating with other devices via the network 60 and is used for communicating with the other physical servers 10 and 20 and with the management computer 200. It may be compliant with a communication standard other than Ethernet as long as it provides interconnection for the same purpose.
- The second physical server 20 includes a CPU 21, a memory 22, an FC IF 25, and an Ether IF 26. At least an OS 23a and a virtualization program 23b are stored in the memory 22, and through the arithmetic processing of the CPU 21 the physical resources of the physical server 20 are divided into one or more virtual resource areas and provided to other OSs or applications 23c.
- The virtualization program 23b is not necessarily separate from the OS 23a; as long as it has the function of dividing the physical server 20 into virtual resource areas, it may be implemented as a module inside the OS 23a or as the OS 23a itself.
- The virtualization program 23b is generally called a VMM (Virtual Machine Monitor) or a hypervisor; in the following description these terms refer to the same thing.
- Each virtual resource area constitutes the hardware of one logical server called a virtual machine, and the second physical server 20 may therefore be called a virtual machine host. Details of the FC IF 25 and Ether IF 26 are the same as those of the first physical server 10.
- The network 51 connects the one or more physical servers 10 and 20 and the one or more storage apparatuses 100 to each other.
- Via the network 51, the physical servers 10 and 20 communicate with the storage apparatus 100 and can use the storage resources needed to operate the applications 13b and 23c.
- One or more Fibre Channel switches (FC SW) 50 may be interposed on the network 51.
- The configuration of the FC SW 50 is set by the management computer 200 via the network 61, to which the Ether IF 56 is connected.
- The network 61 is mainly used for the following three purposes.
- The first purpose is service communication between the client computer 70 and the physical servers 10 and 20.
- For example, the physical server 10 receives a processing request or data to be processed from the client computer 70 and transmits the data processed or generated by the application 13b back to the client computer 70.
- The second purpose is changing the configuration of the physical servers 10 and 20 in relation to service communication. For example, a new application 23c is introduced on the physical server 20, or a resource area called a virtual server is created on the virtualization program 23b.
- The third purpose is changing the configuration of the data network 51 between the physical servers 10 and 20 and the storage device 100.
- For example, a unit of storage resources called a volume is created through the storage control unit 150 of the storage device 100 and a logical communication path to the physical server is set, so that the storage resources can be used.
- The storage apparatus 100 is a collection of a plurality of physical storage devices 101 and includes a storage control unit 150 that centrally controls the apparatus and provides storage resources for data storage to other apparatuses such as the physical servers.
- The physical storage device 101 is, for example, a hard disk drive (HDD) or a non-volatile storage device called a solid state drive (SSD).
- The storage control unit 150 includes a CPU 151, a memory 152, a cache 154, an FC IF 155, an Ether IF 156, and a Serial Advanced Technology Attachment interface (SATA IF) 157.
- The memory 152 stores at least a response program 153a that responds to read/write requests and a storage control program 153b that controls the logical configuration of the apparatus, and the functions of the storage apparatus 100 are realized through arithmetic processing in the CPU 151.
- The cache 154 is mainly used to improve the response performance of the storage resources with respect to read/write requests from the physical servers.
- The FC IF 155 is for communicating with other devices via the network 51 and is used mainly for connecting to the physical servers 10 and 20. Any communication standard other than Fibre Channel may be used as long as it provides interconnection for the same purpose, and a plurality of interfaces may be provided depending on the number of physical servers.
- The Ether IF 156 communicates with other devices via the network 60 and is mainly used for connecting to the management computer 200.
- The management computer 200 includes a CPU 201, a memory 202, and an Ether IF 206, and mainly has the function of changing the configuration of the other devices.
- The memory 202 stores at least an OS 203a for controlling the hardware of the management computer and a management program 203b, and the functions of the management computer 200 are realized by the arithmetic processing of the CPU 201.
- A plurality of management programs 203b may be operated depending on the application. Details of the management program 203b will be described later.
- The Ether IF 206 is for communicating with other devices via the network 60.
- One or more physical gateways 500 exist at the boundary between the internal network 61 and the external network 66, and they have the function of applying a specific policy to communication data passing through the gateway and to communication data flowing in the internal network.
- The gateway in this embodiment is what is generally called a router, and it implements one or more functions such as layer 3 routing, firewall, network address translation (NAT), proxy, reverse proxy, VPN router, and port forwarding.
- The physical gateway 500 includes a CPU 501, a memory 502, and Ether IFs 506.
- The memory 502 holds an OS 503a and one or more network control programs 503b, and the functions of the physical gateway 500 are realized by the arithmetic processing of the CPU 501.
- A plurality of Ether IFs 506 are provided, and they can be logically classified into an interface 506a on the internal network 61 side and an interface 506b on the external network 66 side. Details of the functions realized by the network control program 503b will be described later.
- The network 66 is an external network as viewed from the physical servers 10 and 20, the management computer 200, and the storage apparatus 100. Although not shown in FIG. 2, a gateway may also be provided outside the network 66.
- The network 66 may be configured via an Ether SW 65.
- <Instance configuration method> The computer system in this embodiment provides a function for managing the resource configuration of virtual servers and non-virtual servers.
- In the following, the configuration and functions of the system are described using the configuration procedure for generating a virtual server and a non-virtual server as an example.
- A server that is generated in response to a user request and provides an information service to a client is called an instance; a virtual server is called a virtual instance, and a non-virtual server is called a physical instance.
- FIG. 3 shows the system configuration for controlling the resources allocated to instances in this embodiment.
- An end user accesses the management computer 200 using the management client 73b on the client computer 70.
- The management computer 200 is connected to the client computer 70 via the management network 302b and accepts, in the integrated service management unit 204a, which is one component of the management program 203b, the instance creation request transmitted by the management client 73b.
- The integrated service management unit 204a cooperatively controls the device management units that manage the configuration of each device (the server management unit 204b, the network management unit 204c, and the storage management unit 204d) and generates instances.
- An instance is generated by the following procedure.
- First, the integrated service management unit 204a issues a volume creation request to the storage management unit 204d.
- The storage management unit 204d reserves storage resources as a logical unit called a volume in the storage apparatus 100. If an appropriate volume already exists, this volume creation procedure is omitted. Through the procedure described later, the volume is recognized by the server device as a nonvolatile storage device such as a disk drive.
- The storage management unit 204d responds to the integrated service management unit 204a with the status of the volume and the identifier of the FC IF 155 through which the volume can be used. The integrated service management unit 204a then selects, in conjunction with the volume creation procedure, a physical server on which to create the instance.
- Next, the integrated service management unit 204a uses the network management unit 204c to set a communication path in the FC SW 50.
- This setting is required because the FC SW 50 controls which Fibre Channel ports may communicate, using a technique called zoning.
- As a result, the port 52 of the selected physical server 10 or 20 can communicate with the port 52 on the storage apparatus 100.
- In addition, the integrated service management unit 204a uses the storage management unit 204d to set an access control function such as Host Storage Domain or LUN Security.
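For illustration, the zoning concept mentioned above can be modeled minimally as follows. This is an assumption for exposition only: zone and port names are made up, and real FC SW zoning is configured on the switch itself rather than in application code.

```python
# Toy model of Fibre Channel zoning: only ports that belong to the same
# zone are allowed to communicate. Zone and port names are hypothetical.
zones = {
    "tenant-a": {"server-port-52", "storage-port-52"},
}

def can_communicate(port_a, port_b):
    """True if some zone contains both ports."""
    return any(port_a in members and port_b in members
               for members in zones.values())
```

Host Storage Domain or LUN Security would add a second, volume-level filter on top of this port-level one.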
- When the server recognizes the volume as a disk device, the server management unit starts the installer of the OS 13d or 23d, and a permanent OS environment 13a is installed on the disk drive.
- A general network installation technique using a PXE server or a TFTP server can be applied to transfer the installer. If the user so requests, middleware or an application 23c is also installed.
- The volume 160 in the storage apparatus 100 is connected to the OS 13a and configured to store the data used by the application 13b.
- In the case of a virtual instance, the server management unit 204b uses the hypervisor 23b to create a file called a virtual disk 161 in the volume 160 and connects it to the guest OS 23d of the virtual instance 24. To the guest OS 23d of the virtual instance 24, it appears as if a virtual disk drive 162 provided by the virtual disk 161 is connected. The configurations of the virtual disk 161 and the virtual instance 24 are controlled directly by the hypervisor 23b.
- Next, the Ether SW 61 and the Ether IF for connecting to the internal network 300 are set, and further the gateway 500 for connecting to the external network 302a is set. Details will be described later together with the tenant network configuration method.
- Information about the state of the instance is provided to the management client 73b by the integrated service management unit 204a and presented to the user.
- The user uses the information service of each instance via the service network 302a with a desired service client 73a. Furthermore, the user can change the configuration of the instance using the management client 73b as necessary.
- The function of changing the configuration of an instance is realized by the integrated service management unit 204a and the device management units, in the same way as the instance creation described above.
- The integrated service management unit 204a combines the configuration change functions provided by the device management units to change the configuration of the instance as required by the user.
- One of the objects of the present invention is to use virtual instances and physical instances selectively in accordance with application requirements and user requirements.
- To this end, a private network that spans virtual instances and physical instances must be configured so that the instances can communicate with each other.
- Control of the communicable range of a network can be realized by settings at layer 2, layer 3, or other layers; the requirement is to construct a private network flexibly according to the user's request while ensuring security.
- In this embodiment, a widely used method is adopted: an internal network that does not require heightened security is configured as a single layer 3 segment with layer 2 connectivity ensured, advanced security management is performed in cooperation with the applications, and layer 3 path control is used for communication with external networks and other segments.
- One private network is assigned one VLAN ID and is independent of other private networks at the layer 2 level. To connect different private networks, communication using IP addresses is performed via a layer 3 router.
- With this configuration, a private network spanning virtual instances and physical instances becomes transparent at layer 2, and configuration management using broadcast, such as DHCP, can be used.
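The layer 2 isolation by VLAN ID described above can be illustrated with a toy model (an assumption for exposition, not the embodiment's implementation): a switch delivers a tagged frame only to ports assigned the same VLAN ID.

```python
# Toy VLAN delivery model: a frame tagged with frame_vlan is delivered
# only to ports whose assigned VLAN ID matches. Port names are made up.
def deliver(frame_vlan, ports):
    """Return the ports eligible to receive a frame tagged frame_vlan."""
    return [p for p, vid in sorted(ports.items()) if vid == frame_vlan]

# Two private networks (VLAN 10 and VLAN 20) sharing one switch:
ports = {"port-1": 10, "port-2": 10, "port-3": 20}
```

In this model, traffic on VLAN 10 can never reach `port-3`, which is the layer 2 independence between private networks that the embodiment relies on.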
- This section describes a method for configuring a layer 2 network in each Ethernet switch.
- FIG. 4 shows an example of a private network configuration.
- A method for configuring the layer 2 network is described below with reference to FIG. 4.
- VLAN is a technology for logically multiplexing a single physical network device constituting a LAN, according to an identifier called a VLAN ID assigned to each packet.
- The VLAN ID can be set and released both at a switch device and at an Ethernet interface on a host.
- In this embodiment, the VLAN ID may be thought of as being controlled only at the switch.
- The method of controlling the VLAN at the Ethernet interface is not used because the VLAN setting of the Ethernet interface may not be possible until the OS has started, so the behavior before OS startup would be temporarily uncontrollable.
- The configuration of all devices, that is, the physical servers 10 and 20 and the Ethernet switch 60b, is managed by the management computer 200.
- Each physical switch 60b and the virtual switches 406 and 412 implemented by the hypervisors on the virtual machine hosts 400 and 401 are VLAN-compliant, and by assigning the same VLAN ID, layer 2 (data link layer) connectivity spanning multiple switches is obtained.
- For virtual instances, the setting of the physical switch 60b may simply be one that permits all VLAN IDs on all ports (trunk all).
- These settings are performed by the server management unit 204c, which manages the hypervisors. Consequently, existing virtual server environment management infrastructures generally do not have a function for setting physical switches.
- For physical instances, on the other hand, the internal network is configured with port VLANs. More specifically, in the physical switch 60b, the port VLAN attribute (access mode) is given to the port 415 to which the bare metal host is connected. As a result, only ports having the same VLAN ID can communicate with each other. These port VLANs are set by the network management unit 204b.
- In an environment where physical instances are mixed in, the setting of the physical switch, which has conventionally been trunk all, must be controlled appropriately according to the location of each instance. Furthermore, since the physical switch and the virtual switch are set under the different management systems of the network management unit 204b and the server management unit 204c, respectively, a new mechanism for keeping the two consistent is needed.
- Therefore, the integrated service management unit 204a provides a configuration management method for performing VLAN settings on the virtual switches and the physical switches without inconsistency. More specifically, it refers to the network management unit 204b, which holds a physical switch VLAN ID management table 218 for managing the VLAN configuration of the physical switches, and to the server management unit 204c, which holds a virtual switch VLAN ID management table 219 for managing the VLAN configuration of the virtual switches, and sets both sets of configuration information.
- The physical switch VLAN ID management table 218 is shown in FIG.
- The table stores a host ID 218a, a switch ID 218b, a port ID 218c, a port attribute 218d, and a VLAN ID 218e.
- The port attribute field 218d holds the port attribute setting of the physical switch.
- For a port connected to another switch rather than to a host, a switch ID is held in the host ID field 218a instead of a host ID.
- The virtual switch VLAN ID management table 219 is shown in FIG.
- The table stores an instance ID 219c, a switch ID 219d, a port ID 219e, and a VLAN ID 219b.
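As a sketch, the two management tables could be modeled with records like the following. The field names mirror the reference numerals above, but the concrete layout (types, the "access"/"trunk" attribute values) is an assumption for illustration.

```python
from dataclasses import dataclass

# Hypothetical record layout for the physical switch VLAN ID management
# table 218: host ID 218a, switch ID 218b, port ID 218c, port attribute
# 218d (port VLAN / tag VLAN), and VLAN ID 218e.
@dataclass
class PhysicalSwitchEntry:
    host_id: str
    switch_id: str
    port_id: int
    port_attr: str   # e.g. "access" (port VLAN) or "trunk" (tag VLAN)
    vlan_id: int

# Hypothetical record layout for the virtual switch VLAN ID management
# table 219: instance ID 219c, switch ID 219d, port ID 219e, VLAN ID 219b.
@dataclass
class VirtualSwitchEntry:
    instance_id: str
    switch_id: str
    port_id: int
    vlan_id: int
```

Keeping the two record types separate reflects the text: table 218 belongs to the network management unit and table 219 to the server management unit, and only the integrated service management unit reads both.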
- The processing flow in this embodiment covers the instance addition procedure, and it is assumed that one or more existing instances are already operating in the same VLAN.
- For a newly created private network, a VLAN ID that is not present in either VLAN ID management table is secured, after which the settings are the same.
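Securing a VLAN ID that appears in neither management table can be sketched as follows. This is a minimal illustration: the table representation and the usable VLAN ID range are assumptions.

```python
def allocate_vlan_id(physical_table, virtual_table, candidates=range(2, 4095)):
    """Pick a VLAN ID that is present in neither management table."""
    used = ({e["vlan_id"] for e in physical_table}
            | {e["vlan_id"] for e in virtual_table})
    for vid in candidates:
        if vid not in used:
            return vid
    raise RuntimeError("no free VLAN ID")
```

Because the physical and virtual tables live in different management units, checking both sets before allocation is what keeps the two systems free of VLAN ID collisions.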
- First, the integrated service management unit 204a authenticates the user's authority and starts the procedure described above for creating an instance on the existing private network.
- At this time, the user designates existing instances to be interconnected or requests the addition of a new instance.
- When the instance creation procedure described above is completed in step 600, the instance is shut down once in step 601, and processing moves to the private network configuration procedure.
- Next, the process branches depending on the type of the instance being added.
- If the user requests the addition of a physical instance, the processing branches further in step 603 depending on whether the designated peer on the existing private network is a virtual instance or a physical instance. If it is determined in step 603 that a virtual instance is connected to the private network, the process proceeds to step 604, in which the integrated service management unit 204a refers to the virtual switch VLAN ID management table 219 and identifies the VLAN ID of the virtual switch from the designated virtual instance ID.
- If a physical instance is connected to the private network, the process branches from step 603 to step 605.
- In step 605, the integrated service management unit 204a refers to the physical switch VLAN ID management table 218 and identifies the VLAN ID of the physical switch from the designated physical instance ID (host ID).
- All necessary VLAN settings are performed by following the switch IDs held in the host ID field 218a.
- Next, in step 606, the VLAN ID identified in the previous step is set on the relevant port of the physical switch. Since this port is connected to the newly added bare metal host, the port VLAN attribute is set.
- The flow up to step 606 corresponds, for example, to the case where a physical instance 14 is newly created and connected to the virtual instance 402 in FIG.
- If the user requests the addition of a virtual instance, the process branches from step 602 to step 607. As in the previous example (step 603), when interconnection with an existing physical instance is designated, the VLAN ID setting of the physical switch is referred to in step 608; when interconnection with an existing virtual instance is designated, the process proceeds to step 609, and the VLAN ID of the virtual switch to which that virtual instance is connected is identified.
- The flow then proceeds to step 610, in which the VLAN ID identified in the previous step is set on a virtual switch, and to step 611, in which it is set on a physical switch.
- In step 611, the tag VLAN attribute is set, because a virtual machine host is connected to the port of the physical switch.
- The flow from step 602 through step 611 corresponds, for example, to the case where a virtual instance 403 is newly created and connected to the virtual instance 410 in FIG.
- step 612 When the instance is restarted in step 612, the instance is restarted under the private network setting described above.
- the network setting is confirmed by, for example, ICMP reception by another instance in the private network to which the same VLAN ID is assigned.
- the start of use of the instance is notified to the user.
- the user account information for accessing the instance or the network address may be notified to the user.
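The branching of steps 602 through 611 can be sketched as follows; the table contents, IDs, and function name are illustrative assumptions, not part of the embodiment:

```python
# Sketch of the step 602-611 branching: pick the VLAN ID from the
# virtual or physical switch table, then choose the port attribute.
# Table layouts and all IDs are illustrative assumptions.

virtual_switch_vlan = {"vi-410": 101}    # virtual instance ID -> VLAN ID (cf. table 219)
physical_switch_vlan = {"host-01": 101}  # host ID -> VLAN ID (cf. table 218)

def configure_private_network(new_type, peer_type, peer_id):
    """Return (vlan_id, port_attribute) for the switch port of the new instance."""
    # Steps 603 / 607-609: identify the VLAN ID of the designated peer.
    if peer_type == "virtual":
        vlan_id = virtual_switch_vlan[peer_id]
    else:
        vlan_id = physical_switch_vlan[peer_id]
    # Steps 606 / 610-611: a bare-metal host gets the port VLAN (access)
    # attribute; a virtual machine host gets the tag VLAN (trunk) attribute.
    attribute = "access" if new_type == "physical" else "trunk"
    return vlan_id, attribute

# A new virtual instance interconnecting with existing virtual instance 410:
print(configure_private_network("virtual", "virtual", "vi-410"))  # (101, 'trunk')
```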
- With the above procedure, the same VLAN is defined across a plurality of physical switches and virtual switches, configuring a private network in which physical instances and virtual instances coexist.
- These private networks are logically partitioned at the layer 2 level, ensuring isolation from other private networks.
- Even when logical network IDs are managed completely independently, as by the server management unit 204b and the network management unit 204c in FIG. 4, the above configuration can be realized without changing those management tables.
- As described above, a system for dynamically configuring a tenant network in which virtual servers and non-virtual servers coexist is provided in a cloud environment.
- One purpose of the computer system described in this embodiment is to control access to processing resources and applications according to authority such as the user's role and the organization to which the user belongs.
- This makes it possible to operate a desired business system without, for example, its data being accessed improperly by other organizations or users, and without suffering performance interference from them.
- In this embodiment, the gateway has the function of applying communication policies to the traffic flowing on the network, thereby realizing access control.
- In general, "gateway" may refer to a protocol converter at layer 4 or above, or to a layer 3 router.
- In this embodiment, a network appliance having one or more of the functions described later for protocol conversion and policy control at layer 3 or above is referred to as a gateway.
- So far, the gateway has been treated as a kind of physical computer; more precisely, it is a network control computer called a network appliance.
- Its configuration is substantially the same as that of the other physical servers and the management server; only the number of Ether IFs 506 and the programs on the memory 502 differ. Therefore, the gateway need not be installed as a physical computer and may be realized as a kind of virtual server.
- The processing realized by software in this embodiment may also be realized by dedicated hardware that executes the same processing.
- Router / layer 3 switch: a function for route control at the network layer of the OSI reference model and for protocol conversion.
- As an implementation, the IP addresses of neighboring routers and hosts are stored in a destination table, and each received communication packet is forwarded to the corresponding device according to its destination address. The router therefore performs processing to reference the destination information of received packets, to determine the destination from the referenced information, and to update the destination table periodically.
- The processing load increases as the amount of communication data and the number of connected hosts increase.
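The destination-table lookup described above can be sketched minimally as follows; the prefixes and next-hop values are illustrative assumptions, with forwarding decided by the most specific matching entry:

```python
import ipaddress

# Sketch of a router's destination-table lookup: the most specific
# (longest-prefix) matching entry decides the next hop. The prefixes
# and next-hop addresses are illustrative assumptions.
destination_table = {
    "192.168.11.0/24": "direct",        # neighboring hosts on the local link
    "0.0.0.0/0":       "192.168.11.1",  # default route
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    candidates = [ipaddress.ip_network(p) for p in destination_table
                  if addr in ipaddress.ip_network(p)]
    best = max(candidates, key=lambda n: n.prefixlen)  # longest prefix wins
    return destination_table[str(best)]

print(next_hop("192.168.11.5"))  # direct
print(next_hop("8.8.8.8"))       # 192.168.11.1
```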
- For network address translation (NAT), without changing the addresses on the local computer side, the addresses are converted by the NAT gateway at the relay point, enabling transparent communication with devices on the Internet.
- In TCP/IP, there are implementations that guarantee communication consistency using pairs of local addresses and port numbers.
- In this embodiment, NAT converts the IP address, but it may also have a function (MAT, MAC Address Translation) that converts the MAC address while leaving the IP address unchanged.
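The address-and-port translation mentioned above can be sketched as follows; the class, the global address, and the port pool are illustrative assumptions:

```python
# Sketch of NAT with port translation: an outgoing packet's
# (local address, local port) pair is rewritten to the gateway's global
# address and a pool port; the stored mapping lets replies be translated
# back. Addresses and the port pool are illustrative assumptions.
GLOBAL_ADDR = "203.0.113.10"

class NatGateway:
    def __init__(self):
        self._next_port = 40000
        self._map = {}   # (local_addr, local_port) -> global_port
        self._rev = {}   # global_port -> (local_addr, local_port)

    def outbound(self, local_addr, local_port):
        key = (local_addr, local_port)
        if key not in self._map:             # allocate a pool port once
            self._map[key] = self._next_port
            self._rev[self._next_port] = key
            self._next_port += 1
        return GLOBAL_ADDR, self._map[key]

    def inbound(self, global_port):
        return self._rev[global_port]        # translate a reply back

nat = NatGateway()
print(nat.outbound("192.168.11.5", 51000))   # ('203.0.113.10', 40000)
print(nat.inbound(40000))                    # ('192.168.11.5', 51000)
```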
- Firewall: a function that passes, discards, or rejects communication via the gateway according to the layer 3 control information (destination port numbers) and the layer 4 protocol. It is often used to prevent unauthorized intrusion from the external network into the internal network and to enhance security, and it is important that it can be configured flexibly according to the uses of the hosts connected to the internal network and the characteristics of the users.
- Proxy: a function that performs communication selectively by delegating, mainly for communication from the internal network to the outside, to a proxy server capable of interpreting application layer protocols (for example, HTTP or FTP). It is introduced for purposes such as security enhancement, load balancing, and caching. Since a server other than the designated peer responds on its behalf, the address differs from that of the host making the communication request; unlike NAT, the proxy is therefore not transparent.
- HTTP: HyperText Transfer Protocol.
- Since control is provided at the application layer, a proxy offers advanced functions such as redirecting browsing of a specific URL; on the other hand, its processing cost is higher than that of a firewall, which simply monitors port numbers and destination IP addresses.
- The function of controlling communication in the reverse direction, that is, from the external network to the internal network via a specific server, is sometimes called a reverse proxy and is included in this function in this embodiment.
- In addition, the gateway described in this embodiment is assumed to have functions such as a VPN router serving as a relay/termination point of a VPN (virtual private network), a remote console gateway providing a user interface that can be operated remotely from the external network, and port forwarding that relays communication sessions for specific port numbers.
- A DHCP server function may also be provided to set IP addresses for instances dynamically.
- The tenant network is used to ensure resource security and processing performance for each tenant, a tenant being composed of users and user groups. Considering the compatibility of current network devices and the specifications of hypervisor products, the most common method is to configure a private network using a (layer 2) VLAN and a (layer 3) router.
- Control of the communicable range of the network can also be realized by layer 2 or layer 3 settings alone, or at other layers, but this combination allows a private network to be constructed flexibly according to the user's request while ensuring security.
- Therefore, the method shown in this section is widely used.
- That is, a network with guaranteed layer 2 connectivity is configured as one layer 3 segment, and communication with external networks and with other segments that require advanced security management in cooperation with applications is handled by layer 3 route control.
- Within a segment, the tenant network is layer 2 transparent, so configuration management that uses broadcast, such as DHCP, can be employed. Therefore, this section first describes a general method of configuring a tenant network by constructing a layer 2 network on each Ethernet switch and then setting routes in the layer 3 network.
- FIG. 7 shows an example of a tenant network configuration. A method for configuring the layer 2 network is described below with reference to FIG. 7.
- The configurations of all devices, that is, the physical servers 10 and 20, the physical gateway 500, and the Ethernet switches 60a and 60b, are managed by the management computer 200.
- The physical Ethernet interface of each device is connected to the management network 301, allowing them to communicate with one another.
- The physical switches 60a and 60b and the virtual switch 27 implemented by the hypervisor on the virtual machine host 20 are VLAN-compliant, and provide layer 2 (data link layer) connectivity among ports given the same VLAN ID.
- First, the service internal network is configured using port VLANs. More specifically, in the physical switch 60b, the port VLAN attribute (access mode) is given to the ports 62b, 62c, and 62d, to which the bare-metal hosts are connected. As a result, only ports with the same VLAN ID can communicate with each other, separating the internal network 63a, over which the hosts communicate with one another, from the external network 63b, over which they communicate with the outside via the gateway.
- The internal network 63a and the internal-network-side interface 506a of the gateway 500 are prepared for each tenant and, in principle, can be used only by users and resources belonging to that tenant. In other words, physical instances 14 belonging to other tenants are isolated at the layer 2 network level.
- Next, tag VLANs are set in the virtual switch 27 and the physical switch 60b. More specifically, in the virtual switch 27 provided by the hypervisor, different VLAN IDs are assigned to the internal network 63a and the external network 63b. In addition, the tag VLAN attribute (trunk mode or tagging mode) is set on the virtual-host-side port 62a of the physical switch so that packets carrying the VLAN ID tags set in the virtual switch can pass.
- In some cases, the trunk mode is set so that all tag VLANs can pass through the physical switch.
- In this case, a private network can be created merely by configuring the virtual switch 27 on the hypervisor, with no need to change the physical switch settings each time. For this reason, existing management infrastructures for virtual server environments generally do not have a physical switch configuration function.
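The separation achieved by the port VLAN and tag VLAN attributes above can be sketched as a forwarding check; the port names follow FIG. 7, but the VLAN IDs and table layout are illustrative assumptions:

```python
# Sketch of how the port VLAN (access) and tag VLAN (trunk) attributes
# restrict layer 2 forwarding. Port names follow FIG. 7; VLAN IDs are
# illustrative assumptions.
ports = {
    "62a": {"mode": "trunk",  "vlans": {10, 20}},  # virtual-host side
    "62b": {"mode": "access", "vlans": {10}},      # bare-metal host
    "62c": {"mode": "access", "vlans": {10}},
    "62d": {"mode": "access", "vlans": {20}},
}

def can_forward(src, dst, vlan_id):
    """A frame on vlan_id passes only between ports carrying that VLAN."""
    return vlan_id in ports[src]["vlans"] and vlan_id in ports[dst]["vlans"]

print(can_forward("62b", "62c", 10))  # True: same port VLAN
print(can_forward("62b", "62d", 10))  # False: 62d belongs to VLAN 20
print(can_forward("62a", "62d", 20))  # True: the trunk carries VLAN 20
```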
- Furthermore, a gateway is installed to ensure connectivity with the external network 63b. The connection to the gateway is controlled by the layer 3 network settings.
- For example, a gateway can be designated as the default gateway when setting the network address of each instance. By specification, only one default gateway (IP address) can be set for an instance.
- Therefore, in a typical cloud environment, a virtual gateway 24b is created for each tenant, and all communication with the external network is configured to pass through the gateway 24b.
- Normally, a subnet is created within the same VLAN ID space under the control of the gateway 24b.
- The OS running each instance has a routing table as its network setting information, and sends all communication addressed to addresses not present in the routing table (whose location on the network is unknown and which are not neighboring hosts) to the default gateway.
- With the above method, a desired tenant network can be constructed by connecting a physical instance to the existing virtual instance environment via the layer 2 network and routing it through the existing virtual gateway.
- However, with this conventional configuration method, the virtual gateway becomes a performance bottleneck.
- When a physical instance uses a virtual gateway in the same way as a virtual instance, the configuration can be changed flexibly, but the possibility that the virtual gateway suffers performance interference from other virtual servers cannot be excluded. Users of physical instances expect stable performance, and it is very difficult for them to accept a gateway whose network performance varies with other workloads.
- Conversely, although designating the physical gateway for all instances, including virtual instances, can provide stable performance for the physical instances, it is inefficient for the virtual instances.
- Users of virtual instances often want to increase resource utilization efficiency or to keep costs proportional to resource usage; performance stability is not essential for them, and they do not require a physical gateway with ample performance.
- Therefore, in this embodiment, a tenant network configuration method that solves the above problems is provided. That is, in the configuration shown in FIG. 7, layer 3 route control is performed so that virtual instances pass through the virtual gateway and physical instances pass through the physical gateway.
- A conceptual diagram of this configuration method is shown in FIG. 8.
- In this embodiment, the virtual instance 24a and the physical instance 14 are both connected to the internal network (LAN) 300 and further connected to the external network (WAN) 302a via gateways, providing services to the client computer 70.
- Here, the mutual communication 808 between the virtual instance 24a and the physical instance 14 takes place within the same subnet, connected at layer 2, whereas the external communication 801 of the virtual instance 24a passes through the virtual gateway 24b, and the external communication 800 of the physical instance 14 passes through the physical gateway 500.
- A DHCP (Dynamic Host Configuration Protocol) server 802 is used to configure each gateway.
- The DHCP server 802 is installed on the LAN 300 side of one of the gateways.
- When the virtual instance 24a is created, connects to the LAN 300, and broadcasts an IP address assignment request 803, the DHCP server 802 issues an IP address for the virtual instance 24a and responds with the address of the virtual gateway 24b (192.168.11.1 in the figure) as the default gateway.
- To realize this, the DHCP server 802 in this embodiment has the network address management table 815 shown in the figure.
- In response to a request from a DHCP client (in this embodiment, a virtual instance or a physical instance), the server responds with pool-managed settings: an IP address, a subnet mask, a DNS server, and a default gateway, identified by the client's MAC address.
- In this table, a pair consisting of a MAC address 815d and an assigned IP address 815e is managed for each instance 815a.
- In addition, a different gateway 815f is designated according to the instance type 815b.
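A minimal sketch of a DHCP reply driven by the network address management table 815 follows; all addresses and the table layout are illustrative assumptions:

```python
# Sketch of the network address management table 815: for each instance,
# the assigned IP is keyed by MAC address, and the default gateway 815f
# differs by instance type 815b. All values are illustrative assumptions.
address_table = [
    {"instance": "vi-24a", "type": "virtual",  "mac": "02:00:00:00:00:01",
     "ip": "192.168.11.20", "gateway": "192.168.11.1"},   # virtual gateway 24b
    {"instance": "pi-14",  "type": "physical", "mac": "02:00:00:00:00:02",
     "ip": "192.168.11.30", "gateway": "192.168.11.2"},   # physical gateway 500
]

def dhcp_reply(mac):
    """Return the lease the DHCP server 802 would hand to this client."""
    for row in address_table:
        if row["mac"] == mac:
            return {"ip": row["ip"], "netmask": "255.255.255.0",
                    "gateway": row["gateway"]}
    return None  # unknown client: no offer

print(dhcp_reply("02:00:00:00:00:01")["gateway"])  # 192.168.11.1
print(dhcp_reply("02:00:00:00:00:02")["gateway"])  # 192.168.11.2
```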
- FIG. 10 shows a detailed configuration of the management program 203b of the management computer 200.
- The integrated service management unit 204a comprises a user request management unit 211 that receives requests from the management client 73b, an instance management table 212 that manages instance configuration information, an OS image library 213 that holds the OS images to be installed in instances, an element management unit 214 that manages the configuration of the system's device group, a gateway management unit 217 that manages the gateway configuration, and a service orchestrator 210 that operates these components in a coordinated manner.
- The device management units positioned below the integrated service management unit 204a (the server management unit 204b, the network management unit 204c, and the storage management unit 204d) are controlled mainly by the element management unit 214.
- The element management unit includes an element management table group 215, in which all device configurations are aggregated, and a general VLAN ID management table 216, in which the VLAN IDs set in the network switches are aggregated.
- FIG. 11 shows a processing flow for configuring a tenant network in accordance with instance creation in this embodiment.
- First, the user request management unit 211 of the integrated service management unit 204a authenticates the user's authority, and the instance creation procedure described above is started.
- Here, the device configuration is managed in the element management table group 215 shown in the figure.
- The element management table group 215 includes management tables copied from the device management units, for example a server management table 820 that manages the configuration of the server devices, a physical switch management table 821, and so on.
- By examining the element management table group 215, configurations such as device usage status and connection relationships can be grasped. For example, by examining the hypervisor field 820b of the server management table 820, it can be determined whether a hypervisor has been installed.
- The element management table group 215 also includes association information 215a generated when a device is registered in the management computer; for example, it records which interface 820d of which server 820a is connected to which physical port 821c of which switch 821a. When an instance is created, free resource capacity is acquired from each device management unit based on this element management table group 215, and the device on which the instance is to be created is determined.
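The association information 215a can be sketched as a join over hypothetical copies of tables 820 and 821; all field values are illustrative assumptions:

```python
# Sketch of the element management table group 215: the association
# information 215a records which server interface connects to which
# physical switch port, alongside a copy of the server management table.
# All field values are illustrative assumptions.
server_table = [   # table 820: server 820a, interface 820d, hypervisor 820b
    {"server": "sv-10", "interface": "eth0", "hypervisor": None},
    {"server": "sv-20", "interface": "eth0", "hypervisor": "installed"},
]
association = [    # 215a: (server, interface) -> (switch 821a, port 821c)
    {"server": "sv-10", "interface": "eth0", "switch": "60b", "port": "62b"},
    {"server": "sv-20", "interface": "eth0", "switch": "60b", "port": "62a"},
]

def port_of(server, interface):
    for a in association:
        if a["server"] == server and a["interface"] == interface:
            return a["switch"], a["port"]
    return None

def has_hypervisor(server):
    # Corresponds to examining the hypervisor field 820b.
    return any(s["server"] == server and s["hypervisor"] for s in server_table)

print(port_of("sv-20", "eth0"))   # ('60b', '62a')
print(has_hypervisor("sv-10"))    # False
```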
- In step 900, when the instance creation procedure described above is completed, the instance is temporarily shut down in step 901, and processing proceeds to the tenant network configuration procedure.
- In step 902, the VLAN ID is determined according to the user's request.
- The association between tenants and VLAN IDs is managed in the general VLAN ID management table 216.
- FIG. 13 shows details of the table.
- The network management unit 204c aggregates the information of the physical switch VLAN ID management table 218 and the virtual switch VLAN ID management table 219 without inconsistency.
- It is also possible to hold only the management tables for the individual virtual and physical switches, but by additionally using a separate table such as the general VLAN ID management table, logical network partitioning can be configured appropriately.
- Note that the virtual switch is a function implemented by the hypervisor, and its management information is held in the server management unit 204b.
- In step 902, the VLAN ID 216b of the tenant ID 216a designated by the user is referenced.
- Alternatively, for a new tenant, a new tenant ID 216a and VLAN ID 216b are secured and added to the general VLAN ID management table 216.
- In step 903, the process branches depending on whether the user request is for a physical instance or a virtual instance. If the user requests addition of a physical instance, the VLAN setting of the physical switch is performed in step 904. More specifically, in the physical switch VLAN ID management table 218, it is determined whether the VLAN ID 218e can be set (whether it does not overlap with other IDs and is within the range settable by the device specifications), and the port attribute 218d corresponding to the physical server (host) ID 218a is set to the access mode (port VLAN attribute). Further, in step 905, usable physical gateway information is acquired from the gateway management unit 217, which designates the gateway for the instance.
- Here, the gateway management unit 217 holds at least the internal-network-side IP address of the gateway in order to designate the physical gateway 500. If no appropriate physical gateway with an established physical connection exists, the processing is canceled, or a new physical gateway is created by the same method as for creating a physical instance. In this embodiment, the physical gateway is additionally registered with the DHCP server; more specifically, the created instance information, its MAC address 815d, and the gateway's IP address are registered in the network address management table 815. If the user requests addition of a virtual instance, the VLAN setting of the virtual switch is first performed in step 906.
- More specifically, in the virtual switch VLAN ID management table 219, it is determined whether the VLAN ID 219b can be set, and the VLAN ID 219b is set in association with the tenant ID 219a and the instance ID 219c.
- Next, in step 907, the corresponding physical switch VLAN ID management table 218 is edited. More specifically, in the physical switch VLAN ID management table 218, it is determined whether the VLAN ID 218e can be set, and the port attribute 218d corresponding to the virtual server host ID 218a is set to the trunk mode (tag VLAN attribute).
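The "can this VLAN ID be set?" check of steps 904 and 907 can be sketched as follows, assuming an illustrative table layout and the usual 802.1Q ID range:

```python
# Sketch of the VLAN ID settability check of steps 904/907: the ID must
# not collide with another tenant's ID on the switch and must lie within
# the device's settable range. The range and table layout are
# illustrative assumptions.
VLAN_RANGE = range(1, 4095)  # typical 802.1Q usable range

def vlan_settable(vlan_id, tenant, switch_table):
    if vlan_id not in VLAN_RANGE:
        return False                     # outside device specification
    for row in switch_table:
        if row["vlan"] == vlan_id and row["tenant"] != tenant:
            return False                 # overlaps another tenant's VLAN
    return True

table_218 = [{"host": "host-01", "tenant": "A", "vlan": 101, "attr": "access"}]
print(vlan_settable(101, "A", table_218))   # True: same tenant reuses its ID
print(vlan_settable(101, "B", table_218))   # False: collides with tenant A
print(vlan_settable(5000, "B", table_218))  # False: out of range
```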
- Further, usable virtual gateway information is acquired from the gateway management unit 217, which designates the gateway for the instance.
- Here, the gateway management unit 217 holds at least the internal-network-side IP address of the gateway in order to designate the virtual gateway 24b. If an appropriate virtual gateway with an established physical connection cannot be created, the processing is canceled, or a new virtual gateway is created by a method similar to that for creating a virtual instance.
- In this embodiment, the virtual gateway is additionally registered with the DHCP server; more specifically, the created instance information, its MAC address 815d, and the gateway's IP address are registered in the network address management table 815.
- When the instance is restarted in step 909, it receives its network settings from the DHCP server and begins operation. In step 910, the network settings are confirmed, for example, by ICMP reachability from another instance in the same tenant network.
- When the addition of the instance completes normally, the user request management unit 211 notifies the user that the instance is ready for use. At this time, the user account information for accessing the instance and its network address may also be included in the notification.
- With the above processing flow, the tenant network is configured as physical and virtual instances are added according to the service level requested by the user. Moreover, physical instances that require stable performance operate via the physical gateway, whose performance is stable, while virtual instances with high resource utilization efficiency operate via the highly efficient virtual gateway. That is, mixing virtual and non-virtual servers realizes overall optimization of the computational and storage resources, and the selective use of the virtual and physical gateways realizes overall optimization of the network resources.
- Furthermore, the distribution ratio of communication with the external network is determined statically according to the instance type requested by the user. Unlike conventional techniques that monitor the communication load and change the load distribution method accordingly, achieving proper load distribution therefore takes no extra time and incurs no additional processing cost.
- Note that this function is realized by a similar system configuration also when only a tenant network is newly created, or when virtual and physical instances are migrated to each other.
- The above tenant network configuration is realized by VLANs and layer 3 route control, but the configuration of the present invention does not depend on these technologies. Therefore, even when a technique that encapsulates layer 2 communication in layer 3 communication to extend the layer 2 VLAN space, such as VXLAN, is used, this function is realized by the same system configuration.
- In the first embodiment, the selective use of the virtual and physical gateways was realized using a DHCP server.
- On the other hand, operation without a DHCP server is also practiced, for example because of a requirement to use static addresses in preparation for failures.
- With DHCP, the IP address pool can be managed efficiently, but the network settings must be updated every time an address lease expires. In addition, there is a risk that a failure of the DHCP server alone leaves the instances in the tenant unable to communicate.
- Therefore, in this embodiment, a network setting method that does not depend on DHCP is provided, in cooperation with the management of the master OS images from which instances are created.
- This embodiment has the same system configuration as the first embodiment, except that no DHCP server is used.
- In this embodiment, the OS image library 213 is characterized by holding network settings customized according to the virtual/physical instance type.
- The actual master images registered in the OS image library 213 are stored in the storage apparatus 100.
- For example, the master image 830 of the physical instance 14 is a volume, and a boot disk device 831 is created by the copy function of the storage control program 153b when a physical instance is created.
- Similarly, the master image 832 of a virtual instance takes the form of a virtual disk, and a boot virtual disk 833 is created by the hypervisor's copy function.
- In this embodiment, the gateway management unit 217 has the network address management table 815 and embeds the corresponding network settings in the master image according to the virtual/physical instance type. More specifically, a master image is created from an OS image whose network settings have been customized, or the OS initialization file is configured in advance so that the network settings are read when the instance is restarted in step 909 of FIG. 11.
- Thus, an IP address is statically assigned to the created instance, and a virtual or physical gateway is statically set according to the virtual/non-virtual server type.
- With this method, no network communication is required for address assignment, and no DHCP server needs to be installed.
- Even if a device that centrally manages network addresses, such as a DHCP server, fails, connectivity between the instances and the client computers, and between instances connected to the same tenant, is maintained.
- In addition, unlike network-based configuration, no communication bandwidth is consumed.
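The DHCP-free approach of this embodiment can be sketched as generating a static network setting to embed in the master image; the ifcfg-style file format and the addresses are illustrative assumptions, not mandated by the embodiment:

```python
# Sketch of embedding a static network setting in a master image
# (second embodiment): instead of relying on DHCP, an interface
# configuration file is rendered from the values in the network address
# management table 815 and written into the image. The ifcfg-style
# format and all addresses are illustrative assumptions.
def render_static_config(ip, netmask, gateway):
    return (
        "BOOTPROTO=static\n"
        f"IPADDR={ip}\n"
        f"NETMASK={netmask}\n"
        f"GATEWAY={gateway}\n"
    )

# A physical instance is pointed at the physical gateway statically:
cfg = render_static_config("192.168.11.30", "255.255.255.0", "192.168.11.2")
print(cfg)
```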
- In this embodiment, a function is provided for distributing access from the external network to the internal network in a manner that takes virtual and physical instances into account.
- In the preceding embodiments, the method of selecting the gateway according to instance type was described mainly for access from the internal network to the external network.
- On the other hand, access requests from the client computer 70 side should also be distributed across the gateways according to the characteristics of the physical and virtual instances.
- In this embodiment, the performance requirement of the user appears as the numbers of physical and virtual instances. Therefore, rather than implementing complex monitoring and load balancing functions to cope with unpredictable changes in access requests, the gateway is first designated statically according to the scale of the virtual and physical instances in the tenant; this is considered to realize a simpler and more effective performance improvement.
- As configurations for distributing access from the external network between the physical gateway and the virtual gateway, the two configurations shown in FIGS. 15A and 15B are considered.
- The first method uses DNS.
- The client computer 70 queries the DNS server 810 and resolves the domain name of the access destination to an IP address.
- At this time, the ratio at which the IP address of the physical gateway (or physical instance) versus that of the virtual gateway (or virtual instance) is returned as the destination is adjusted by the DNS server settings. More specifically, the performance ratio of the virtual and physical gateways or instances is evaluated as a fixed value, and the probability with which each IP address is returned is set accordingly.
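The DNS-based distribution can be sketched as a weighted random response; the gateway addresses and the 3:1 performance ratio are illustrative assumptions:

```python
import random

# Sketch of the DNS-based distribution: the server answers with the
# physical or virtual gateway address with probabilities proportional to
# an evaluated performance ratio. The addresses and the 3:1 ratio are
# illustrative assumptions.
GATEWAYS = ["203.0.113.10", "203.0.113.20"]  # physical, virtual
WEIGHTS = [3, 1]                             # performance ratio 3:1

def resolve(_domain, rng=random):
    return rng.choices(GATEWAYS, weights=WEIGHTS, k=1)[0]

# Over many queries the physical gateway receives about 3/4 of clients:
rng = random.Random(0)
hits = sum(resolve("service.example", rng) == "203.0.113.10"
           for _ in range(10000))
print(hits / 10000)  # close to 0.75
```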
- The second method places a load balancer 811 in front of the gateways.
- The ratio at which the physical gateway (or physical instance) versus the virtual gateway (or virtual instance) is chosen as the destination is made proportional to the performance ratio of the gateways or instances. The load balancer acts as a proxy or provides transparent access through NAT.
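The proportional distribution by the load balancer 811 can be sketched with deterministic weighted round-robin; the backend names and the 3:1 ratio are illustrative assumptions:

```python
# Sketch of the load balancer 811 distributing connections between the
# physical and virtual gateways in proportion to their performance
# ratio, using smooth weighted round-robin. Backend names and the 3:1
# ratio are illustrative assumptions.
class WeightedRoundRobin:
    def __init__(self, backends):               # {name: weight}
        self.backends = backends
        self.current = {name: 0 for name in backends}

    def pick(self):
        # Smooth WRR: raise every counter by its weight, choose the
        # largest, then subtract the total weight from the winner.
        total = sum(self.backends.values())
        for name, w in self.backends.items():
            self.current[name] += w
        best = max(self.current, key=self.current.get)
        self.current[best] -= total
        return best

lb = WeightedRoundRobin({"physical-gw": 3, "virtual-gw": 1})
seq = [lb.pick() for _ in range(4)]
print(seq)  # physical-gw appears 3 times, virtual-gw once
```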
- With either method, access from the external network can be distributed between the physical gateway and the virtual gateway.
- Furthermore, the distribution ratio of external access is determined statically according to the instance type requested by the user. Unlike conventional techniques that monitor the communication load and change the load distribution method accordingly, achieving proper load distribution of client requests therefore takes no extra time and incurs no implementation or processing costs.
- 200 ... management computer, 203b ... management program, 204a ... integrated service management unit, 204b ... server device management unit, 204c ... network device management unit, 204d ... storage device management unit, 300 ... internal network, 301 ... management network, 302 ... external network, 500 ... physical gateway, 802 ... DHCP server
Description
Also, from the standpoint of performance, a mechanism is needed that operates stably without being affected by the operating state of business systems running in other tenants. In virtual server environments, it is common to achieve stable operation through load balancing using online migration of virtual servers, or through priority control of communication for each virtual server.
For example, Patent Literature 1 discloses a configuration method and system for routers that distribute communication load on a network. With this system, a plurality of network paths can be used in parallel, and network resources can be utilized effectively. Patent Literature 2 discloses a method for efficiently managing configurations in a multi-tenant environment.
<Physical and Logical Configuration>
FIG. 1 shows an overview of the computer system in this embodiment.
A physical gateway 500 is interposed at the boundary between the internal network and the external network; it performs various kinds of processing on the communication data flowing through it and controls the communication. Details of the gateway's configuration and functions are described later. For simplicity of explanation, FIG. 1 shows a simple configuration, but gateways may be arranged in multiple stages as necessary, and the WAN 302 may be another LAN. Furthermore, the WAN and the LAN may each be physically or logically divided into a plurality of networks.
FIG. 2 shows the physical and logical configuration of each device connected to the internal network in more detail. The internal network 300 in FIG. 1 is represented as the network 61 in FIG. 2. At least one first physical server 10, a second physical server 20, a storage apparatus 100, and a management computer 200 are physically connected to the network 61. Furthermore, one or more gateways 500 are connected at the boundary between the network 61 and the network 66. Here, the network 66 in FIG. 2 corresponds to the external network 302 in FIG. 1.
The first purpose is to provide service communication between the client computer 70 and the physical servers 10 and 20. For example, the physical server 10 receives processing requests and data to be processed from the client computer 70, and sends the data processed or generated by the application 13b back to the client computer 70.
The storage control unit 150 comprises a CPU 151, a memory 152, a cache 154, an FC IF 155, an Ether IF 156, and a Serial Advanced Technology Attachment interface (SATA IF) 157. The memory 152 stores programs including at least a response program 153a that responds to read/write requests and a storage control program 153b that controls the logical configuration of the apparatus, and the functions of the storage apparatus 100 are realized by arithmetic processing in the CPU 151. The cache 154 is used mainly to improve the response performance of the storage resources to read/write requests from the physical servers. The FC IF is for communicating with other devices via the network 51 and is used mainly for connecting to the physical servers 10 and 20. A communication standard other than Fibre Channel may be used as long as it achieves the same interconnection purpose, and a plurality of such interfaces may be provided depending on, for example, the number of physical servers. The Ether IF 156 is for communicating with other devices via the network 60 and is used mainly for connecting to the management computer 200.
<Instance Configuration Method>
The computer system in this embodiment provides a function for managing the resource configurations of virtual servers and non-virtual servers. Below, the configuration and functions of the system are explained using, as an example, the configuration procedure for creating virtual and non-virtual servers. Here, a server that is created in response to a user request and provides information services to clients is called an instance; a virtual server is called a virtual instance, and a non-virtual server is called a physical instance.
<Layer 2 Network Configuration Method>
One object of the present invention is to use virtual instances and physical instances selectively according to application requirements and user requests. To that end, a private network spanning the virtual and physical instances must be configured so that they can communicate with each other.
FIG. 5(b) shows the virtual switch VLAN ID management table 219. The table stores an instance ID 219c, a switch ID 219d, a port ID 219e, and a VLAN ID 219b. Like the physical switch VLAN ID management table 218, it thereby holds the port attribute settings of the virtual switches.
These VLAN ID management tables reside on the management computer 200 and are applied to each physical switch device and to the virtual switches on the virtual machine hosts according to the settings of the respective management programs.
<Processing Flow>
The network configuration method characteristic of the present invention is explained using the processing flow shown in FIG. 6. Details of the integrated service management unit are described later; here, the detailed procedure of network configuration using VLANs is described. The purpose of this processing flow is to configure, triggered by the creation of a new instance, a private network that interconnects instances in an environment where virtual and physical instances coexist.
If it is determined in step 603 that a virtual instance is connected to the designated private network, the process proceeds to step 604. In this step, the integrated service management unit 204a refers to the virtual switch VLAN ID management table 219 and identifies the VLAN ID of the virtual switch from the designated virtual instance ID.
<Gateway Functions>
One purpose of the computer system described in this embodiment is to control access to processing resources and applications according to authority such as the user's role and the organization to which the user belongs. This makes it possible, for example, to operate a desired business system without its data being accessed improperly by other organizations or users, and without suffering performance interference.
(1) Router / layer 3 switch
This is a function for route control at the network layer of the OSI reference model and for protocol conversion. As an implementation, the IP addresses of neighboring routers and hosts are stored in a destination table, and received communication packets are forwarded to the corresponding device according to their destination addresses. The router therefore performs processing to reference the destination information of received packets, to determine the destination from the referenced information, and to update the destination table periodically; the processing load increases with the amount of communication data and the number of connected hosts. In addition, a function for connecting different data link layers (for example, Ethernet and FDDI) is sometimes implemented as well, and since the processing cost is not negligible compared with the processing performed on the host side, a dedicated device is often provided.
Some routers also implement VRRP (Virtual Router Redundancy Protocol) to increase availability, so in principle a plurality of routers may exist. Note that although the term "virtual router" is sometimes used in connection with VRRP, it refers to something different from the virtual gateway in this embodiment.
(2) Network address translation
This is a function that translates between addresses used for communicating with the outside of a network and addresses used for communicating inside it, generally called NAT (Network Address Translation). It is widely used, for example, because IPv4 global addresses are not plentiful enough to be assigned to every local computer. In this case, without changing the addresses on the local computer side, the NAT gateway at the relay point translates the addresses, enabling transparent communication with devices on the Internet. In TCP/IP, there are implementations that guarantee communication consistency using pairs of local addresses and port numbers.
(3) Firewall
This is a function that passes, discards, or rejects communication via the gateway according to the layer 3 control information (destination port numbers) and the layer 4 protocol. It is often used to prevent unauthorized intrusion from the external network into the internal network and to enhance security, and it is important that it can be configured flexibly according to the uses of the hosts connected to the internal network and the characteristics of the users.
(4) Proxy
This is a function that performs communication selectively by delegating, mainly for communication from the internal network to the outside, to a proxy server capable of interpreting application layer protocols (for example, HTTP or FTP). It is introduced for purposes such as security enhancement, load balancing, and caching. Since a server other than the designated peer responds on its behalf, the address differs from that of the host making the communication request; unlike NAT, it is therefore not transparent.
<テナントネットワークの構成方法>
これを説明するために、まず一般的なテナントネットワークの構成方法を述べ、次に本発明に特長的な構成方法を述べることとする。
さらに、従来技術によれば、ゲートウェイを設置し、外部ネットワーク63bとの接続性を確保する。ゲートウェイとの接続はレイヤ3ネットワーク設定において制御される。例えば、各インスタンスにネットワークアドレスを設定する際のデフォルトゲートウェイとして、ゲートウェイが指定可能である。仕様上、一つのインスタンスに設定するデフォルトゲートウェイ(のIPアドレス)は一つでなければならない。したがって、一般的なクラウド環境においては、テナントごとに仮想ゲートウェイ24bを作成しておき、外部ネットワークとの全ての通信について、ゲートウェイ24bを経由するよう設定する。また、通常は、ゲートウェイ24bの制御下の同一VLAN IDの空間内でサブネットを作成する。各インスタンスを稼働させるOSは、自身のネットワーク設定情報として経路表を有し、経路表にない(ネットワーク上の位置を知らない、近隣のホストでない)アドレス宛の通信を全てデフォルトゲートウェイへ向けて送信する。
<Tenant Network Configuration Method Characteristic of the Present Invention>
A problem with the prior-art method of configuring a tenant network is that the virtual gateway can become a performance bottleneck.
As described above, if physical instances use a virtual gateway in the same way as virtual instances, the configuration can be changed flexibly, but the possibility that the virtual gateway suffers performance interference from other virtual servers cannot be eliminated. Users of physical instances expect stable performance, and it is very difficult for them to accept a gateway whose network performance fluctuates depending on other workloads.
As shown in FIG. 8, in this embodiment the virtual instance 24a and the physical instance 14 are interconnected via an internal network (LAN) 300 and further connected, through gateways, to an external network (WAN) 302a, providing services to a client computer 70. Mutual communication 808 between the virtual instance 24a and the physical instance 14 takes place within the same layer-2-connected subnet, whereas external communication 801 of the virtual instance 24a passes through the virtual gateway 24b, and external communication 800 of the physical instance 14 passes through the physical gateway 500.
A DHCP (Dynamic Host Configuration Protocol) server 802 is used to configure each gateway. The DHCP server 802 is placed on the LAN 300 side of one of the gateways.
When the virtual instance 24a is created, connects to the LAN 300, and broadcasts an IP address assignment request 803, the DHCP server 802 responds by allocating an IP address for the virtual instance 24a and returning the address of the virtual gateway 24b (192.168.11.1 in the figure) as the default gateway.
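The DHCP exchange above can be sketched as a lookup keyed by the requesting instance's MAC address, the management computer having pre-registered which gateway each instance should use. The field names below stand in for the network address management table 815 and are assumptions, not the actual table format.

```python
# MAC address -> (assigned IP, default gateway). Virtual instances are
# pointed at the virtual gateway, physical instances at the physical
# gateway (all addresses here are illustrative).
address_table = {
    "02:00:00:00:00:01": ("192.168.11.101", "192.168.11.1"),  # virtual instance
    "02:00:00:00:00:02": ("192.168.11.102", "192.168.11.2"),  # physical instance
}

def dhcp_offer(mac):
    """Answer a broadcast address request with an IP and a default gateway."""
    ip, gateway = address_table[mac]
    return {"yiaddr": ip, "router": gateway}
```

With a single DHCP server on the LAN, per-instance gateway selection thus reduces to table registration performed at instance-creation time.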
<Processing Flow>
The configuration of the management computer and the processing flow of this embodiment are described below.
When the user requests the addition of a physical instance, the VLAN of the physical switch is configured in step 904. More specifically, in the physical switch VLAN ID management table 218, it is determined whether the VLAN ID 218e can be set (whether it does not conflict with other IDs and is within the range settable under the device specifications), and the port attribute 218d corresponding to the physical server (host) ID 218a is set to access mode (port-VLAN attribute). Further, in step 905, information on available physical gateways is obtained from the gateway manager 217, which designates the gateway for the instance. Here, the gateway manager 217 holds at least the internal-network-side IP address of the physical gateway 500 in order to designate it. If no suitable physical gateway with an established physical connection exists, the processing is either aborted or a new physical gateway is created in the same way as a physical instance. In this embodiment, the physical gateway is additionally configured on the DHCP server; more specifically, the created instance information, its MAC address 815d, and the gateway's IP address are registered in the network address management table 815.
When the user requests the addition of a virtual instance, the VLAN of the virtual switch is configured first in step 906. More specifically, in the virtual switch VLAN ID management table 219, it is determined whether the VLAN ID 219b can be set, and the VLAN ID 219b is set in association with the tenant ID 219a and the instance ID 219c. Next, in step 907, the corresponding physical switch VLAN ID management table 218 is edited: it is determined whether the VLAN ID 218e can be set, and the port attribute 218d corresponding to the virtualization host ID 218a is set to trunk mode (tagged-VLAN attribute). Further, in step 905, information on available virtual gateways is obtained from the gateway manager 217, which designates the gateway for the instance. Here, the gateway manager 217 holds at least the internal-network-side IP address of the virtual gateway 24b in order to designate it. If no suitable virtual gateway with an established physical connection can be created, the processing is either aborted or a new virtual gateway is created in the same way as a virtual instance. In this embodiment, the virtual gateway is additionally configured on the DHCP server; more specifically, the created instance information, its MAC address 815d, and the gateway's IP address are registered in the network address management table 815.
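The port-attribute settings in the steps above can be sketched as follows: a switch port facing a non-virtualized physical server is set to access (port-VLAN) mode for a single untagged VLAN, while a port facing a virtualization host is set to trunk (tagged-VLAN) mode so that multiple tenant VLANs can share it. The table layout and the settable range are illustrative assumptions.

```python
VALID_VLAN_RANGE = range(1, 4095)   # device-specific settable range (assumed)

def configure_port(port_table, host_id, vlan_id, virtualized):
    """Set the switch-port attribute for a host's uplink port."""
    if vlan_id not in VALID_VLAN_RANGE:
        raise ValueError("VLAN ID outside the settable range")
    entry = port_table.setdefault(host_id, {"mode": None, "vlans": set()})
    if virtualized:
        entry["mode"] = "trunk"      # tagged VLANs: many tenant VLANs per port
        entry["vlans"].add(vlan_id)
    else:
        if entry["vlans"] and vlan_id not in entry["vlans"]:
            raise ValueError("VLAN ID conflicts with existing access setting")
        entry["mode"] = "access"     # port VLAN: one untagged VLAN
        entry["vlans"] = {vlan_id}
    return entry
```

The conflict check mirrors the determination in the text of whether a VLAN ID "can be set" before the port attribute is written.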
In a mixed virtual/non-virtual environment such as the one targeted by the present invention, the performance requirements a user demands manifest themselves as, among other things, the numbers of physical and virtual instances. Therefore, rather than implementing complex monitoring and load-balancing functions to cope with unpredictable fluctuations in access demand, designating gateways statically according to the scale of the virtual and physical instances within a tenant is expected to achieve a simpler and more effective performance improvement.
The second method is to place a load balancer 811 in front of the gateways. As the load-balancing algorithm of a typical load balancer, the ratio of requests sent to the physical gateway (or physical instances) versus the virtual gateway (or virtual instances) is made proportional to the performance ratio of the gateways or instances. The balancer either operates as a proxy or provides transparent access through NAT.
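The ratio-based distribution above can be sketched as a deterministic weighted round-robin over the two destinations; the weight values stand in for the performance ratio and are assumptions.

```python
def make_scheduler(weights):
    """Deterministic weighted round-robin over (destination, weight) pairs."""
    schedule = [dst for dst, w in weights for _ in range(w)]
    state = {"i": 0}

    def pick():
        dst = schedule[state["i"] % len(schedule)]
        state["i"] += 1
        return dst

    return pick
```

For example, a 3:1 performance ratio between the physical and virtual gateways yields three requests to the physical side for every one sent to the virtual side.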
73a … service client, 73b … management client, 100 … storage apparatus, 150 … storage controller, 153a … response program, 153b … storage control program, 154 … cache, 200 … management computer, 203b … management program, 204a … integrated service manager, 204b … server device manager,
204c … network device manager, 204d … storage device manager, 300 … internal network, 301 … management network, 302 … external network, 500 … physical gateway, 802 … DHCP server
Claims (15)
- A management computer connected to:
a first physical server on which a plurality of virtual instances and a virtual switch controlling a network among the virtual instances operate;
a second physical server on which a physical instance operates; and
a physical switch that is connected to the first physical server and the second physical server and controls a network between the first physical server and the second physical server,
the management computer comprising:
virtual switch management information indicating, for each of the plurality of virtual instances, a correspondence with an internal network to which the virtual instance connects;
physical switch management information indicating a correspondence between the physical instance and an internal network to which the physical instance connects; and
an integrated manager that manages configurations of the first physical server, the second physical server, and the physical switch,
wherein, upon receiving a first instance creation request for creating a first virtual instance that connects to the same internal network as the physical instance, the integrated manager:
creates the first virtual instance on the first physical server;
identifies, by referring to the physical switch management information, a first internal network to which the physical instance connects; and
configures the virtual switch and the physical switch so that the first virtual instance is connected to the first internal network. - The management computer according to claim 1,
wherein the second physical server has a virtualization function that logically partitions physical resources of the second physical server and manages them as a plurality of virtual machines, and each of the virtual switch and the plurality of virtual instances operates on one of the plurality of virtual machines. - The management computer according to claim 2,
wherein, upon receiving a second instance creation request for creating a first physical instance that connects to the same internal network as a second virtual instance of the plurality of virtual instances, the integrated manager:
creates the first physical instance on a third physical server connected to the physical switch;
identifies, by referring to the virtual switch management information, a second internal network to which the second virtual instance connects; and
configures the physical switch so that the first physical instance is connected to the second internal network. - The management computer according to claim 2,
wherein the second physical server has a virtual gateway that operates on a first virtual machine of the plurality of virtual machines and is connected to the first internal network and to an external network under control of the virtual switch and the physical switch,
the management computer is further connected to a physical gateway that is connected to the first internal network and to the external network under control of the physical switch, and
in response to the first instance creation request, the management computer
configures the first virtual instance so that the first virtual instance connects from the first internal network to the external network via the virtual gateway. - The management computer according to claim 4,
wherein the management computer configures the physical instance so that the physical instance connects from the first internal network to the external network via the physical gateway. - The management computer according to claim 5,
wherein the management computer manages network address information of the physical gateway and network address information of the virtual gateway, and
configures the first virtual instance so that the first virtual instance connects from the first internal network to the external network via the virtual gateway, by notifying the physical instance and the first virtual instance of the corresponding network address information via a DHCP (Dynamic Host Configuration Protocol) server. - The management computer according to claim 5,
wherein the management computer manages network address information of the physical gateway and network address information of the virtual gateway,
is further connected to a storage apparatus connected to the first physical server and the second physical server, and
configures the first virtual instance so that the first virtual instance connects from the first internal network to the external network via the virtual gateway, by storing address information of the virtual gateway together with a master image of an OS (Operating System) to be loaded by the first virtual instance. - A network configuration method performed by a management computer connected to: a first physical server on which a plurality of virtual instances and a virtual switch controlling a network among the virtual instances operate; a second physical server on which a physical instance operates; and a physical switch that is connected to the first physical server and the second physical server and controls a network between the first physical server and the second physical server,
the method comprising:
managing virtual switch management information indicating, for each of the plurality of virtual instances, a correspondence with an internal network to which the virtual instance connects;
managing physical switch management information indicating a correspondence between the physical instance and an internal network to which the physical instance connects; and
upon receiving a first instance creation request for creating a first virtual instance that connects to the same internal network as the physical instance:
creating the first virtual instance on the first physical server;
identifying, by referring to the physical switch management information, a first internal network to which the physical instance connects; and
configuring the virtual switch and the physical switch so that the first virtual instance is connected to the first internal network. - The network configuration method according to claim 8,
wherein the second physical server has a virtualization function that logically partitions physical resources of the second physical server and manages them as a plurality of virtual machines, and each of the virtual switch and the plurality of virtual instances operates on one of the plurality of virtual machines. - The network configuration method according to claim 9,
wherein, upon receiving a second instance creation request for creating a first physical instance that connects to the same internal network as a second virtual instance of the plurality of virtual instances, the method:
creates the first physical instance on a third physical server connected to the physical switch;
identifies, by referring to the virtual switch management information, a second internal network to which the second virtual instance connects; and
configures the physical switch so that the first physical instance is connected to the second internal network. - The network configuration method according to claim 9,
wherein the second physical server has a virtual gateway that operates on a first virtual machine of the plurality of virtual machines and is connected to the first internal network and to an external network under control of the virtual switch and the physical switch,
the management computer is further connected to a physical gateway that is connected to the first internal network and to the external network under control of the physical switch, and
in response to the first instance creation request,
the first virtual instance is configured so that the first virtual instance connects from the first internal network to the external network via the virtual gateway. - The network configuration method according to claim 11,
wherein the physical instance is configured so that the physical instance connects from the first internal network to the external network via the physical gateway. - The network configuration method according to claim 11,
wherein network address information of the physical gateway and network address information of the virtual gateway are managed, and
the first virtual instance is configured so that the first virtual instance connects from the first internal network to the external network via the virtual gateway, by notifying the physical instance and the first virtual instance of the corresponding network address information via a DHCP (Dynamic Host Configuration Protocol) server. - The network configuration method according to claim 12,
wherein the management computer is further connected to a storage apparatus connected to the first physical server and the second physical server,
network address information of the physical gateway and network address information of the virtual gateway are managed, and
the first virtual instance is configured so that the first virtual instance connects from the first internal network to the external network via the virtual gateway, by storing address information of the virtual gateway together with a master image of an OS (Operating System) to be loaded by the first virtual instance. - The network configuration method according to claim 13,
wherein, for access by a client connected to the external network, via the physical gateway or the virtual gateway, to the virtual instance or the physical instance connected to the first internal network,
whether the access passes through the physical gateway or the virtual gateway is set based on a ratio between the number of physical instances and the number of virtual instances connected to the first internal network.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/766,228 US9575798B2 (en) | 2013-02-25 | 2013-02-25 | Method of managing tenant network configuration in environment where virtual server and non-virtual server coexist |
JP2015501211A JP5953421B2 (ja) | 2013-02-25 | 2013-02-25 | 仮想サーバおよび非仮想サーバ混在環境におけるテナントネットワーク構成の管理方法 |
PCT/JP2013/054655 WO2014128948A1 (ja) | 2013-02-25 | 2013-02-25 | 仮想サーバおよび非仮想サーバ混在環境におけるテナントネットワーク構成の管理方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/054655 WO2014128948A1 (ja) | 2013-02-25 | 2013-02-25 | 仮想サーバおよび非仮想サーバ混在環境におけるテナントネットワーク構成の管理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014128948A1 true WO2014128948A1 (ja) | 2014-08-28 |
Family
ID=51390771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/054655 WO2014128948A1 (ja) | 2013-02-25 | 2013-02-25 | 仮想サーバおよび非仮想サーバ混在環境におけるテナントネットワーク構成の管理方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9575798B2 (ja) |
JP (1) | JP5953421B2 (ja) |
WO (1) | WO2014128948A1 (ja) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009232207A (ja) * | 2008-03-24 | 2009-10-08 | Hitachi Ltd | ネットワークスイッチ装置、サーバシステム及びサーバシステムにおけるサーバ移送方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003023444A (ja) | 2001-07-06 | 2003-01-24 | Fujitsu Ltd | 仮想ルータを利用した動的な負荷分散システム |
US20050198303A1 (en) * | 2004-01-02 | 2005-09-08 | Robert Knauerhase | Dynamic virtual machine service provider allocation |
GB2459433B (en) * | 2008-03-07 | 2012-06-06 | Hewlett Packard Development Co | Distributed network connection policy management |
KR101303718B1 (ko) * | 2009-02-27 | 2013-09-04 | 브로드콤 코포레이션 | 가상 머신 네트워킹을 위한 방법 및 시스템 |
JP4780237B2 (ja) * | 2010-04-26 | 2011-09-28 | 株式会社日立製作所 | 障害回復方法 |
US8407366B2 (en) * | 2010-05-14 | 2013-03-26 | Microsoft Corporation | Interconnecting members of a virtual network |
JP2012182605A (ja) | 2011-03-01 | 2012-09-20 | Hitachi Ltd | ネットワーク制御システム及び管理サーバ |
EP2568672A1 (en) * | 2011-08-24 | 2013-03-13 | Alcatel Lucent | Method for managing network resources within a plurality of datacenters |
- 2013-02-25: WO application PCT/JP2013/054655 filed (published as WO2014128948A1), status: active, Application Filing
- 2013-02-25: JP application JP2015501211A filed (granted as JP5953421B2), status: not active, Expired - Fee Related
- 2013-02-25: US application US14/766,228 filed (granted as US9575798B2), status: active
Non-Patent Citations (1)
Title |
---|
SERDAR CABUK ET AL.: "Towards automated security policy enforcement in multi-tenant virtual data centers", JOURNAL OF COMPUTER SECURITY, vol. 18, no. 1, January 2010 (2010-01-01), pages 89 - 121, Retrieved from the Internet <URL:http://www.sirrix.com/media/downloads/59725.pdf> [retrieved on 20130408] * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014182544A (ja) * | 2013-03-19 | 2014-09-29 | Fujitsu Ltd | 監視装置,情報処理システム,監視方法および監視プログラム |
US9444704B2 (en) * | 2013-05-20 | 2016-09-13 | Hitachi, Ltd. | Method for controlling monitoring items, management computer, and computer system in cloud system where virtual environment and non-virtual environment are mixed |
US20180041388A1 (en) * | 2015-03-13 | 2018-02-08 | Koninklijke Kpn N.V. | Method and Control System for Controlling Provisioning of a Service in a Network |
US11888683B2 (en) * | 2015-03-13 | 2024-01-30 | Koninklijke Kpn N.V. | Method and control system for controlling provisioning of a service in a network |
WO2017154163A1 (ja) * | 2016-03-10 | 2017-09-14 | 株式会社日立製作所 | 計算機システム、ゲートウェイ装置の制御方法、および記録媒体 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2014128948A1 (ja) | 2017-02-02 |
US20150363221A1 (en) | 2015-12-17 |
US9575798B2 (en) | 2017-02-21 |
JP5953421B2 (ja) | 2016-07-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13875914 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2015501211 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 14766228 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13875914 Country of ref document: EP Kind code of ref document: A1 |