JP5757552B2 - Computer system, controller, service providing server, and load distribution method - Google Patents

Computer system, controller, service providing server, and load distribution method

Info

Publication number
JP5757552B2
Authority
JP
Japan
Prior art keywords
server
load
controller
flow entry
switch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2010035322A
Other languages
Japanese (ja)
Other versions
JP2011170718A (en)
Inventor
高橋 秀行
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社
Priority to JP2010035322A
Publication of JP2011170718A
Application granted
Publication of JP5757552B2
Status: Active
Anticipated expiration

Description

  The present invention relates to a computer system and a load distribution method, and more particularly to a computer system that performs load distribution using OpenFlow technology.
  In computer networks, a technique (OpenFlow) for centrally controlling the transfer operation of each switch from an external controller has been proposed by the OpenFlow Consortium (see Non-Patent Document 1). A network switch compatible with this technology (hereinafter referred to as an OpenFlow switch (OFS)) holds detailed match information such as protocol type and port number in a flow table, and can control traffic on a per-flow basis and collect statistical information.
  The flow table held by the OFS is set by a controller provided separately from the OFS (hereinafter referred to as an OpenFlow controller (OFC)). The OFC sets the communication paths between nodes and the transfer (relay) operations of the OFSs on those paths. In doing so, the OFC sets, in the flow table held by the OFS, a flow entry in which a rule specifying a flow (packet data) is associated with an action defining the processing for that flow. The contents of the entries set in the flow table are defined, for example, in Non-Patent Document 1.
  The OFS on the communication path determines the transfer destination of the received packet data according to the flow entry set by the OFC, and performs transfer processing. As a result, a node on the network can transmit and receive packet data to and from other nodes using the communication path set by the OFC. That is, in a computer system using OpenFlow, communication of the entire system can be centrally controlled and managed by OFC provided separately from OFS that performs transfer processing.
  The OFC calculates a communication path and updates the flow tables of the OFSs on that path in response to a request from an OFS. Specifically, when an OFS receives packet data not defined in its own flow table, it notifies the OFC of the packet data. Based on the header information of the notified packet data, the OFC identifies the transfer source and transfer destination, sets the communication path, generates flow entries (rule + action) for the OFSs on the path, and updates the flow table of each OFS.
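The packet-in handling just described can be pictured with a short sketch. The controller interface below (`compute_path`, `install_flow_entry`) is a hypothetical simplification for illustration only, not the OpenFlow wire protocol or a specific product API.

```python
# Sketch of first-packet (packet-in) handling by an OpenFlow-style controller.
# compute_path() and install_flow_entry() are assumed helpers, not real APIs.

def handle_first_packet(ofc, switch_id, header):
    """Called when a switch reports a packet that matched no flow entry."""
    src, dst = header["ip_src"], header["ip_dst"]
    path = ofc.compute_path(src, dst)              # [(switch_id, out_port), ...]
    for sw, out_port in path:
        entry = {
            "rule":   {"ip_src": src, "ip_dst": dst},   # match condition
            "action": {"output": out_port},             # relay toward the next hop
        }
        ofc.install_flow_entry(sw, entry)          # update that switch's flow table
    # the packet that triggered the notification is then forwarded along the path
```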
  On the other hand, there is a server virtualization technology that makes a plurality of nodes appear as one node for the purpose of improving processing capability and fault tolerance. When a plurality of servers are virtualized as one server, that virtual server can be accessed by a plurality of clients, and the load of those accesses can be distributed among the servers. However, in order to prevent the load from concentrating on any one of the servers (physical servers) constituting the virtual server, some means is needed to decide which server each client connects to.
  As a technique for realizing load distribution over a plurality of servers, a load distribution method using the round robin function of a DNS (Domain Name System) server is known. With the round robin function, a DNS server assigns a plurality of IP addresses to a single host name (FQDN: Fully Qualified Domain Name) and, in response to each name resolution request from a DNS client, returns one of those IP addresses, selected in sequence. As a result, a single name refers to a plurality of servers from the client's point of view, and load distribution is realized.
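As a toy illustration of this round robin behavior (the host name and addresses below are made up):

```python
from itertools import cycle

# One FQDN maps to a pool of addresses; each name resolution request
# receives the next address in the pool, wrapping around at the end.
class RoundRobinDNS:
    def __init__(self, records):
        self._pools = {name: cycle(addrs) for name, addrs in records.items()}

    def resolve(self, fqdn):
        return next(self._pools[fqdn])

dns = RoundRobinDNS({"vserver.example.com": ["10.0.0.1", "10.0.0.2"]})
print(dns.resolve("vserver.example.com"))  # 10.0.0.1
print(dns.resolve("vserver.example.com"))  # 10.0.0.2
print(dns.resolve("vserver.example.com"))  # 10.0.0.1 again
```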
  Further, for example, Japanese Patent Application Laid-Open No. 2009-259206 describes a system that distributes loads between virtual servers by setting whether or not FQDNs can be distributed according to the load status of the virtual servers (see Patent Document 1).
JP2009-259206A
OpenFlow Switch Specification Version 0.9.0 (Wire Protocol 0x98) July 20, 2009
  In a system using OpenFlow technology, the switch operation can be decided based on the IP address, MAC address, port number, VLAN ID, and the like of the source and destination of a received packet. Rules and actions can therefore be defined in fine detail, but the finer the control, the more complicated the processing becomes.
  Meanwhile, load balancing methods using OpenFlow technology are beginning to be studied. For example, the OFC can realize load distribution by dynamically changing the flow entries (rule + action) of the OFSs as the load observed at the OFSs increases. In that case, the update frequency of the OFS flow entries is expected to rise as the load grows or becomes biased. In a system using OpenFlow technology, a single OFC centrally controls the operation of a plurality of OFSs, so the processing load on the OFC increases as the flow entry update frequency increases. Furthermore, as described above, realizing load distribution by setting fine-grained flow entries requires complicated processing, so the processing load on the OFC is expected to increase and cause processing delays.
  Therefore, an object of the present invention is to realize load distribution using OpenFlow technology while suppressing the increase in the processing load of the OpenFlow controller.
  In order to solve the above problems, the present invention employs the means described below. In the description of the technical matters constituting these means, the numbers and symbols used in [Mode for Carrying Out the Invention] are added in order to clarify the correspondence between the description in [Claims] and the description in [Mode for Carrying Out the Invention]. However, these numbers and symbols should not be used to limit the technical scope of the invention described in [Claims].
  The computer system according to the present invention includes: a controller (1) that sets flow entries for switches (4i) on a communication path; switches (4i) that relay received packets according to the flow entries set by the controller (1); a plurality of service providing servers (31 to 3m) that provide services to a plurality of client terminals (21 to 2n) connected to them via the switches (4i); and a DNS (Domain Name System) server (5) that performs load distribution between the plurality of client terminals (21 to 2n) and the plurality of service providing servers (31 to 3m) by a round robin function. Each of the plurality of service providing servers (31 to 3m) monitors its own load status and issues a load distribution request to the controller (1) when it determines that its load is equal to or greater than a threshold. The controller (1) changes the flow entries set in the switches (4i) in response to the load distribution request.
  The load distribution method according to the present invention includes: a step in which the controller (1) sets flow entries for the switches (4i) on a communication path; a step in which the switches (4i) relay received packets according to the flow entries set by the controller (1); a step in which each of the plurality of service providing servers (31 to 3m) provides services to the plurality of client terminals (21 to 2n) connected to it via the switches (4i); a step in which the DNS server (5) performs load distribution between the plurality of client terminals (21 to 2n) and the plurality of service providing servers (31 to 3m) by a round robin function; a step in which each of the plurality of service providing servers (31 to 3m) monitors its own load status; a step in which each of the plurality of service providing servers (31 to 3m) issues a load distribution request to the controller (1) when it determines that its load is equal to or greater than a threshold; and a step in which the controller (1) changes the flow entries set in the switches (4i) in response to the load distribution request.
  According to the present invention, load distribution using OpenFlow technology can be realized while suppressing an increase in the processing load of the OpenFlow controller.
FIG. 1 is a diagram showing the configuration of an embodiment of a computer system according to the present invention.
FIG. 2 is a diagram showing the configuration of an embodiment of a service providing server according to the present invention.
FIG. 3 is a diagram showing the configuration of an embodiment of an OpenFlow controller according to the present invention.
FIG. 4 is a diagram showing an example of a flow table held by the OpenFlow controller according to the present invention.
FIG. 5 is a diagram showing an example of topology information held by the OpenFlow controller according to the present invention.
FIG. 6 is a diagram showing an example of communication path information held by the OpenFlow controller according to the present invention.
FIG. 7 is a diagram showing the configuration of an embodiment of an OpenFlow switch according to the present invention.
FIG. 8 is a diagram illustrating an example of a flow table held by the OpenFlow switch.
FIG. 9 is a diagram for explaining the OpenFlow control according to the present invention.
FIG. 10A is a sequence diagram showing an example of a load distribution operation in the embodiment of the computer system according to the present invention.
FIG. 10B is a sequence diagram showing an example of a load distribution operation in the embodiment of the computer system according to the present invention.
FIG. 11A is a diagram showing an example of a flow table set in the OpenFlow switch according to the present invention.
FIG. 11B is a diagram showing an example of a flow table updated by the OpenFlow controller according to the present invention.
FIG. 12 is a sequence diagram showing an example of a packet transfer operation after load distribution in the embodiment of the computer system according to the present invention.
FIG. 13 is a sequence diagram illustrating an example of a communication path return operation when the load situation improves.
(Overview)
In the computer system according to the present invention, load distribution is performed by the DNS round robin function during normal operation. When load concentrates on one server, the OpenFlow controller changes the flow tables so that clients are directed to a different server. This prevents the kind of load concentration that the round robin load distribution method cannot handle, while keeping down the processing load that communication path changes place on the OpenFlow controller.
  Since OpenFlow (also called Programmable Flow) is a new technology, the processing load it imposes has not yet been studied in depth. The present invention realizes a load distribution method that uses OpenFlow technology while suppressing the processing load associated with it.
  Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same or similar reference numerals indicate the same, similar, or equivalent components.
(Computer system configuration)
The configuration of the computer system according to the present invention will be described with reference to FIG. 1. FIG. 1 is a diagram showing the configuration of an embodiment of a computer system according to the present invention. The computer system according to the present invention performs communication path construction and packet data transfer control using OpenFlow. Referring to FIG. 1, the computer system according to the present invention includes an OpenFlow controller 1 (hereinafter referred to as OFC 1), a plurality of client terminals 21 to 2n (hereinafter referred to as clients 21 to 2n), a plurality of service providing servers 31 to 3m (hereinafter referred to as servers 31 to 3m), a switch group 4 having a plurality of OpenFlow switches 41 to 4i (hereinafter referred to as OFS 41 to 4i), and a DNS server 5. Here, m, n, and i are natural numbers of 2 or more. When no distinction is needed, the clients 21 to 2n are collectively referred to as the client 2n, the servers 31 to 3m as the server 3m, and the OFS 41 to 4i as the OFS 4i.
  The clients 21 to 2n, the OFS group 4, and the DNS server 5 are connected via a LAN 6 (Local Area Network). The OFC 1 and the servers 31 to 3m are connected to the LAN 6 via the OFS group 4. The DNS server 5 performs name resolution for the nodes (clients 2n and servers 3m) connected to the LAN 6. Specifically, the DNS server 5 has a round robin name resolution function that returns a plurality of IP addresses for one host name. When a name resolution inquiry arrives from a client 2n, the DNS server 5 returns one of the pooled IP addresses, selecting them in order. The DNS server 5 according to the present invention preferably has an ordinary round robin function.
  The client 2n is a computer device including a CPU, a network interface (I / F), and a memory (not shown), and communicates with the server 3m by executing a program in the memory.
  The server 3m is a computer device that includes a CPU 301, a network I/F 302, and a memory 303. The memory 303 stores a service providing program 310 and a program 320 for notifying the flow control server. The server 3m realizes the function of the service providing unit 311 shown in FIG. 2 by having the CPU 301 execute the service providing program 310.
  The service providing unit 311 controls the network I/F 302 to communicate with the client 2n and other servers 3m. Communication with the client 2n and the other servers 3m is performed via the switch group 4. The service providing unit 311 provides business services such as a database and a file sharing function to the client 2n. Depending on the service content provided by the service providing unit 311, the server 3m realizes a function exemplified by one of a Web server, a file server, and an application server. For example, when the server 3m functions as a Web server, an HTML document or image data (not shown) in the memory 303 is transferred to the client 2n in accordance with a request from the client 2n.
  The service providing unit 311 can execute the same processing as that of the other server 3m for the accessing client by communicating with the other server 3m.
  The server 3m realizes the function of the load monitoring unit 312 shown in FIG. 2 by having the CPU 301 execute the program 320 for notifying the flow control server. The load monitoring unit 312 monitors the load caused by accesses from the clients 2n. The load monitoring unit 312 also determines whether or not the load corresponding to those accesses is equal to or greater than a preset reference value (threshold). This determination may be performed at a predetermined cycle, at arbitrary times, or every time a client accesses the server. When the load is equal to or greater than the threshold, the load monitoring unit 312 identifies the clients 2n responsible for the load and notifies the OFC 1.
  For example, the load monitoring unit 312 checks the number of simultaneous connections to the server 3m at a predetermined cycle and, when the number of simultaneous connections exceeds the reference value, notifies the OFC 1 of the IP addresses of the clients 2n that are connected at the time of the determination and issues a load distribution request. Alternatively, the load monitoring unit 312 checks the number of simultaneous connections to the server 3m each time an access occurs and, when the number of simultaneous connections reaches the reference value, notifies the OFC 1 of the IP addresses of the clients 2n connected at that time and issues a load distribution request.
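A minimal sketch of this monitoring loop is shown below; `current_client_ips()` and `notify_ofc()` are assumed placeholders for the server's connection bookkeeping and the server-to-controller message, whose concrete form the text does not fix.

```python
import time

MAX_CONNECTIONS = 100   # reference value (threshold); deployment-specific

# Sketch of the load monitoring unit: check the number of simultaneous
# connections periodically and issue a load distribution request, together
# with the addresses of the currently connected clients, when the threshold
# is reached.
def monitor_load(server, notify_ofc, interval=5.0):
    while True:
        clients = server.current_client_ips()      # IPs connected right now
        if len(clients) >= MAX_CONNECTIONS:
            notify_ofc({"request": "load_distribution",
                        "server_ip": server.ip,
                        "connected_clients": clients})
        time.sleep(interval)
```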
  The monitoring and determination of the load status is not limited to monitoring the number of client connections, and the load on the CPU 301 may be monitored to determine whether or not the load has exceeded a reference value.
  A plurality of IP addresses “1” to “k” (k is an arbitrary number) are set in the network I / F 302 of the server 3m. It is assumed that there is no particular relationship between the number of IP addresses set in the network interface I / F 302 and the number of servers 3m. The servers 31 to 3m are preferably assigned the same host name (FQDN) and function as a single virtual server 10. Further, the same host name as that of the servers 31 to 3m may be given to the OFC1.
  The OFC 1 includes a switch control unit 11 that controls the communication paths and the packet transfer processing in the system using OpenFlow technology. OpenFlow is a technology in which a controller (here, the OFC 1) performs path control and node control by setting multi-layer, per-flow path information in the OFS 4i according to a routing policy (flow entry: rule + action) (for details, see Non-Patent Document 1). As a result, the path control function is separated from the routers and switches, and optimal routing and traffic management become possible through centralized control by the controller. An OFS 4i to which OpenFlow is applied handles communication as end-to-end flows, rather than in units of packets or frames like a conventional router or switch.
  The OFC 1 sets the flow entry (rule 144 + action information 145) in the flow table 403 held by the OFS 4i, thereby controlling the operation of the OFS 4i (for example, packet data relay operation).
  Details of the configuration of the OFC 1 will be described with reference to FIGS. 1 and 3. FIG. 3 is a diagram showing the configuration of the OFC 1 according to the present invention. The OFC 1 is realized by a computer including a CPU 101, a network I/F 102, and a memory 103. In the OFC 1, the CPU 101 executes a flow control program 110 stored in the memory 103, thereby realizing the functions of the switch control unit 11, the flow management unit 12, and the flow generation unit 13 shown in FIG. 3. The OFC 1 also holds a flow table 14, topology information 15, and communication path information 16 stored in the memory 103.
  The switch control unit 11 sets or deletes flow entries (rule + action) for each OFS 4i according to the flow table 14. The OFS 4i refers to the set flow entries and executes the action corresponding to the rule (for example, relaying or discarding packet data) according to the header information of a received packet. Details of the rules and actions will be described later.
  The switch control unit 11 sets, deletes, or updates a flow entry (rule + action) in the OFS 4i in response to a first packet reception notification from the OFS 4i or a load distribution request from the server 3m. Here, a first packet is packet data that does not conform to any rule 144 set in the OFS 4i. In addition, when the switch control unit 11 receives a setting return request issued by the server 3m in response to a reduction in load, it preferably restores the flow entry that was changed in response to the load distribution request to the original flow entry. To do so, it refers to setting information 146 (described later) to identify the flow entry before the change and uses it to restore the entry.
  FIG. 4 is a diagram illustrating an example of the configuration of the flow table 14 held by the OFC 1. Referring to FIG. 4, the flow table 14 associates a flow identifier 141 that specifies a flow entry, an identifier (target device 142) that identifies the OFS 4i in which the flow entry is to be set, path information 143, a rule 144, action information 145, and setting information 146 with one another. In the flow table 14, flow entries (rule 144 + action information 145) generated for all the OFSs 4i under the control of the OFC 1 are set. The flow table 14 may also define how communication is to be handled for each flow, such as QoS and encryption information.
  The rule 144 defines, for example, combinations of addresses and identifiers of layers 1 to 4 of the OSI (Open Systems Interconnection) reference model contained in the header information of TCP/IP packet data. For example, a combination of a layer 1 physical port, a layer 2 MAC address, a layer 3 IP address, a layer 4 port number, and a VLAN tag (VLAN ID), as shown in FIG. 4, is set as the rule 144. The VLAN tag may be given a priority (VLAN Priority).
  Here, identifiers such as port numbers and addresses set in the rule 144 may be specified as ranges. It is also preferable to set the rule 144 so that destination and source addresses are distinguished. For example, a range of MAC destination addresses, a range of destination port numbers identifying the connection destination application, and a range of source port numbers identifying the connection source application are set as the rule 144. Furthermore, an identifier specifying the data transfer protocol may be set as the rule 144.
  The action information 145 defines a method for processing TCP / IP packet data, for example. For example, information indicating whether or not the received packet data is to be relayed and the transmission destination in the case of relaying are set. The action information 145 may be set with information instructing to copy or discard the packet data.
  The path information 143 is information for specifying the communication path to which the flow entry (rule 144 + action information 145) is applied, and is an identifier associated with the communication path information 16 described later.
  The setting information 146 contains information ("set" or "not set") indicating whether or not the flow entry (rule 144 + action information 145) is currently set in the OFS 4i on the communication path. Since the setting information 146 is associated with the target device 142 and the path information 143, it is possible to confirm whether a flow is set for a communication path, and whether a flow is set for each OFS 4i on that path.
  The setting information 146 preferably includes information (referred to as change information) that identifies a flow entry that has been changed because the load is greater than or equal to a reference value. For example, information for associating the flow entry before the change with the flow entry after the change is suitably set as the change information.
  The flow management unit 12 attaches a flow identifier 141 to the flow entry (rule 144 + action information 145) generated by the flow generation unit 13 and records it in the storage device. At this time, the identifier of the communication path to which the flow entry is applied (path information 143) and the identifier of the OFS 4i in which the flow entry is to be set (target device 142) are attached to the flow entry (rule 144 + action information 145) and recorded.
  Further, the flow management unit 12 refers to the flow table 14, extracts the flow entry (rule 144 + action information 145) corresponding to the header information of a first packet, or to the source IP addresses notified from the server 3m, and notifies the switch control unit 11. The switch control unit 11 sets the notified flow entry (rule 144 + action information 145) in the target device 142 (OFS 4i) associated with that flow entry.
  Furthermore, the flow management unit 12 marks the setting information 146 of a flow entry (rule 144 + action information 145) that has been set by the switch control unit 11 as "set". When a flow entry is replaced by another flow entry through the load distribution processing, its setting information is changed to "not set", and the replacing flow entry is associated with the replaced flow entry.
  The flow generation unit 13 generates a flow entry (rule 144 + action information 145) in response to a first packet reception notification from the OFS 4i or a load distribution request from the server 3m.
  When generating a flow entry in response to a first packet notification, the flow generation unit 13 calculates the communication path based on the header information of the first packet notified from the OFS 4i and generates the flow entries (rule 144 + action information 145) to be set in the OFSs 4i on that path. Specifically, the flow generation unit 13 identifies, from the header information of the first packet, the client 2n that is the source of the packet data and the server 3m that is the destination, calculates the communication path using the topology information 15, and records the calculation result in the storage device as the communication path information 16. Here, the server 3m serving as the end point of the communication path, the OFSs 4i on the communication path, and the connection relationships between them are set as the communication path information 16. The flow generation unit 13 generates the flow entries (rule 144 + action information 145) to be set in the OFSs 4i on the communication path based on the communication path information 16.
  Alternatively, when generating a flow entry in response to a load distribution request, the flow generation unit 13 calculates a communication path based on the source IP addresses notified from the server 3m and on the requesting server 3m, and generates the flow entries (rule 144 + action information 145) to be set in the OFSs 4i on that path. At this time, the flow generation unit 13 generates flow entries that reduce the load on the requesting server 3m. For example, the flow generation unit 13 preferably generates flow entries (rule 144 + action information 145) that control the OFSs 4i so that part or all of the packet data addressed to the requesting server 3m is transferred to a server other than the requesting server 3m.
  More specifically, the flow generation unit 13 generates a flow entry that transfers packet data addressed to the requesting server 3m to the requesting server 3m when it originates from a client 2n having one of the notified source IP addresses, and a flow entry that transfers packet data addressed to the requesting server 3m to a server other than the requesting server 3m when it originates from a client 2n having any other IP address. Alternatively, the flow generation unit 13 may generate a flow entry that transfers packet data addressed to the requesting server 3m to another server for a predetermined period.
  Even when a flow entry is generated in response to a load distribution request, the communication path between the newly designated source client and the destination server is calculated using the topology information 15 in the same manner as described above, and is recorded in the storage device as the communication path information 16. The flow generation unit 13 generates the flow entries (rule 144 + action information 145) to be set in the OFSs 4i on the communication path based on the communication path information 16.
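The redirecting entries described in the last two paragraphs can be sketched as follows; the field names and the wildcard notation are illustrative only and do not reflect the actual flow table schema.

```python
# Sketch of the entries generated for a load distribution request: clients
# already connected to the overloaded server stay there, every other source
# addressed to it is redirected to an alternate server.
def build_redirect_entries(overloaded_ip, alternate_ip, connected_clients):
    entries = []
    for client_ip in connected_clients:
        # existing sessions keep reaching the requesting (overloaded) server
        entries.append({"rule":   {"ip_src": client_ip, "ip_dst": overloaded_ip},
                        "action": {"forward_to": overloaded_ip}})
    # any other source addressed to the overloaded server goes elsewhere
    entries.append({"rule":   {"ip_src": "*", "ip_dst": overloaded_ip},
                    "action": {"forward_to": alternate_ip}})
    return entries  # specific rules must take precedence over the wildcard
```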
  FIG. 5 is a diagram showing an example of the topology information held by the OpenFlow controller according to the present invention. The topology information 15 contains information on the connection status of the OFSs 4i, the servers 3m, and so on. Specifically, in the topology information 15, a device identifier 151 that identifies an OFS 4i or a server 3m is associated with the port number 152 and the port connection destination information 153 of that device and recorded in the storage device. The port connection destination information 153 includes a connection type (switch / node / external network) that specifies the kind of connection partner and information that specifies the connection destination (a switch ID in the case of an OFS 4i, a MAC address in the case of a host, and an external network ID in the case of an external network such as the Internet).
  FIG. 6 is a diagram showing an example of the communication path information held by the OpenFlow controller according to the present invention. The communication path information 16 is information for specifying a communication path. Specifically, in the communication path information 16, end point information 161 that specifies a server 3m or an external network interface (not shown) as an end point, passing switch information 162 that specifies the group of OFS 4i and port pairs the path passes through, and accompanying information 163 are associated with one another and recorded in the storage device. For example, when the communication path connects servers 3m, the MAC address of each server 3m is recorded as the end point information 161. The passing switch information 162 contains the identifiers of the OFSs 4i provided on the communication path between the end points indicated by the end point information 161, and may also contain information associating each OFS 4i with the flow entry (rule 144 + action information 145) set in it. The accompanying information 163 contains information on the OFSs 4i (passing switches) on the path after an end point has been changed.
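As an informal illustration of the shapes of these records (the keys are paraphrased from the description above and are not a normative schema):

```python
# One topology record (FIG. 5): a device, its ports, and what each port
# connects to.
topology_entry = {
    "device_id": "OFS41",
    "ports": {
        1: {"type": "node",     "peer": "00:11:22:33:44:55"},  # host MAC address
        2: {"type": "switch",   "peer": "OFS42"},              # neighbouring OFS
        3: {"type": "external", "peer": "internet"},           # external network ID
    },
}

# One communication path record (FIG. 6): the end points, the switches and
# output ports the path passes through, and the accompanying information
# filled in after an end point change.
communication_path = {
    "end_points":       ("client-21-MAC", "server-31-MAC"),
    "passing_switches": [("OFS41", 1), ("OFS42", 3)],   # (switch, output port)
    "accompanying":     None,
}
```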
  With the configuration described above, the OFC 1 according to the present invention generates flow entries for transferring packets in response to a first packet reception notification from an OFS 4i or a load distribution request from a server 3m, and sets those flow entries in the OFSs 4i on the calculated communication path.
  The OFC 1 may be a device different from the servers 31 to 3m or may be mounted on any of the servers 31 to 3m.
  FIG. 7 is a diagram showing the configuration of an embodiment of the OpenFlow switch according to the present invention. The OFS 4i determines the processing method (action) for a received packet according to the flow table 403 set (and updated) by the OFC 1. The OFS 4i includes a transfer processing unit 401 and a flow setting unit 402. The transfer processing unit 401 and the flow setting unit 402 may be implemented in hardware or realized by software executed by a CPU.
  A flow table 403 as shown in FIG. 8 is set in the storage device of the OFS 4i. The flow setting unit 402 sets flow entries (rule 144 + action information 145) acquired from the OFC 1 in the flow table 403. Specifically, when the header information of a received packet does not match (conform to) any rule 144 recorded in the flow table 403, the flow setting unit 402 determines that the packet data is a first packet, notifies the OFC 1 of the first packet reception, and issues a flow entry setting request.
  The flow setting unit 402 then sets the flow entry (rule 144 + action information 145) transmitted from the OFC 1 in response to the first packet notification in the flow table 403. In this way, a flow entry is set in each of the OFS 41 to 4i with the reception of a first packet as a trigger.
  When the header information of a received packet matches (conforms to) a rule 144 recorded in the flow table 403, the transfer processing unit 401 transfers the packet data to another OFS 4i or to a server 3m. Specifically, the transfer processing unit 401 identifies the action information 145 corresponding to the rule 144 that the header information of the packet data matches (conforms to), and transfers the packet data to the transfer destination node (OFS 4i or server 3m) designated by that action information 145.
  As a specific example, consider an OFS 4i in which a flow entry is set that associates the rule 144 — MAC source address (L2) in "A1 to A3", IP destination address (L3) in "B1 to B3", protocol "http", and destination port number (L4) in "C1 to C3" — with the action information 145 "relay to server 31". When packet data with MAC source address (L2) "A1", IP destination address (L3) "B2", protocol "http", and destination port number (L4) "C3" is received, the OFS 4i determines that the header information matches the rule 144 and transfers the received packet data to the server 31. On the other hand, when packet data with MAC source address (L2) "A5", IP destination address (L3) "B2", protocol "http", and destination port number (L4) "C4" is received, the OFS 4i determines that the header information does not match (conform to) the rule 144 and notifies the OFC 1 that a first packet has been received.
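The two cases of this example can be reproduced with a small sketch; the ranges "A1 to A3" and so on are modelled as Python sets purely for brevity.

```python
# One flow entry matching the example above: rule 144 (match fields) plus
# action information 145 ("relay to server 31").
FLOW_TABLE = [
    {"rule": {"mac_src": {"A1", "A2", "A3"},
              "ip_dst":  {"B1", "B2", "B3"},
              "proto":   "http",
              "dst_port": {"C1", "C2", "C3"}},
     "action": "relay to server 31"},
]

def handle_packet(header, notify_first_packet):
    for entry in FLOW_TABLE:
        r = entry["rule"]
        if (header["mac_src"] in r["mac_src"] and header["ip_dst"] in r["ip_dst"]
                and header["proto"] == r["proto"] and header["dst_port"] in r["dst_port"]):
            return entry["action"]          # header conforms to the rule: relay
    notify_first_packet(header)             # no rule matched: report a first packet
    return "await flow entry from the controller"

# Matches the rule, so it is relayed to server 31.
print(handle_packet({"mac_src": "A1", "ip_dst": "B2", "proto": "http", "dst_port": "C3"}, print))
# MAC source "A5" is outside the rule, so the OFC is notified of a first packet.
print(handle_packet({"mac_src": "A5", "ip_dst": "B2", "proto": "http", "dst_port": "C4"}, print))
```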
  With the configuration described above, the flow table of the OFS 4i according to the present invention is updated not only when a first packet is received at the OFS 4i but also when a load distribution request is issued based on the load monitoring result at the server 3m. The flow entry set in response to a load distribution request from the server 3m is preferably one that reduces the load on the requesting server 3m.
(Load distribution method)
Details of the load balancing operation in the computer system according to the present invention will be described with reference to FIGS. Hereinafter, a case where the load concentrated on the server 31 is distributed will be described as an example.
  First, as a precondition, it is assumed that the OFC 1 has set flow entries (rule 144 + action information 145) in the flow tables 403 of the OFS 41 to 4i, and that the flow entries shown in FIG. 11A are set in the flow table 403 of the OFS 41. Following the flow table 403 shown in FIG. 11A, the OFS 41 transfers packet data whose destination IP address (access destination IP address) is IP address "1" to the server 31, regardless of the source IP address (access source IP address), and transfers packet data whose destination IP address (access destination IP address) is IP address "2" to the server 32.
  10A and 10B are sequence diagrams showing an example of load distribution operation in the embodiment of the computer system according to the present invention. Referring to FIGS. 10A and 10B, first, the client 21 issues a name resolution request for the host name of the virtual server 10 to the DNS server 5 in order to access the virtual server 10 (step S101). In response to the name resolution request from the client 21, the DNS server 5 returns one pooled IP address based on the round robin rule (step S102). Here, it is assumed that the IP address “1” is returned.
  The client 21 accesses the virtual server 10 using the IP address "1" acquired from the DNS server 5 (steps S103 and S104). Specifically, the client 21 sets the IP address "1" as the destination address of the packet data addressed to the virtual server 10, sets its own IP address "A" as the source address, and transfers the packet to the OFS 41 (step S103). The OFS 41 processes the packet data according to the flow entry that matches the received packet. Here, since the destination IP address is "1", according to the flow table 403 shown in FIG. 11A the received packet data is transferred to the server 31 set as the transfer destination (step S104).
  When receiving the packet data, the server 31 performs processing according to the contents of the packet data (step S105). As for the processing contents, the service providing process corresponding to the contents of the packet data and the service providing program 310 is executed. In parallel with this, the server 31 monitors its own load status and determines whether or not the load is excessive (step S106). For example, the server 31 determines whether or not the number of simultaneous connections to itself is greater than or equal to a predetermined reference value.
  If, in step S106, the load on the server 31 (here, the number of simultaneous connections) is less than the reference value (here, the maximum number of connections) preset in the server 31, the server simply continues to wait for accesses from the clients 2n as usual (step S106: No).
  On the other hand, if, in step S106, the load on the server 31 (here, the number of simultaneous connections) is equal to or greater than the reference value (here, the maximum number of connections) preset in the server 31, the server 31 issues a load distribution request to the OFC 1 (step S106: Yes, step S107). At this time, the IP addresses of all clients connected to the server 31 at the time of the determination are notified to the OFC 1.
  In response to the load distribution request, the OFC 1 calculates a communication path that prevents clients other than those currently connected to the server 31 from subsequently reaching the server 31, and generates the flow entries (rule 144 + action information 145) to be set in the OFSs 4i on that communication path and in the OFSs 4i through which packets addressed to the server 31 pass (step S108). For example, when the notified IP addresses of the connected clients are "A" and "B", the OFC 1 creates a flow entry that transfers packet data whose source address is IP address "A" or "B" and whose destination address is IP address "1" to the server 31, and a flow entry that transfers packet data whose source address is any other address and whose destination address is IP address "1" to another server (for example, the server 32).
  The OFC 1 sets the created flow entries in the OFSs 4i on the newly calculated communication path (hereinafter, the new communication path), and deletes the flow entries for accessing the server 31 that are set in the OFSs 4i on the communication path previously used to access the server 31 (hereinafter, the old communication path) (steps S109 and S110). For example, when the path to the server 32 via the OFS 41 and the OFS 42 is calculated as the new communication path, the OFC 1 issues flow entry setting instructions to the OFS 41 and 42 on the new communication path (step S109). At this time, since the OFS 41 is also a switch on the old communication path, the OFC 1 issues an instruction to delete the flow entry set in the OFS 41 that has the server 31 as the transfer destination.
  The OFS 41 and 42 set the flow entries transmitted from the OFC 1 in their own flow tables 403 (step S110). For example, the flow table 403 of the OFS 41 is updated as shown in FIG. 11B: the flow entry of FIG. 11A whose transfer destination is the server 31 is deleted from the flow table 403 of the OFS 41, and the new flow entries shown in FIG. 11B are added. Specifically, a flow entry that transfers packet data whose source IP address (access source IP address) is IP address "A" or "B" and whose destination IP address (access destination IP address) is IP address "1" to the server 31, and a flow entry that transfers packet data whose source IP address is any other address and whose destination IP address is "1" to the OFS 42, are additionally set. On the other hand, although not shown, a flow entry that transfers packet data whose source IP address is an address other than IP address "A" or "B" and whose destination IP address is "1" to the server 32 is newly set in the flow table 403 of the OFS 42.
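Written out as data, the change of steps S108 to S110 at the OFS 41 looks roughly like this (the dictionaries paraphrase FIGS. 11A and 11B; "*" stands for "any source address"):

```python
flow_table_before = [                          # FIG. 11A (OFS 41)
    {"rule": {"ip_src": "*", "ip_dst": "1"}, "action": "forward to server 31"},
    {"rule": {"ip_src": "*", "ip_dst": "2"}, "action": "forward to server 32"},
]

flow_table_after = [                           # FIG. 11B (OFS 41)
    {"rule": {"ip_src": "A", "ip_dst": "1"}, "action": "forward to server 31"},
    {"rule": {"ip_src": "B", "ip_dst": "1"}, "action": "forward to server 31"},
    {"rule": {"ip_src": "*", "ip_dst": "1"}, "action": "forward to OFS 42"},  # on to server 32
    {"rule": {"ip_src": "*", "ip_dst": "2"}, "action": "forward to server 32"},
]
```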
  After the flow tables of the OFSs 4i have been updated, a client with IP address "A" or "B" that accesses the virtual server 10 with IP address "1" as the destination address reaches the server 31, while a client with any other IP address that accesses the virtual server 10 with IP address "1" as the destination address reaches the server 32. The packet data transfer operation after the load distribution processing by the OFC 1 will be described with reference to FIG. 12.
  When a packet destined for IP address "1" is transmitted from the currently connected client 21 (IP address "A"), the OFS 41 processes the packet data in accordance with the flow entry that matches the received packet (steps S201 and S202). Here, since the source IP address is "A" and the destination IP address is "1", according to the flow table 403 shown in FIG. 11B the received packet data is transferred to the server 31 set as the transfer destination (step S202).
  When receiving the packet data, the server 31 performs processing according to the contents of the packet data (step S203). As for the processing contents, the service providing process corresponding to the contents of the packet data and the service providing program 310 is executed. As described above, the server 31 monitors its own load status, determines whether the load is excessive, and changes the flow table 403 of the OFS 4i according to the determination result.
  On the other hand, in order to access the virtual server, the client 23 issues a name resolution request for the host name of the virtual server to the DNS server 5 (step S204). In response to the name resolution request from the client 23, the DNS server 5 returns one pooled IP address based on the round robin rule (step S205). Here, it is assumed that the IP address “1” is returned.
  The client 23 accesses the virtual server 10 using the IP address "1" acquired from the DNS server 5 (steps S206 to S208). Specifically, the client 23 sets the IP address "1" as the destination address of the packet data addressed to the virtual server 10, sets its own IP address "C" as the source address, and forwards the packet to the OFS 41 (step S206). The OFS 41 processes the packet data according to the flow entry that matches the received packet. Here, since the source IP address is "C" and the destination IP address is "1", according to the flow table 403 shown in FIG. 11B the received packet data is transferred to the OFS 42 set as the transfer destination (step S207).
  The OFS 42 processes the packet data according to the flow entry that matches the received packet (step S208). According to its own flow table 403, since the source IP address is "C" and the destination IP address is "1", the OFS 42 transfers the received packet data to the server 32 set as the transfer destination.
  When receiving the packet data, the server 32 performs processing according to the content of the packet data (step S209). As for the processing contents, the service providing process corresponding to the contents of the packet data and the service providing program 310 is executed. Although not shown, similarly to the server 31, the server 32 monitors its own load status, determines whether the load is excessive, and changes the flow table 403 of the OFS 4i according to the determination result.
  As described above, according to the present invention, load distribution is performed by the round robin function of the DNS server 5 during normal operation, and when load concentrates on a particular server, the OFC 1 updates the flow tables 403 so that the concentrated load is redistributed.
  Even when load distribution is performed using only the round robin function, the load may still become biased toward one server if a high load is generated by a small number of clients. In the present invention, such load concentration can be avoided by the communication path change processing performed by the OFC 1. That is, load control by the OFC 1 is performed only when the round robin load control of the DNS does not function sufficiently and the load is biased toward a specific server.
  Load distribution could also be performed using only the flow control function of the OFC 1, but in that case the processing load on the OFC 1 grows as load imbalance (concentration) occurs more frequently. In addition, the higher frequency of flow table changes may delay packet transfer processing. In the present invention, the frequency with which load concentrates on a single server is suppressed by the DNS round robin function, which reduces the processing load on the OFC 1.
  The OFC 1 according to the present invention can avoid load concentration on a specific server by updating the flow tables 403 of the OFSs 4i independently of the name resolution processing of the DNS server 5. The DNS server 5 therefore only needs to perform normal name resolution and requires no special configuration change. For example, in the system described in Patent Document 1, whether or not an address may be allocated is set in the DNS server, but in the present invention such a specification change is unnecessary. In other words, the OFC 1 according to the present invention can perform load distribution in cooperation with a conventional DNS server. Moreover, since the OFC 1 performs load distribution processing according to notifications from the servers 3m (that is, their load status), it can distribute load concentration that the DNS server 5 could not avoid, without communicating directly with the DNS server 5.
  In addition, since the correspondence between the number of candidate IP addresses to be returned in round robin and the number of servers 3m is unnecessary, the number of servers 3m can be increased or decreased without changing the setting of the client 2n.
  Furthermore, since load distribution is performed using a flow control function based on open flow, load distribution can be performed under more detailed conditions than in the past. More specifically, since the OFC 1 can control the flow (packet data) with a combination of layer 1 to layer 4 addresses and identifiers, load distribution can be performed on the condition of these combinations. For example, it is also possible to distribute the load concentration on the virtual server operating on the server 3m.
  When the load on the server 31 is reduced by the load distribution processing by the OFC 1, the communication path (new communication path) addressed to the IP address “1” may be changed to the old communication path before load distribution.
  FIG. 13 is a sequence diagram illustrating an example of a communication path return operation when the load situation is improved. Referring to FIG. 13, the server 31 that has issued a load distribution request with an increase in load determines whether or not the load (for example, the number of simultaneous connections) is below a reference value (threshold) (step S301). This determination is preferably performed at a predetermined cycle or arbitrarily.
  When the number of simultaneous connections to the server 31 falls below the preset reference value, the server 31 issues a setting return request to the OFC 1 (step S301: Yes, step S302). The server 31 preferably issues the setting return request when the load (for example, the number of simultaneous connections) has remained below the reference value for a predetermined period or longer, or when the load has been detected to be below the reference value a predetermined number of times. On the other hand, if the number of simultaneous connections to the server 31 is still equal to or greater than the preset reference value, the current state is maintained and the server continues to wait for accesses from the clients 2n (step S301: No).
  When the OFC 1 receives the notification that the number of simultaneous connections has fallen below the threshold (the setting return request), it restores the flow tables of the OFSs 4i that were changed in steps S108 to S110 (steps S303 to S305). Specifically, it extracts the flow entries (rule 144 + action information 145) that were set before the change from the flow table 14 and sets them in the OFS 41 and 42. For this purpose, the flow entry before the change (the flow entry corresponding to the old communication path) is preferably linked to the flow entry corresponding to the new communication path, so that the flow entry corresponding to the old communication path can be identified.
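A minimal sketch of this restoration step is given below, assuming a flow-table record layout in which the change information links each redirect entry to the entry it replaced; the record fields and the `delete_flow_entry` / `install_flow_entry` helpers are hypothetical.

```python
# Sketch of handling a setting return request: find the entries that were
# changed for the requesting server, remove the redirect entries from the
# switches, and reinstall the original (old communication path) entries.
def handle_setting_return(ofc, requesting_server_ip):
    for flow_id, record in ofc.flow_table.items():
        change = record.get("change_info")
        if change and change["redirected_from"] == requesting_server_ip:
            original = change["original_entry"]              # pre-change rule + action
            for switch in record["target_devices"]:
                ofc.delete_flow_entry(switch, record["entry"])   # drop the redirect
                ofc.install_flow_entry(switch, original)         # restore the old path
            record["setting"] = "not set"                    # mark the redirect unset
```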
  By returning the access destination (communication path) to its original state in this way, the flow entries changed by the OFC 1 are not needed while the round robin load control of the DNS server 5 is functioning sufficiently, and the default communication environment (for example, the state initially set by the user) is maintained. That is, the system can be operated and maintained in the communication environment the user intends. Even while load is being distributed by the OFC 1, the OFC 1 continues to control the communication paths and the operation of the OFSs 4i in an integrated manner, so system maintenance remains straightforward.
  The embodiment of the present invention has been described in detail above, but the specific configuration is not limited to the embodiment described above, and changes that do not depart from the gist of the present invention are included in the present invention. In the embodiment described above, the new communication path and the old communication path pass through different OFSs 4i, but they may be paths that pass through the same OFSs 4i. In that case, the access destination server of the client 2n can be changed by changing only the flow entry of the last OFS 4i on the path. As a result, the load on the OFC 1 due to flow entry update processing can be reduced even further.
  A part or all of the above embodiment can be described as in the following supplementary notes, but is not limited thereto.
(Appendix 1)
A computer system comprising:
A controller that sets a flow entry for a switch on the communication path;
A switch that relays received packets in accordance with the flow entry set by the controller;
A plurality of service providing servers that provide services to each of a plurality of client terminals connected to itself via the switch;
A DNS (Domain Name System) server that distributes load between the plurality of client terminals and the plurality of service providing servers by a round robin function;
Each of the plurality of service providing servers monitors each load situation, and when determining that the load on itself is equal to or greater than a threshold, issues a load distribution request to the controller,
The controller changes a flow entry set in the switch in response to the load distribution request.
(Appendix 2)
In the computer system according to appendix 1,
The controller sets, in the switch, a flow entry that reduces the load on the request source server of the load distribution request.
(Appendix 3)
In the computer system according to appendix 2,
The controller calculates a new communication path for accessing a service providing server different from the request source server, and generates a flow entry to be set in a switch on the new communication path.
(Appendix 4)
In the computer system according to appendix 2 or 3,
The request source server notifies the controller of the address of the client terminal connected to itself,
The controller changes the flow entry set in the switch so that packet data whose source address is an address other than the notified address reaches a service providing server other than the request source server.
(Appendix 5)
In the computer system according to appendix 4,
The controller changes the flow entry set in the switch so that packet data having the notified address as a transmission source address reaches the request source server.
(Appendix 6)
In the computer system according to any one of appendices 1 to 5,
Each of the plurality of service providing servers monitors the number of simultaneous connections to itself, and issues a load distribution request to the controller when it is determined that the number of simultaneous connections is equal to or greater than a threshold.
(Appendix 7)
In the computer system according to any one of appendices 1 to 6,
The request source server that issued the load distribution request issues a setting return request to the controller when the load falls below the threshold,
The controller returns the flow entry of the switch that was changed in response to the load distribution request to the original flow entry.
(Appendix 8)
In the computer system according to any one of appendices 1 to 7,
The plurality of service providing servers form a virtual server to which one host name is assigned,
A computer system in which a plurality of IP addresses are assigned to each of the plurality of service providing servers.
(Appendix 9)
9. A controller used in the computer system according to any one of appendices 1 to 8.
(Appendix 10)
A switch used in the computer system according to any one of appendices 1 to 8.
(Appendix 11)
A service providing server used in the computer system according to any one of appendices 1 to 8.
(Appendix 12)
A load distribution method comprising:
a step in which a controller sets a flow entry for a switch on a communication path;
a step in which the switch relays a received packet according to the flow entry set by the controller;
a step in which each of a plurality of service providing servers provides a service to each of a plurality of client terminals connected to it via the switch;
a step in which a DNS (Domain Name System) server performs load distribution between the plurality of client terminals and the plurality of service providing servers by a round robin function;
a step in which each of the plurality of service providing servers monitors its own load status;
a step in which each of the plurality of service providing servers issues a load distribution request to the controller when it determines that the load on itself is equal to or greater than a threshold; and
a step in which the controller changes a flow entry set in the switch in response to the load distribution request.
(Appendix 13)
In the load distribution method according to appendix 12,
The step of changing the flow entry includes the step of the controller setting a flow entry for reducing a load on a request source server of the load distribution request in the switch.
(Appendix 14)
In the load distribution method according to appendix 13,
The step of changing the flow entry includes:
a step in which the controller calculates a new communication path for accessing a service providing server different from the request source server; and
a step in which the controller generates a flow entry to be set in the switch on the new communication path.
(Appendix 15)
In the load distribution method according to appendix 13 or 14,
The step of changing the flow entry includes:
a step in which the request source server notifies the controller of the addresses of the client terminals connected to it; and
a step in which the controller changes the flow entry set in the switch so that packet data whose source address is an address other than the notified addresses reaches a service providing server other than the request source server.
(Appendix 16)
In the load distribution method according to appendix 15,
The step of changing the flow entry further includes a step in which the controller changes the flow entry set in the switch so that packet data whose source address is the notified address reaches the request source server.
(Appendix 17)
In the load distribution method according to any one of appendices 12 to 16,
The step of monitoring the load status includes the step of each of the plurality of service providing servers monitoring the number of simultaneous connections to itself, and
the step of issuing the load distribution request includes the step of each of the plurality of service providing servers issuing the load distribution request to the controller when determining that the number of simultaneous connections is equal to or greater than a threshold value.
(Appendix 18)
In the load distribution method according to any one of appendices 12 to 17,
The load distribution method further comprising the steps of:
the request source server that issued the load distribution request issuing a setting return request to the controller when its load falls below the threshold; and
the controller returning the flow entry of the switch that was changed in response to the load distribution request to the original flow entry.
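
For illustration only, the following Python sketch shows the server-side behaviour described in Appendices 12, 15, 17, and 18: each service providing server counts its simultaneous connections, issues a load distribution request to the controller (together with the addresses of the clients it is currently serving) when the count reaches a threshold, and issues a setting return request once the count falls back below the threshold. The controller URL, the JSON message format, and the use of psutil to count connections are assumptions made for the sketch, not part of the disclosure.

import json
import time
import urllib.request

import psutil  # third-party; used here only to count established TCP connections

CONTROLLER_URL = "http://controller.example:8080/load"  # hypothetical OFC endpoint
SERVICE_PORT = 80     # port on which this service providing server accepts clients
THRESHOLD = 100       # maximum number of simultaneous connections (Appendix 17)


def current_clients():
    """Source addresses of connections currently established to the service port."""
    conns = psutil.net_connections(kind="tcp")
    return sorted({c.raddr.ip for c in conns
                   if c.status == psutil.CONN_ESTABLISHED
                   and c.laddr and c.laddr.port == SERVICE_PORT and c.raddr})


def notify(kind, clients):
    """Send a load distribution request or a setting return request to the controller."""
    body = json.dumps({"request": kind, "clients": clients}).encode()
    req = urllib.request.Request(CONTROLLER_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


def monitor_loop(interval=5):
    overloaded = False
    while True:
        clients = current_clients()
        if not overloaded and len(clients) >= THRESHOLD:
            notify("load_distribution", clients)  # Appendices 12, 15, 17
            overloaded = True
        elif overloaded and len(clients) < THRESHOLD:
            notify("setting_return", [])          # Appendix 18
            overloaded = False
        time.sleep(interval)
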
1: Open flow controller (OFC)
21 to 2n: client terminals
31 to 3m: service providing servers
41 to 4i: open flow switches (OFS)
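
Similarly, a minimal sketch of the controller-side flow entry change of Appendices 13 to 16 (claims 2 to 4) is given below. Flow entries are modelled as plain dictionaries (rule, action, priority), which is an assumption of the sketch; in an actual system the entries would be installed in the OFS through OpenFlow FLOW_MOD messages, and the alternative server and output ports would come from the controller's path calculation. All names and addresses in the example are hypothetical.

def build_redirect_entries(request_source_ip, alternative_ip, notified_clients,
                           out_port_to_alternative, out_port_to_source):
    """Return flow entries that steer new clients away from the overloaded server.

    notified_clients -- source addresses reported in the load distribution request;
    their traffic must keep reaching the request source server (Appendix 16).
    Traffic from any other source is rewritten to an alternative server (Appendix 15).
    """
    entries = []

    # Higher-priority entries: existing clients stay on the request source server.
    for client_ip in notified_clients:
        entries.append({
            "priority": 200,
            "rule": {"ip_src": client_ip, "ip_dst": request_source_ip},
            "action": {"output": out_port_to_source},
        })

    # Lower-priority catch-all: any other client addressing the overloaded server
    # has its destination rewritten so the packet reaches the alternative server.
    entries.append({
        "priority": 100,
        "rule": {"ip_dst": request_source_ip},
        "action": {"set_ip_dst": alternative_ip, "output": out_port_to_alternative},
    })
    return entries


# Example: clients 10.0.0.1 and 10.0.0.2 were reported by the overloaded server
# 192.168.1.31; new flows are redirected to 192.168.1.32.
entries = build_redirect_entries("192.168.1.31", "192.168.1.32",
                                 ["10.0.0.1", "10.0.0.2"],
                                 out_port_to_alternative=2, out_port_to_source=1)

A full implementation would also install the reverse entries that rewrite the source address of packets returning from the alternative server, and would remove these entries when the setting return request of Appendix 18 is received.
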

Claims (10)

  1. A computer system comprising:
    a controller that sets a flow entry for a switch on a communication path;
    a switch that relays received packets in accordance with the flow entry set by the controller;
    a plurality of service providing servers, each of which provides a service to each of a plurality of client terminals connected to itself via the switch; and
    a DNS (Domain Name System) server that distributes load between the plurality of client terminals and the plurality of service providing servers by a round robin function,
    wherein each of the plurality of service providing servers monitors its own load status and, when determining that its own load is equal to or greater than a threshold, issues a load distribution request to the controller, and
    the controller changes a flow entry set in the switch in response to the load distribution request.
  2. The computer system according to claim 1, wherein
    the controller sets a flow entry in the switch to reduce the load on a request source server of the load distribution request.
  3. The computer system according to claim 2, wherein
    The request source server notifies the controller of the address of the client terminal connected to itself,
    The controller changes the flow entry set in the switch so that packet data whose source address is an address other than the notified address reaches a service providing server other than the request source server.
  4. The computer system according to claim 3, wherein
    The controller changes the flow entry set in the switch so that packet data having the notified address as a transmission source address reaches the request source server.
  5. The computer system according to any one of claims 1 to 4,
    Each of the plurality of service providing servers monitors the number of simultaneous connections to itself, and issues a load distribution request to the controller when it is determined that the number of simultaneous connections is equal to or greater than a threshold.
  6. The computer system according to any one of claims 1 to 5,
    The request source server that issued the load distribution request issues a setting return request to the controller when the load falls below the threshold,
    The controller returns the flow entry of the switch that was changed in response to the load distribution request to the original flow entry.
  7. The computer system according to any one of claims 1 to 6,
    The plurality of service providing servers form a virtual server to which one host name is assigned,
    A computer system in which a plurality of IP addresses are assigned to each of the plurality of service providing servers.
  8.   A controller used in the computer system according to claim 1.
  9. A service providing server used in the computer system according to any one of claims 1 to 7.
  10. A load distribution method comprising:
    a controller setting a flow entry for a switch on a communication path;
    the switch relaying received packets in accordance with the flow entry set by the controller;
    each of a plurality of service providing servers providing a service to each of a plurality of client terminals connected to itself via the switch;
    a DNS (Domain Name System) server performing load distribution between the plurality of client terminals and the plurality of service providing servers by a round robin function;
    each of the plurality of service providing servers monitoring its own load status;
    each of the plurality of service providing servers issuing a load distribution request to the controller when determining that its own load is equal to or greater than a threshold; and
    the controller changing a flow entry set in the switch in response to the load distribution request.
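
As a side note on claim 7 (Appendix 8), the round-robin arrangement assumed there is the ordinary DNS one: the virtual server is represented by a single host name that resolves to the addresses of the individual service providing servers, which the DNS server returns in rotating order. The Python snippet below merely checks such a setup from a client; the host name and addresses are made up for illustration.

import socket

VIRTUAL_HOST = "service.example.com"  # hypothetical host name of the virtual server

# getaddrinfo returns one entry per A record; a round-robin DNS server rotates their order
# between successive queries, spreading new clients across the service providing servers.
addresses = [info[4][0] for info in
             socket.getaddrinfo(VIRTUAL_HOST, 80, proto=socket.IPPROTO_TCP)]
print(addresses)  # e.g. ['192.168.1.31', '192.168.1.32', '192.168.1.33']
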
JP2010035322A 2010-02-19 2010-02-19 Computer system, controller, service providing server, and load distribution method Active JP5757552B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2010035322A JP5757552B2 (en) 2010-02-19 2010-02-19 Computer system, controller, service providing server, and load distribution method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010035322A JP5757552B2 (en) 2010-02-19 2010-02-19 Computer system, controller, service providing server, and load distribution method

Publications (2)

Publication Number Publication Date
JP2011170718A JP2011170718A (en) 2011-09-01
JP5757552B2 true JP5757552B2 (en) 2015-07-29

Family

ID=44684758

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2010035322A Active JP5757552B2 (en) 2010-02-19 2010-02-19 Computer system, controller, service providing server, and load distribution method

Country Status (1)

Country Link
JP (1) JP5757552B2 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2811701B1 (en) 2012-02-02 2019-07-24 Nec Corporation Controller, load-balancing method, computer program product and computer system
CN102594697B (en) * 2012-02-21 2015-07-22 华为技术有限公司 Load balancing method and device
US9521586B2 (en) 2012-03-02 2016-12-13 Ntt Docomo, Inc. Mobile communication system, communication system, node, flow-control network, and communication-control method
JP5808700B2 (en) * 2012-03-05 2015-11-10 株式会社Nttドコモ Communication control device, communication control system, virtualization server management device, switch device, and communication control method
CN104160665B (en) * 2012-03-08 2017-03-08 日本电气株式会社 Network system, controller and load-distribution method
US20150063361A1 (en) * 2012-03-28 2015-03-05 Nec Corporation Computer system and communication route changing method
US9549413B2 (en) 2012-03-30 2017-01-17 Nec Corporation Control apparatus, communication apparatus, communication method and program
US9967177B2 (en) 2012-05-31 2018-05-08 Nec Corporation Control apparatus, communication system, switch control method and program
WO2013183664A1 (en) 2012-06-06 2013-12-12 日本電気株式会社 Switch device, vlan configuration and management method, and program
WO2013183231A1 (en) * 2012-06-06 2013-12-12 日本電気株式会社 Communication system, communication control method, communication relay system, and communication relay control method
EP2966813A4 (en) 2013-03-06 2016-09-14 Nec Corp Communication system, switch, control device, packet processing method, and program
CN108646992B (en) 2013-11-07 2021-06-08 精工爱普生株式会社 Printing control system
US9882814B2 (en) * 2014-09-25 2018-01-30 Intel Corporation Technologies for bridging between coarse-grained and fine-grained load balancing
WO2016082169A1 (en) 2014-11-28 2016-06-02 华为技术有限公司 Memory access method, switch and multi-processor system
JP2016163085A (en) * 2015-02-27 2016-09-05 アラクサラネットワークス株式会社 Communication device
US10462059B2 (en) 2016-10-19 2019-10-29 Intel Corporation Hash table entries insertion method and apparatus using virtual buckets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004350078A (en) * 2003-05-23 2004-12-09 Fujitsu Ltd Line distribution transmission system

Also Published As

Publication number Publication date
JP2011170718A (en) 2011-09-01

Similar Documents

Publication Publication Date Title
JP5757552B2 (en) Computer system, controller, service providing server, and load distribution method
US20180013626A1 (en) Information system, control server, virtual network management method, and program
US9185031B2 (en) Routing control system for L3VPN service network
JP5944537B2 (en) Communication path management method
US8923296B2 (en) System and methods for managing network packet forwarding with a controller
US9042234B1 (en) Systems and methods for efficient network traffic forwarding
US9379975B2 (en) Communication control system, control server, forwarding node, communication control method, and communication control program
US9386085B2 (en) Techniques for providing scalable application delivery controller services
JP5648926B2 (en) Network system, controller, and network control method
US20150019756A1 (en) Computer system and virtual network visualization method
JP5488979B2 (en) Computer system, controller, switch, and communication method
US20170208005A1 (en) Flow-Based Load Balancing
US8787388B1 (en) System and methods for forwarding packets through a network
JP2011160041A (en) Front end system and front end processing method
JP5713101B2 (en) Control device, communication system, communication method, and communication program
JP5861772B2 (en) Network appliance redundancy system, control device, network appliance redundancy method and program
JP2014161098A (en) Communication system, node, packet transfer method and program
JP5870995B2 (en) Communication system, control device, computer, node control method and program
JP5747997B2 (en) Control device, communication system, virtual network management method and program
Khoshbakht et al. SDTE: Software defined traffic engineering for improving data center network utilization
JP2015046936A (en) Communication system, control device, processing rule setting method, and program
KR20210016802A (en) Method for optimizing flow table for network service based on server-client in software defined networking environment and sdn switch thereofor
JP2017158103A (en) Communication management device, communication system, communication management method and program
Surampalli et al. Cloud Server with OpenFlow: Load Balancing
CN110300073A (en) Cascade target selecting method, polyplant and the storage medium of port

Legal Events

Date Code Title Description
2013-01-09 A621 Written request for application examination (Free format text: JAPANESE INTERMEDIATE CODE: A621)
2013-11-18 A977 Report on retrieval (Free format text: JAPANESE INTERMEDIATE CODE: A971007)
2013-11-22 A131 Notification of reasons for refusal (Free format text: JAPANESE INTERMEDIATE CODE: A131)
2014-02-04 A131 Notification of reasons for refusal (Free format text: JAPANESE INTERMEDIATE CODE: A131)
2014-05-02 A02 Decision of refusal (Free format text: JAPANESE INTERMEDIATE CODE: A02)
2015-05-28 A61 First payment of annual fees (during grant procedure) (Free format text: JAPANESE INTERMEDIATE CODE: A61)
R150 Certificate of patent (=grant) or registration of utility model (Ref document number: 5757552; Country of ref document: JP; Free format text: JAPANESE INTERMEDIATE CODE: R150)