WO2014199924A1 - Control device, communication system, and method and program for controlling a relay device - Google Patents

Control device, communication system, and method and program for controlling a relay device

Info

Publication number
WO2014199924A1
Authority
WO
WIPO (PCT)
Prior art keywords
multicast
flow
flow processing
processing device
multicast group
Prior art date
Application number
PCT/JP2014/065122
Other languages
English (en)
Japanese (ja)
Inventor
直之 岩下
應好 坪内
Original Assignee
日本電気株式会社
Priority date
Filing date
Publication date
Application filed by 日本電気株式会社
Publication of WO2014199924A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/64: Hybrid switching systems
    • H04L 12/6418: Hybrid transport

Definitions

  • The present invention is based upon and claims the benefit of the priority of Japanese Patent Application No. 2013-121952 (filed on June 10, 2013), the entire contents of which are incorporated herein by reference.
  • the present invention relates to a control device, a communication system, a relay device control method, and a program, and more particularly, to a control device, a communication system, a relay device control method, and a program that realize multicast.
  • Non-Patent Document 1 proposes a technique called OpenFlow that realizes a centralized control type network using switches called OpenFlow switches and OpenFlow controllers that centrally control these switches.
  • OpenFlow captures communication as an end-to-end flow and performs path control, failure recovery, load balancing, and optimization on a per-flow basis.
  • The OpenFlow switch specified in Non-Patent Document 2 includes a secure channel for communication with the OpenFlow controller and operates according to a flow table to which entries are added or rewritten as appropriate by the OpenFlow controller. In the flow table, a set of match conditions (Match Fields), flow statistical information (Counters), and instructions (Instructions) defining the processing contents is defined for each flow (see "5.2 Flow Table" in Non-Patent Document 2).
  • Upon receiving a packet, the OpenFlow switch searches the flow table for an entry whose match condition (see "5.3 Matching" in Non-Patent Document 2) matches the header information of the received packet. If a matching entry is found, the OpenFlow switch updates the flow statistical information (Counters) and applies the processing specified in the Instructions field of the entry to the received packet (for example, transmission from a designated port, flooding, or discarding). If no matching entry is found, the OpenFlow switch transmits to the OpenFlow controller, via the secure channel, an entry setting request, that is, a request for control information for processing the received packet (a Packet-In message). The OpenFlow switch then receives a flow entry in which the processing contents are defined and updates its flow table. In this way, the OpenFlow switch forwards packets using the entries stored in the flow table as control information.
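The lookup-then-Packet-In behaviour described above can be illustrated with a minimal sketch. This is not the OpenFlow protocol API; match fields, counters, and actions are modelled with plain Python dictionaries, and the class and callback names are assumptions for illustration only.

```python
# Minimal sketch of flow-table matching with a Packet-In fallback.
class FlowEntry:
    def __init__(self, match, actions, priority=0):
        self.match = match          # e.g. {"in_port": 1, "eth_dst": "ff:ff:ff:ff:ff:ff"}
        self.actions = actions      # e.g. [("output", 2)]
        self.priority = priority
        self.packet_count = 0       # flow statistics (Counters)

    def matches(self, pkt):
        # A field matches when it is absent from the match (wildcard) or equal.
        return all(pkt.get(k) == v for k, v in self.match.items())

class Switch:
    def __init__(self, send_packet_in):
        self.table = []                     # flow table
        self.send_packet_in = send_packet_in

    def receive(self, pkt):
        # The highest-priority matching entry wins.
        candidates = [e for e in self.table if e.matches(pkt)]
        if candidates:
            entry = max(candidates, key=lambda e: e.priority)
            entry.packet_count += 1         # update statistics
            return entry.actions            # forward / flood / drop, etc.
        # No match: ask the controller for control information (Packet-In).
        self.send_packet_in(pkt)
        return []

if __name__ == "__main__":
    sw = Switch(send_packet_in=lambda pkt: print("Packet-In:", pkt))
    sw.receive({"in_port": 1, "eth_dst": "00:00:00:00:00:02"})          # -> Packet-In
    sw.table.append(FlowEntry({"in_port": 1}, [("output", 2)], priority=10))
    print(sw.receive({"in_port": 1, "eth_dst": "00:00:00:00:00:02"}))   # -> [("output", 2)]
```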
  • Patent Document 1 discloses a method of configuring a virtual network obtained by logically dividing a physical network using the centralized control type network.
  • IGMP: Internet Group Management Protocol
  • MLD: Multicast Listener Discovery
  • Non-Patent Document 2 describes that an OpenFlow switch performs multicasting and broadcasting by defining a group of group type "all" (see "5.6 Group Table" in Non-Patent Document 2).
  • Patent Document 1 and Non-Patent Documents 1 and 2 do not describe how multicast groups should be managed or how flow entries or group table entries should be created. If, for example, an exhaustive route from the sender to every receiver were calculated and control information (for example, the flow entries of Non-Patent Document 2) were set every time a change occurred in a multicast group, problems would arise: there is a limit to the number of pieces of control information that an individual flow processing device (for example, the OpenFlow switch of Non-Patent Document 2) can hold, and the increase in control information also increases the load on the control device (for example, the OpenFlow controller of Non-Patent Document 2). A multicast realization method that takes these points into consideration is therefore required.
  • It is an object of the present invention to provide a control device, a communication system, a relay device control method, and a program for realizing multicast that can suppress the increase in the resources used by, and the load on, the flow processing devices and the control device in such a centrally controlled network.
  • According to a first aspect, there is provided a control device comprising: a sender management unit that manages a transmission node of an IP (Internet Protocol) multicast group on a virtual network based on a notification from a subordinate flow processing device; a receiver management unit that manages a receiving node of the IP multicast group on the virtual network based on a notification from the subordinate flow processing device; a route calculation unit that calculates an IP multicast route for an IP multicast group in which at least one pair of a transmitting node and a receiving node exists; and a device control unit that sets, in the flow processing devices on the calculated route, control information for transferring the IP multicast along the route.
  • According to a second aspect, there is provided a communication system including the control device described above and flow processing devices that operate according to the control information set by the control device.
  • According to a third aspect, there is provided a method of controlling flow processing devices, wherein the control device that controls the flow processing devices: manages a transmission node of an IP multicast group on a virtual network based on a notification from a subordinate flow processing device; manages a receiving node of the IP multicast group on the virtual network based on a notification from the subordinate flow processing device; calculates an IP multicast route for an IP multicast group in which at least one pair of a transmitting node and a receiving node exists; and sets, in the flow processing devices on the calculated route, control information for transferring the IP multicast along the route. This method is tied to a particular machine, namely the control device that controls the flow processing devices.
  • According to a fourth aspect, there is provided a program that causes a computer constituting the control device that controls the flow processing devices to execute: a process of managing, using storage means, a transmission node of an IP multicast group on a virtual network based on a notification from a subordinate flow processing device; a process of managing, using the storage means, a receiving node of the IP multicast group on the virtual network based on a notification from the subordinate flow processing device; a process of calculating, based on topology information of the flow processing devices collected in advance, an IP multicast route for an IP multicast group in which at least one pair of a transmitting node and a receiving node exists; and a process of setting, in the flow processing devices on the calculated route, control information for transferring the IP multicast along the route. This program can be recorded on a computer-readable (non-transitory) storage medium; that is, the present invention can be embodied as a computer program product.
  • FIG. 16 is a diagram representing the multicast group (MC1) of the IP multicast group information of FIG. 15 on the virtual network of FIG. 11.
  • FIG. 17 is a diagram representing the multicast group (MC2) of the IP multicast group information of FIG. 15 on the virtual network of FIG. 11.
  • FIG. 18 is a diagram representing the multicast group (MC3) of the IP multicast group information of FIG. 15 on the virtual network of FIG. 11.
  • FIG. 19 is a block diagram showing the configuration of the flow processing device of the first exemplary embodiment of the present invention.
  • FIG. 22 is a flowchart showing details of step S308 in FIG. 21 and step S3134 in FIG. 25. FIG. 23 is a flowchart showing details of step S309 in FIG. 21 and step S3133 in FIG. 25.
  • FIG. 24 is a flowchart showing details of step S310 in FIG. 21 and step S3142 in FIG. 26. FIG. 25 is a flowchart showing details of step S313 in FIG. 21. FIG. 26 is a flowchart showing details of step S314 in FIG. 21.
  • FIG. 27 is a flowchart showing details of step S353 in FIG. 21 and step S3141 in FIG. 26. FIG. 28 is a diagram for explaining the flow entries for IP multicast group MC1 set by the control device of the first exemplary embodiment of the present invention. FIG. 29 is a diagram showing an example of the flow entries for IP multicast group MC1 set in the flow processing device 200A. FIG. 30 is a diagram showing an example of the flow entries for IP multicast group MC1 set in the flow processing device 200B. Further diagrams show an example of the flow entries for IP multicast group MC1 set in the flow processing device 200C and explain the flow entries for IP multicast group MC2 set by the control device of the first exemplary embodiment of the present invention.
  • The present invention can be realized by a configuration that includes a control device 10 controlling a centrally controlled network and flow processing devices 20 operating according to control information set by the control device 10.
  • More specifically, the control device 10 includes: a sender management unit that manages a transmission node of an IP multicast group on the virtual network (hereinafter, IP multicast is also referred to as "IPMC") based on a notification from a subordinate flow processing device; a receiver management unit 12 that manages a receiving node of the IP multicast group on the virtual network; a route calculation unit 13 that calculates an IP multicast route for an IP multicast group in which at least one pair of a transmitting node and a receiving node exists; and a device control unit 14 that sets, in the flow processing devices 20 on the calculated route, control information for transferring the IP multicast along the route.
  • That is, the control device 10 calculates, among the IP multicast groups, a route from the sender to each receiver only for the IP multicast groups in which both a sender and a receiver exist, and sets control information in the flow processing devices 20 on that route. Route calculation and control information setting are omitted for IP multicast groups that have a sender but no receiver, or a receiver but no sender. As a result, the increase in the resources used by, and the load on, the flow processing devices 20 and the control device 10 can be suppressed.
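The condition "install control information only while at least one sender/receiver pair exists" can be condensed into a short controller-side sketch. This is an illustrative assumption-laden model, not the claimed implementation: the sender/receiver management, route calculation, and device control roles are folded into one class, and compute_route, install_entries, and remove_entries are hypothetical callables supplied from outside.

```python
# Sketch: routes are set up only for groups with both a sender and a receiver.
class MulticastController:
    def __init__(self, compute_route, install_entries, remove_entries):
        self.senders = {}    # group -> set of sender attachment points
        self.receivers = {}  # group -> set of receiver attachment points
        self.active = set()  # groups for which control information is installed
        self.compute_route = compute_route
        self.install_entries = install_entries
        self.remove_entries = remove_entries

    def _refresh(self, group):
        has_pair = bool(self.senders.get(group)) and bool(self.receivers.get(group))
        if has_pair and group not in self.active:
            # Only now is the route calculated and control information set.
            route = self.compute_route(self.senders[group], self.receivers[group])
            self.install_entries(group, route)
            self.active.add(group)
        elif not has_pair and group in self.active:
            # Sender-only or receiver-only groups consume no entries.
            self.remove_entries(group)
            self.active.discard(group)

    def sender_detected(self, group, point):
        self.senders.setdefault(group, set()).add(point)
        self._refresh(group)

    def igmp_report(self, group, point):      # receiver joins
        self.receivers.setdefault(group, set()).add(point)
        self._refresh(group)

    def igmp_leave(self, group, point):       # receiver leaves
        self.receivers.get(group, set()).discard(point)
        self._refresh(group)

ctrl = MulticastController(
    compute_route=lambda s, r: ("route", sorted(s), sorted(r)),
    install_entries=lambda g, route: print("install", g, route),
    remove_entries=lambda g: print("remove", g))
ctrl.sender_detected("MC1", ("20A", "port1", 10))   # sender only: nothing installed
ctrl.igmp_report("MC1", ("20B", "port2", 20))       # pair exists: route installed
ctrl.igmp_leave("MC1", ("20B", "port2", 20))        # last receiver gone: removed
```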
  • FIG. 2 is a diagram showing a configuration of the first exemplary embodiment of the present invention.
  • Referring to FIG. 2, the configuration includes a plurality of flow processing devices 200A to 200C and a control device 100 that controls them. The flow processing devices 200A to 200C are connected to external nodes 300A to 300E, and the control device 100 controls the flow processing devices 200A to 200C to realize communication among the external nodes 300A to 300E.
  • The solid lines in FIG. 2 represent data transfer channels, and the broken lines represent control channels between the control device 100 and the flow processing devices 200 (hereinafter referred to as "flow processing device 200" when the flow processing devices 200A to 200C need not be distinguished).
  • The control device 100 manages which flow processing devices handle the communication between the external nodes 300, and sets flow entries (control information) in one or more flow processing devices 200.
  • Upon receiving an input frame (hereinafter, "frame" is used without distinction from the "packet" processed by the OpenFlow switch of Non-Patent Document 2) from an external node 300 or another flow processing device, the flow processing device 200 refers to a flow table storing the flow entries (control information) set by instructions from the control device 100 and processes the input frame according to the flow entry whose match condition matches the input frame. According to the action (processing content) defined in the flow entry, the flow processing device 200 outputs the input frame to an external node 300, outputs it to another flow processing device 200, outputs it to the control device 100, or drops it.
  • the external node 300 transmits a frame addressed to another external node 300 via the flow processing device 200, or receives a frame originating from another external node 300 from the flow processing device 200.
  • A flow processing device 200 connected to an external node 300 is referred to as an edge flow processing device, and a flow processing device 200 connected only to other flow processing devices 200 is referred to as a core flow processing device. Depending on the flow network, no core flow processing device may exist; none exists in FIG. 2.
  • The external node 300 need not be directly connected to an edge flow processing device; it may be connected via a layer 2 switch (L2SW). The external node 300 may also be a layer 3 switch (L3SW) or a router, in which case further external nodes 300 may exist beyond it.
  • FIG. 3 is a block diagram showing a detailed configuration of the control device 100.
  • Referring to FIG. 3, the control device 100 comprises a flow processing device communication unit 101, a physical network management unit 102, a virtual network management unit 103, a device information storage unit 104, a physical topology information storage unit 105, a base flow entry storage unit 106, a flow entry storage unit 107, a flow storage unit 108, a mapping information storage unit 109, a virtual topology information storage unit 110, a virtual node information storage unit 111, an IP multicast receiver information storage unit 112, an IP multicast sender information storage unit 113, and an IP multicast group information storage unit 114.
  • the flow processing device communication unit 101 relays the flow processing device information, input frame information, and flow entry deletion information due to timeout from the flow processing device 200 to the physical network management unit 102 and the virtual network management unit 103. Further, the flow processing device communication unit 101 relays the setting and reference of the output frame information from the virtual network management unit 103 and the flow entry (control information) from the physical network management unit 102 to the flow processing device 200. In addition, the flow processing device communication unit 101 provides the flow processing device information stored in the device information storage unit 104 to the virtual network management unit 103 and the physical network management unit 102.
  • The physical network management unit 102 stores the flow processing device information received from the flow processing device communication unit 101 in the device information storage unit 104 and, based on the information in the device information storage unit 104 and the input frame information, sets the topology information of the flow processing devices in the physical topology information storage unit 105.
  • the physical network management unit 102 also functions as a route calculation unit that obtains a base route for broadcast or layer 2 multicast based on the topology information of the flow processing device.
  • the physical network management unit 102 sets the obtained base flow entry 400 for broadcast or layer 2 multicast for each flow processing device 200 in the base flow entry storage unit 106.
  • FIG. 4 is a diagram showing the flow of a broadcast frame realized by the base flow entries 400. In the following, a flow entry number is represented by a nine-digit code AAA-BBB-CCC: the first three digits (AAA) indicate the type of the flow entry, the next three digits (BBB) indicate the location of the flow entry, and the last three digits (CCC) indicate the source or destination (broadcast or multicast) of the flow entry.
  • FIGS. 5 to 7 show examples of the base flow entries 400 set in the flow processing devices 200A to 200C.
  • The processing contents applied to a frame in which a broadcast or layer 2 multicast (hereinafter also referred to as "BCMC") address is set are determined according to the input port. For example, a BCMC frame received from the (external) node 300A hits the top entry of table 1 and is processed in table 2. In table 2, if an appropriate VLAN ID is set, the top entry is hit; after an MPLS shim header is inserted according to the VLAN ID, the frame is transferred to the flow processing devices 200B and 200C.
  • A BCMC address is an address used for broadcast and multicast; in such an address, the I/G bit of the first byte (I/G stands for "Individual Address / Group Address") is 1.
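As a small illustrative sketch (not taken from the publication), the I/G bit test amounts to checking the least significant bit of the first octet of the destination MAC address:

```python
# Decide whether a destination MAC address is a BCMC (broadcast / layer-2
# multicast) address by testing the I/G bit of the first octet.
def is_bcmc(mac: str) -> bool:
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)   # I/G bit set -> group (broadcast/multicast)

assert is_bcmc("ff:ff:ff:ff:ff:ff")       # broadcast
assert is_bcmc("01:00:5e:01:02:03")       # IPv4 multicast MAC range
assert not is_bcmc("00:11:22:33:44:55")   # unicast
```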
  • the physical network management unit 102 sets the IP multicast default flow entry shown in FIGS. 8 to 10 in the flow processing apparatus having a port (external node connection port) to which the external node 300 is connected.
  • The IP multicast default flow entries include three types of flow entries: an IGMP information acquisition default flow entry, an IP multicast Well-known distribution default flow entry, and an IP multicast packet information acquisition default flow entry.
  • The IGMP information acquisition default flow entries are those shown as 480-200A-300A in FIG. 8, 480-200B-300B and 480-200B-300C in FIG. 9, and 480-200C-300D and 480-200C-300E in FIG. 10.
  • The IPMC Well-known distribution default flow entry is a flow entry for distributing, using the base flow entries 400, packets addressed to well-known addresses reserved for communication control protocols that need not cross an L3 switch; such packets are transferred to other flow processing devices and the like, but are not sent to the control device 100.
  • The IP multicast packet information acquisition default flow entries, shown as 482-200A-300A in FIG. 8, 482-200B-300B and 482-200B-300C in FIG. 9, and 482-200C-300D and 482-200C-300E in FIG. 10, are flow entries for detecting reception of IP multicast from the external nodes 300.
  • The priority of the IP multicast packet information acquisition default flow entry is set lower than that of the flow entry 481 (481-200A-300A, etc.) and of the IP multicast sender flow entry 450 described later; when those entries match, the IP multicast packet information acquisition default flow entry is therefore not applied (no transmission to the control device 100 takes place).
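The priority relation just described can be sketched as follows. The concrete priority values and the toy string-based matcher are assumptions made purely for illustration; the point is that the 482 entry only fires while no higher-priority 450 sender entry exists, so the control device learns about a new sender exactly once.

```python
# Toy model of default-entry priorities (values are assumed, not from the text).
entries = [
    {"name": "480 IGMP acquisition", "match": {"ip_proto": "igmp"},
     "priority": 300, "action": "send_to_controller"},
    {"name": "481 Well-known distribution", "match": {"ip_dst_prefix": "224.0.0.0/24"},
     "priority": 200, "action": "forward_via_base_entry"},
    {"name": "482 IPMC packet info acquisition", "match": {"ip_dst_prefix": "224.0.0.0/4"},
     "priority": 100, "action": "send_to_controller"},
]

def lookup(pkt_fields, table):
    def hit(e):
        # Toy matcher: every listed field must be present with an equal value.
        return all(pkt_fields.get(k) == v for k, v in e["match"].items())
    hits = [e for e in table if hit(e)]
    return max(hits, key=lambda e: e["priority"]) if hits else None

pkt = {"ip_dst_prefix": "224.0.0.0/4"}
print(lookup(pkt, entries)["name"])   # 482 -> the controller learns the sender

# Once the controller installs a higher-priority 450 sender entry:
entries.append({"name": "450 IPMC sender", "match": {"ip_dst_prefix": "224.0.0.0/4"},
                "priority": 400, "action": "forward_along_multicast_route"})
print(lookup(pkt, entries)["name"])   # 450 -> no further transmission to the controller
```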
  • The physical network management unit 102 obtains broadcast and multicast flow entries based on the virtual node information from the virtual network management unit 103, the virtual network topology information, and the mapping information in the mapping information storage unit 109, stores them in the flow entry storage unit 107, and sets them in the flow processing devices 200 through the flow processing device communication unit 101.
  • Similarly, the physical network management unit 102 obtains unicast match conditions (flows) and flow entries from the topology information and the like, stores the created flow setting information and flow entries in the flow storage unit 108 and the flow entry storage unit 107, and sets the flow entries in the flow processing devices 200 through the flow processing device communication unit 101.
  • In response to an IP multicast flow setting request from the virtual network management unit 103, the physical network management unit 102 generates an IP multicast sender flow entry 450, an IP multicast encapsulation flow entry 451, and an IP multicast receiver flow entry 452, using the multicast label ID accompanying the request, the external node 300 connection ports and VLAN IDs of the receivers and the sender, and the loop-free connection port information of the other flow processing devices 200 held in the base flow entry storage unit 106, and requests the flow processing device communication unit 101 to set these flow entries.
  • The IP multicast sender flow entry 450 is a flow entry for transmitting a frame received from an IP multicast sender out of the external node 300 connection ports of the flow processing device. The IP multicast encapsulation flow entry 451 is a flow entry for encapsulating the frame by adding a label carrying a multicast label ID, assigned so as not to overlap with the BC domain IDs used in the base flow entries, and for distributing the frame by multicast. The IP multicast receiver flow entry 452 is a flow entry for restoring the frame encapsulated by the IP multicast encapsulation flow entry 451 and outputting it to the external nodes 300 (see FIG. 28, etc.).
  • the IP multicast sender flow entry 450, the IP multicast encapsulation flow entry 451, and the IP multicast receiver flow entry 452 will be described in detail later.
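Before the detailed description, the relationship between the three entry kinds can be sketched as plain data. This is a data-only illustration under assumptions: the field and action names loosely follow OpenFlow terminology ("push_mpls", "pop_mpls", "goto_table"), and the group address and port identifiers in the example are invented.

```python
# Sketch of the three IP multicast flow entry kinds as plain dictionaries.
def sender_entry_450(in_port, vlan_id, group_addr, local_outputs):
    """Match traffic from the sender and hand it to local receivers and to
    the encapsulation entry (table 2)."""
    return {"table": 1,
            "match": {"in_port": in_port, "vlan_id": vlan_id, "ipv4_dst": group_addr},
            "actions": local_outputs + [("goto_table", 2)]}

def encap_entry_451(vlan_id, group_addr, mc_label_id, trunk_ports):
    """Push an MPLS shim header carrying the multicast label ID (chosen so it
    does not collide with any BC domain ID) and forward to loop-free ports."""
    return {"table": 2,
            "match": {"vlan_id": vlan_id, "ipv4_dst": group_addr},
            "actions": [("push_mpls", mc_label_id)] + [("output", p) for p in trunk_ports]}

def receiver_entry_452(mc_label_id, outputs):
    """Pop the label, restore the frame, rewrite the VLAN and output to the
    external node connection ports where receivers exist."""
    actions = [("pop_mpls",)]
    for port, vlan in outputs:
        actions += [("set_vlan", vlan), ("output", port)]
    return {"table": 3, "match": {"mpls_label": mc_label_id}, "actions": actions}

# Example with multicast label ID 65536 (the value used for MC1 in the embodiment):
print(encap_entry_451(10, "239.1.1.1", 65536, trunk_ports=[11, 12]))
```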
  • the virtual network management unit 103 sets virtual node information on a plurality of virtual networks in the virtual node information storage unit 111, and sets a connection relationship of virtual node information for each virtual network in the virtual topology information storage unit 110.
  • FIG. 11 is a diagram illustrating a connection relationship of virtual nodes (virtual L3SW and L2SW) on a certain virtual network set in the virtual topology information storage unit 110.
  • a virtual network 500A in which virtual L3SW 501A, virtual L2SWs 502A and 502B, and external NW endpoints 503A to 503F exist as virtual nodes is shown.
  • a virtual L3SW 501A and virtual L2SWs 502A and 502B are connected, a virtual L2SW 502A and external NW end points 503A and 503B are connected, and a virtual L2SW 502B and external NW end points 503C to 503F are connected.
  • The virtual L2SW 502A has a BC domain ID of 1, and the virtual L2SW 502B has a BC domain ID of 2. The BC domain IDs may be allocated by the virtual network management unit 103 so as to be unique among the virtual L2SWs 502.
  • the virtual network management unit 103 stores mapping information indicating the connection relationship between the virtual network and the real network in the mapping information storage unit 109, and passes this mapping information to the physical network management unit 102.
  • FIG. 12 is a diagram illustrating the correspondence relationship, stored in the mapping information storage unit 109, between the virtual network of FIG. 11 and the physical network of FIG. 2.
  • The external NW endpoint 503A is associated with the untagged VLAN ID 10 of the external node 300A connection port of the flow processing device 200A.
  • the external NW endpoint 503B is associated with the tagged VLAN ID 20 of the external node 300B connection port of the flow processing device 200B
  • the external NW endpoint 503C is associated with the tagged VLAN ID 30 of the external node 300C connection port of the flow processing device 200B.
  • the external NW endpoint 503D is associated with the tagged VLAN ID 20 of the external node 300D connection port of the flow processing device 200C
  • the external NW endpoint 503E is associated with the tagged VLAN ID 20 of the external node 300D connection port of the flow processing device 200C.
  • the external NW end point 503F is associated with the tagged VLAN ID 30 of the external node 300E connection port of the flow processing apparatus 200C.
  • Based on the virtual network topology information, the virtual network management unit 103 processes an input frame as input to the corresponding virtual network. Specifically, the virtual network management unit 103 drops the input frame information, has a virtual node such as the virtual L3SW 501 receive it as host reception, or has a virtual node such as an external NW endpoint 503 output corresponding output frame information. Note that output of output frame information from a virtual node such as an external NW endpoint 503 is triggered not only by the processing of input frame information from the flow processing device communication unit 101 but, in some cases, also by host transmission from a virtual node such as the virtual L3SW 501. The virtual network management unit 103 outputs the output frame information to the flow processing devices 200 through the flow processing device communication unit 101.
  • The virtual network management unit 103 may also request the physical network management unit 102 to set flow entries for processing the input frame information from the flow processing device communication unit 101 and for the corresponding output frames to the flow processing device communication unit 101. Such a request can be made by indicating, for example, a route on the virtual network derived from the virtual network topology information.
  • Based on input frame information of IGMP Report or IGMP Leave, the virtual network management unit 103 registers, in the IP multicast receiver information storage unit 112, the external NW endpoints 503 at which IP multicast packet receivers exist (this corresponds to the receiver management unit in FIG. 1). Specifically, from the external node 300 connection port and VLAN ID at which the IGMP Report or IGMP Leave frame was received, the IP multicast group, and the information in the mapping information storage unit 109, the virtual network management unit 103 determines the external NW endpoint 503 at which the IP multicast packet receiver exists and sets it in the IP multicast receiver information storage unit 112. Similarly, the virtual network management unit 103 sets the external NW endpoints 503 at which IP multicast packet senders exist in the IP multicast sender information storage unit 113 (this corresponds to the sender management unit in FIG. 1).
  • When an IP multicast packet sender and one or more IP multicast packet receivers are present for an IP multicast group of the virtual network 500, the virtual network management unit 103 assigns a multicast label ID to that IP multicast group and registers it in the IP multicast group information storage unit 114.
  • The multicast label ID is set to a value of 65536 or more so that it does not overlap with the BC domain IDs inserted into the MPLS shim header by the base flow entries 400.
  • When the number of IP multicast packet receivers or the number of IP multicast packet senders for an IP multicast group of the virtual network 500 registered in the IP multicast group information storage unit 114 becomes zero, the virtual network management unit 103 cancels the registration of that IP multicast group and returns the assigned multicast label ID (the corresponding entry is deleted from the IP multicast group information storage unit 114).
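The label lifecycle just described (allocate at or above 65536, keyed per virtual network and group, return on release) can be illustrated with a small allocator. This is a minimal sketch under stated assumptions; the class name, the free-list reuse policy, and the example identifiers are not taken from the publication.

```python
# Sketch of multicast label ID management.
class MulticastLabelAllocator:
    FIRST_LABEL = 65536          # values below this are reserved for BC domain IDs

    def __init__(self):
        self._free = []          # returned labels, reused first
        self._next = self.FIRST_LABEL
        self._assigned = {}      # (vnet_id, group_addr) -> label

    def allocate(self, vnet_id, group_addr):
        key = (vnet_id, group_addr)
        if key not in self._assigned:
            label = self._free.pop() if self._free else self._next
            if label == self._next:
                self._next += 1
            self._assigned[key] = label
        return self._assigned[key]

    def release(self, vnet_id, group_addr):
        label = self._assigned.pop((vnet_id, group_addr), None)
        if label is not None:
            self._free.append(label)   # returned when receivers or senders hit zero

alloc = MulticastLabelAllocator()
print(alloc.allocate("VN1", "239.1.1.1"))   # 65536
print(alloc.allocate("VN1", "239.1.1.2"))   # 65537
alloc.release("VN1", "239.1.1.1")
print(alloc.allocate("VN2", "239.1.1.1"))   # 65536 reused; keyed per virtual network
```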
  • At the time of registration of an IP multicast group of the virtual network 500 in the IP multicast group information storage unit 114, or whenever the IP multicast receivers or IP multicast senders of a registered IP multicast group increase or decrease, the virtual network management unit 103 requests the physical network management unit 102 to set the flow entries for realizing IP multicast. In this request, the external NW endpoints 503 of the receivers registered in the IP multicast receiver information storage unit 112 and of the senders registered in the IP multicast sender information storage unit 113 are used, together with the external node 300 connection ports and VLAN IDs corresponding to those external NW endpoints 503 (obtained using the information in the mapping information storage unit 109) and the multicast label ID in the IP multicast group information storage unit 114.
  • When only one of an IP multicast sender or a receiver exists for an IP multicast group, the virtual network management unit 103 uses the external node 300 connection port, the VLAN ID, and the IP multicast group address to request the physical network management unit 102 to set a Drop for the IP multicast flow or to delete the flow entry itself.
  • the device information storage unit 104 holds information on the flow processing device to be controlled by the control device 100. Examples of information held in the device information storage unit 104 include port information of the flow processing device, VLAN setting information, the capability of the flow processing device, an address for accessing the flow processing device 200, and the like. Further, statistical information held by the flow processing apparatus 200 may be included.
  • the physical topology information storage unit 105 holds connection information between the flow processing devices 200 and port connection destinations (information on ports connected to other flow processing devices or ports for connection to the external node 300).
  • The base flow entry storage unit 106 holds, for each flow processing device 200, the base flow entries for the broadcast and multicast flows described with reference to FIGS. 4 to 7. In addition, the base flow entry storage unit 106 holds, for each flow processing device 200, the loop-free connection port information of the other flow processing devices 200.
  • the flow entry storage unit 107 holds a flow entry to be set in the flow processing device 200 for each flow processing device 200.
  • the flow storage unit 108 holds flow setting information.
  • the flow setting information is, for example, a matching condition in a flow entry to be collated with input frame information, an output frame state (header content after rewriting in the flow processing device, etc.), and the like.
  • the mapping information storage unit 109 stores connection information between the virtual network and the real network, that is, mapping information of the flow processing device 200, the external node 300 connection port, and the VLAN ID corresponding to the virtual external NW endpoint 503 (see FIG. 12).
  • For each virtual network 500, the mapping information indicates to which port of which flow processing device 200 and to which VLAN ID the external NW endpoint 503 used when an input frame is input or output, or the external NW endpoint 503 at which an IP multicast receiver or sender exists, corresponds.
  • As the VLAN ID information of the external NW endpoint 503, it may be settable whether the port is untagged or tagged, whether or not a VLAN ID is used in the untagged case and, if so, which VLAN ID is used, and which VLAN ID is used in the tagged case.
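A small sketch of this mapping, using the correspondences of FIG. 12 that are described in the text (503A, 503B, 503C, 503F); the field names and port identifiers such as "port-300A" are illustrative assumptions.

```python
# Sketch: external NW endpoint 503 -> (flow processing device, connection port, VLAN).
from collections import namedtuple

Attachment = namedtuple("Attachment", "device port vlan_id tagged")

mapping = {
    ("500A", "503A"): Attachment("200A", "port-300A", 10, tagged=False),
    ("500A", "503B"): Attachment("200B", "port-300B", 20, tagged=True),
    ("500A", "503C"): Attachment("200B", "port-300C", 30, tagged=True),
    ("500A", "503F"): Attachment("200C", "port-300E", 30, tagged=True),
}

def resolve(vnet_id, endpoint_id):
    """Translate a virtual endpoint into the physical attachment used when
    generating flow entries (None if the endpoint is not mapped)."""
    return mapping.get((vnet_id, endpoint_id))

print(resolve("500A", "503B"))
# Attachment(device='200B', port='port-300B', vlan_id=20, tagged=True)
```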
  • the virtual topology information storage unit 110 holds connection information between virtual nodes for each virtual network 500 (see FIG. 11).
  • the virtual node information storage unit 111 holds virtual node information such as the virtual L3SW 501, the virtual L2SW 502, and the external NW endpoint 503 for each virtual network.
  • For a virtual L3SW 501, the virtual node information associates the virtual network 500 with L3SW-like information such as virtual interface information, routing table information, and ARP entry information. For a virtual L2SW 502, the virtual node information associates the virtual network 500 with L2SW-like information such as virtual interface information and MAC entry information, together with a broadcast domain ID (BC domain ID) that uniquely identifies the broadcast domain.
  • the IP multicast receiver information storage unit 112 holds a list of external NW endpoints 503 in which IP multicast receivers for IP multicast groups under the virtual network 500 exist.
  • FIG. 13 is an example of a list storing IP multicast group recipient information.
  • the IP multicast sender information storage unit 113 holds a list of external NW endpoints 503 in which IP multicast senders for IP multicast groups under the virtual network 500 exist.
  • FIG. 14 is an example of a list storing IP multicast group sender information.
  • the IP multicast group information storage unit 114 stores information on IP multicast groups under the virtual network 500 in which both the IP multicast receiver and the IP multicast sender exist in association with the multicast label ID.
  • FIG. 15 is an example of a list storing IP multicast group information. As shown in FIG. 15, the virtual network ID and the IP multicast group ID are used as search keys, so that receivers and senders for the same IP multicast group address can be managed separately under a plurality of virtual networks.
  • Since IP multicast group MC4 has only receivers and IP multicast group MC5 has only a sender, neither is registered as a multicast group in FIG. 15.
  • each unit (processing means) of the control device 100 shown in FIG. 3 can be realized by a computer program that causes a computer constituting the control device 100 to execute the above-described processes using the hardware thereof.
  • FIG. 19 is a block diagram illustrating a detailed configuration of the flow processing apparatus 200.
  • the flow processing device 200 includes a flow entry search unit 201, a flow entry storage unit 202, a flow entry processing unit 203, a flow processing unit 204, and a control device communication unit 205.
  • The flow entry search unit 201 extracts, from a frame input to the flow processing device 200, flow entry search condition information for searching for a flow entry, and uses it to search the flow entry storage unit 202 for an entry whose match condition matches the input frame. At that time, the flow entry search unit 201 updates the timeout time and statistical information of the matched flow entry, and then passes the action of the matched flow entry and the input frame to the flow processing unit 204.
  • the flow entry storage unit 202 holds a flow entry for the flow processing apparatus 200 to process a frame.
  • Flow entries in the flow entry storage unit 202 are set and referenced, for example added or deleted, by the flow entry processing unit 203, while the flow entry search unit 201 searches them and updates matched entries.
  • The flow entries held in the flow entry storage unit 202 are the same as the flow entries for that flow processing device 200 held in the flow entry storage unit 107 of the control device 100.
  • The flow entry processing unit 203 applies setting and reference instructions, such as addition and deletion, received from the control device 100 via the control device communication unit 205 to the flow entry storage unit 202. The flow entry processing unit 203 also refers to the flow entry storage unit 202, deletes flow entries that have timed out, and notifies the control device 100 of the deletion via the control device communication unit 205.
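The timeout-then-notify behaviour can be sketched as follows. This is an illustrative model only: the notify callback stands in for the control device communication unit 205, and the class and field names are assumptions.

```python
# Sketch: expire timed-out flow entries and report the deletion upstream.
import time

class TimeoutTable:
    def __init__(self, notify_deletion):
        self.entries = {}          # entry_id -> (idle_timeout_seconds, last_hit)
        self.notify_deletion = notify_deletion

    def hit(self, entry_id):
        timeout, _ = self.entries[entry_id]
        self.entries[entry_id] = (timeout, time.monotonic())   # refresh on match

    def expire(self):
        now = time.monotonic()
        for entry_id, (timeout, last_hit) in list(self.entries.items()):
            if timeout and now - last_hit >= timeout:
                del self.entries[entry_id]
                self.notify_deletion(entry_id)   # lets the controller drop its state too

table = TimeoutTable(notify_deletion=lambda e: print("deleted:", e))
table.entries["450-MC1"] = (0.01, time.monotonic())
time.sleep(0.02)
table.expire()    # prints "deleted: 450-MC1"
```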
  • The flow processing unit 204 processes a frame according to the input frame and its action passed from the flow entry search unit 201, or from the control device 100 via the control device communication unit 205. For example, it rewrites values of the input frame, outputs it to an external node 300, outputs it to another flow processing device 200, outputs it to the control device 100 via the control device communication unit 205, or drops it.
  • The flow processing device 200 as described above can also be configured, for example, by the OpenFlow switch of Non-Patent Document 2.
  • FIG. 20 is a flowchart illustrating a flow of a base flow entry and IP multicast default flow entry generation process performed as an initial setting by the control device according to the first embodiment of this invention.
  • the physical network management unit 102 acquires the device information of the flow processing device 200 when connected to the flow processing device 200, and sets the device information of the flow processing device in the device information storage unit 104. (Step S101).
  • the physical network management unit 102 acquires connection information between the flow processing devices 200 and sets it in the physical topology information storage unit 105 (step S102). For example, the physical network management unit 102 acquires input frame information from another flow processing device 200 by instructing the newly connected flow processing device 200 to transmit a frame to the other flow processing device 200. be able to. The physical network management unit 102 can grasp the connection relation by referring to the information of the corresponding flow processing device in the device information storage unit 104.
  • Next, the physical network management unit 102 obtains a spanning tree so that broadcast and multicast frames are loop-free in the flow network, generates for each flow processing device 200 the BCMC base flow entries for broadcast and multicast frames (see FIGS. 4 to 7), and sets them in the base flow entry storage unit 106 (step S103).
  • Further, the physical network management unit 102 sets the IPMC default flow entries (the IGMP information acquisition default flow entry 480, the IPMC Well-known distribution default flow entry 481, and the IPMC packet information acquisition default flow entry 482) on each external node 300 connection port of the flow processing devices 200 that faces the outside of the flow network (step S104). With these IPMC default flow entries, the control device 100 starts detecting the IGMP frame information and IP multicast detected by the flow processing devices 200.
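The loop-free base route of step S103 can be sketched with a simple spanning tree computed over the collected topology. The publication only says a spanning tree is obtained; the BFS choice, the full-mesh example topology, and the port numbers below are assumptions for illustration.

```python
# Sketch: compute, per device, the ports left open for BCMC flooding.
from collections import deque

def spanning_tree_ports(adjacency, root):
    """adjacency: device -> {neighbour: local_port}. Returns, per device, the
    set of ports that stay open for BCMC flooding (tree links only)."""
    open_ports = {dev: set() for dev in adjacency}
    visited = {root}
    queue = deque([root])
    while queue:
        dev = queue.popleft()
        for neigh, port in adjacency[dev].items():
            if neigh not in visited:
                visited.add(neigh)
                open_ports[dev].add(port)                     # downstream port on dev
                open_ports[neigh].add(adjacency[neigh][dev])  # upstream port on neigh
                queue.append(neigh)
    return open_ports

# Assumed example: a full mesh of the three edge devices of FIG. 2.
adj = {"200A": {"200B": 11, "200C": 12},
       "200B": {"200A": 11, "200C": 12},
       "200C": {"200A": 11, "200B": 12}}
print(spanning_tree_ports(adj, root="200A"))
# The redundant 200B--200C link is pruned, so BCMC flooding cannot loop.
```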
  • FIG. 21 is a flowchart showing the overall operation of the control device 100 according to the first embodiment of the present invention.
  • First, upon receiving a frame, the flow entry search unit 201 of the flow processing device 200 extracts flow entry search condition information from the input frame (step S301).
  • Next, using the flow entry search condition information, the flow entry search unit 201 searches the flow entry storage unit 202 for a flow entry whose match condition matches it (step S302).
  • The flow entry search unit 201 then passes the action of the matched flow entry and the input frame to the flow processing unit 204. In accordance with the content of the action, the flow processing unit 204 adds additional information such as the input port to the input frame and transmits it to the control device 100 (step S303).
  • The control device 100 determines whether the input frame is addressed to an IP multicast group other than the Well-known addresses. If so, the control device 100 extracts, from the input frame information, the external node 300 connection port and VLAN ID at which the IP multicast packet was received, and searches the mapping information storage unit 109 to acquire the corresponding external NW endpoint 503 (step S304). The control device 100 then registers this external NW endpoint 503 as a sender and performs the flow entry setting process at the sender external NW endpoint 503 shown in step S308.
  • FIG. 22 is a flowchart showing details of step S308 in FIG. 21. Referring to FIG. 22, first, the control device 100 searches the mapping information storage unit 109 for the flow processing device 200, the external node 300 connection port, and the VLAN ID corresponding to the sender external NW endpoint 503 (step S3081).
  • Next, the control device 100 searches the IP multicast group information storage unit 114 to check whether the destination IP multicast group of the IP multicast packet has been registered for the virtual network 500 to which the external NW endpoint 503 belongs (step S3082).
  • If the group has been registered, the control device 100 refers to the flow entry storage unit 107 and checks whether the IP multicast sender flow entry 450 of that IP multicast group, having the corresponding external node 300 connection port as the input port and the corresponding VLAN ID as match conditions, has already been set in the flow processing device 200 (step S3083). If such an IP multicast sender flow entry 450 has already been set, no further action is required.
  • Otherwise, the control device 100 sets, in the flow processing device 200, the IP multicast encapsulation flow entry 451 with the IP multicast group and the VLAN ID as the match conditions (step S3084).
  • Next, the control device 100 searches for the IPMC receiver flow entry 452 set in the flow processing device 200 (the entry having the multicast label ID of the IPMC encapsulation flow entry 451 in its match condition) and acquires, from its action, the output destination list of VLAN IDs and external node 300 connection ports. The control device 100 removes from this output destination list the combination corresponding to the sender so that no loop occurs (step S3085). An output destination list to be set as the action of the IPMC sender flow entry 450 is thereby obtained.
  • Next, the control device 100 generates the IP multicast sender flow entry 450, with the external node 300 connection port at which the frame was input as the input port and the VLAN ID and IP multicast group address as match conditions, together with the action obtained in step S3085 (step S3086). The control device 100 then sets the IPMC sender flow entry 450 of the IP multicast group generated in step S3086 in the flow processing device 200 (step S3087).
  • On the other hand, if the destination IP multicast group has not been registered, the control device 100, in order to prevent unnecessary IP multicast packets from flowing into the network, sets in the flow processing device 200 identified from the IP multicast frame input information an IPMC sender flow entry 450 (Drop) that has the corresponding external node 300 connection port as the input port and the corresponding VLAN ID and the corresponding IP multicast group as the layer 3 destination address as match conditions, with frame discard (Drop) as the action (step S3088). An IP multicast non-reception time is set as the timeout value of this entry, so that the flow processing device automatically deletes the IP multicast sender flow entry 450 (Drop) when the timeout expires and the control device 100 can be sent a deletion notification.
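A condensed, self-contained sketch of the S308 idea follows. The state is modelled with plain dictionaries, the entry field names are invented, and the 60-second idle timeout is an assumed value (the publication only speaks of an IP multicast non-reception time).

```python
# Sketch of the S308 branch: registered group -> 451 + 450; otherwise -> drop entry.
def setup_sender(state, device, in_port, vlan, group):
    """state = {"groups": {group: label}, "entries": {device: [entry, ...]}}"""
    entries = state["entries"].setdefault(device, [])
    if group in state["groups"]:                              # S3082: group registered?
        if any(e["kind"] == "450" and e["group"] == group and
               e["in_port"] == in_port for e in entries):     # S3083: already set
            return
        label = state["groups"][group]
        entries.append({"kind": "451", "group": group, "vlan": vlan,
                        "push_label": label})                 # S3084
        # S3085/S3086: local outputs copied from the receiver entry (452),
        # excluding the sender's own port so traffic never loops back.
        outputs = [o for e in entries
                   if e["kind"] == "452" and e.get("label") == label
                   for o in e["outputs"] if o[0] != in_port]
        entries.append({"kind": "450", "group": group, "in_port": in_port,
                        "vlan": vlan, "outputs": outputs})    # S3087
    else:
        # S3088: no receiver registered yet -> drop at the edge, with an idle
        # timeout so the entry vanishes once the sender falls silent.
        entries.append({"kind": "450-drop", "group": group, "in_port": in_port,
                        "vlan": vlan, "idle_timeout": 60})

state = {"groups": {}, "entries": {}}
setup_sender(state, "200A", "port-300A", 10, "MC5")   # sender-only group -> drop entry
state["groups"]["MC1"] = 65536
setup_sender(state, "200A", "port-300A", 10, "MC1")   # registered group -> 451 + 450
print(state["entries"]["200A"])
```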
  • When step S308 ends, the control device 100 collates the IP multicast group information storage unit 114, the IP multicast receiver information storage unit 112, and the IP multicast sender information storage unit 113, and checks whether both IP multicast receiver information and IP multicast sender information for the destination IP multicast group have newly become available (step S311). If they have not, the control device 100 omits the subsequent processing (END in FIG. 21).
  • In this way, the control device 100 prevents unnecessary IP multicast packets from flowing into the flow network, as described for step S3088, and omits the setting of unnecessary flow entries for receiving IP multicast packets for which no communication will take place. As a result, flow entry resources are not wasted and an optimized process is realized.
  • If both have newly become available, the control device 100 registers the IP multicast group in the IP multicast group information storage unit 114 (step S313).
  • FIG. 25 is a flowchart showing details of step S313 in FIG. 21.
  • Referring to FIG. 25, the control device 100 first checks whether an IP multicast sender flow entry 450 (Drop) of the IP multicast group, with the corresponding external node 300 connection port as the input port and the corresponding VLAN ID as a match condition, has been set in the flow processing device 200. If it has, the control device 100 deletes the IP multicast sender flow entry 450 (Drop) (step S3131).
  • Next, the control device 100 assigns a unique multicast label ID to the IP multicast group of the virtual network 500 (step S3132). A value of 65536 or more is assigned as the multicast label ID so as not to overlap with the BC domain IDs.
  • control device 100 performs flow entry setting processing at the recipient external NW endpoint 503 (step S3133).
  • FIG. 23 is a flowchart showing details of step S3133 in FIG. 25.
  • Referring to FIG. 23, the control device 100 refers to the IP multicast group information storage unit 114 to check whether the IP multicast group of the virtual network to which the receiver external NW endpoint 503 belongs has been registered (step S3091). If the corresponding entry is not registered in the IP multicast group information storage unit 114, no IP multicast sender has been registered, so the control device 100 omits the subsequent processing (END).
  • When this flowchart is entered from step S3133, the corresponding entry has always been registered in the IP multicast group information storage unit 114, so the processing from step S3092 onward is executed.
  • Next, the control device 100 searches the mapping information storage unit 109 for the flow processing device 200 corresponding to the receiver external NW endpoint 503, its external node 300 connection port, and the VLAN ID (step S3092). If no corresponding mapping information is found, the endpoint is not a receiver to be added, so the control device 100 omits the subsequent processing (END).
  • Next, the control device 100 refers to the flow entry storage unit 107 and checks whether the IP multicast receiver flow entry of the IP multicast group has been set in the corresponding flow processing device 200 (step S3093). If it has not been set, no receiver is connected to the corresponding flow processing device 200, so the control device 100 omits the subsequent processing (END).
  • When an IP multicast receiver flow entry is set in the corresponding flow processing device 200, the control device 100 further checks whether the external node 300 connection port and VLAN ID are already registered as an output destination external NW endpoint of that IP multicast receiver flow entry (step S3094). If they are already registered, the control device 100 omits the subsequent processing (END).
  • Otherwise, the control device 100 adds to the action of the IP multicast receiver flow entry 452 a VLAN ID change process and a frame output process for the external node 300 connection port (step S3095). If the corresponding flow processing device 200 also has an IPMC sender flow entry 450 for the IP multicast group, the control device adds the same action to that IP multicast sender flow entry 450.
  • Finally, the control device 100 sets the flow entry 452 and the flow entry 450 updated in step S3095 in the corresponding flow processing device 200 (step S3096).
  • When step S3133 ends, the control device 100 performs the flow entry setting process at the sender external NW endpoint 503 (step S3134). Details of step S3134 are the same as those in FIG. 22.
  • Next, the processing performed when the input frame information transmitted from the flow processing device 200 in step S303 is an IGMP Report will be described.
  • When the control device 100 determines that the input frame information transmitted from the flow processing device 200 is an IGMP Report, it searches the mapping information storage unit 109 and acquires, from the input frame information, the external NW endpoint 503 corresponding to the external node 300 connection port and VLAN ID at which the IGMP Report was input. The control device 100 then registers this external NW endpoint 503 as a receiver (step S305).
  • Next, the control device 100 performs the flow entry setting process for the flow processing device of the receiver external NW endpoint 503 (step S309). Details of step S309 are the same as those shown in FIG. 23.
  • After step S309, the control device performs the processing from step S311 onward in FIG. 21. Since the processing from step S311 onward in FIG. 21 has already been described, its description is omitted here.
  • Next, the processing performed when the input frame information received from the flow processing device 200 in step S303 is an IGMP Leave will be described.
  • When the control device 100 determines that the input frame information transmitted from the flow processing device 200 is an IGMP Leave, it searches the mapping information storage unit 109 and acquires the external NW endpoint 503 corresponding to the external node 300 connection port and VLAN ID at which the IGMP Leave was input. The control device 100 then cancels the registration of this external NW endpoint 503 as a receiver (step S306).
  • the control device 100 performs a flow entry setting process for deleting the recipient external NW endpoint 503 (step S310).
  • FIG. 24 is a flowchart showing details of step S310 in FIG. 21.
  • Referring to FIG. 24, the control device 100 searches the IP multicast group information storage unit 114 to check whether the IP multicast group of the virtual network 500 to which the receiver external NW endpoint 503 belongs has been registered (step S3101). If the corresponding IP multicast group is not registered, no flow entry requiring deletion has been set, and the control device 100 omits the subsequent processing (END).
  • Next, the control device 100 searches the mapping information storage unit 109 for the flow processing device 200 corresponding to the receiver external NW endpoint 503, its external node 300 connection port, and the VLAN ID (step S3102). The control device 100 then searches the flow entry storage unit 107 to check whether the IP multicast receiver flow entry 452 of the IP multicast group has been set in the corresponding flow processing device 200 (step S3103). If no entry corresponding to the receiver external NW endpoint 503 was found in step S3102, or if the IPMC receiver flow entry 452 of the IP multicast group is not set in the corresponding flow processing device 200, no flow entry requiring deletion exists, and the control device 100 omits the subsequent processing (END).
  • If, as a result of the check in step S3103, the IP multicast receiver flow entry 452 of the IP multicast group is set in the corresponding flow processing device 200, the control device 100 checks whether the combination of the flow processing device 200, external node 300 connection port, and VLAN ID corresponding to the receiver external NW endpoint 503 is set as an output destination of the IP multicast receiver flow entry 452 found in step S3103 (step S3104).
  • If it is not set, the control device 100 omits the subsequent processing (END).
  • If the combination of the flow processing device 200, external node 300 connection port, and VLAN ID corresponding to the receiver external NW endpoint 503 is set as an output destination of the IP multicast receiver flow entry 452, the control device 100 deletes the VLAN ID change for that external NW endpoint 503 and the output to that external node 300 connection port from the action of the IPMC sender flow entry 450 of the corresponding IP multicast group in the corresponding flow processing device 200, and sets the updated flow entry 450 in the flow processing device 200 (step S3105). If the flow processing device 200 does not have the IP multicast sender flow entry 450 of the corresponding IP multicast group, nothing needs to be done here.
  • Next, the control device 100 deletes the VLAN ID change and the output to the designated external node 300 connection port from the action of the IP multicast receiver flow entry 452, and checks whether the number of output destinations in the action after the deletion has become zero (step S3106).
  • If it has become zero, the control device 100 deletes the IP multicast receiver flow entry 452 of the IP multicast group from the flow processing device 200 (step S3107).
  • Once the IP multicast receiver flow entry 452 is deleted, the corresponding frames match the match conditions of the table 3 flow entries 405 to 407 in FIGS. 4 to 6 and are discarded; the flow processing device 200 therefore no longer outputs them to the external node 300 connection ports.
  • Otherwise, the control device 100 sets (updates) the updated IP multicast receiver flow entry 452 of the IP multicast group in the flow processing device 200 (step S3108).
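A self-contained sketch of this receiver-removal logic follows; the dictionary layout and identifiers are assumptions for illustration, not the publication's data model.

```python
# Sketch: strip the departed receiver's output from 452 (and, if present, 450)
# actions, and delete the 452 entry outright once its output list is empty.
def remove_receiver(entries, group_label, port, vlan):
    remaining = []
    for e in entries:
        if e["kind"] in ("450", "452") and e.get("label") == group_label:
            e["outputs"] = [o for o in e["outputs"] if o != (port, vlan)]  # S3105/S3106
            if e["kind"] == "452" and not e["outputs"]:
                continue                      # S3107: last receiver gone -> delete entry
        remaining.append(e)
    return remaining

entries = [
    {"kind": "452", "label": 65536, "outputs": [("port-300B", 20), ("port-300C", 30)]},
]
entries = remove_receiver(entries, 65536, "port-300C", 30)
print(entries)                                 # one output left, entry kept (S3108)
entries = remove_receiver(entries, 65536, "port-300B", 20)
print(entries)                                 # [] -> entry deleted (S3107)
```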
  • After step S310, the control device 100 checks whether the number of IP multicast receivers (external NW endpoints) for the destination IP multicast group has become zero (step S312). If it is not zero, the control device 100 omits the subsequent processing (END). If it has become zero, the processing of step S314 is performed.
  • FIG. 26 is a flowchart showing details of step S314 in FIG. 21. Referring to FIG. 26, first, the control device 100 performs the flow entry deletion process for deleting the sender external NW endpoint 503 (step S3141).
  • FIG. 27 is a flowchart showing details of step S3141 in FIG. 26. Referring to FIG. 27, the control device 100 searches the mapping information storage unit 109 for the combination of the flow processing device 200, the external node 300 connection port, and the VLAN ID corresponding to the sender external NW endpoint 503 (step S3531).
  • Next, the control device 100 searches the IP multicast group information storage unit 114 to check whether the destination IP multicast group of the IP multicast packet has been registered for the virtual network 500 to which the external NW endpoint 503 belongs (step S3532).
  • Next, the control device 100 refers to the flow entry storage unit 107 and checks whether the IP multicast sender flow entry 450 of the IP multicast group, with the corresponding external node 300 connection port as the input port and the corresponding VLAN ID as a match condition, is set in the retrieved flow processing device 200 (step S3533). If it is not set, the control device 100 omits the subsequent processing (END). If it is set, the control device 100 deletes the IPMC sender flow entry 450 set in the flow processing device 200 from that flow processing device 200 (step S3534), and further deletes the IPMC encapsulation flow entry 451 having the IP multicast group and the VLAN ID as match conditions from the flow processing device 200 (step S3535).
  • Next, the control device 100 checks whether an IP multicast sender flow entry 450 (Drop), which is set in order to prevent unnecessary IP multicast packets from flowing into the flow network and which has the corresponding external node 300 connection port as the input port and the corresponding VLAN ID and the corresponding IP multicast group as the layer 3 destination address as match conditions, with frame discard (Drop) as the action, has been set in the flow processing device 200 retrieved from the frame input information of the IP multicast packet. If such an IP multicast sender flow entry 450 (Drop) has already been set in the corresponding flow processing device 200, the control device 100 deletes it (step S3536).
  • When step S3141 ends, the control device 100 performs the flow entry setting process for deleting the receiver external NW endpoint 503 (step S3142). Details of step S3142 have been described with reference to FIG. 24.
  • Finally, the control device 100 releases (returns) the assigned multicast label ID so that it can be assigned to an IP multicast group of another virtual network 500 (step S3143).
Next, the process performed when the control device 100 determines in step S303 that the input frame information received from the flow processing device 200 is an IGMP Query will be described.
In this case, the control device 100 transmits an IGMP Report toward the external node 300 connection port and VLAN ID from which the IGMP Query was input, as identified from the input frame information. This prevents the multicast router (sender) from stopping the IP multicast packet transmission (step S307).
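A minimal sketch of this Query handling is given below; the notification fields and the send_report callback are hypothetical stand-ins, not the interfaces of the embodiment.

# Illustrative sketch: when an IGMP Query arrives in a Packet-In style
# notification, reply with an IGMP Report toward the same port / VLAN so the
# querying multicast router keeps forwarding (step S307).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class InputFrameInfo:
    device: str       # flow processing device that reported the frame
    in_port: str      # external node 300 connection port
    vlan_id: int
    igmp_type: str    # "query", "report", "leave", ...


def handle_igmp_query(info: InputFrameInfo,
                      active_groups: List[str],
                      send_report: Callable[[str, str, int, str], None]) -> None:
    if info.igmp_type != "query":
        return
    # Answer on behalf of the receivers the control device already manages,
    # so the sender-side router does not stop IP multicast transmission.
    for group in active_groups:
        send_report(info.device, info.in_port, info.vlan_id, group)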
The control device 100 deletes the external NW endpoint 503 from which the corresponding frame was input from the information on the destination IP multicast group stored in the IP multicast sender information storage unit 113 (step S352).
Next, the control device 100 performs a flow entry deletion process for deleting the sender external NW endpoint 503 (step S353). The details of step S353 are as shown in FIG. 27 and are the same as those of step S3141.
After step S353, the control device 100 confirms whether the IP multicast sender information for the destination IP multicast group has become 0 as a result of deleting the sender external NW endpoint (step S354). If the IP multicast sender information for the corresponding destination IP multicast group is 0, the control device 100 executes the processes from step S314 already described.
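The sender-side bookkeeping of steps S352 to S354 can be sketched as follows; the storage layout and the callback names are assumptions for illustration only.

# Illustrative sketch: sender bookkeeping corresponding to steps S352-S354.
from collections import defaultdict
from typing import Callable, Dict, Set, Tuple

# IP multicast sender information storage unit 113, simplified:
# (virtual network, group) -> set of sender external NW endpoint ids
senders: Dict[Tuple[str, str], Set[str]] = defaultdict(set)


def remove_sender(vnet: str, group: str, endpoint_id: str,
                  delete_sender_flow_entries: Callable[[str], None],
                  teardown_group: Callable[[str, str], None]) -> None:
    senders[(vnet, group)].discard(endpoint_id)          # S352
    delete_sender_flow_entries(endpoint_id)              # S353 (same as S3141)
    if not senders[(vnet, group)]:                       # S354
        teardown_group(vnet, group)                      # processes from step S314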
In the example of FIG. 16, the sender of IP multicast group address MC1 is the external NW endpoint 503A, and the receiver external NW endpoints are 503B, 503C, and 503D.
The external NW endpoints 503B and 503C are both connected to the external node 300B. The external node 300B may be a single host that communicates with both VLAN ID 20 and VLAN ID 30, or multiple hosts using VLAN ID 20 and VLAN ID 30 may be connected under the external node 300B.
FIG. 28 shows the flow entries set in the flow processing devices 200A, 200B, and 200C when the IP multicast group address MC1 shown in FIG. 16 is configured, together with the forwarding path of the IP multicast packet MC1 established by these flow entries.
The transfer path between the flow processing devices 200 is determined by the transfer path shown in FIG. 4 and by the input port being the external node 300A connection port. As described above, whether the IP multicast packet MC1 is output to the port specified by the external node 300 connection port and the VLAN ID depends on the presence or absence of a receiver.
FIG. 29 shows an example of the flow entry settings for the flow processing device 200A of FIG. 28.
The IPMC sender flow entry 450A-MC1 is set with an action that sends the packet to table 2 when an untagged packet addressed to the IP multicast group MC1, that is, the IP multicast packet MC1, is input from the external node 300A connection port. Since there is no receiver of the IP multicast packet MC1 under the flow processing device 200A, no output destination within the flow processing device 200A is set in the action of the IPMC sender flow entry 450A-MC1. In addition, FIG. 29 also shows the IPMC encapsulation flow entry 451A-MC1.
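For illustration only, the two entries of FIG. 29 could be represented as the following plain data structures. The field names loosely follow OpenFlow conventions, the port names are placeholders, and the content of the encapsulation entry (insertion of the Shim header carrying multicast label ID 65536) is an assumption inferred from the MC2 and MC3 examples below.

# Illustrative sketch only: the flow entries of FIG. 29 as plain data.
SENDER_450A_MC1 = {
    "table": 0,
    "match": {"in_port": "port_to_external_node_300A",
              "vlan_vid": None,            # untagged
              "eth_type": 0x0800,          # IPv4
              "ipv4_dst": "MC1"},          # placeholder for the group address
    "instructions": [
        {"goto_table": 2},                 # no local receivers under 200A,
    ],                                     # so no output action is added here
}

ENCAP_451A_MC1 = {
    "table": 2,
    "match": {"vlan_vid": None, "ipv4_dst": "MC1"},
    "instructions": [
        {"apply_actions": [
            {"push_shim_header": True, "multicast_label_id": 65536},
            # forwarding between the flow processing devices 200 is then
            # handled by the BC/MC base flow entries
        ]},
    ],
}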
FIG. 30 shows an example of the flow entry settings for the flow processing device 200B of FIG. 28.
The IPMC receiver flow entry 452B-MC1 is a flow entry that is referred to after a packet addressed to the BCMC address input from the flow processing device 200A connection port is sent to table 3. When the MPLS label is 65536, corresponding to the IP multicast group MC1, the layer 2 address of the virtual L3SW 501A is set as the layer 2 destination address of the received frame, and the entry specifies an action that changes the frame to VLAN ID 20 and outputs it to the external node 300B connection port, and an action that changes the frame to VLAN ID 30 and outputs it to each of the external node 300B connection port and the external node 300C connection port.
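As a purely illustrative sketch, the receiver entry just described might look like the following data structure; the port names are placeholders and the removal of the Shim header is inferred from the corresponding MC2 and MC3 descriptions.

# Illustrative sketch only: the IPMC receiver flow entry 452B-MC1 of FIG. 30.
RECEIVER_452B_MC1 = {
    "table": 3,
    "match": {"in_port": "port_to_flow_device_200A",
              "multicast_label_id": 65536},            # label of group MC1
    "apply_actions": [
        {"remove_shim_header": True},                  # inferred from the MC2/MC3 text
        {"set_eth_dst": "mac_of_virtual_L3SW_501A"},
        {"set_vlan_vid": 20}, {"output": "port_to_external_node_300B"},
        {"set_vlan_vid": 30}, {"output": "port_to_external_node_300B"},
        {"output": "port_to_external_node_300C"},
    ],
}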
FIG. 31 shows an example of the flow entry settings for the flow processing device 200C of FIG. 28.
The IPMC receiver flow entry 452C-MC1 is a flow entry that is referred to after a packet addressed to the BCMC address input from the flow processing device 200A connection port is sent to table 3. The layer 2 address of the virtual L3SW 501A is set as the layer 2 destination address of the received frame, and an action that changes the frame to VLAN ID 20 and then outputs it to the external node 300D connection port is defined. Further, in the example of FIG. 31, no output is made to the external node 300E connection port, where no receiver exists. In this way, the output from the flow processing device 200C in FIG. 28 to the external NW endpoint 503D in FIG. 16 is realized.
FIG. 32 shows the flow entries set in the flow processing devices 200A, 200B, and 200C when the IP multicast group address MC2 shown in FIG. 17 is configured, together with the forwarding path of the IP multicast packet MC2 established by these flow entries.
The transfer path between the flow processing devices 200 is determined by the transfer path shown in FIG. 4 and by the input port being the external node 300C connection port. Whether the IP multicast packet MC2 is output to the port specified by the external node 300 connection port and the VLAN ID depends on the presence or absence of a receiver. Toward the external node 300B connection port, two IP multicast packets are transmitted: one assigned VLAN ID 20 and one assigned VLAN ID 30.
FIG. 33 shows an example of the flow entry settings for the flow processing device 200B of FIG. 32.
In the IPMC sender flow entry 450B-MC2, an action that sets the layer 2 address of the virtual L3SW 501A as the layer 2 destination address, changes the frame to VLAN ID 20, and outputs it to the external node 300B connection port, an action that changes the frame to VLAN ID 30 and outputs it to the external node 300B connection port, and an action instructing processing in table 2 are defined.
In table 2, the corresponding IPMC encapsulation flow entry defines an action instructing output to the flow processing device 200A connection port after inserting a Shim header assigned the multicast label ID 65537 corresponding to MC2. Thereafter, transfer is performed according to the BCMC base flow entry.
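A sketch of this table-2 encapsulation entry, under the same illustrative conventions as above, follows; the match fields other than the destination group are assumptions.

# Illustrative sketch only: the table-2 encapsulation entry for MC2 on 200B.
ENCAP_451B_MC2 = {
    "table": 2,
    "match": {"ipv4_dst": "MC2",                 # placeholder for the group address
              "vlan_vid": "sender_vlan_id"},     # placeholder; actual VLAN not stated
    "apply_actions": [
        {"push_shim_header": True, "multicast_label_id": 65537},
        {"output": "port_to_flow_device_200A"},  # BC/MC base entries take over next
    ],
}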
The IPMC receiver flow entry 452B-MC2 is a flow entry that is referred to after a packet addressed to the BCMC address input from the flow processing device 200A connection port is sent to table 3. In the IPMC receiver flow entry 452B-MC2, when the MPLS label is 65537, corresponding to the IP multicast group MC2, the Shim header is deleted, the layer 2 address of the virtual L3SW 501A is set as the layer 2 destination address, and an action that changes the frame to VLAN ID 20 and outputs it to the external node 300B connection port and an action that changes the frame to VLAN ID 30 and outputs it to the external node 300B connection port are defined.
In the example of FIG. 33, the IP multicast receiver flow entry 452B-MC2 is set. However, when there is only one sender of the corresponding IP multicast packet, the IP multicast sender flow entry 450B-MC2 already outputs to the external node 300B connection port, so setting the IPMC receiver flow entry 452B-MC2 becomes unnecessary.
FIG. 34 shows an example of the flow entry settings for the flow processing device 200C of FIG. 32.
The IPMC receiver flow entry 452C-MC2 is a flow entry that is referred to after a packet addressed to the BCMC address input from the flow processing device 200A connection port is sent to table 3. The layer 2 address of the virtual L3SW 501A is set as the layer 2 destination address of the received frame, and an action that changes the frame to VLAN ID 20 and then outputs it to the external node 300E connection port is defined. Note that no output is made to the external node 300D connection port, where no receiver exists.
In this way, the outputs from the flow processing devices 200B and 200C in FIG. 32 to the external NW endpoints 503B, 503C, and 503F in FIG. 17 are realized.
FIG. 35 shows the flow entries set in the flow processing devices 200A, 200B, and 200C when the IP multicast group address MC3 shown in FIG. 18 is configured, together with the forwarding path of the IP multicast packet MC3 established by these flow entries.
The transfer path between the flow processing devices 200 is determined by the transfer path shown in FIG. 4 and by the input port being the external node 300B connection port. Whether the IP multicast packet MC3 is output to the port specified by the external node 300 connection port and the VLAN ID depends on the presence or absence of a receiver. In the IPMC sender flow entry 450B-MC3, the VLAN ID is changed to 30 before the frame is output.
FIG. 36 shows an example of the flow entry settings for the flow processing device 200A of FIG. 35.
The IPMC receiver flow entry 452A-MC3 is a flow entry that is referred to after a packet addressed to the BCMC address input from the flow processing device 200B connection port is sent to table 3. When the MPLS label is 65538, corresponding to the IP multicast group MC3, an action is defined that deletes the Shim header, sets the layer 2 address of the virtual L3SW 501A as the layer 2 destination address, removes the VLAN ID, and then outputs the frame to the external node 300A connection port.
FIG. 37 shows an example of the flow entry settings for the flow processing device 200B of FIG. 35.
In table 2, the corresponding IPMC encapsulation flow entry defines an action instructing output to the flow processing device 200A connection port after inserting a Shim header assigned the corresponding multicast label ID 65538. Thereafter, transfer is performed according to the BCMC base flow entry.
The IPMC receiver flow entry 452B-MC3 is a flow entry that is processed after a packet addressed to the BCMC address input from the flow processing device 200A connection port is sent to table 3. In this example, IP multicast reception is possible at each receiver external NW endpoint 503 to which a receiver belongs; that is, IP multicast reception beyond the virtual L3SW 501A becomes possible.
Further, even when the same IP multicast group address MC1 is used in a virtual network different from the virtual network 500A (for example, the virtual network 500B), the IP multicast receivers for MC1 of the virtual network 500A are managed independently, and independent IP multicast distribution is possible.
In addition, since the IPMC encapsulation flow entry 451 and the IPMC receiver flow entry 452 are set only on the condition that both the external NW endpoint 503 (sender) from which the IP multicast packet is input and the external NW endpoint 503 (receiver) to which a receiver node 300 of the IP multicast packet is connected are present, flow entry resources can be saved. When only the sender is present, a Drop flow entry is set as the IPMC sender flow entry 450.
Moreover, the number of flow entries on the flow processing device 200 that must be dynamically controlled when communication is started or stopped is small: 2 when there is a sender external NW endpoint 503, 1 when there is at least one receiver external NW endpoint 503, 0 when there is neither a sender nor a receiver external NW endpoint 503, and 3 when there are both a sender external NW endpoint 503 and a receiver external NW endpoint 503. For this reason, IP multicast communication can be started and stopped at high speed.
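The entry-count rule just stated can be expressed, purely for illustration, as the following small function.

# Illustrative sketch of the entry-count rule: the number of flow entries on a
# flow processing device 200 that must be touched when communication starts or stops.
def dynamic_entry_count(has_sender_endpoint: bool, has_receiver_endpoint: bool) -> int:
    if has_sender_endpoint and has_receiver_endpoint:
        return 3   # sender entry 450 + encapsulation entry 451 + receiver entry 452
    if has_sender_endpoint:
        return 2
    if has_receiver_endpoint:
        return 1
    return 0


assert dynamic_entry_count(True, True) == 3
assert dynamic_entry_count(False, False) == 0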
In the above description, IPv4 has been used, but application to IPv6 is also possible. In that case, the format of the IP multicast group address (including well-known addresses) is changed from IPv4 to IPv6, and since MLD is used instead of IGMP for the IP multicast default flow entry, this can be handled by using ICMPv6 and referring to the message type as a match condition.
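For illustration, assuming a simple dictionary representation of match conditions, the IPv4 (IGMP) default entry and an IPv6 (MLD over ICMPv6) variant could be expressed as follows; the EtherType, protocol number, and ICMPv6 type values are standard, while the entry layout is an assumption.

# Illustrative sketch: match conditions for the "IP multicast default" entry in
# the IPv4 (IGMP) case versus an IPv6 (MLD over ICMPv6) variant.
IGMP_DEFAULT_MATCH = {"eth_type": 0x0800, "ip_proto": 2}          # IPv4, IGMP

MLD_DEFAULT_MATCHES = [
    {"eth_type": 0x86DD, "ip_proto": 58, "icmpv6_type": t}        # IPv6, ICMPv6
    for t in (130, 131, 132, 143)   # MLD Query, Report, Done, MLDv2 Report
]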
In the above description, reception of an IGMP Leave has been exemplified as the trigger for deleting a receiver, but a reception timeout of IGMP Reports can also be used as the trigger. Furthermore, considering that there may be more than one receiving node under each external NW endpoint 503, the control device may transmit an IGMP Query upon receiving an IGMP Leave to check whether another host exists, and delete the external NW endpoint 503 from the output destinations only when the response times out.
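A minimal sketch of this Leave-then-Query variant is shown below; the timer handling and the callback signatures are assumptions for illustration.

# Illustrative sketch: on IGMP Leave, send a group-specific Query and only
# remove the endpoint from the output destinations if no Report arrives
# before the timeout.
import threading
from typing import Callable, Dict, Tuple


class LeaveHandler:
    def __init__(self, send_query: Callable[[str, str], None],
                 remove_endpoint: Callable[[str, str], None],
                 timeout_s: float = 10.0) -> None:
        self._send_query = send_query
        self._remove_endpoint = remove_endpoint
        self._timeout_s = timeout_s
        self._pending: Dict[Tuple[str, str], threading.Timer] = {}

    def on_leave(self, endpoint_id: str, group: str) -> None:
        self._send_query(endpoint_id, group)          # check for other hosts
        timer = threading.Timer(self._timeout_s,
                                self._remove_endpoint, args=(endpoint_id, group))
        self._pending[(endpoint_id, group)] = timer
        timer.start()

    def on_report(self, endpoint_id: str, group: str) -> None:
        timer = self._pending.pop((endpoint_id, group), None)
        if timer is not None:
            timer.cancel()                            # another host is still listening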
In the above description, the sender external NW endpoint 503 and the receiver external NW endpoint 503 are dynamically detected. However, even when these are set statically, communication beyond the virtual L3SW 501 using the multicast label ID can of course be realized.
The device control unit generates sender control information that defines the processing to be executed by the ingress-side flow processing device that receives the IP multicast packet from the sending node (sender) of the IP multicast group, transfer control information for transferring the IP multicast packet along the route after adding a predetermined header to the IP multicast packet or rewriting its header, and receiver control information with which the egress-side flow processing device of the route deletes the predetermined header or restores the header and forwards the IP multicast packet to the receiving node, and the control device sets these pieces of control information in each of the flow processing devices on the route.
A control device in which, when a receiving node exists under the ingress-side flow processing device, contents instructing packet transfer to that receiving node are added to the sender control information.
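A compact sketch of this generation step, including the ingress-side receiving-node case, is given below; the data layout and function name are illustrative assumptions only.

# Illustrative sketch only: generating the three kinds of control information
# described above and the ingress-side receiving-node addition.
from typing import List, Optional


def build_control_information(route: List[str], group: str, label: int,
                              ingress_receiver_port: Optional[str] = None) -> dict:
    ingress, egress = route[0], route[-1]
    # sender control information for the ingress-side flow processing device
    sender_info = {"device": ingress,
                   "match": {"group": group},
                   "actions": ["add_predetermined_header"]}
    if ingress_receiver_port is not None:
        # a receiving node also exists under the ingress-side device
        sender_info["actions"].append({"output": ingress_receiver_port})
    # transfer control information for the intermediate devices on the route
    transfer_info = [{"device": dev,
                      "match": {"label": label},
                      "actions": ["forward_along_route"]}
                     for dev in route[1:-1]]
    # receiver control information for the egress-side flow processing device
    receiver_info = {"device": egress,
                     "match": {"label": label},
                     "actions": ["remove_or_restore_header",
                                 "deliver_to_receiving_node"]}
    return {"sender": sender_info, "transfer": transfer_info,
            "receiver": receiver_info}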
A control device having a function of executing broadcast in units of VLANs by adding a predetermined header to the broadcast packet or rewriting its header.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention makes it possible to prevent an increase in the resources used and in the load on a control device or a flow processing device when multicast is performed in a centrally controlled network. A control device comprises: a sender management unit for managing sending nodes of an IP (Internet Protocol) multicast group on a virtual network, on the basis of a notification received from a subordinate flow processing device; a receiver management unit for managing receiving nodes of the IP multicast group on the virtual network, on the basis of a notification received from a subordinate packet processing device; a path calculation unit for calculating an IP multicast path for an IP multicast group in which at least one pair of a sending node and a receiving node is present; and a device control unit for setting, in the flow processing devices located on the calculated path, control information for transferring the IP multicast along that path.
PCT/JP2014/065122 2013-06-10 2014-06-06 Control device, communication system, and method and program for controlling a relay device WO2014199924A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013121952 2013-06-10
JP2013-121952 2013-06-10

Publications (1)

Publication Number Publication Date
WO2014199924A1 (fr)

Family

ID=52022220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/065122 WO2014199924A1 (fr) Control device, communication system, and method and program for controlling a relay device

Country Status (1)

Country Link
WO (1) WO2014199924A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006095391A1 (fr) * 2005-03-04 2006-09-14 Fujitsu Limited Packet relay device
JP2007036961A (ja) * 2005-07-29 2007-02-08 Kddi Corp Radio access network system and handoff control method for controlling handoff
JP2010081471A (ja) * 2008-09-29 2010-04-08 Yokogawa Electric Corp Network system
JP2011101082A (ja) * 2009-11-04 2011-05-19 Yokogawa Electric Corp Information transfer system
WO2012090993A1 (fr) * 2010-12-28 2012-07-05 NEC Corporation Information system, control device, communication method, and program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016042385A1 (fr) * 2014-09-18 2016-03-24 Panasonic Intellectual Property Management Co., Ltd. Control device and program
JP2016063418A (ja) * 2014-09-18 2016-04-25 Panasonic Intellectual Property Management Co., Ltd. Control device and program
WO2016110897A1 (fr) * 2015-01-09 2016-07-14 NEC Corporation Communication system, communication device, communication method, and control program
WO2019150826A1 (fr) * 2018-02-05 2019-08-08 Sony Corporation System controller, network system, and method in a network system
JPWO2019150826A1 (ja) * 2018-02-05 2021-01-14 Sony Corporation System controller, network system, and method in a network system
US11516127B2 (en) 2018-02-05 2022-11-29 Sony Corporation System controller, controlling an IP switch including plural SDN switches

Similar Documents

Publication Publication Date Title
JP6418261B2 (ja) Communication system, node, control device, communication method, and program
JP5862769B2 (ja) Communication system, control device, communication method, and program
CN113364610B (zh) Network device management method, apparatus, and system
US10645006B2 (en) Information system, control apparatus, communication method, and program
US9504016B2 (en) Optimized multicast routing in a Clos-like network
CN104335537A (zh) System and method for layer 2 multicast multipath transmission
WO2011053290A1 (fr) Method and apparatus for tracing a multicast flow
EP2989755B1 (fr) Efficient multicast delivery to dually connected (VPC) hosts in overlay networks
WO2016116939A1 (fr) Engines for pruning overlay network traffic
WO2013114489A1 (fr) Control method, control apparatus, communication system, and associated program
US20190007279A1 (en) Control apparatus, communication system, virtual network management method, and program
WO2014199924A1 (fr) Control device, communication system, and method and program for controlling a relay device
JP6206493B2 (ja) Control device, communication system, relay device control method, and program
KR102024545B1 (ko) Apparatus and method for original packet flow mapping in an overlay network
JP2015192391A (ja) Network system, packet transmission device, packet transmission method, and information processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14811687

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14811687

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP