WO2017018989A1 - Simultaneous processing of flow tables - Google Patents

Simultaneous processing of flow tables

Info

Publication number
WO2017018989A1
Authority
WO
WIPO (PCT)
Prior art keywords
flow tables
data packet
flow
field
operations
Prior art date
Application number
PCT/US2015/041980
Other languages
French (fr)
Inventor
Diego Valverde Garro
Claudio Enrique VIQUEZ CALDERON
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/041980 priority Critical patent/WO2017018989A1/en
Publication of WO2017018989A1 publication Critical patent/WO2017018989A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/745 Address table lookup; Address filtering
    • H04L45/7452 Multiple parallel or consecutive lookup operations


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An example method is described in which a data packet is received at a network switch. Operations are then performed simultaneously on a plurality of flow tables stored in the network switch. The operations include attempting to match a field of the data packet to a field of an entry in each of the flow tables. An action to be performed on the data packet is identified based on these operations.

Description

SIMULTANEOUS PROCESSING OF FLOW TABLES
BACKGROUND
[0001] OpenFlow is a communication protocol that separates the data path and the control path functions. The data path resides on a switch, which is responsible for packet forwarding. The control path, however, resides on a separate controller, which is responsible for high-level routing decisions. The switch and the controller communicate using the OpenFlow protocol.
[0002] The switch maintains a set of flow tables, configured by the controller. Each entry in a flow table defines a "flow" or set of characteristics and a corresponding action to be taken on any data packet which matches the flow. The set of flow tables is numbered, and incoming packets are processed against the flow tables in order, one-by-one, to locate matching flows.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] FIG. 1 is a block diagram of an example system of the present disclosure;
[0004] FIG. 2 illustrates a flowchart of an example method for simultaneous processing of flow tables;
[0005] FIG. 3 illustrates a flowchart of another example method for simultaneous processing of flow tables; and
[0006] FIG. 4 depicts an example high-level block diagram of a computer that can be transformed into a machine that is dedicated to perform the functions described herein.
DETAILED DESCRIPTION
[0007] The present disclosure broadly discloses a method, computer system, and non-transitory computer-readable medium for simultaneous processing of flow tables. When a data packet is received by a switch that supports the OpenFlow protocol, the data packet is processed against the switch's flow tables one-by-one, in a serial fashion. For instance, the data packet is first processed against the first flow table. If a match is found in the first flow table, and the match included an instruction to process the data packet against the second flow table, then the data packet is next processed against the second flow table. The process continues like this until the data packet has been processed against each of the flow tables, some of which may match and some of which may not match. Thus, for a set of n flow tables, it takes an average of O(n) table operations to complete processing of the data packet.
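The serial pipeline described in the paragraph above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the table layout, field names, and action strings are assumptions of this sketch.

```python
# Minimal sketch of the serial pipeline described above. The table
# layout and field names are illustrative assumptions.

def process_serial(packet, flow_tables):
    """Match a packet against numbered flow tables one-by-one.

    Each table is a list of (rule, action) pairs; a rule is a dict of
    field -> required value. The first matching entry in each table
    contributes its action, then processing moves on to the next table,
    so a set of n tables costs O(n) table operations."""
    actions = []
    for table in flow_tables:
        for rule, action in table:
            if all(packet.get(f) == v for f, v in rule.items()):
                actions.append(action)
                break  # first matching entry in a table wins
    return actions

tables = [
    [({"eth_type": 0x0800}, "goto_next")],   # table 0: classify IPv4
    [({"ip_dst": "10.0.0.2"}, "output:2")],  # table 1: route by destination
]
pkt = {"eth_type": 0x0800, "ip_dst": "10.0.0.2"}
```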
[0008] Examples of the present disclosure provide a novel method for accelerating the processing of flow tables using speculative execution. For instance, when a data packet is received by a switch in a software-defined networking (SDN) network, examples of the present disclosure perform operations simultaneously on a plurality of flow tables stored in the switch. Actions associated with the flow tables are automatically retrieved and accumulated in a stack. If any of the flow tables are determined not to match the data packet, then the corresponding actions are removed from the stack or are replaced with the corresponding actions for a "miss" case. Thus, the total time taken to process a packet against a plurality of flow tables can be reduced dramatically, e.g., as compared to methods that perform operations on the flow tables one-by-one. Examples of the present disclosure can be implemented directly in hardware, e.g., using an application specific integrated circuit (ASIC), or can be implemented in software, e.g., where a separate thread runs for each table element.
[0009] Although the present disclosure is described within the context of the OpenFlow communication protocol, the OpenFlow protocol is considered to be only one specific example in which the present disclosure is applicable. For instance, examples of the present disclosure could be advantageously deployed to accelerate processing of any software-defined networking (SDN) pipeline. Moreover, examples of the disclosed techniques assume that there are no dependencies between flow tables.
[0010] FIG. 1 is a block diagram of an example system 100 of the present disclosure. In one example, the system 100 generally comprises a switch 102 and a controller 104 that communicate over a secure connection 106, such as a secure sockets layer (SSL) connection.
[0011] The switch 102 stores a plurality of flow tables 108₁-108ₙ, hereinafter collectively referred to as "flow tables 108." Each of the flow tables 108 comprises at least one entry. In one example, each entry includes at least three fields: (1) a rule (R), or set of data packet characteristics that defines a "flow," for use in classifying incoming data packets; (2) an action (A) that defines how data packets that are part of the corresponding flow should be processed (e.g., forward to a specific port, encapsulate and forward to the controller 104, or drop); and (3) statistics (S) that track the number of data packets and/or bytes for the corresponding flow and the time elapsed since an incoming data packet was last matched to the flow.
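One way to encode the three-field entry just described is sketched below. The class and attribute names are assumptions for illustration; the patent does not prescribe a concrete data layout.

```python
import time
from dataclasses import dataclass

# Illustrative encoding of a flow table entry with its three fields:
# a rule (R), an action (A), and statistics (S). Names are assumptions.

@dataclass
class FlowEntry:
    rule: dict               # R: packet characteristics defining the flow
    action: str              # A: e.g. "output:2", "controller", or "drop"
    packet_count: int = 0    # S: data packets matched to this flow
    byte_count: int = 0      # S: bytes matched to this flow
    last_match: float = 0.0  # S: time of the most recent match

    def matches(self, packet):
        # A packet matches when every field required by the rule is
        # present with the required value.
        return all(packet.get(f) == v for f, v in self.rule.items())

    def record_match(self, packet_len):
        self.packet_count += 1
        self.byte_count += packet_len
        self.last_match = time.time()

entry = FlowEntry(rule={"ip_dst": "10.0.0.2"}, action="output:2")
if entry.matches({"ip_dst": "10.0.0.2"}):
    entry.record_match(64)
```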
[0012] The controller 104 provides commands to the switch 102 over the secure link 106, including commands to add or delete entries in the switch's flow tables 108. The switch 102 may also forward data packets to the controller 104 over the secure link 106, for example when the data packets do not match any of the flows in the switch's flow tables 108. The switch 102 and the controller 104 may communicate according to the OpenFlow communication protocol or other protocols compatible with SDN.
[0013] When an incoming data packet is received by the switch 102, the switch 102 processes the incoming data packet against the flow tables 108 in order to determine what action or actions to take on the data packet. For instance, if the incoming data packet is matched to a flow in one of the flow tables 108, then the switch 102 will process the incoming data packet according to the actions that correspond to the flow. This may involve forwarding the incoming data packet to a particular port so that it can be delivered to an endpoint device 110₁-110ₘ, hereinafter collectively referred to as "endpoint devices 110." In one example, discussed in further detail in connection with FIGs. 2 and 3, the switch 102 processes the incoming data packet against a plurality of the flow tables 108 simultaneously, using a speculative execution technique.
[0014] Actions to be performed on the incoming data packet are stored by the switch 102 in a stack 112. The stack 112 is a data structure that includes a plurality of slots 114₁-114ₙ, hereinafter collectively referred to as "slots 114," for storing the actions. In one example, the stack 112 includes one slot 114 for each of the flow tables 108 stored in the switch 102.
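A stack with one slot per flow table, as described above, might look like the sketch below. The interface (accumulate, replace-on-miss) mirrors the behavior described in paragraphs [0008] and [0024]; the class and method names are assumptions.

```python
# Sketch of the action stack 112 with one slot 114 per flow table 108.
# Actions are accumulated speculatively and can later be removed, or
# replaced by a "miss" action, if a table fails to match.

class ActionStack:
    def __init__(self, num_tables):
        self.slots = [None] * num_tables  # one slot per flow table

    def accumulate(self, table_index, action):
        self.slots[table_index] = action

    def replace_on_miss(self, table_index, miss_action=None):
        # Drop the speculative action, or swap in the miss-case action.
        self.slots[table_index] = miss_action

    def actions(self):
        # The actions that survive speculation, in table order.
        return [a for a in self.slots if a is not None]

stack = ActionStack(num_tables=3)
stack.accumulate(0, "goto_next")
stack.accumulate(1, "output:2")
stack.accumulate(2, "drop")
stack.replace_on_miss(2)  # table 2 turned out not to match
```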
[0015] FIG. 2 illustrates a flowchart of an example method 200 for simultaneous processing of flow tables. In one example, the method 200 may be performed by the switch 102 illustrated in FIG. 1 or by a computer as illustrated in FIG. 4 and discussed below. For simplicity of explanation, the method 200 is described as being performed by the switch 102 of FIG. 1. Reference is made in the discussion of the method 200 to elements of FIG. 1 and FIG. 2.
[0016] At block 202, the method 200 begins. At block 204, the switch 102 obtains an incoming data packet.
[0017] At block 206, the switch 102 performs operations on a plurality of flow tables simultaneously. In one example, the operations performed on the plurality of flow tables include attempting to match a field of the incoming data packet to a field of an entry in each of the flow tables 108. Thus, the method 200 performs operations on flow tables even when there is no guarantee that those flow tables will match the incoming data packet, or even that a path to the flow tables will exist based on matches in other flow tables.
[0018] At block 208, the switch 102 identifies at least one action to be performed on the incoming data packet, based on the operations performed in block 206.
[0019] The method 200 ends in block 210.
[0020] FIG. 3 illustrates a flowchart of another example method 300 for simultaneous processing of flow tables. In one example, the method 300 may be performed by the switch 102 illustrated in FIG. 1 or by a computer as illustrated in FIG. 4 and discussed below. For simplicity of explanation, the method 300 is described as being performed by the switch 102 of FIG. 1. Reference is made in the discussion of the method 300 to elements of FIG. 1 and FIG. 3.
[0021] At block 302, the method 300 begins. At block 304, the switch 102 obtains an incoming data packet.
[0022] At block 306, the switch 102 retrieves a plurality of flow tables. In one example, the plurality of flow tables comprises a subset of all of the flow tables 108 stored by the switch. This subset is herein referred to as a "speculation window." The size of the speculation window, i.e., the number of flow tables in the speculation window, may be configurable based on hardware and/or software constraints or requirements. For instance, referring to the example illustrated in FIG. 1 , the speculation window may include anywhere between two and n flow tables.
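The carving of the stored flow tables into speculation windows, per block 306, can be sketched as follows. The window size of four is an arbitrary assumption; as the paragraph above notes, the size would be chosen from hardware and/or software constraints.

```python
# Sketch of block 306: carving the full set of flow tables into
# fixed-size speculation windows, processed one window at a time.

def speculation_windows(flow_tables, window_size):
    """Yield consecutive subsets of the flow tables for joint processing."""
    for start in range(0, len(flow_tables), window_size):
        yield flow_tables[start:start + window_size]

tables = [f"table_{i}" for i in range(15)]
windows = list(speculation_windows(tables, window_size=4))
# 15 tables with a window size of 4 gives windows of 4, 4, 4, and 3 tables.
```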
[0023] At block 308, the switch 102 performs lookups to each of the flow tables in the speculation window simultaneously. In one example, performing a lookup to a flow table involves retrieving the actions associated with each of the entries in the flow table and storing those actions in the stack 112. Thus, the method 300 assumes, speculatively, that each flow table in the speculation window matches the incoming data packet.
[0024] At block 310, the switch 102 simultaneously compares the incoming data packet to the entries in each of the flow tables in the speculation window to determine which, if any, of the flow tables matches the incoming data packet. In one example, the comparisons are performed for each of the flow tables in the speculation window simultaneously. In one example, the comparison involves removing from the stack 112 any actions corresponding to flow tables that do not match the incoming data packet. In another example, the removed actions are replaced in the stack 112 with actions to be applied in cases of flow table misses (i.e., no matches).
[0025] At block 312, the switch 102 confirms that there are no flow tables remaining to be processed. If there are flow tables stored in the switch 102 for which operations have not been performed, then the method 300 returns to block 306 and retrieves a plurality of the unprocessed flow tables. The method 300 then proceeds as described above to perform simultaneous lookups and comparisons for the retrieved plurality of unprocessed flow tables.
[0026] Alternatively, if operations have been performed for all of the flow tables stored in the switch 102, then the method 300 proceeds to block 314. In block 314, the switch 102 performs the actions stored in the stack 112.
[0027] The method 300 ends in block 316.
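Method 300 as a whole can be rendered in software roughly as below, using one worker thread per table in the speculation window (the disclosure suggests a separate thread per table element for software implementations). The table representation, function names, and window size are assumptions of this sketch, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Software rendition of method 300: for each speculation window, look up
# every table concurrently, speculatively accumulate each table's action,
# and let non-matching tables contribute their miss-case action instead.

def lookup_and_compare(packet, table):
    """Return the first matching entry's action, else the miss action."""
    for rule, action in table["entries"]:
        if all(packet.get(f) == v for f, v in rule.items()):
            return action
    return table.get("miss_action")  # applied in the flow-table-miss case

def process_speculative(packet, flow_tables, window_size=4):
    stack = []
    for start in range(0, len(flow_tables), window_size):
        window = flow_tables[start:start + window_size]
        # Blocks 308-310: lookups and comparisons run for every table
        # in the window simultaneously, one worker per table.
        with ThreadPoolExecutor(max_workers=len(window)) as pool:
            stack.extend(pool.map(lambda t: lookup_and_compare(packet, t),
                                  window))
    # Block 314: the actions that survive speculation are performed.
    return [a for a in stack if a is not None]

tables = [
    {"entries": [({"eth_type": 0x0800}, "goto_next")], "miss_action": None},
    {"entries": [({"ip_dst": "10.0.0.2"}, "output:2")], "miss_action": "drop"},
]
```

Note that `pool.map` preserves table order, so the accumulated actions come out in the same order a serial pass would produce them.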
[0028] The methods 200 and 300 can decrease the processing time for an SDN pipeline. For example, assume that it takes one clock cycle to perform a table lookup and one clock cycle to accumulate actions in the stack. Further assume that the SDN pipeline includes fifteen flow tables, i.e., n = 15. If one were to perform lookups to the fifteen flow tables one-by-one, and if every flow table matched an incoming data packet, then two clock cycles would be consumed in the processing of each flow table: one clock cycle to perform the lookup, and one clock cycle to store the associated actions. Thus, a total of thirty clock cycles would be consumed. If, instead, lookups and comparisons are performed for each of the fifteen flow tables simultaneously as described herein, then only two clock cycles are consumed: one clock cycle to perform all of the lookups simultaneously, and one clock cycle to accumulate the actions from the matching flow tables. It is noted that this example represents a best case scenario in which every flow table matches the incoming data packet. For any flow tables that do not match the incoming data packet, one additional clock cycle will be consumed to remove the corresponding actions from the stack.
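The cycle counts in the example above reduce to simple arithmetic. The per-miss cost below assumes one removal per clock cycle, which is one reading of the passage; the passage could also be read as one cycle total for all removals.

```python
# Arithmetic from the clock-cycle example above, under its stated
# assumptions: one cycle per table lookup, one cycle to accumulate
# actions in the stack.

n = 15  # flow tables in the pipeline

# Serial processing, every table matching: one lookup cycle plus one
# store cycle per table.
serial_cycles = n * (1 + 1)

# Speculative processing, best case: one cycle for all lookups at once,
# one cycle to accumulate all matching actions.
speculative_cycles = 1 + 1

def speculative_with_misses(misses):
    # Assumption: each non-matching table costs one extra cycle to
    # remove its action from the stack.
    return speculative_cycles + misses
```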
[0029] It should be noted that although not explicitly specified, at least one block, function, or operation of the method 200 or the method 300 described above may include a storing, displaying and/or outputting operation depending upon the particular application. In other words, any data, records, fields, and/or intermediate results discussed in the methods can be stored, displayed, and/or outputted to another device depending upon the particular application. Furthermore, blocks, functions, or operations in FIG. 2 and FIG. 3 that recite a determining operation, or involve a decision, do not necessarily imply that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step.
[0030] FIG. 4 depicts an example high-level block diagram of a computer that can be transformed into a machine that is dedicated to perform the functions described herein. Notably, no computer or machine currently exists that performs the functions as described herein. As a result, the examples of the present disclosure improve the operation and functioning of the computer to simultaneously process a plurality of flow tables, as disclosed herein.
[0031] As depicted in FIG. 4, the computer 400 comprises a hardware processor element 402, e.g., a central processing unit (CPU), a microprocessor, or a multi-core processor; a memory 404, e.g., random access memory (RAM) and/or read only memory (ROM); a module 405 for processing flow tables; and various input/output devices 406, e.g., storage devices, including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive, a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output port, an input port, and a user input device, such as a keyboard, a keypad, a mouse, a microphone, and the like. Although only one processor element is shown, it should be noted that the computer may employ a plurality of processor elements. Furthermore, although one computer is shown in the figure, if the method(s) as discussed above is implemented in a distributed or parallel manner for a particular illustrative example, i.e., the blocks of the above method(s) or the entire method(s) are implemented across multiple or parallel computers, then the computer of this figure is intended to represent each of those multiple computers. Furthermore, at least one hardware processor can be utilized in supporting a virtualized or shared computing environment. The virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtualized environments, hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented.
[0032] It should be noted that the present disclosure can be implemented by machine readable instructions and/or in a combination of machine readable instructions and hardware, e.g., using application specific integrated circuits (ASIC), a programmable logic array (PLA), including a field-programmable gate array (FPGA), or a state machine deployed on a hardware device, a computer or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the blocks, functions and/or operations of the above disclosed methods. Furthermore, when a hardware processor executes instructions to perform "operations", this could include the hardware processor performing the operations directly and/or facilitating, directing, or cooperating with another hardware device or component, e.g., a co-processor and the like, to perform the operations.
[0033] In one example, instructions and data for the present module or process 405 for simultaneous processing of flow tables, e.g., machine readable instructions can be loaded into memory 404 and executed by hardware processor element 402 to implement the blocks, functions or operations as discussed above in connection with the exemplary method 200 or method 300. For instance, the module 405 may include a plurality of programming code components, including a lookup component 408, a comparison component 410, and a stack builder component 412. These programming code components may be included in a computing system that includes a switch and a controller that cooperate to perform packet forwarding, such as the system 100.
[0034] The lookup component 408 may be configured to simultaneously retrieve the actions associated with each of the entries in a plurality of flow tables and to send those actions to a stack or similar data structure.
[0035] The comparison component 410 may be configured to simultaneously compare an incoming data packet to the entries in each of a plurality of flow tables and to determine which, if any, of the flow tables matches the incoming data packet. The comparison component 410 may be further configured to issue a command to remove any actions corresponding to flow tables that do not match the incoming data packet from a stack or similar data structure.
[0036] The stack builder component 412 may be configured to generate a stack of actions to be performed on an incoming data packet, based on commands received from the lookup component 408 and/or the comparison component 410.
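The cooperation of the three components described in paragraphs [0034]-[0036] can be sketched in Python. All names here (ActionStack, lookup_all, compare_and_prune, the dict-based flow table entries) are illustrative assumptions rather than the actual module 405, and the simultaneous hardware lookups are emulated sequentially:

```python
class ActionStack:
    """Sketch of the stack builder component 412: accumulates candidate
    actions tagged with the flow table that contributed them."""

    def __init__(self):
        self._actions = []  # list of (table_id, action) pairs

    def push(self, table_id, action):
        self._actions.append((table_id, action))

    def remove_table(self, table_id):
        # Drop every action contributed by a non-matching flow table,
        # as commanded by the comparison component.
        self._actions = [(t, a) for (t, a) in self._actions if t != table_id]

    def actions(self):
        return [a for (_, a) in self._actions]


def lookup_all(flow_tables, stack):
    """Sketch of the lookup component 408: push the action of every entry
    in every flow table onto the stack before any match result is known."""
    for table_id, table in enumerate(flow_tables):
        for entry in table:
            stack.push(table_id, entry["action"])


def compare_and_prune(flow_tables, packet_field, stack):
    """Sketch of the comparison component 410: test the packet field
    against each flow table and remove the actions of tables that do not
    match the incoming data packet."""
    for table_id, table in enumerate(flow_tables):
        if not any(entry["match"] == packet_field for entry in table):
            stack.remove_table(table_id)
```

In this sketch the lookups and comparisons run over a shared stack, so the actions that survive pruning are exactly those contributed by matching flow tables; a hardware realization could issue the per-table lookups and comparisons in parallel instead of in a loop.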
[0037] The processor executing the machine readable instructions relating to the above described method(s) can be perceived as a programmed processor or a specialized processor. As such, the present module 405 for simultaneous processing of flow tables, including associated data structures, of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, nonvolatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette and the like. More specifically, the computer-readable storage device may comprise any physical devices that provide the ability to store information such as data and/or instructions to be accessed by a processor or a computing device such as a computer or an application server.
[0038] It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.

Claims

What is claimed is:
1. A method, comprising:
obtaining a data packet at a network switch;
performing operations on a plurality of flow tables stored in the network switch, wherein the operations include attempting to match a field of the data packet to a field of an entry in each of the plurality of flow tables, and wherein the operations are performed on all of the plurality of flow tables simultaneously; and
identifying an action to be performed on the data packet, based on the operations.
2. The method of claim 1, wherein the operations comprise:
performing a plurality of lookups to each of the plurality of flow tables simultaneously, wherein the plurality of lookups results in a plurality of actions being stored in a data structure, the plurality of actions including each action associated with each entry in each flow table of the plurality of flow tables; and
comparing the field of the data packet to fields of each entry in each flow table of the plurality of flow tables.
3. The method of claim 2, wherein the identifying comprises:
removing from the data structure any action in the plurality of actions that corresponds to an entry in a flow table of the plurality of flow tables for which a field cannot be matched to the field of the data packet.
4. The method of claim 3, further comprising:
replacing an action that is removed from the data structure with a new action to be applied in an event that no field of a flow table of the plurality of flow tables matches the field of the data packet.
5. The method of claim 1, further comprising:
performing the action on the data packet.
6. The method of claim 1, wherein the network switch stores n flow tables, and the plurality of flow tables includes a number of flow tables between two and n.
7. The method of claim 6, wherein the number is configurable.
8. The method of claim 1, wherein the operations are performed without knowing whether a path exists to each flow table in the plurality of flow tables.
9. A non-transitory machine-readable storage medium encoded with instructions executable by a processor, the machine-readable storage medium comprising:
instructions to perform operations on a plurality of flow tables stored in a network switch, wherein the operations include attempting to match a field of an incoming data packet to a field of an entry in each of the plurality of flow tables, and wherein the operations are performed on all of the plurality of flow tables simultaneously; and
instructions to identify an action to be performed on the incoming data packet, based on the operations.
10. The non-transitory machine-readable storage medium of claim 9, wherein the instructions to perform operations comprise:
instructions to perform a plurality of lookups to each of the plurality of flow tables simultaneously, wherein the plurality of lookups results in a plurality of actions being stored in a data structure, the plurality of actions including each action associated with each entry in each flow table of the plurality of flow tables; and
instructions to compare the field of the incoming data packet to fields of each entry in each flow table of the plurality of flow tables.
11. The non-transitory machine-readable storage medium of claim 10, wherein the instructions to identify comprise:
instructions to remove from the data structure any action in the plurality of actions that corresponds to an entry in a flow table of the plurality of flow tables for which a field cannot be matched to the field of the incoming data packet.
12. The non-transitory machine-readable storage medium of claim 11, wherein the instructions to identify further comprise:
instructions to replace an action that is removed from the data structure with a new action to be applied in an event that no field of a flow table of the plurality of flow tables matches the field of the data packet.
13. The non-transitory machine-readable storage medium of claim 9, wherein the operations are performed without knowing whether a path exists to each flow table in the plurality of flow tables.
14. A computer system comprising:
a switch storing a plurality of flow tables; and
an instruction set to cooperate with the switch to:
perform operations on at least two of the plurality of flow tables stored in the switch, wherein the operations include attempting to match a field of an incoming data packet to a field of an entry in each of the at least two of the plurality of flow tables, and wherein the operations are performed on all of the at least two of the plurality of flow tables simultaneously; and
identify an action to be performed on the incoming data packet, based on the operations.
15. The computer system of claim 14, wherein the operations comprise:
performing a plurality of lookups to each of the at least two of the plurality of flow tables simultaneously, wherein the plurality of lookups results in a plurality of actions being stored in a data structure, the plurality of actions including each action associated with each entry in each flow table of the at least two of the plurality of flow tables;
comparing the field of the incoming data packet to fields of each entry in each flow table of the at least two of the plurality of flow tables; and
removing from the data structure any action in the plurality of actions that corresponds to an entry in a flow table of the at least two of the plurality of flow tables for which a field cannot be matched to the field of the incoming data packet.
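The method of claims 1 through 4 can be illustrated end to end with a short Python sketch. The function name, the dict-based table entries, and the "to_controller" table-miss default are assumptions made for illustration; an actual switch would issue the per-table lookups in parallel hardware, which this sequential code only emulates:

```python
def process_packet(flow_tables, packet_field, table_miss_action="to_controller"):
    """Identify the actions to perform on a data packet whose relevant
    field is packet_field, per the claimed operations."""
    # Claim 2: lookups to every flow table populate the data structure
    # with every action, before any match result is known.
    actions = [(i, entry["action"])
               for i, table in enumerate(flow_tables)
               for entry in table]

    # Claims 2-3: compare the packet field to each table's entries and
    # remove the actions of tables whose fields cannot be matched.
    matched = {i for i, table in enumerate(flow_tables)
               if any(entry["match"] == packet_field for entry in table)}
    actions = [action for (i, action) in actions if i in matched]

    # Claim 4: when no flow table matches, substitute a new action
    # (here, a hypothetical table-miss action) for the removed ones.
    return actions if actions else [table_miss_action]
```

Because every action is fetched before the comparisons complete, the lookup latency overlaps the matching work; the pruning step then leaves only the actions of matching tables, or the table-miss fallback when none match.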

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/041980 WO2017018989A1 (en) 2015-07-24 2015-07-24 Simultaneous processing of flow tables

Publications (1)

Publication Number Publication Date
WO2017018989A1 true WO2017018989A1 (en) 2017-02-02

Family

ID=57884982




Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011148306A1 (en) * 2010-05-25 2011-12-01 Telefonaktiebolaget L M Ericsson (Publ) Method for enhancing table lookups with exact and wildcards matching for parallel computing environments
US20120099591A1 (en) * 2010-10-26 2012-04-26 Dell Products, Lp System and Method for Scalable Flow Aware Network Architecture for Openflow Based Network Virtualization
US20130170495A1 (en) * 2010-09-08 2013-07-04 Yoji Suzuki Switching system, switching control system, and storage medium
EP2667553A1 (en) * 2012-05-21 2013-11-27 Huawei Technologies Co. Ltd. Packet processing method, device and system
US20140146674A1 (en) * 2012-11-29 2014-05-29 Futurewei Technologies, Inc. Packet Prioritization in a Software-Defined Network Implementing OpenFlow


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018165060A1 (en) * 2017-03-10 2018-09-13 Microsoft Technology Licensing, Llc Packet processor in virtual filtering platform
US10516728B2 (en) 2017-03-10 2019-12-24 Microsoft Technology Licensing, Llc Virtual filtering platform in distributed computing systems
US10715585B2 (en) 2017-03-10 2020-07-14 Microsoft Technology Licensing, Llc Packet processor in virtual filtering platform
RU2748368C1 (en) * 2017-12-04 2021-05-24 Призмиан С.П.А. Electrical cable for vertical applications
US11450455B2 (en) 2017-12-04 2022-09-20 Prysmian S.P.A. Electrical cable for vertical applications


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15899788; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15899788; Country of ref document: EP; Kind code of ref document: A1)