US11700212B2 - Expansion of packet data within processing pipeline - Google Patents

Expansion of packet data within processing pipeline

Info

Publication number
US11700212B2
Authority
US
United States
Prior art keywords
data
packet
packet processing
integrated circuit
stages
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/494,515
Other versions
US20220029935A1 (en)
Inventor
Patrick Bosshart
Jay Evan Scott Peterson
Michael Gregory Ferrara
Michael E. Attig
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Barefoot Networks Inc
Original Assignee
Barefoot Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Barefoot Networks Inc filed Critical Barefoot Networks Inc
Priority to US17/494,515 priority Critical patent/US11700212B2/en
Publication of US20220029935A1 publication Critical patent/US20220029935A1/en
Priority to US18/201,060 priority patent/US20230300087A1/en
Application granted granted Critical
Publication of US11700212B2 publication Critical patent/US11700212B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 - Packet switching elements
    • H04L49/30 - Peripheral units, e.g. input or output ports
    • H04L49/3063 - Pipelined operation
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/74 - Address processing for routing
    • H04L45/745 - Address table lookup; Address filtering
    • H04L45/7453 - Address table lookup; Address filtering using hashing
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H04L49/10 - Packet switching elements characterised by the switching fabric construction
    • H04L49/109 - Integrated on microchip, e.g. switch-on-chip
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 - Parsing or analysis of headers

Definitions

  • Certain configurable hardware switches use a match-action paradigm, passing a packet header vector between multiple stages.
  • In addition to carrying packet header values, the packet header vector is used to store metadata (including control and data flow) and instructions from one stage to the next.
  • As packet size becomes larger and/or operations become more complex, additional storage may be needed.
  • However, simply enlarging the packet header vector may not be optimal, as this expands the number of wires and thus requires more physical design area.
  • Some embodiments of the invention provide a network forwarding integrated circuit (IC) with a packet processing pipeline that uses multiple different types of data containers to pass packet data through the pipeline.
  • the different types of data containers (i) have different availability to match-action stages of the pipeline and (ii) have their data values generated differently by the match-action stages of the pipeline.
  • the network forwarding IC of some embodiments includes a set of configurable packet processing pipeline resources that operate as both ingress pipelines (for packets received at the network forwarding IC) and egress pipelines (for packets being sent from the network forwarding IC), in addition to a traffic management unit that is responsible for receiving packets from an ingress pipeline and enqueuing the packet for a port associated with an egress pipeline.
  • a packet is processed by an ingress pipeline, enqueued by the traffic management unit (which may also perform packet replication, if necessary), and processed by an egress pipeline.
  • Each packet processing pipeline (whether acting as an ingress or egress pipeline) includes a parser, a match-action unit (a series of match-action stages), and a deparser, in some embodiments.
  • the parser receives a packet as an ordered stream of data, and based on its instructions and analysis of the packet, identifies packet header fields and stores the packet header fields in a set of data containers (a packet header vector (PHV)) to be sent to the match-action unit.
  • the match-action unit performs various processing to determine actions to be taken on the packet, including modifying the PHV data, determining output instructions for the packet, etc. After the last match-action stage, the PHV is provided to the deparser, so that the deparser can reconstruct the packet.
  • this PHV includes multiple different types of data containers. Specifically, in some embodiments, a first type of data container is fully available for match-related operations, a second type of data container is only available for match-related operations in certain situations, and a third type of data container is not available at all for the match-related operations. Specifically, data containers of the first and second types can be used to match against match table entries and/or to generate hashes for matching against match table entries.
  • each match-action stage of some embodiments includes a set of data-plane stateful processing units (DSPUs) and stateful tables that these DSPUs access and modify, and the first and second types of data containers are made available to the DSPU for these operations, while the third type of data containers are not. Additional match-related operations, such as passing table addresses to later match-action stages, also cannot use the third type of data container in some embodiments.
  • the second type of data container is only available for certain stages. Specifically, when the operations of a match-action stage do not depend on the output of the previous match-action stage, some embodiments run the two stages in parallel. In this case, the data containers of the second type will not have been provided yet to the latter of these two subsequent stages, and thus are not available for the match-related operations. However, if the latter of these two stages is dependent on the previous stage, then the second-type data containers will have been populated for the stage and are available for the match-related operations.
  • each of the match-action stages generates output values for the different types of data containers in a different way in some embodiments.
  • Each match-action stage includes a set of arithmetic logic units (ALUs) that are used to generate the output values for the first-type data containers, while the output values for the second-type and third-type data containers are generated without the ALUs.
  • each of the ALUs uses two operands output by a multiplexer (as well as a set of instructions) to generate one output value for a first-type data container.
  • the two operands output by the multiplexer are each an output data container value (one of the first type and one of the second type).
  • the multiplexer that generates the operands for the ALUs as well as the outputs for the second-type and third-type data containers enables the movement of data values between the different types of data containers.
  • these values can be moved to a different type of data container for use in match and/or action operations at a later stage.
  • the destination IP address value could be moved to a third-type data container at one of the early stages to free up room in the first-type and second-type data containers for values used in the earlier stages.
  • one of the stages would move the value to one of the first-type or second-type data containers.
  • the use of the multiple types of data containers enables the expansion of the size of the PHV within the match-action unit, without a corresponding expansion in either (i) the size of the PHV output by the parser or provided to the deparser or (ii) the number of wires required to transfer the PHV data from stage to stage.
  • the parser outputs a first number of PHV data containers (including first-type and second-type containers), and then the first match-action stage expands the PHV to a second (larger) number of PHV data containers (adding the third-type containers).
  • Each of the intermediate stages of the match-action unit receives the expanded PHV, potentially modifies the values of the PHV, and passes the expanded PHV to the next stage.
  • the PHV is reduced back to the first number of data containers (including first-type and second-type containers), and provided to the deparser.
  • the first-type and second-type containers are the same.
  • the last stage in some embodiments outputs all of the data containers as first-type containers, in that the ALUs are used to generate all of the outputs of the last stage.
  • each match-action stage on the network forwarding IC has a given number of wires passing over the stage, with a first set of wires for carrying the input PHV bits and a second set of wires for carrying the output PHV bits. This enables the PHV to be forwarded to the next stage before processing when that next stage is not dependent on the current stage outputs.
  • the second-type and/or third-type data container bits use some of the input wires as output wires (with fewer input wires needed due to the restrictions on the second-type and third-type data containers).
  • the MAU stages are configured by a controller according to a compiled program (or multiple compiled programs, such as an ingress program and an egress program).
  • the compiler receives a program or set of programs (e.g., P4 programs) and assigns different parameters to the various PHV data containers available for each stage.
  • not all of the parameters are needed for matching at each stage, and similarly not all of the parameters need to be used as operands for the ALUs at each stage.
  • the program requirements determine the specific types of data containers required at each stage for each program, and the expansion of the number of PHV data containers enables the compiler to accommodate a larger number of parameters without a significant hardware expansion.
  • FIG. 1 conceptually illustrates the structure of a network forwarding IC of some embodiments.
  • FIG. 2 illustrates an example of a match-action unit of some embodiments.
  • FIG. 3 illustrates the operation of a match-action stage with respect to second-type and third-type PHV data containers.
  • FIG. 4 illustrates a summary chart of the properties of the three types of PHV data containers of some embodiments.
  • FIG. 5 conceptually illustrates a process 500 of some embodiments for generating PHV output values.
  • FIG. 6 conceptually illustrates the idea of “metadata bloat” via a graph.
  • FIG. 7 conceptually illustrates the expansion of the PHV within the MAU.
  • FIG. 8 conceptually illustrates an example of the movement of a packet header field value between different data containers over the course of a packet processing pipeline.
  • FIG. 9 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
  • Some embodiments of the invention provide a network forwarding integrated circuit (IC) with a packet processing pipeline that uses multiple different types of data containers to pass packet data through the pipeline.
  • the different types of data containers (i) have different availability to match-action stages of the pipeline and (ii) have their data values generated differently by the match-action stages of the pipeline.
  • FIG. 1 conceptually illustrates the structure of such a network forwarding IC 100 of some embodiments (that is, e.g., incorporated into a hardware forwarding element).
  • FIG. 1 illustrates several ingress pipelines 105 , a traffic management unit (referred to as a traffic manager) 110 , and several egress pipelines 115 .
  • the ingress pipelines 105 and the egress pipelines 115 actually use the same circuitry resources, which are configured to handle both ingress and egress pipeline packets synchronously (possibly in addition to non-packet data). That is, a particular stage of the pipeline may process an ingress packet, an egress packet, both, or neither in the same clock cycle.
  • the ingress and egress pipelines are separate circuitry.
  • the packet is initially directed to one of the ingress pipelines 105 (each of which may correspond to one or more ports of the hardware forwarding element). After passing through the selected ingress pipeline 105 , the packet is sent to the traffic manager 110 , where the packet is enqueued and placed in the output buffer 117 .
  • the ingress pipeline 105 that processes the packet specifies into which queue the packet should be placed by the traffic manager 110 (e.g., based on the destination of the packet). The traffic manager 110 then dispatches the packet to the appropriate egress pipeline 115 (each of which may correspond to one or more ports of the forwarding element).
  • In some embodiments, there is no necessary correlation between which of the ingress pipelines 105 processes a packet and to which of the egress pipelines 115 the traffic manager 110 dispatches the packet. That is, a packet might be initially processed by ingress pipeline 105b after receipt through a first port, and then subsequently by egress pipeline 115a to be sent out a second port, etc.
  • Each ingress pipeline 105 includes a parser 120 , a match-action unit (MAU) 125 , and a deparser 130 .
  • each egress pipeline 115 includes a parser 135 , a MAU 140 , and a deparser 145 .
  • the parser 120 or 135 receives a packet as a formatted collection of bits in a particular order, and parses the packet into its constituent header fields. The parser starts from the beginning of the packet and assigns these header field values to fields (e.g., data containers) of a packet header vector (PHV) for processing.
  • the parser 120 or 135 separates out the packet headers (up to a designated point) from the payload of the packet, and sends the payload (or the entire packet, including the headers and payload) directly to the deparser without passing through the MAU processing.
  • the MAU 125 or 140 performs processing on the packet data (i.e., the PHV).
  • the MAU includes a sequence of stages, with each stage including one or more match tables, a set of stateful processing units, and an action engine.
  • Each match table includes a set of match entries against which the packet header fields are matched (e.g., using hash tables), with the match entries referencing action entries.
  • that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.).
  • the action engine of the stage performs the actions on the PHV, which is then sent to the next stage of the MAU.
  • the PHV includes different types of data containers, which have different properties in terms of the types of operations that can be performed using the data stored in the containers and how the values are output into the PHV containers for the next stage.
  • the MAU stages are described in more detail below by reference to FIGS. 2 and 3 .
  • the deparser 130 or 145 reconstructs the packet using the PHV as modified by the MAU 125 or 140 and the payload received directly from the parser 120 or 135 .
  • the deparser constructs a packet that can be sent out over the physical network, or to the traffic manager 110 . In some embodiments, the deparser constructs this packet based on data received along with the PHV that specifies the protocols to include in the packet header, as well as its own stored list of data container locations for each possible protocol's header fields.
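  • For illustration only (not text from the patent), a minimal Python sketch of this deparser idea: a stored per-protocol list of container locations is combined with the protocol list received along with the PHV to pull header bytes out of the PHV containers and prepend them to the payload. The container indices and field widths below are hypothetical.

      # Hypothetical layout: protocol -> ordered (container_index, byte_width) pairs.
      # The destination MAC is shown split across a 32-bit and a 16-bit container.
      DEPARSER_LAYOUT = {
          "ethernet": [(0, 4), (1, 2), (2, 4), (3, 2), (4, 2)],  # dst MAC, src MAC, ethertype
          "ipv4":     [(5, 4), (6, 4), (7, 4), (8, 4), (9, 4)],  # 20-byte IPv4 header
      }

      def deparse(phv, protocols, payload):
          """Rebuild the packet: header bytes from PHV containers, then the payload."""
          header = bytearray()
          for proto in protocols:               # protocol list travels with the PHV
              for idx, width in DEPARSER_LAYOUT[proto]:
                  header += phv[idx].to_bytes(width, "big")
          return bytes(header) + payload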
  • the traffic manager 110 includes a packet replicator 119 and the previously-mentioned output buffer 117 .
  • the traffic manager 110 may include other components, such as a feedback generator for sending signals regarding output port failures, a series of queues and schedulers for these queues, queue state analysis components, as well as additional components.
  • the packet replicator 119 of some embodiments performs replication for broadcast/multicast packets, generating multiple packets to be added to the output buffer (e.g., to be distributed to different egress pipelines).
  • the output buffer 117 is part of a queuing and buffering system of the traffic manager in some embodiments.
  • the traffic manager 110 provides a shared buffer that accommodates any queuing delays in the egress pipelines.
  • this shared output buffer 117 stores packet data, while references (e.g., pointers) to that packet data are kept in different queues for each egress pipeline 115 .
  • the egress pipelines request their respective data from the common data buffer using a queuing policy that is control-plane configurable.
  • packet data may be referenced by multiple pipelines (e.g., for a multicast packet). In this case, the packet data is not removed from this output buffer 117 until all references to the packet data have cleared their respective queues.
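  • As a sketch of how such a shared, reference-counted output buffer might behave (an assumed design, not the patent's circuit), in Python:

      from collections import deque

      class OutputBuffer:
          """Packet data stored once; per-egress-pipeline queues hold references."""
          def __init__(self):
              self.store = {}       # handle -> [packet_bytes, remaining_references]
              self.queues = {}      # egress pipeline id -> deque of handles
              self.next_handle = 0

          def enqueue(self, packet, egress_pipes):
              handle, self.next_handle = self.next_handle, self.next_handle + 1
              self.store[handle] = [packet, len(egress_pipes)]
              for pipe in egress_pipes:         # multicast: one reference per queue
                  self.queues.setdefault(pipe, deque()).append(handle)

          def dequeue(self, pipe):
              handle = self.queues[pipe].popleft()
              entry = self.store[handle]
              entry[1] -= 1
              if entry[1] == 0:                 # last reference has cleared its queue
                  del self.store[handle]        # only now is the data removed
              return entry[0]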
  • FIG. 2 illustrates an example of a match-action unit of some embodiments.
  • a packet processing pipeline of some embodiments has several MAU stages, each of which includes packet-processing circuitry for forwarding received data packets and/or performing stateful operations based on these data packets. These operations are performed by processing values stored in the PHVs of the packets.
  • This figure illustrates, in part, the flow of data for a PHV container through the match-action unit 200 .
  • the PHV includes multiple data containers, possibly of different sizes, that are used to store packet header field values and other data (e.g., metadata, instructions, etc.) within the match-action unit.
  • packet header field values may be stored in a single container or may be mapped across containers (e.g., a 48-bit MAC address could be stored in a combination of a 32-bit data container and a 16-bit data container).
  • a single data container may store multiple packet header field values or other data (e.g., storing both the time to live and protocol field values of an IP header in a single 16-bit data container).
  • the PHV has a fixed number of data containers of specific sizes (e.g., 8-bit data containers, 16-bit data containers, and 32-bit data containers), which are described in further detail by reference to FIG. 7 .
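  • A small Python sketch of the container-packing examples above (container names are hypothetical): a 48-bit MAC address split across a 32-bit and a 16-bit container, and the 8-bit time-to-live and protocol fields of an IPv4 header sharing one 16-bit container.

      def pack_mac(mac48):
          # Upper 32 bits into a 32-bit container, lower 16 bits into a 16-bit container.
          return {("w32", 0): mac48 >> 16, ("w16", 0): mac48 & 0xFFFF}

      def pack_ttl_protocol(ttl, protocol):
          # Two 8-bit fields packed into a single 16-bit container.
          return {("w16", 1): (ttl << 8) | protocol}

      phv = {}
      phv.update(pack_mac(0x001122334455))
      phv.update(pack_ttl_protocol(ttl=64, protocol=6))
      assert phv[("w16", 1)] >> 8 == 64     # TTL still recoverable from the shared container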
  • These data containers have the capacity to carry data for both an ingress packet and an egress packet (and non-packet data processed separately from the ingress or egress packets, in some cases).
  • the PHV data containers are divided into groups in some embodiments, with the groups of data containers being processed together by certain parts of the match-action stages. Each group within the MAU, in some embodiments, includes three types of data containers.
  • FIG. 2 illustrates the operations of some embodiments applied to a first type of PHV data container.
  • the first type of data container is fully available for certain match-related operations
  • a second type of data container is only available for these match-related operations in certain situations
  • a third type of data container is not available at all for these match-related operations.
  • each of the match-action stages generates output values for the different types of PHV containers differently in some embodiments, as explained further below.
  • the MAU stage 200 in some embodiments has a set of one or more match tables 205 , a data plane stateful processing unit 210 (DSPU), a set of one or more stateful tables 215 , an action crossbar 230 , an action parameter memory 220 , an action instruction memory 225 , and an action arithmetic logic unit (ALU) 235 .
  • the match table set 205 can compare one or more fields in a received PHV (i.e., values stored in one or more PHV containers) to identify one or more matching flow entries (i.e., entries that match the PHV).
  • the match table set can be TCAM tables or exact match tables in some embodiments.
  • the match table set is accessed at a memory address that is a value extracted from one or more data containers of the PHV, or a hash of this extracted value or values.
  • the value stored in a match table record that matches a packet's flow identifier, or that is accessed at a hash-generated address, provides addresses for locations in the action parameter memory 220 and action instruction memory 225 .
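  • The following Python sketch (assumed structure, hypothetical sizes) illustrates this flow for an exact-match table: selected PHV container values are hashed into a table address, and a hit yields addresses into the action parameter and action instruction memories.

      import zlib

      TABLE_SIZE = 1024

      def match_address(phv_values):
          """Hash the extracted PHV container values into a match-table address."""
          key = b"".join(v.to_bytes(4, "big") for v in phv_values)
          return zlib.crc32(key) % TABLE_SIZE

      # address -> (stored key, action parameter address, action instruction address)
      match_table = {match_address([0x0A000001]): ([0x0A000001], 7, 3)}

      def lookup(phv_values):
          entry = match_table.get(match_address(phv_values))
          if entry and entry[0] == phv_values:   # confirm the key to rule out hash collisions
              return entry[1], entry[2]          # addresses for the two action memories
          return None                            # miss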
  • a value from the match table can provide an address and/or parameter for one or more records in the stateful table set 215 , and can provide an instruction and/or parameter for the DSPU 210 .
  • each action table 215 , 220 , and 225 can be addressed through a direct addressing scheme, an indirect addressing scheme, or an independent addressing scheme, depending on the configuration of the MAU stage.
  • under the direct addressing scheme, the action table uses the same address that is used to address the matching flow entry in the match table set 205 (e.g., a hash-generated address value or a value from the PHV).
  • the indirect addressing scheme accesses an action table by using an address value that is extracted from one or more records that are identified in the match table set 205 (i.e., identified in the match table set via direct addressing or record matching operations).
  • the independent address scheme of some embodiments is similar to the direct addressing scheme except that it does not use the same address that is used to access the match table set 205 .
  • the table address in the independent addressing scheme can either be a value extracted from the PHV, or a hash of this extracted value.
  • not all the action tables 215 , 220 and 225 can be accessed through these three addressing schemes (e.g., the action instruction memory 225 in some embodiments is accessed through only the direct and indirect addressing schemes).
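  • A compact Python sketch of the three addressing schemes just described (direct, indirect, independent); the record fields and function names are hypothetical.

      def action_table_address(scheme, match_table_address, matched_record, phv_value, hash_fn):
          if scheme == "direct":
              # Reuse the same address that accessed the match table.
              return match_table_address
          if scheme == "indirect":
              # Use an address extracted from the matched record itself.
              return matched_record["action_addr"]
          if scheme == "independent":
              # Derive a separate address from the PHV value (directly or via a hash).
              return hash_fn(phv_value)
          raise ValueError(scheme)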
  • the DSPU 210 and the stateful table set 215 also receive the input PHV values (at least the values from the first-type data containers, and in some cases values from the second-type data containers).
  • the PHV data containers can store instructions and/or parameters for the DSPU as well as memory addresses and/or parameters for the stateful table set 215 , in addition to packet header field values. Such instructions, parameters, and memory/addresses are calculated in previous match-action stages and stored to the PHV data containers in some embodiments.
  • the DSPU 210 in some embodiments performs one or more stateful operations, while a set of stateful tables 215 stores state data used and generated by the DSPU 210 .
  • the DSPU is a programmable arithmetic logic unit (ALU) or set of programmable ALUs that performs operations synchronously with the dataflow of the packet-processing pipeline (i.e., synchronously at the line rate).
  • the DSPU can process a different PHV (or one ingress and one egress PHV) every clock cycle, thus ensuring that the DSPU is able to operate synchronously with the dataflow of the packet-processing pipeline.
  • a DSPU performs every computation with fixed latency (e.g., fixed number of clock cycles).
  • the remote or local control plane provides configuration data to program the DSPU.
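  • A rough Python model (not the DSPU micro-architecture) of a stateful unit that reads and updates an entry in a stateful table for every packet and emits a result that can be used as an action parameter; the instruction names are invented for the example.

      class Dspu:
          def __init__(self, table_size=256):
              self.stateful_table = [0] * table_size    # state persists across packets

          def execute(self, instruction, index, phv_operand=0):
              state = self.stateful_table[index]
              if instruction == "count":                # e.g. per-flow packet counter
                  state += 1
              elif instruction == "accumulate":         # e.g. running byte count
                  state += phv_operand
              self.stateful_table[index] = state
              return state                              # forwarded as an action parameter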
  • the DSPU 210 outputs an action parameter to the action crossbar 230 .
  • the action parameter memory 220 also outputs an action parameter to this crossbar 230 .
  • the action parameter memory 220 retrieves the action parameter that it outputs from the record identified by the address provided by the match table set 205 , based on a matched entry in the match table set 205 .
  • These action parameters are constants or other values that are either (i) output as PHV values or (ii) used to perform a calculation in order to generate an output PHV value.
  • the action crossbar 230 in some embodiments maps the action parameters received from the DSPU 210 and action parameter memory 220 to an action parameter bus 240 .
  • the action crossbar 230 can map the action parameters from DSPU 210 and memory 220 differently to this bus 240 .
  • the crossbar can supply the action parameters from either of these sources in their entirety to this bus 240 , or it can concurrently select different portions of these parameters for this bus.
  • This bus provides the action parameter to an operand multiplexer (MUX) 240 .
  • the operand MUX 240 receives the action parameter from the action crossbar 230 , as well as the values from the input PHV containers. Based on instructions from the action instruction memory 225 , the operand MUX 240 provides operands to the action ALU 235 .
  • the action ALU 235 also receives an instruction to execute from the action instruction memory 225 , which specifies the calculations to perform on the operands received from the operand MUX 240 .
  • the action instruction memory 225 like the action parameter memory 220 , retrieves the instructions that it outputs (to the operand MUX 240 and the action ALU 235 ) from the record identified by the address provided by the match table set 205 .
  • the action ALU 235 in some embodiments is a very large instruction word (VLIW) processor.
  • the action ALU 235 executes the instructions (from the action instruction memory 225 or, in some embodiments, the PHV) using the operands received from the operand MUX 240 (i.e., from the action crossbar 230 or the PHV).
  • the action ALU 235 is actually a number of ALUs, one for each PHV data container of the first type. Specifically, each ALU 235 receives two operands from the operand MUX 240 , which it uses to calculate one output PHV data container in accordance with its received instructions.
  • the value stored in the output PHV data container may be as simple as one of the operands (e.g., the same value stored in the corresponding input PHV data container, a value from a different input PHV data container, a value from the stateful tables 215 or action parameter memory 220 ) or could involve a calculation involving both operands (e.g., decrementing the time to live value by subtracting a constant 1 received from the action parameter memory from the input time to live value).
  • This output PHV data container is passed to the next stage (i.e., as an input PHV data container for that next stage).
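  • For example, a single action ALU might behave like the following Python function (opcode names are hypothetical), taking two operands from the operand MUX and producing the value for one first-type output container, such as a time-to-live decremented by a constant 1 supplied from the action parameter memory.

      def action_alu(opcode, operand_a, operand_b):
          if opcode == "copy_a":                  # pass an operand straight through
              result = operand_a
          elif opcode == "add":
              result = operand_a + operand_b
          elif opcode == "sub":                   # e.g. TTL decrement: a = TTL, b = 1
              result = operand_a - operand_b
          else:
              raise ValueError(opcode)
          return result & 0xFFFFFFFF              # width of a 32-bit PHV container

      assert action_alu("sub", 64, 1) == 63       # decremented time-to-live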
  • FIG. 2 illustrates the operation of the match-action stage 200 with respect to a first-type PHV data container of some embodiments.
  • FIG. 3 illustrates the operation of this match-action stage with respect to second-type and third-type PHV data containers.
  • the second-type PHV data is provided to the match tables 205 , DSPU 210 , and state tables 215 , as with the first-type PHV data.
  • the values in the second-type PHV data containers are only useable for matching operations (i.e., lookups in the match tables, stateful operations with the DSPU and stateful tables) if the match-action stage has a dependency on the previous stage.
  • some embodiments run the two match-action stages in parallel, with the first-type data containers provided to both stages in the same clock cycle (or provided to the later of the stages within a small number of transport clock cycles of the provision to the prior of the stages).
  • the data containers of the second type will not have been provided yet to the latter of these two subsequent stages, and thus are not available for the match table, DSPU, and stateful table operations. However, if the latter of these two stages is dependent on the previous stage, then the second-type data containers will have been populated for the stage and are available at the same time as the first-type data containers.
  • the third-type data containers are not provided to the match-tables 205 , DSPU 210 , or state tables 215 , and instead are only provided to the operand MUX 240 .
  • the match tables 205 , DSPU 210 , and stateful tables 215 receive all of the first-type and second-type PHV containers (assuming the second-type PHV data is available based on a dependency) and can perform their operations using any of this data.
  • the match-action stage 200 generates output values for the second-type and third-type data containers differently than the first-type data containers in some embodiments.
  • the operand MUX 240 provides two operands to each action ALU 235 , which uses the operands to generate the output value for a first-type PHV data container.
  • no action ALU is present for the second-type and third-type PHVs.
  • two outputs of the operand MUX 240 are used as one second-type PHV output and one third-type PHV output.
  • the input third-type PHV data containers are only input to the operand MUX 240 , and thus the only operations that can be performed on this data is to copy the value to another data container (e.g., to a first-type container or second-type container, or in some cases to a different third-type container) for use in a later match-action stage.
  • the operand MUX 240 receives all of the input PHVs (first-type, second-type, and third-type) as well as the values from the action crossbar 230 , and outputs pairs of operands to either the action ALUs 235 (for calculation of the first-type output PHV values) or as second-type and third-type output PHV values.
  • additional restrictions require that the third-type PHV container output values can only be sourced from PHV container input values (of any of the three types), while the second-type PHV container output values (as well as the operands for the action ALUs calculating the first-type values) can be sourced from the PHV container input values as well as the action and constant values provided via the action crossbar 230 .
  • Some embodiments use multiple operand MUXes 240 for separate groups of PHV data containers (e.g., groups that include specific numbers of first-type, second-type, and third-type PHV data containers). In this case, values can be copied from one PHV data container to another within a group, but not between groups.
  • values can be copied from one PHV data container to a second PHV data container without affecting the value in the first container (i.e., the first and second output PHV containers would both store the value from the first input PHV container in this case).
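  • These sourcing rules can be sketched in Python as follows (names and selector format are ours, not the patent's): second-type outputs, like ALU operands, may come from any input container in the group or from the action crossbar, while third-type outputs may only come from input containers, and copies stay within a group.

      def mux_output(output_type, source, group_inputs, crossbar_values):
          """Select one output value for a second-type or third-type container."""
          kind, index = source
          if kind == "phv":                       # any input PHV container in this group
              return group_inputs[index]
          if kind == "crossbar":                  # action constants / parameters
              if output_type == "third":
                  raise ValueError("third-type outputs may only be sourced from input PHV")
              return crossbar_values[index]
          raise ValueError(kind)

      group = [10, 20, 30]                        # one group of input PHV container values
      assert mux_output("second", ("crossbar", 0), group, [99]) == 99
      assert mux_output("third", ("phv", 2), group, [99]) == 30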
  • FIG. 4 illustrates a summary chart 400 of the properties of the three types of PHV data containers of some embodiments, which may also be referred to as regular PHV (first-type), mocha PHV (second-type), and dark PHV (third-type).
  • these PHV data container types differ in terms of (i) whether the input PHV can be used for match-related operations, (ii) whether the data containers are visible to the parser and deparser, (iii) whether the output value is generated by the VLIW action ALUs, and (iv) which input operands may be used for the output values.
  • the first-type PHV containers fully participate in match-related operations, while the second-type PHV containers only participate in these operations in a particular stage so long as there is a dependency on the previous stage (so that the particular stage does not execute concurrently with the previous stage), and the third-type PHV containers are not used for these match-related operations.
  • the match-related operations include generating hashes from the values in PHV data containers for exact-match addresses, selector tables, hash-addressed stateful tables, etc., using the values directly in stateful tables, generating hash digests for hardware learning, passing the values to the action crossbar as action constants, and passing table addresses to later MAU stages.
  • the output for the first-type PHV containers is generated by the action ALUs, using both input PHV values (from the same group of PHV data containers, in some embodiments) as well as the action constants.
  • the output for the second-type and third-type PHV containers is generated without the action ALUs (i.e., using the operands directly from the operand MUX).
  • the second-type PHV container output can be based on input PHV values or action constants, whereas the third-type PHV container output can only be sourced from the input PHV values.
  • first-type and second-type PHV containers are visible to the deparser (i.e., generated by the parser, and received by the deparser), while the third-type PHV containers only exist within the match-action unit.
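  • The chart can be condensed into a small Python structure (the type names regular/mocha/dark follow the patent's terminology; the field names are ours):

      PHV_CONTAINER_TYPES = {
          "regular": dict(match_ops="always",        parser_visible=True,  output_via_alu=True,
                          output_sources={"input PHV", "action constants"}),
          "mocha":   dict(match_ops="if dependency", parser_visible=True,  output_via_alu=False,
                          output_sources={"input PHV", "action constants"}),
          "dark":    dict(match_ops="never",         parser_visible=False, output_via_alu=False,
                          output_sources={"input PHV"}),
      }

      def usable_for_match(container_type, stage_depends_on_previous_stage):
          rule = PHV_CONTAINER_TYPES[container_type]["match_ops"]
          return rule == "always" or (rule == "if dependency" and stage_depends_on_previous_stage)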
  • FIG. 5 conceptually illustrates a process 500 of some embodiments for generating PHV output values.
  • the process 500 is performed by a match-action stage of a network forwarding IC of some embodiments. This process assumes that the second-type PHV containers are available for match-related operations (i.e., that due to dependencies, the stage is not executing synchronously with the previous stage).
  • the process 500 is conceptual, and represents operations performed by various components within the match-action stage. In some embodiments, some of the operations are performed as a linear process, while some operations are performed synchronously with other operations of the process.
  • the process 500 begins by receiving (at 505 ) a set of input PHV containers including three types of container (i.e., the three types of containers described in FIG. 4 above).
  • the first-type and second-type containers are received by numerous match-related components of the stage (e.g., the match tables, DSPU, stateful tables, and operand MUX), while the third-type containers are only directed to the operand MUXes.
  • the process 500 then performs (at 510 ) match and action operations using the first-type and second-type PHV data containers.
  • These operations, performed by, e.g., the match tables (and associated components, such as hash generators), DSPU, stateful tables, action instruction memory, and action parameter memory, include generating hashes from the values in PHV data containers for exact-match addresses, selector tables, hash-addressed stateful tables, etc., using the values directly in stateful tables, generating hash digests for hardware learning, passing the values to the action crossbar as action constants, and passing table addresses to later MAU stages.
  • the process 500 uses (at 515 ) the operand multiplexer(s) to route the values from the various action operations and input PHV to the appropriate action ALUs (or to the output PHV containers).
  • the process generates (at 520 ) the output values for the first-type PHV data containers using the action ALUs, based on the operands received from the multiplexer(s).
  • the process generates (at 525 ) the output values for the second-type and third-type PHV containers as the operand multiplexer output without the use of the action ALUs.
  • the operand MUX of some embodiments outputs two operands to each action ALU which the action ALUs use to generate the output values for the first-type data containers, while additional pairs of operands are used directly as the second-type and third-type data containers.
  • the PHV is used to not only carry packet header field values between match-action stages (and allow the stages to modify these values), but also to carry metadata (e.g., the ingress port at which a packet is received, the egress port out of which a packet should be sent, multicast group identifiers, etc.) as well as instructions for subsequent stages, memory addresses for table lookups in subsequent stages, control/data flow required for MAU processing, etc.
  • FIG. 6 conceptually illustrates this idea of “metadata bloat” via a graph 600 .
  • This graph illustrates the relative amount of metadata required for the PHV to carry (where metadata here also includes instructions, memory addresses, etc.) as a function of the MAU stage. It should be understood that this is a conceptual graph and represents a typical packet, rather than any specific measurements or exact amounts of metadata.
  • the match-action unit 605 includes N stages.
  • the amount of metadata required to be carried by the PHV starts out low at the initial stage (because the only metadata will have come from the parser, such as ingress port, etc.). This increases to the middle stage M (if there are an even number of stages, the peak may be at the output of stage N/2), although in different configurations the amount of metadata required may peak before or after the exact middle of the match-action stage sequence.
  • the PHV will be carrying the most instructions/addresses/etc., as the earlier stages generate this metadata for use by the later stages. In the later stages, these instructions have been carried out, so the amount of metadata required decreases.
  • the metadata required is generally limited to instructions for the deparser (e.g., the list of protocols that make up the packet header) or traffic manager (e.g., a multicast group identifier, an egress queue, etc.).
  • the use of the multiple types of data containers enables the expansion of the size of the PHV within the match-action unit in some embodiments, without a corresponding expansion in either (i) the size of the PHV output by the parser or provided to the deparser or (ii) the number of wires required to transfer the PHV data from stage to stage.
  • the parser outputs a first number of PHV data containers (including first-type and second-type containers), and then the first match-action stage expands the PHV to a second (larger) number of PHV data containers (adding the third-type containers).
  • Each of the intermediate stages of the match-action unit receives the expanded PHV, potentially modifies the values of the PHV, and passes the expanded PHV to the next stage.
  • the PHV is reduced back to the first number of data containers (including first-type and second-type containers), and provided to the deparser.
  • FIG. 7 conceptually illustrates the expansion of the PHV within the MAU. Specifically, FIG. 7 conceptually illustrates the types of PHV data containers and relative numbers of each type of container sent from the parser 705 to the first MAU stage 710 , between subsequent MAU stages, and from the last MAU stage 720 to the deparser 725 .
  • the parser 705 outputs the PHV for a packet (or for one ingress and one egress packet) to the first MAU stage 710 .
  • the PHV as output by the parser includes specific groups of data containers (e.g., groups of four PHV containers, with every fourth container being a second-type PHV data container). Each such group may be assigned to the ingress packet or egress packet, in some embodiments.
  • a PHV could have 224 data containers, including sixty-four 32-bit containers (sixteen of which are second-type containers), sixty-four 8-bit containers (sixteen of which are second-type containers), and ninety-six 16-bit containers (twenty-four of which are second-type containers).
  • Other embodiments may use different numbers and/or different sizes of PHV containers.
  • the first MAU stage 710 receives the PHV from the parser 705 , and outputs to the second stage 715 an expanded PHV. As shown, this includes three arrows representing first-type PHV data containers, one arrow representing second-type PHV data containers, and one arrow representing third-type PHV data containers. Essentially, for each second-type container received from the parser 705 , the first MAU stage 710 outputs both a second-type container and a third-type container.
  • the PHV from the parser is divided into groups that each have sixteen containers (in the 224-container example above, this would include four 8-bit groups, four 32-bit groups, and six 16-bit groups), with twelve first-type and four second-type containers per group.
  • the expanded PHV in this example has twenty containers per group, with twelve first-type, four second-type, and four third-type containers per group (for a total of 280 data containers).
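  • The arithmetic for this example, checked in Python (same numbers as above):

      groups      = 224 // 16         # 14 groups: four 8-bit, four 32-bit, six 16-bit groups
      first_type  = groups * 12       # 168 regular containers
      second_type = groups * 4        # 56 mocha containers
      third_type  = second_type       # one dark container added per mocha container

      assert first_type + second_type == 224                # PHV at the parser and deparser
      assert first_type + second_type + third_type == 280   # expanded PHV between MAU stages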
  • Each of the subsequent intermediate MAU stages receives the expanded PHV from the previous stages and outputs this expanded PHV (typically modified in some way) to the next stage.
  • the final MAU stage 720 receives the expanded PHV and outputs a reduced PHV to the deparser 725 having the same number of data containers as those output by the parser 705 .
  • this reduced PHV is shown by three arrows representing first-type containers and one arrow representing second-type data containers. In the example above, this would again be 224 total data containers provided to the deparser 725 , the same as were output by the parser.
  • the final stage 720 outputs all of the data containers as first-type containers, in that the action ALUs are used to generate all of the outputs at this stage. Irrespective of whether the data containers are output as first-type or second-type containers, in some embodiments these are equivalent from the perspective of the parser and deparser.
  • each match-action stage on the network forwarding IC has a given number of wires passing over the stage, with a first set of wires for carrying the input PHV bits and a second set of wires for carrying the output PHV bits. This enables the PHV to be forwarded to the next stage before processing when that next stage is not dependent on the current stage outputs.
  • the second-type and/or third-type data container bits use some of the input wires as output wires (with fewer input wires needed due to the restrictions on the second-type and third-type data containers).
  • the MAU stages are configured by a controller according to a compiled program (or multiple compiled programs, such as an ingress program and an egress program).
  • the compiler receives a program or set of programs (e.g., P4 programs) and assigns different parameters to the various PHV data containers available for each stage.
  • not all of the parameters are needed for matching at each stage, and similarly not all of the parameters need to be used as operands for the ALUs at each stage.
  • the program requirements determine the specific types of data containers required at each stage for each program, and the expansion of the number of PHV data containers enables the compiler to accommodate a larger number of parameters without a significant hardware expansion.
  • the multiplexer that generates the operands for the ALUs as well as the outputs for the second-type and third-type data containers enables MAU stages to copy data values between the different types of data containers (within a group of data containers, in some embodiments).
  • these values can be moved to a different type of data container for use in match and/or action operations at a later stage.
  • a compiler determines the configuration data that indicates to which PHV data container each header field or piece of metadata is written at each match-action stage, in order for the packet-processing pipeline to execute a specific packet-processing program provided to the compiler. The compiler thus ensures that, when needed for match operations, specific packet header field or metadata values are stored in a PHV data container that is accessible for the match operations.
  • FIG. 8 conceptually illustrates an example of the movement of a packet header field value (specifically, the destination IP address of a packet) between different data containers over the course of a packet processing pipeline 800 .
  • the parser 805 outputs a PHV for a received packet including first-type and second-type containers, with the destination IP address of the received packet stored in one of the first-type containers.
  • the destination IP address is not needed by the first several stages, and thus the first MAU stage 810 moves this data into a third-type data container (which is not accessible for match-operations), thereby freeing up the first-type and second-type containers for data that is required at earlier stages (e.g., other packet header fields for match operations, instructions for earlier stages, etc.).
  • the seventh match-action stage 815 in the pipeline 800 moves the destination IP address value back to a first-type PHV data container, so that the eighth match-action stage 820 can use this value in a match operation.
  • the eighth stage 820 performs a routing operation using the match tables, matching on the destination IP address.
  • The next hop address is read from, e.g., the action parameter memory and written to another PHV data container (a first-type container).
  • A later stage moves the next hop address to a second-type PHV data container, and the first-type and second-type data containers are provided to the deparser 830 .
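  • A compact Python replay of this example (container names such as R4, D0, and M2 are hypothetical; the stage that moves the next hop value into a mocha container is simply "a later stage", as described above):

      def move(phv, src, dst):
          phv[dst] = phv[src]                  # copying does not disturb the source container

      phv = {"R4": 0x0A000001}                 # parser: destination IP in a regular container
      move(phv, "R4", "D0")                    # stage 1: park it in a dark container
      # stages 2-6: R4 is free to carry other match data ...
      move(phv, "D0", "R4")                    # stage 7: back into a regular container
      phv["R5"] = 0xC0A80001                   # stage 8: routing match on R4; next hop written
                                               # from the action parameter memory
      move(phv, "R5", "M2")                    # later stage: next hop into a mocha container
      deparser_phv = {k: v for k, v in phv.items() if not k.startswith("D")}
      # Dark containers never reach the deparser; only regular and mocha values do.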
  • FIG. 9 conceptually illustrates an electronic system 900 with which some embodiments of the invention are implemented.
  • the electronic system 900 can be used to execute any of the control, virtualization, or operating system applications described above.
  • the electronic system 900 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device.
  • Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media.
  • Electronic system 900 includes a bus 905 , processing unit(s) 910 , a system memory 925 , a read-only memory 930 , a permanent storage device 935 , input devices 940 , and output devices 945 .
  • the bus 905 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 900 .
  • the bus 905 communicatively connects the processing unit(s) 910 with the read-only memory 930 , the system memory 925 , and the permanent storage device 935 .
  • the processing unit(s) 910 retrieves instructions to execute and data to process in order to execute the processes of the invention.
  • the processing unit(s) may be a single processor or a multi-core processor in different embodiments.
  • the read-only-memory (ROM) 930 stores static data and instructions that are needed by the processing unit(s) 910 and other modules of the electronic system.
  • the permanent storage device 935 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 900 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 935 .
  • the system memory 925 is a read-and-write memory device. However, unlike the permanent storage device 935 , the system memory 925 is a volatile read-and-write memory, such as a random-access memory.
  • the system memory stores some of the instructions and data that the processor needs at runtime.
  • the invention's processes are stored in the system memory 925 , the permanent storage device 935 , and/or the read-only memory 930 . From these various memory units, the processing unit(s) 910 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
  • the bus 905 also connects to the input and output devices 940 and 945 .
  • the input devices enable the user to communicate information and select commands to the electronic system.
  • the input devices 940 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
  • the output devices 945 display images generated by the electronic system.
  • the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
  • bus 905 also couples electronic system 900 to a network 965 through a network adapter (not shown).
  • the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an intranet) or a network of networks (such as the Internet). Any or all components of electronic system 900 may be used in conjunction with the invention.
  • Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
  • computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks.
  • the computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
  • Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
  • some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
  • the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
  • the terms “display” or “displaying” mean displaying on an electronic device.
  • the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Some embodiments provide a network forwarding IC with packet processing pipelines, at least one of which includes a parser, a set of match-action stages, and a deparser. The parser is configured to receive a packet and generate a PHV including a first number of data containers storing data for the packet. A first match-action stage is configured to receive the PHV from the parser and expand the PHV to a second, larger number of data containers storing data for the packet. Each of a set of intermediate match-action stages is configured to receive the expanded PHV from a previous stage and provide the expanded PHV to a subsequent stage. A final match-action stage is configured to receive the expanded PHV and reduce the PHV to the first number of data containers. The deparser is configured to receive the reduced PHV from the final match-action stage and reconstruct the packet.

Description

CLAIM OF BENEFIT TO PRIOR APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 15/835,233, filed Dec. 7, 2017. U.S. patent application Ser. No. 15/835,233 claims the benefit of U.S. Provisional Patent Application 62/564,659, filed Sep. 28, 2017. The entire specifications of all of those patent applications are hereby incorporated herein by reference in their entirety.
BACKGROUND
Certain configurable hardware switches use a match-action paradigm, passing a packet header vector between multiple stages. To properly carry out the configured instructions, in addition to carrying packet header values, the packet header vector is used to also store metadata (including control and data flow) and instructions from one stage to the next. As packet size becomes larger and/or operations become more complex, additional storage may be needed. However, simply enlarging the packet header vector may not be optimal, as this expands the number of wires and thus requires more physical design area.
BRIEF SUMMARY
Some embodiments of the invention provide a network forwarding integrated circuit (IC) with a packet processing pipeline that uses multiple different types of data containers to pass packet data through the pipeline. In some embodiments, the different types of data containers (i) have different availability to match-action stages of the pipeline and (ii) have their data values generated differently by the match-action stages of the pipeline.
The network forwarding IC of some embodiments includes a set of configurable packet processing pipeline resources that operate as both ingress pipelines (for packets received at the network forwarding IC) and egress pipelines (for packets being sent from the network forwarding IC), in addition to a traffic management unit that is responsible for receiving packets from an ingress pipeline and enqueuing the packet for a port associated with an egress pipeline. Typically, a packet is processed by an ingress pipeline, enqueued by the traffic management unit (which may also perform packet replication, if necessary), and processed by an egress pipeline.
Each packet processing pipeline (whether acting as an ingress or egress pipeline) includes a parser, a match-action unit (a series of match-action stages), and a deparser, in some embodiments. The parser receives a packet as an ordered stream of data, and based on its instructions and analysis of the packet, identifies packet header fields and stores the packet header fields in a set of data containers (a packet header vector (PHV)) to be sent to the match-action unit. The match-action unit performs various processing to determine actions to be taken on the packet, including modifying the PHV data, determining output instructions for the packet, etc. After the last match-action stage, the PHV is provided to the deparser, so that the deparser can reconstruct the packet.
As mentioned, in some embodiments, this PHV includes multiple different types of data containers. Specifically, in some embodiments, a first type of data container is fully available for match-related operations, a second type of data container is only available for match-related operations in certain situations, and a third type of data container is not available at all for the match-related operations. Specifically, data containers of the first and second types can be used to match against match table entries and/or to generate hashes for matching against match table entries. In addition, each match-action stage of some embodiments includes a set of data-plane stateful processing units (DSPUs) and stateful tables that these DSPUs access and modify, and the first and second types of data containers are made available to the DSPU for these operations, while the third type of data containers are not. Additional match-related operations, such as passing table addresses to later match-action stages, also cannot use the third type of data container in some embodiments.
While the first type of data container is available for all match-related operations, in some embodiments the second type of data container is only available for certain stages. Specifically, when the operations of a match-action stage do not depend on the output of the previous match-action stage, some embodiments run the two stages in parallel. In this case, the data containers of the second type will not have been provided yet to the latter of these two subsequent stages, and thus are not available for the match-related operations. However, if the latter of these two stages is dependent on the previous stage, then the second-type data containers will have been populated for the stage and are available for the match-related operations.
In addition, each of the match-action stages generates output values for the different types of data containers in a different way in some embodiments. Each match-action stage includes a set of arithmetic logic units (ALUs) that are used to generate the output values for the first-type data containers, while the output values for the second-type and third-type data containers are generated without the ALUs. In some embodiments, each of the ALUs uses two operands output by a multiplexer (as well as a set of instructions) to generate one output value for a first-type data container. For the second-type and third-type data containers, the two operands output by the multiplexer are instead used directly as output data container values (one of the second type and one of the third type).
The multiplexer that generates the operands for the ALUs as well as the outputs for the second-type and third-type data containers enables the movement of data values between the different types of data containers. Thus, while the values stored in the third-type data containers are not available for matching in a particular stage, these values can be moved to a different type of data container for use in match and/or action operations at a later stage. As an example, if a routing decision using a destination IP address is not made until late in the match-action unit, then the destination IP address value could be moved to a third-type data container at one of the early stages to free up room in the first-type and second-type data containers for values used in the earlier stages. Before the stage(s) at which the destination IP address is required, one of the stages would move the value to one of the first-type or second-type data containers.
The use of the multiple types of data containers enables the expansion of the size of the PHV within the match-action unit, without a corresponding expansion in either (i) the size of the PHV output by the parser or provided to the deparser or (ii) the number of wires required to transfer the PHV data from stage to stage. The parser outputs a first number of PHV data containers (including first-type and second-type containers), and then the first match-action stage expands the PHV to a second (larger) number of PHV data containers (adding the third-type containers). Each of the intermediate stages of the match-action unit receives the expanded PHV, potentially modifies the values of the PHV, and passes the expanded PHV to the next stage. At the last stage, the PHV is reduced back to the first number of data containers (including first-type and second-type containers), and provided to the deparser. In some embodiments, to the parser and deparser, the first-type and second-type containers are the same. In addition, the last stage in some embodiments outputs all of the data containers as first-type containers, in that the ALUs are used to generate all of the outputs of the last stage.
As mentioned, the use of these different types of containers avoids expanding the number of wires required to transfer the PHV data from stage to stage. In some embodiments, each match-action stage on the network forwarding IC has a given number of wires passing over the stage, with a first set of wires carrying the input PHV bits and a second set of wires carrying the output PHV bits. This enables the PHV to be forwarded to the next stage before processing when that next stage is not dependent on the current stage outputs. In addition, the second-type and/or third-type data container bits use some of the input wires as output wires (with fewer input wires needed due to the restrictions on the second-type and third-type data containers).
This expansion of the PHV data enables a compiler to make optimal use of the different types of PHV data containers for different applications. In some embodiments, the MAU stages are configured by a controller according to a compiled program (or multiple compiled programs, such as an ingress program and an egress program). The compiler receives a program or set of programs (e.g., P4 programs) and assigns different parameters to the various PHV data containers available for each stage. In a typical program, not all of the parameters are needed for matching at each stage, and similarly not all of the parameters need to be used as operands for the ALUs at each stage. The program requirements determine the specific types of data containers required at each stage for each program, and the expansion of the number of PHV data containers enables the compiler to accommodate a larger number of parameters without a significant hardware expansion.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
FIG. 1 conceptually illustrates the structure of a network forwarding IC of some embodiments.
FIG. 2 illustrates an example of a match-action unit of some embodiments.
FIG. 3 illustrates the operation of a match-action stage with respect to second-type and third-type PHV data containers.
FIG. 4 illustrates a summary chart of the properties of the three types of PHV data containers of some embodiments.
FIG. 5 conceptually illustrates a process 500 of some embodiments for generating PHV output values.
FIG. 6 conceptually illustrates the idea of “metadata bloat” via a graph.
FIG. 7 conceptually illustrates the expansion of the PHV within the MAU.
FIG. 8 conceptually illustrates an example of the movement of a packet header field value between different data containers over the course of a packet processing pipeline.
FIG. 9 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
DETAILED DESCRIPTION
Some embodiments of the invention provide a network forwarding integrated circuit (IC) with a packet processing pipeline that uses multiple different types of data containers to pass packet data through the pipeline. In some embodiments, the different types of data containers (i) have different availability to match-action stages of the pipeline and (ii) have their data values generated differently by the match-action stages of the pipeline.
FIG. 1 conceptually illustrates the structure of such a network forwarding IC 100 of some embodiments (that is, e.g., incorporated into a hardware forwarding element). Specifically, FIG. 1 illustrates several ingress pipelines 105, a traffic management unit (referred to as a traffic manager) 110, and several egress pipelines 115. Though shown as separate structures, in some embodiments the ingress pipelines 105 and the egress pipelines 115 actually use the same circuitry resources, which are configured to handle both ingress and egress pipeline packets synchronously (possibly in addition to non-packet data). That is, a particular stage of the pipeline may process an ingress packet, an egress packet, both, or neither in the same clock cycle. However, in other embodiments, the ingress and egress pipelines are separate circuitry.
Generally, when the network forwarding IC 100 receives a packet, in some embodiments the packet is initially directed to one of the ingress pipelines 105 (each of which may correspond to one or more ports of the hardware forwarding element). After passing through the selected ingress pipeline 105, the packet is sent to the traffic manager 110, where the packet is enqueued and placed in the output buffer 117. In some embodiments, the ingress pipeline 105 that processes the packet specifies into which queue the packet should be placed by the traffic manager 110 (e.g., based on the destination of the packet). The traffic manager 110 then dispatches the packet to the appropriate egress pipeline 115 (each of which may correspond to one or more ports of the forwarding element). In some embodiments, there is no necessary correlation between which of the ingress pipelines 105 processes a packet and to which of the egress pipelines 115 the traffic manager 110 dispatches the packet. That is, a packet might be initially processed by ingress pipeline 105 b after receipt through a first port, and then subsequently by egress pipeline 115 a to be sent out a second port, etc.
Each ingress pipeline 105 includes a parser 120, a match-action unit (MAU) 125, and a deparser 130. Similarly, each egress pipeline 115 includes a parser 135, a MAU 140, and a deparser 145. The parser 120 or 135, in some embodiments, receives a packet as a formatted collection of bits in a particular order, and parses the packet into its constituent header fields. The parser starts from the beginning of the packet and assigns these header field values to fields (e.g., data containers) of a packet header vector (PHV) for processing. In some embodiments, the parser 120 or 135 separates out the packet headers (up to a designated point) from the payload of the packet, and sends the payload (or the entire packet, including the headers and payload) directly to the deparser without passing through the MAU processing.
The MAU 125 or 140 performs processing on the packet data (i.e., the PHV). In some embodiments, the MAU includes a sequence of stages, with each stage including one or more match tables, a set of stateful processing units, and an action engine. Each match table includes a set of match entries against which the packet header fields are matched (e.g., using hash tables), with the match entries referencing action entries. When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage performs the actions on the PHV, which is then sent to the next stage of the MAU. In some embodiments, the PHV includes different types of data containers, which have different properties in terms of the types of operations that can be performed using the data stored in the containers and how the values are output into the PHV containers for the next stage. The MAU stages are described in more detail below by reference to FIGS. 2 and 3 .
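For illustration only, the following Python sketch models the match-action behavior just described at a conceptual level: a match entry references an action entry, and the action engine applies that action to the PHV. The field names, actions, and table contents are hypothetical assumptions for the sketch and are not part of the described hardware.

    # Conceptual sketch of one match-action stage (hypothetical names and actions).
    def run_stage(phv, match_table, default_entry):
        """Match a header field value and apply the referenced action to the PHV."""
        key = phv.get("dst_ip")                      # field chosen for matching (assumed)
        action, params = match_table.get(key, default_entry)
        return action(phv, params)

    def set_egress_port(phv, params):                # example action: forward out a port
        phv["egress_port"] = params["port"]
        return phv

    def drop(phv, params):                           # example action: drop the packet
        phv["drop"] = True
        return phv

    match_table = {
        "10.0.0.1": (set_egress_port, {"port": 7}),
        "10.0.0.2": (set_egress_port, {"port": 3}),
    }
    phv = {"dst_ip": "10.0.0.1", "ttl": 64}
    print(run_stage(phv, match_table, default_entry=(drop, {})))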
The deparser 130 or 145 reconstructs the packet using the PHV as modified by the MAU 125 or 140 and the payload received directly from the parser 120 or 135. The deparser constructs a packet that can be sent out over the physical network, or to the traffic manager 110. In some embodiments, the deparser constructs this packet based on data received along with the PHV that specifies the protocols to include in the packet header, as well as its own stored list of data container locations for each possible protocol's header fields.
The traffic manager 110, as shown, includes a packet replicator 119 and the previously-mentioned output buffer 117. In some embodiments, the traffic manager 110 may include other components, such as a feedback generator for sending signals regarding output port failures, a series of queues and schedulers for these queues, queue state analysis components, as well as additional components. The packet replicator 119 of some embodiments performs replication for broadcast/multicast packets, generating multiple packets to be added to the output buffer (e.g., to be distributed to different egress pipelines).
The output buffer 117 is part of a queuing and buffering system of the traffic manager in some embodiments. The traffic manager 110 provides a shared buffer that accommodates any queuing delays in the egress pipelines. In some embodiments, this shared output buffer 117 stores packet data, while references (e.g., pointers) to that packet data are kept in different queues for each egress pipeline 115. The egress pipelines request their respective data from the common data buffer using a queuing policy that is control-plane configurable. When a packet data reference reaches the head of its queue and is scheduled for dequeuing, the corresponding packet data is read out of the output buffer 117 and into the corresponding egress pipeline 115. In some embodiments, packet data may be referenced by multiple pipelines (e.g., for a multicast packet). In this case, the packet data is not removed from this output buffer 117 until all references to the packet data have cleared their respective queues.
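For illustration only, the following Python sketch models the shared-buffer behavior just described: per-egress-pipeline queues hold references to packet data, and the data is retained until all references (e.g., for a multicast packet) have been dequeued. The class and method names are hypothetical, not the traffic manager's actual implementation.

    from collections import deque

    class OutputBuffer:
        """Shared packet-data store with per-egress-pipeline queues of references."""
        def __init__(self, num_egress_pipelines):
            self.data = {}                                    # packet_id -> packet bytes
            self.refcount = {}                                # packet_id -> outstanding references
            self.queues = [deque() for _ in range(num_egress_pipelines)]

        def enqueue(self, packet_id, packet_bytes, egress_pipelines):
            self.data[packet_id] = packet_bytes
            self.refcount[packet_id] = len(egress_pipelines)
            for pipe in egress_pipelines:                     # multicast: one reference per pipeline
                self.queues[pipe].append(packet_id)

        def dequeue(self, pipe):
            packet_id = self.queues[pipe].popleft()
            packet_bytes = self.data[packet_id]
            self.refcount[packet_id] -= 1
            if self.refcount[packet_id] == 0:                 # last reference has cleared its queue
                del self.data[packet_id], self.refcount[packet_id]
            return packet_bytes

    buf = OutputBuffer(num_egress_pipelines=2)
    buf.enqueue("pkt1", b"...", egress_pipelines=[0, 1])      # multicast packet
    buf.dequeue(0)
    print("pkt1" in buf.data)                                 # True: pipeline 1 still holds a reference
    buf.dequeue(1)
    print("pkt1" in buf.data)                                 # False: all references have cleared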
FIG. 2 illustrates an example of a match-action unit of some embodiments. As mentioned above, a packet processing pipeline of some embodiments has several MAU stages, each of which includes packet-processing circuitry for forwarding received data packets and/or performing stateful operations based on these data packets. These operations are performed by processing values stored in the PHVs of the packets. This figure illustrates, in part, the flow of data for a PHV container through the match-action unit 200.
As mentioned, the PHV includes multiple data containers, possibly of different sizes, that are used to store packet header field values and other data (e.g., metadata, instructions, etc.) within the match-action unit. In some embodiments, packet header field values may be stored in a single container or may be mapped across containers (e.g., a 48-bit MAC address could be stored in a combination of a 32-bit data container and a 16-bit data container). In some cases, a single data container may store multiple packet header field values or other data (e.g., storing both the time to live and protocol field values of an IP header in a single 16-bit data container).
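For illustration only, the following Python sketch shows the kind of mapping described above: a 48-bit MAC address split across a 32-bit and a 16-bit container, and two 8-bit IP header fields (time to live and protocol) packed into a single 16-bit container. The container names are hypothetical.

    def split_mac(mac_48bit):
        """Split a 48-bit value into an upper 32-bit part and a lower 16-bit part."""
        return (mac_48bit >> 16) & 0xFFFFFFFF, mac_48bit & 0xFFFF

    def pack_ttl_protocol(ttl, protocol):
        """Pack two 8-bit IP header fields into a single 16-bit container."""
        return ((ttl & 0xFF) << 8) | (protocol & 0xFF)

    mac = 0x001B44113AB7
    hi32, lo16 = split_mac(mac)
    phv = {"C32_0": hi32, "C16_0": lo16, "C16_1": pack_ttl_protocol(ttl=64, protocol=6)}
    assert (phv["C32_0"] << 16) | phv["C16_0"] == mac          # original field is recoverable
    print({name: hex(value) for name, value in phv.items()})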
In some embodiments, the PHV has a fixed number of data containers of specific sizes (e.g., 8-bit data containers, 16-bit data containers, and 32-bit data containers), which are described in further detail by reference to FIG. 7 . These data containers, in some embodiments, have the capacity to carry data for both an ingress packet and an egress packet (and non-packet data processed separately from the ingress or egress packets, in some cases). In addition, the PHV data containers are divided into groups in some embodiments, with the groups of data containers being processed together by certain parts of the match-action stages. Each group within the MAU, in some embodiments, includes three types of data containers.
FIG. 2 illustrates the operations of some embodiments applied to a first type of PHV data container. In some embodiments, the first type of data container is fully available for certain match-related operations, a second type of data container is only available for these match-related operations in certain situations, and a third type of data container is not available at all for these match-related operations. In addition, each of the match-action stages generates output values for the different types of PHV containers in some embodiments, as explained further below.
As shown in FIG. 2 , the MAU stage 200 in some embodiments has a set of one or more match tables 205, a data plane stateful processing unit 210 (DSPU), a set of one or more stateful tables 215, an action crossbar 230, an action parameter memory 220, an action instruction memory 225, and an action arithmetic logic unit (ALU) 235. The match table set 205 can compare one or more fields in a received PHV (i.e., values stored in one or more PHV containers) to identify one or more matching flow entries (i.e., entries that match the PHV). The match table set can be TCAM tables or exact match tables in some embodiments. In some embodiments, the match table set is accessed at a memory address that is a value extracted from one or more data containers of the PHV, or a hash of this extracted value or values.
In some embodiments, the value stored in a match table record that matches a packet's flow identifier, or that is accessed at a hash-generated address, provides addresses for locations in the action parameter memory 220 and action instruction memory 225. In addition, such a value from the match table can provide an address and/or parameter for one or more records in the stateful table set 215, and can provide an instruction and/or parameter for the DSPU 210.
In some embodiments, each action table 215, 220, and 225 can be addressed through a direct addressing scheme, an indirect addressing scheme, or an independent addressing scheme, depending on the configuration of the MAU stage. In the direct addressing scheme, the action table uses the same address that is used to address the matching flow entry in the match table set 205 (e.g., a hash generated address value or a value from the PHV). On the other hand, the indirect addressing scheme accesses an action table by using an address value that is extracted from one or more records that are identified in the match table set 205 (i.e., identified in the match table set via direct addressing or record matching operations). The independent address scheme of some embodiments is similar to the direct addressing scheme except that it does not use the same address that is used to access the match table set 205. Like the direct addressing scheme, the table address in the independent addressing scheme can either be a value extracted from the PHV, or a hash of this extracted value. In some embodiments, not all the action tables 215, 220 and 225 can be accessed through these three addressing schemes (e.g., the action instruction memory 225 in some embodiments is accessed through only the direct and indirect addressing schemes).
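For illustration only, the following Python sketch contrasts the three addressing schemes described above. The hash function and field choices are illustrative assumptions, not the hardware's actual address generation.

    def direct_address(match_address):
        # Direct: reuse the same address that selected the matching flow entry.
        return match_address

    def indirect_address(matched_entry):
        # Indirect: the matched record itself carries the action-table address.
        return matched_entry["action_addr"]

    def independent_address(phv, table_size):
        # Independent: derive an address from the PHV (here, a hash of one field),
        # separately from the address used to access the match table.
        return hash(phv["src_ip"]) % table_size

    phv = {"src_ip": "192.0.2.10"}
    matched_entry = {"action_addr": 42}
    print(direct_address(7), indirect_address(matched_entry), independent_address(phv, 1024))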
As shown, the DSPU 210 and the stateful table set 215 also receive the input PHV values (at least the values from the first-type data containers, and in some cases values from the second-type data containers). The PHV data containers can store instructions and/or parameters for the DSPU as well as memory addresses and/or parameters for the stateful table set 215, in addition to packet header field values. Such instructions, parameters, and memory/addresses are calculated in previous match-action stages and stored to the PHV data containers in some embodiments.
The DSPU 210 in some embodiments performs one or more stateful operations, while a set of stateful tables 215 stores state data used and generated by the DSPU 210. In some embodiments, the DSPU is a programmable arithmetic logic unit (ALU) or set of programmable ALUs that performs operations synchronously with the dataflow of the packet-processing pipeline (i.e., synchronously at the line rate). As such, the DSPU can process a different PHV (or one ingress and one egress PHV) every clock cycle, thus ensuring that the DSPU is able to operate synchronously with the dataflow of the packet-processing pipeline. In some embodiments, a DSPU performs every computation with fixed latency (e.g., fixed number of clock cycles). In some embodiments, the remote or local control plane provides configuration data to program the DSPU.
The DSPU 210 outputs an action parameter to the action crossbar 230. The action parameter memory 220 also outputs an action parameter to this crossbar 230. The action parameter memory 220 retrieves the action parameter that it outputs from the record identified by the address provided by the match table set 205, based on a matched entry in the match table set 205. These action parameters, in some embodiments, are constants or other values that are either (i) output as PHV values or (ii) used to perform a calculation in order to generate an output PHV value.
The action crossbar 230 in some embodiments maps the action parameters received from the DSPU 210 and action parameter memory 220 to an action parameter bus 240. For different data packets, the action crossbar 230 can map the action parameters from DSPU 210 and memory 220 differently to this bus 240. The crossbar can supply the action parameters from either of these sources in their entirety to this bus 240, or it can concurrently select different portions of these parameters for this bus.
This bus provides the action parameter to an operand multiplexer (MUX) 240. The operand MUX 240 receives the action parameter from the action crossbar 230, as well as the values from the input PHV containers. Based on instructions from the action instruction memory 225, the operand MUX 240 provides operands to the action ALU 235.
The action ALU 235 also receives an instruction to execute from the action instruction memory 225, which specifies the calculations to perform on the operands received from the operand MUX 240. The action instruction memory 225, like the action parameter memory 220, retrieves the instructions that it outputs (to the operand MUX 240 and the action ALU 235) from the record identified by the address provided by the match table set 205. In some embodiments, the action ALU 235 is a very large instruction word (VLIW) processor. The action ALU 235 executes the instructions (from the action instruction memory 225 or, in some embodiments, the PHV) using the operands received from the operand MUX 240 (i.e., from the action crossbar 230 or the PHV).
In some embodiments, the action ALU 235 is actually a number of ALUs, one for each PHV data container of the first type. Specifically, each ALU 235 receives two operands from the operand MUX 240, which it uses to calculate one output PHV data container in accordance with its received instructions. The value stored in the output PHV data container may be as simple as one of the operands (e.g., the same value stored in the corresponding input PHV data container, a value from a different input PHV data container, a value from the stateful tables 215 or action parameter memory 220) or could involve a calculation involving both operands (e.g., decrementing the time to live value by subtracting a constant 1 received from the action parameter memory from the input time to live value). This output PHV data container is passed to the next stage (i.e., as an input PHV data container for that next stage).
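For illustration only, the following Python sketch models a single action ALU of the first-type path: two operands selected by the operand MUX plus an instruction yield one output container value, e.g., copying an operand through unchanged or decrementing a time to live value. The instruction names are hypothetical.

    def action_alu(op_a, op_b, instruction):
        """Produce one output PHV container value from two operands and an instruction."""
        if instruction == "copy_a":            # pass one operand through unchanged
            return op_a
        if instruction == "subtract":          # e.g., decrement TTL: op_a - op_b
            return op_a - op_b
        if instruction == "add":
            return op_a + op_b
        raise ValueError("unknown instruction")

    input_ttl = 64
    constant_one = 1                            # e.g., sourced from the action parameter memory
    print(action_alu(input_ttl, constant_one, "subtract"))     # 63, written to the output container
    print(action_alu(0x0A000001, 0, "copy_a"))                 # value copied forward unchanged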
As mentioned, FIG. 2 illustrates the operation of the match-action stage 200 with respect to a first-type PHV data container of some embodiments. FIG. 3 illustrates the operation of this match-action stage with respect to second-type and third-type PHV data containers. As shown, the second-type PHV data is provided to the match tables 205, DSPU 210, and state tables 215, as with the first-type PHV data.
However, in some embodiments, the values in the second-type PHV data containers are only useable for matching operations (i.e., lookups in the match tables, stateful operations with the DSPU and stateful tables) if the match-action stage has a dependency on the previous stage. When the operations of a match-action stage do not depend on the output of the previous match-action stage, some embodiments run the two match-action stages in parallel, with the first-type data containers provided to both stages in the same clock cycle (or provided to the later of the stages within a small number of transport clock cycles of the provision to the prior of the stages). In this case, the data containers of the second type will not have been provided yet to the latter of these two subsequent stages, and thus are not available for the match table, DSPU, and stateful table operations. However, if the latter of these two stages is dependent on the previous stage, then the second-type data containers will have been populated for the stage and are available at the same time as the first-type data containers. The third-type data containers are not provided to the match-tables 205, DSPU 210, or state tables 215, and instead are only provided to the operand MUX 240.
If the second-type PHV data containers are used for match operations, this data is used along with the data in the first-type containers. That is, although shown in two separate figures for explanatory purposes, in some embodiments the match tables 205, DSPU 210, and stateful tables 215 receive all of the first-type and second-type PHV containers (assuming the second-type PHV data is available based on a dependency) and can perform their operations using any of this data.
The match-action stage 200 generates output values for the second-type and third-type data containers differently than the first-type data containers in some embodiments. As described above, the operand MUX 240 provides two operands to each action ALU 235, which uses the operands to generate the output value for a first-type PHV data container. On the other hand, no action ALU is present for the second-type and third-type PHVs. Instead, two outputs of the operand MUX 240 are used as one second-type PHV output and one third-type PHV output.
The input third-type PHV data containers, as shown, are only input to the operand MUX 240, and thus the only operation that can be performed on this data is to copy the value to another data container (e.g., to a first-type container or second-type container, or in some cases to a different third-type container) for use in a later match-action stage. The operand MUX 240, in some embodiments, receives all of the input PHVs (first-type, second-type, and third-type) as well as the values from the action crossbar 230, and outputs pairs of operands either to the action ALUs 235 (for calculation of the first-type output PHV values) or directly as second-type and third-type output PHV values. In some embodiments, additional restrictions require that the third-type PHV container output values can only be sourced from PHV container input values (of any of the three types), while the second-type PHV container output values (as well as the operands for the action ALUs calculating the first-type values) can be sourced from the PHV container input values as well as the action and constant values provided via the action crossbar 230. Some embodiments use multiple operand MUXes 240 for separate groups of PHV data containers (e.g., groups that include specific numbers of first-type, second-type, and third-type PHV data containers). In this case, values can be copied from one PHV data container to another within a group, but not between groups. In addition, in some embodiments, values can be copied from one PHV data container to a second PHV data container without affecting the value in the first container (i.e., the first and second output PHV containers would both store the value from the first input PHV container in this case).
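For illustration only, the following Python sketch captures the output-sourcing restrictions just described for one group of containers: a third-type (dark) output may only be sourced from input PHV containers of the group, while a second-type (mocha) output may also be sourced from action constants supplied via the crossbar. Container and constant names are hypothetical.

    def select_output(source, group_inputs, action_constants, output_type):
        """Pick one output value for a container while enforcing the sourcing rules."""
        if source in group_inputs:
            return group_inputs[source]                        # copy within the same group
        if source in action_constants:
            if output_type == "dark":
                raise ValueError("dark outputs cannot be sourced from action constants")
            return action_constants[source]
        raise ValueError("source not available to this group")

    group_inputs = {"W0": 0xAB, "W1": 0xCD, "M0": 0x11, "D0": 0x22}   # regular/mocha/dark inputs
    action_constants = {"const0": 0x01}

    print(select_output("W0", group_inputs, action_constants, "dark"))       # dark <- regular input: allowed
    print(select_output("const0", group_inputs, action_constants, "mocha"))  # mocha <- action constant: allowed
    # select_output("const0", group_inputs, action_constants, "dark") would raise ValueError.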
FIG. 4 illustrates a summary chart 400 of the properties of the three types of PHV data containers of some embodiments, which may also be referred to as regular PHV (first-type), mocha PHV (second-type), and dark PHV (third-type). As described above, in some embodiments these PHV data container types differ in terms of (i) whether the input PHV can be used for match-related operations, (ii) whether the data containers are visible to the parser and deparser, (iii) whether the output value is generated by the VLIW action ALUs, and (iv) which input operands may be used for the output values.
As shown, the first-type PHV containers fully participate in match-related operations, while the second-type PHV containers only participate in these operations in a particular stage so long as there is a dependency on the previous stage (so that the particular stage does not execute concurrently with the previous stage), and the third-type PHV containers are not used for these match-related operations. The match-related operations include generating hashes from the values in PHV data containers for exact-match addresses, selector tables, hash-addressed stateful tables, etc.; using the values directly in stateful tables; generating hash digests for hardware learning; passing the values to the action crossbar as action constants; and passing table addresses to later MAU stages.
In terms of generating output, as explained by reference to FIGS. 2 and 3 , the output for the first-type PHV containers is generated by the action ALUs, using both input PHV values (from the same group of PHV data containers, in some embodiments) as well as the action constants. The output for the second-type and third-type PHV containers is generated without the action ALUs (i.e., using the operands directly from the operand MUX). The second-type PHV container output can be based on input PHV values or action constants, whereas the third-type PHV container output can only be sourced from the input PHV values. Lastly, as described further below, the first-type and second-type PHV containers are visible to the deparser (i.e., generated by the parser, and received by the deparser), while the third-type PHV containers only exist within the match-action unit.
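For reference, the properties summarized in chart 400 can be restated as the following small Python table (a paraphrase of the description above, not a reproduction of the figure itself):

    PHV_CONTAINER_TYPES = {
        "regular (first type)": {
            "match_related_operations": "always available",
            "output_generated_by": "VLIW action ALU",
            "output_sources": ["input PHV values", "action constants"],
            "visible_to_parser_and_deparser": True,
        },
        "mocha (second type)": {
            "match_related_operations": "available only if the stage depends on the previous stage",
            "output_generated_by": "operand MUX output (no action ALU)",
            "output_sources": ["input PHV values", "action constants"],
            "visible_to_parser_and_deparser": True,
        },
        "dark (third type)": {
            "match_related_operations": "not available",
            "output_generated_by": "operand MUX output (no action ALU)",
            "output_sources": ["input PHV values"],
            "visible_to_parser_and_deparser": False,
        },
    }
    for container_type, properties in PHV_CONTAINER_TYPES.items():
        print(container_type, "->", properties["match_related_operations"])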
FIG. 5 conceptually illustrates a process 500 of some embodiments for generating PHV output values. In some embodiments, the process 500 is performed by a match-action stage of a network forwarding IC of some embodiments. This process assumes that the second-type PHV containers are available for match-related operations (i.e., that due to dependencies, the stage is not executing synchronously with the previous stage). In addition, it should be understood that the process 500 is conceptual, and represents operations performed by various components within the match-action stage. In some embodiments, some of the operations are performed as a linear process, while some operations are performed synchronously with other operations of the process.
As shown, the process 500 begins by receiving (at 505) a set of input PHV containers including three types of container (i.e., the three types of containers described in FIG. 4 above). In some embodiments, the first-type and second-type containers are received by numerous match-related components of the stage (e.g., the match tables, DSPU, stateful tables, and operand MUX), while the third-type containers are only directed to the operand MUXes.
The process 500 then performs (at 510) match and action operations using the first-type and second-type PHV data containers. These operations, performed by, e.g., the match tables (and associated components, such as hash generators), DSPU, stateful tables, action instruction memory, and action parameter memory, include generating hashes from the values in PHV data containers for exact-match addresses, selector tables, hash-addressed stateful tables, etc.; using the values directly in stateful tables; generating hash digests for hardware learning; passing the values to the action crossbar as action constants; and passing table addresses to later MAU stages.
Next, the process 500 uses (at 515) the operand multiplexer(s) to route the values from the various action operations and input PHV to the appropriate action ALUs (or to the output PHV containers). The process generates (at 520) the output values for the first-type PHV data containers using the action ALUs, based on the operands received from the multiplexer(s). In addition, the process generates (at 525) the output values for the second-type and third-type PHV containers as the operand multiplexer output without the use of the action ALUs. As described above, the operand MUX of some embodiments outputs two operands to each action ALU which the action ALUs use to generate the output values for the first-type data containers, while additional pairs of operands are used directly as the second-type and third-type data containers.
The PHV, as noted above, is used to not only carry packet header field values between match-action stages (and allow the stages to modify these values), but also to carry metadata (e.g., the ingress port at which a packet is received, the egress port out of which a packet should be sent, multicast group identifiers, etc.) as well as instructions for subsequent stages, memory addresses for table lookups in subsequent stages, control/data flow required for MAU processing, etc. As the number of match-action stages on the network forwarding IC is limited in some embodiments (for both packet latency and physical area reasons), configuring the match-action unit to perform all desired packet-processing operations may be difficult. The expansion of packet headers (e.g., due to larger encapsulation lengths) only adds to this difficulty.
FIG. 6 conceptually illustrates this idea of “metadata bloat” via a graph 600. This graph illustrates the relative amount of metadata required for the PHV to carry (where metadata here also includes instructions, memory addresses, etc.) as a function of the MAU stage. It should be understood that this is a conceptual graph and represents a typical packet, rather than any specific measurements or exact amounts of metadata.
As shown, the match-action unit 605 includes N stages. The amount of metadata required to be carried by the PHV starts out low at the initial stage (because the only metadata will have come from the parser, such as ingress port, etc.). This increases to the middle stage M (if there are an even number of stages, the peak may be at the output of stage N/2), although in different configurations the amount of metadata required may peak before or after the exact middle of the match-action stage sequence. Around the middle of the sequence of match-action stages, the PHV will be carrying the most instructions/addresses/etc., as the earlier stages generate this metadata for use by the later stages. In the later stages, these instructions have been carried out, so the amount of metadata required decreases. By stage N, the metadata required is generally limited to instructions for the deparser (e.g., the list of protocols that make up the packet header) or traffic manager (e.g., a multicast group identifier, an egress queue, etc.).
The use of the multiple types of data containers enables the expansion of the size of the PHV within the match-action unit in some embodiments, without a corresponding expansion in either (i) the size of the PHV output by the parser or provided to the deparser or (ii) the number of wires required to transfer the PHV data from stage to stage. The parser outputs a first number of PHV data containers (including first-type and second-type containers), and then the first match-action stage expands the PHV to a second (larger) number of PHV data containers (adding the third-type containers). Each of the intermediate stages of the match-action unit receives the expanded PHV, potentially modifies the values of the PHV, and passes the expanded PHV to the next stage. At the last stage, the PHV is reduced back to the first number of data containers (including first-type and second-type containers), and provided to the deparser.
FIG. 7 conceptually illustrates the expansion of the PHV within the MAU. Specifically, FIG. 7 conceptually illustrates the types of PHV data containers and relative numbers of each type of container sent from the parser 705 to the first MAU stage 710, between subsequent MAU stages, and from the last MAU stage 720 to the deparser 725.
As shown, the parser 705 outputs the PHV for a packet (or for one ingress and one egress packet) to the first MAU stage 710. This includes three arrows representing first-type PHV data containers and one arrow representing second-type PHV data containers. From the parser perspective, all of these data containers are of the same type, with the configuration specifying which packet data to store in which data containers. In some embodiments, the PHV as output by the parser includes specific groups (e.g., groups of four PHV containers, with every fourth container being a second-type PHV data container). Each such group may be assigned to the ingress packet or egress packet, in some embodiments. As one example, a PHV could have 224 data containers, including sixty-four 32-bit containers (sixteen of which are second-type containers), sixty-four 8-bit containers (sixteen of which are second-type containers), and ninety-six 16-bit containers (twenty-four of which are second-type containers). Other embodiments may use different numbers and/or different sizes of PHV containers.
The first MAU stage 710 receives the PHV from the parser 705, and outputs to the second stage 715 an expanded PHV. As shown, this includes three arrows representing first-type PHV data containers, one arrow representing second-type PHV data containers, and one arrow representing third-type PHV data containers. Essentially, for each second-type container received from the parser 705, the first MAU stage 710 outputs both a second-type container and a third-type container. In some embodiments, within the MAU, the PHV from the parser is divided into groups that each have sixteen containers (in the 224-container example above, this would include four 8-bit groups, four 32-bit groups, and six 16-bit groups), with twelve first-type and four second-type containers per group. The expanded PHV in this example has twenty containers per group, with twelve first-type, four second-type, and four third-type containers per group (for a total of 280 total data containers). Through the operand MUXes and action ALUs, data can be copied from one container in a group to another container in a group at an MAU stage.
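For illustration only, the following Python snippet reproduces the arithmetic of this example: 224 parser-visible containers divided into 14 groups of 16 (twelve first-type plus four second-type), expanded within the MAU to 20 containers per group (adding four third-type), for 280 containers in total.

    parser_phv = {32: 64, 8: 64, 16: 96}                 # container bit-width -> count from parser
    groups = {width: count // 16 for width, count in parser_phv.items()}   # 16 containers per group

    total_from_parser = sum(parser_phv.values())          # 224 containers visible to parser/deparser
    total_groups = sum(groups.values())                    # 14 groups (4 + 4 + 6)
    expanded_total = total_groups * (12 + 4 + 4)           # regular + mocha + dark per group within the MAU

    print(total_from_parser, total_groups, expanded_total)   # 224 14 280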
Each of the subsequent intermediate MAU stages receives the expanded PHV from the previous stages and outputs this expanded PHV (typically modified in some way) to the next stage. The final MAU stage 720, however, receives the expanded PHV and outputs a reduced PHV to the deparser 725 having the same number of data containers as those output by the parser 705. As shown, this reduced PHV is shown by three arrows representing first-type containers and one arrow representing second-type data containers. In the example above, this would again be 224 total data containers provided to the deparser 725, the same as were output by the parser. In some embodiments, the final stage 720 outputs all of the data containers as first-type containers, in that the action ALUs are used to generate all of the outputs at this stage. Irrespective of whether the data containers are output as first-type or second-type containers, in some embodiments these are equivalent from the perspective of the parser and deparser.
As mentioned, although the size of the PHV is expanded within the MAU, the use of these different types of containers avoids expanding the number of wires required to transfer the PHV data from stage to stage. In some embodiments, each match-action stage on the network forwarding IC has a given number of wires passing over the stage, with a first set of wires carrying the input PHV bits and a second set of wires carrying the output PHV bits. This enables the PHV to be forwarded to the next stage before processing when that next stage is not dependent on the current stage outputs. In addition, the second-type and/or third-type data container bits use some of the input wires as output wires (with fewer input wires needed due to the restrictions on the second-type and third-type data containers).
This expansion of the PHV data enables a compiler to make optimal use of the different types of PHV data containers for different applications. In some embodiments, the MAU stages are configured by a controller according to a compiled program (or multiple compiled programs, such as an ingress program and an egress program). The compiler receives a program or set of programs (e.g., P4 programs) and assigns different parameters to the various PHV data containers available for each stage. In a typical program, not all of the parameters are needed for matching at each stage, and similarly not all of the parameters need to be used as operands for the ALUs at each stage. The program requirements determine the specific types of data containers required at each stage for each program, and the expansion of the number of PHV data containers enables the compiler to accommodate a larger number of parameters without a significant hardware expansion.
As noted above, the multiplexer that generates the operands for the ALUs as well as the outputs for the second-type and third-type data containers enables MAU stages to copy data values between the different types of data containers (within a group of data containers, in some embodiments). Thus, while the values stored in the third-type data containers are not available for matching in a particular stage, these values can be moved to a different type of data container for use in match and/or action operations at a later stage. In some embodiments, a compiler determines the configuration data that indicates to which PHV data container each header field or piece of metadata is written at each match-action stage, in order for the packet-processing pipeline to execute a specific packet-processing program provided to the compiler. The compiler thus ensures that, when needed for match operations, specific packet header field or metadata values are stored in a PHV data container that is accessible for the match operations.
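For illustration only, the following Python sketch shows a greatly simplified version of this kind of compiler decision: a field matched at only one stage is parked in a dark container except immediately before and during the stage that matches on it. This is an assumed toy heuristic, not the actual compiler's allocation algorithm.

    def container_plan(match_stages, num_stages):
        """For each stage, pick the container type holding the field in that stage's output PHV."""
        plan = []
        for stage in range(num_stages):
            # The field must sit in a match-capable container while it is being matched,
            # and in the output of the stage just before, so the matching stage can read it.
            if stage in match_stages or (stage + 1) in match_stages:
                plan.append("regular")
            else:
                plan.append("dark")
        return plan

    # e.g., a destination IP address matched only at stage 8 of a 12-stage MAU (stages 0-11):
    print(container_plan(match_stages={8}, num_stages=12))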
FIG. 8 conceptually illustrates an example of the movement of a packet header field value (specifically, the destination IP address of a packet) between different data containers over the course of a packet processing pipeline 800. As shown, the parser 805 outputs a PHV for a received packet including first-type and second-type containers, with the destination IP address of the received packet stored in one of the first-type containers.
The destination IP address is not needed by the first several stages, and thus the first MAU stage 810 moves this data into a third-type data container (which is not accessible for match-operations), thereby freeing up the first-type and second-type containers for data that is required at earlier stages (e.g., other packet header fields for match operations, instructions for earlier stages, etc.). The seventh match-action stage 815 in the pipeline 800, however, moves the destination IP address value back to a first-type PHV data container, so that the eighth match-action stage 820 can use this value in a match operation. Specifically, the eighth stage 820 performs a routing operation using the match tables, matching on the destination IP address. This generates a next hop address, which is read from, e.g., the action parameter memory and written to another PHV data container (a first-type container). Finally, in the last match-action stage 825, the next hop address is moved to a second-type PHV data container, and the first-type and second-type data containers are provided to the deparser 830.
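For illustration only, the following Python snippet traces, per the example of FIG. 8, which type of container holds the destination IP address and the next hop address at each point in the pipeline. Stage numbers and container-type labels follow the example in the text; the data structure itself is illustrative.

    trace = [
        ("parser output",    {"dst_ip": "regular", "next_hop": None}),
        ("after stage 1",    {"dst_ip": "dark",    "next_hop": None}),      # parked to free match-capable space
        ("after stage 7",    {"dst_ip": "regular", "next_hop": None}),      # restored for the routing match
        ("after stage 8",    {"dst_ip": "regular", "next_hop": "regular"}), # route lookup writes the next hop
        ("after last stage", {"dst_ip": "regular", "next_hop": "mocha"}),   # deparser sees regular/mocha only
    ]
    for point, containers in trace:
        print(f"{point:>16}: {containers}")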
FIG. 9 conceptually illustrates an electronic system 900 with which some embodiments of the invention are implemented. The electronic system 900 can be used to execute any of the control, virtualization, or operating system applications described above. The electronic system 900 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 900 includes a bus 905, processing unit(s) 910, a system memory 925, a read-only memory 930, a permanent storage device 935, input devices 940, and output devices 945.
The bus 905 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 900. For instance, the bus 905 communicatively connects the processing unit(s) 910 with the read-only memory 930, the system memory 925, and the permanent storage device 935.
From these various memory units, the processing unit(s) 910 retrieves instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
The read-only-memory (ROM) 930 stores static data and instructions that are needed by the processing unit(s) 910 and other modules of the electronic system. The permanent storage device 935, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 900 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 935.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 935, the system memory 925 is a read-and-write memory device. However, unlike storage device 935, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 925, the permanent storage device 935, and/or the read-only memory 930. From these various memory units, the processing unit(s) 910 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 905 also connects to the input and output devices 940 and 945. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 940 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 945 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in FIG. 9, bus 905 also couples electronic system 900 to a network 965 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 900 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including FIG. 5 ) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims (26)

The invention claimed is:
1. An integrated circuit for use in network packet forwarding, the integrated circuit comprising:
configurable packet processing circuitry to receive at least one packet, the at least one packet comprising header fields, the configurable packet processing circuitry being configurable, when the integrated circuit is in operation, to comprise a plurality of packet processing stages, the plurality of packet processing stages comprising:
at least one parser stage to identify the header fields, the at least one parser stage also to store, in data containers, header field data of the header fields, the data containers belonging to a plurality of data container types, the plurality of data container types comprising at least one data container type and at least one other data container type, the at least one data container type and the at least one other data container type having two different sizes and being configurable based upon parameters specified in compiled program instructions to be received by the integrated circuit; and
match-action stages to modify at least certain of the header field data;
wherein:
the plurality of data container types comprises another data container type whose stored data is unavailable for modification by the match-action stages; and
at least a portion of the header field data is to be stored in the another data container type.
2. The integrated circuit of claim 1, wherein:
the plurality of packet processing stages also comprises at least one other stage for use in generating at least one egress packet, based upon modified header field data from the match-action stages.
3. The integrated circuit of claim 2, wherein:
the match-action stages also implement table data look up operations involving the at least certain of the header field data.
4. The integrated circuit of claim 3, wherein:
the configurable packet processing circuitry is configurable, when the integrated circuit is in the operation, to comprise at least one configurable packet processing pipeline that comprises the plurality of packet processing stages; and
the integrated circuit also comprises packet traffic management, queuing, and shared buffering circuitry between certain stages of the plurality of packet processing stages.
5. The integrated circuit of claim 4, wherein:
the at least one configurable packet processing pipeline comprises a plurality of configurable packet processing pipelines; and
the plurality of configurable packet processing pipelines comprise at least one ingress pipeline and at least one egress pipeline.
6. The integrated circuit of claim 5, wherein:
the compiled program instructions are to be received by the integrated circuit, when the integrated circuit is in the operation, from a controller associated with a remote control plane.
7. The integrated circuit of claim 6, wherein:
the compiled program instructions are to be generated by a compiler based upon at least one P4 program.
8. The integrated circuit of claim 7, wherein:
the data containers are associated, at least in part, with packet header vector data that is to be provided to the plurality of packet processing stages.
9. One or more non-transient computer readable media storing instructions for being executed by an integrated circuit, the integrated circuit being for use in network packet forwarding, the instructions when executed by the integrated circuit resulting in the integrated circuit being configured to perform operations comprising:
receiving, by configurable packet processing circuitry of the integrated circuit, at least one packet, the at least one packet comprising header fields, the configurable packet processing circuitry being configurable, when the integrated circuit is in operation, to comprise a plurality of packet processing stages, the plurality of packet processing stages comprising at least one parser stage and match-action stages;
identifying, by the at least one parser stage, the header fields;
storing, by the at least one parser stage, in data containers, header field data of the header fields, the data containers belonging to a plurality of data container types, the plurality of data container types comprising at least one data container type and at least one other data container type, the at least one data container type and the at least one other data container type having two different sizes and being configurable based upon parameters specified in compiled program instructions to be received by the integrated circuit; and
modifying, by the match-action stages, at least certain of the header field data;
wherein:
the plurality of data container types comprises another data container type whose stored data is unavailable for modification by the match-action stages; and
at least a portion of the header field data is to be stored in the another data container type.
10. The one or more non-transient computer readable media of claim 9, wherein:
the plurality of packet processing stages also comprises at least one other stage for use in generating at least one egress packet, based upon modified header field data from the match-action stages.
11. The one or more non-transient computer readable media of claim 10, wherein:
the match-action stages also implement table data look up operations involving the at least certain of the header field data.
12. The one or more non-transient computer readable media of claim 11, wherein:
the configurable packet processing circuitry is configurable, when the integrated circuit is in the operation, to comprise at least one configurable packet processing pipeline that comprises the plurality of packet processing stages; and
the integrated circuit also comprises packet traffic management, queuing, and shared buffering circuitry between certain stages of the plurality of packet processing stages.
13. The one or more non-transient computer readable media of claim 12, wherein:
the at least one configurable packet processing pipeline comprises a plurality of configurable packet processing pipelines; and
the plurality of configurable packet processing pipelines comprise at least one ingress pipeline and at least one egress pipeline.
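Claims 12 and 13 place packet traffic management, queuing, and shared buffering circuitry between ingress and egress pipelines. The sketch below models that arrangement with a simple strict-priority queue between two pipeline functions; the queue count, service discipline, and field names are illustrative assumptions rather than details taken from the patent.

```python
# Schematic model of claims 12-13: ingress pipeline -> traffic manager
# (queuing/shared buffering) -> egress pipeline. All parameters are
# illustrative assumptions.

from collections import deque


def ingress_pipeline(pkt: dict) -> dict:
    pkt["queue"] = pkt.get("priority", 0)      # e.g., choose an output queue
    return pkt


def egress_pipeline(pkt: dict) -> dict:
    pkt["ttl"] = pkt.get("ttl", 64) - 1        # e.g., final header rewrite
    return pkt


class TrafficManager:
    """Shared buffering and queuing between the ingress and egress stages."""

    def __init__(self, num_queues: int = 8):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, pkt: dict) -> None:
        self.queues[pkt["queue"] % len(self.queues)].append(pkt)

    def dequeue(self):
        for q in self.queues:                  # simple strict-priority service
            if q:
                return q.popleft()
        return None


if __name__ == "__main__":
    tm = TrafficManager()
    tm.enqueue(ingress_pipeline({"priority": 2, "ttl": 64}))
    print(egress_pipeline(tm.dequeue()))
```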
14. The one or more non-transient computer readable media of claim 13, wherein:
the compiled program instructions are to be received by the integrated circuit, when the integrated circuit is in the operation, from a controller associated with a remote control plane.
15. The one or more non-transient computer readable media of claim 14, wherein:
the compiled program instructions are to be generated by a compiler based upon at least one P4 program.
16. The one or more non-transient computer readable media of claim 15, wherein:
the data containers are associated, at least in part, with packet header vector data that is to be provided to the plurality of packet processing stages.
17. A method implemented using an integrated circuit, the integrated circuit being for use in network packet forwarding, the method comprising:
receiving, by configurable packet processing circuitry of the integrated circuit, at least one packet, the at least one packet comprising header fields, the configurable packet processing circuitry being configurable, when the integrated circuit is in operation, to comprise a plurality of packet processing stages, the plurality of packet processing stages comprising at least one parser stage and match-action stages;
identifying, by the at least one parser stage, the header fields;
storing, by the at least one parser stage, in data containers, header field data of the header fields, the data containers belonging to a plurality of data container types, the plurality of data container types comprising at least one data container type and at least one other data container type, the at least one data container type and the at least one other data container type having two different sizes and being configurable based upon parameters specified in compiled program instructions to be received by the integrated circuit; and
modifying, by the match-action stages, at least certain of the header field data;
wherein:
the plurality of data container types comprises another data container type whose stored data is unavailable for modification by the match-action stages; and
at least a portion of the header field data is to be stored in the another data container type.
18. The method of claim 17, wherein:
the plurality of packet processing stages also comprises at least one other stage for use in generating at least one egress packet, based upon modified header field data from the match-action stages.
19. The method of claim 18, wherein:
the match-action stages also implement table data look up operations involving the at least certain of the header field data.
20. The method of claim 19, wherein:
the configurable packet processing circuitry is configurable, when the integrated circuit is in the operation, to comprise at least one configurable packet processing pipeline that comprises the plurality of packet processing stages; and
the integrated circuit also comprises packet traffic management, queuing, and shared buffering circuitry between certain stages of the plurality of packet processing stages.
21. The method of claim 20, wherein:
the at least one configurable packet processing pipeline comprises a plurality of configurable packet processing pipelines; and
the plurality of configurable packet processing pipelines comprise at least one ingress pipeline and at least one egress pipeline.
22. The method of claim 21, wherein:
the compiled program instructions are to be received by the integrated circuit, when the integrated circuit is in the operation, from a controller associated with a remote control plane.
23. The method of claim 22, wherein:
the compiled program instructions are to be generated by a compiler based upon at least one P4 program.
24. The method of claim 23, wherein:
the data containers are associated, at least in part, with packet header vector data that is to be provided to the plurality of packet processing stages.
25. A network switch for use in network packet forwarding, the network switch comprising:
ports for being coupled to at least one network;
an integrated circuit coupled to the ports, the integrated circuit comprising:
configurable packet processing circuitry to receive, via one or more of the ports, at least one packet, the at least one packet comprising header fields, the configurable packet processing circuitry being configurable, when the integrated circuit is in operation, to comprise a plurality of packet processing stages, the plurality of packet processing stages comprising:
at least one parser stage to identify the header fields, the at least one parser stage also to store, in data containers, header field data of the header fields, the data containers belonging to a plurality of data container types, the plurality of data container types comprising at least one data container type and at least one other data container type, the at least one data container type and the at least one other data container type having two different sizes and being configurable based upon parameters specified in compiled program instructions to be received by the integrated circuit; and
match-action stages to modify at least certain of the header field data;
wherein:
the plurality of data container types comprises another data container type whose stored data is unavailable for modification by the match-action stages; and
at least a portion of the header field data is to be stored in the another data container type.
26. The network switch of claim 25, wherein:
the configurable packet processing circuitry is configurable, when the integrated circuit is in the operation, to comprise at least one configurable packet processing pipeline that comprises the plurality of packet processing stages;
the integrated circuit also comprises packet traffic management, queuing, and shared buffering circuitry between certain stages of the plurality of packet processing stages;
the plurality of packet processing stages comprises at least one other packet processing stage to generate, based upon the at least certain of the header field data as modified by the match-action stages, at least one egress packet; and
the at least one egress packet is to be forwarded via at least one other of the ports.
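Claim 26 closes the loop: after the match-action stages modify the header field data, a further stage generates the egress packet and forwards it via another port. The Python sketch below reassembles a deliberately simplified (not wire-accurate) header from the modified fields and appends the result to a toy port queue; all field widths and names are assumptions for illustration.

```python
# Loose model of the deparser/egress step in claims 25-26: re-emit packet
# bytes from (possibly modified) header field data and forward via a port.
# The header layout is simplified and not a faithful Ethernet/IPv4 encoding.

import struct


def deparse(phv: dict, payload: bytes) -> bytes:
    """Rebuild a simplified header byte stream from the modified fields."""
    eth = struct.pack("!6s6sH",
                      phv["eth.dst"].to_bytes(6, "big"),
                      phv["eth.src"].to_bytes(6, "big"),
                      0x0800)
    ip = struct.pack("!BBHI", 0x45, 0, phv["ipv4.ttl"], phv["ipv4.dst"])
    return eth + ip + payload


def forward(phv: dict, payload: bytes, ports: dict) -> None:
    """Send the reassembled egress packet out of the selected port."""
    ports[phv["egress_port"]].append(deparse(phv, payload))


if __name__ == "__main__":
    ports = {3: []}
    phv = {"eth.dst": 0x0000_5E00_5301, "eth.src": 0x0200_0000_0001,
           "ipv4.dst": 0x0A000001, "ipv4.ttl": 63, "egress_port": 3}
    forward(phv, b"hello", ports)
    print(ports[3][0].hex())
```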
US17/494,515 2017-09-28 2021-10-05 Expansion of packet data within processing pipeline Active 2037-12-08 US11700212B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/494,515 US11700212B2 (en) 2017-09-28 2021-10-05 Expansion of packet data within processing pipeline
US18/201,060 US20230300087A1 (en) 2017-09-28 2023-05-23 Expansion of packet data within processing pipeline

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762564659P 2017-09-28 2017-09-28
US15/835,233 US10594630B1 (en) 2017-09-28 2017-12-07 Expansion of packet data within processing pipeline
US16/789,339 US11362967B2 (en) 2017-09-28 2020-02-12 Expansion of packet data within processing pipeline
US17/494,515 US11700212B2 (en) 2017-09-28 2021-10-05 Expansion of packet data within processing pipeline

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/789,339 Continuation US11362967B2 (en) 2017-09-28 2020-02-12 Expansion of packet data within processing pipeline

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/201,060 Continuation US20230300087A1 (en) 2017-09-28 2023-05-23 Expansion of packet data within processing pipeline

Publications (2)

Publication Number Publication Date
US20220029935A1 US20220029935A1 (en) 2022-01-27
US11700212B2 true US11700212B2 (en) 2023-07-11

Family

ID=69778910

Family Applications (5)

Application Number Title Priority Date Filing Date
US15/835,233 Active US10594630B1 (en) 2017-09-28 2017-12-07 Expansion of packet data within processing pipeline
US15/835,235 Active US10771387B1 (en) 2017-09-28 2017-12-07 Multiple packet data container types for a processing pipeline
US16/789,339 Active US11362967B2 (en) 2017-09-28 2020-02-12 Expansion of packet data within processing pipeline
US17/494,515 Active 2037-12-08 US11700212B2 (en) 2017-09-28 2021-10-05 Expansion of packet data within processing pipeline
US18/201,060 Pending US20230300087A1 (en) 2017-09-28 2023-05-23 Expansion of packet data within processing pipeline

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US15/835,233 Active US10594630B1 (en) 2017-09-28 2017-12-07 Expansion of packet data within processing pipeline
US15/835,235 Active US10771387B1 (en) 2017-09-28 2017-12-07 Multiple packet data container types for a processing pipeline
US16/789,339 Active US11362967B2 (en) 2017-09-28 2020-02-12 Expansion of packet data within processing pipeline

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/201,060 Pending US20230300087A1 (en) 2017-09-28 2023-05-23 Expansion of packet data within processing pipeline

Country Status (1)

Country Link
US (5) US10594630B1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200314011A1 (en) * 2020-06-16 2020-10-01 Manasi Deval Flexible scheme for adding rules to a nic pipeline
US11836225B1 (en) * 2020-08-26 2023-12-05 T-Mobile Innovations Llc System and methods for preventing unauthorized replay of a software container
CN116569533A (en) * 2020-12-11 2023-08-08 华为技术有限公司 Network device and method for switching, routing and/or gateway of data
CN112732241B (en) * 2021-01-08 2022-04-01 烽火通信科技股份有限公司 Programmable analyzer under multistage parallel high-speed processing and analysis method thereof
US12003418B2 (en) * 2021-06-25 2024-06-04 New H3C Technologies Co., Ltd. Method and apparatus for packet matching, network device, and medium
US20230064845A1 (en) * 2021-08-31 2023-03-02 Pensando Systems Inc. Methods and systems for orchestrating network flow tracing within packet processing pipelines across multiple network appliances

Citations (284)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6127900U (en) 1984-07-23 1986-02-19 株式会社 富永製作所 Refueling device
US5243596A (en) 1992-03-18 1993-09-07 Fischer & Porter Company Network architecture suitable for multicasting and resource locking
US5642483A (en) 1993-07-30 1997-06-24 Nec Corporation Method for efficiently broadcast messages to all concerned users by limiting the number of messages that can be sent at one time
US5784003A (en) 1996-03-25 1998-07-21 I-Cube, Inc. Network switch with broadcast support
US6157955A (en) 1998-06-15 2000-12-05 Intel Corporation Packet processing system including a policy engine having a classification unit
US20010043611A1 (en) 1998-07-08 2001-11-22 Shiri Kadambi High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
US20020001356A1 (en) 1999-12-28 2002-01-03 Kishan Shenoi Clock recovery and detection of rapid phase transients
US6442172B1 (en) 1996-07-11 2002-08-27 Alcatel Internetworking, Inc. Input buffering and queue status-based output control for a digital traffic switch
US6453360B1 (en) 1999-03-01 2002-09-17 Sun Microsystems, Inc. High performance network interface
US20020136163A1 (en) 2000-11-24 2002-09-26 Matsushita Electric Industrial Co., Ltd. Apparatus and method for flow control
US20020172210A1 (en) 2001-05-18 2002-11-21 Gilbert Wolrich Network device switch
US20030009466A1 (en) 2001-06-21 2003-01-09 Ta John D. C. Search engine with pipeline structure
US20030046414A1 (en) 2001-01-25 2003-03-06 Crescent Networks, Inc. Operation of a multiplicity of time sorted queues with reduced memory
US20030043825A1 (en) 2001-09-05 2003-03-06 Andreas Magnussen Hash-based data frame distribution for web switches
US20030046429A1 (en) 2001-08-30 2003-03-06 Sonksen Bradley Stephen Static data item processing
US20030063345A1 (en) 2001-10-01 2003-04-03 Dan Fossum Wayside user communications over optical supervisory channel
US20030107996A1 (en) 1998-11-19 2003-06-12 Black Alistair D. Fibre channel arbitrated loop bufferless switch circuitry to increase bandwidth without significant increase in cost
US20030118022A1 (en) 2001-12-21 2003-06-26 Chip Engines Reconfigurable data packet header processor
US20030147401A1 (en) 2000-05-10 2003-08-07 Jukka Kyronaho Resource allocation in packet network
US20030154358A1 (en) 2002-02-08 2003-08-14 Samsung Electronics Co., Ltd. Apparatus and method for dispatching very long instruction word having variable length
US20030167373A1 (en) 2002-03-01 2003-09-04 Derek Winters Method and system for reducing storage requirements for program code in a communication device
US20030219026A1 (en) 2002-05-23 2003-11-27 Yea-Li Sun Method and multi-queue packet scheduling system for managing network packet traffic with minimum performance guarantees and maximum service rate control
US20040024894A1 (en) 2002-08-02 2004-02-05 Osman Fazil Ismet High data rate stateful protocol processing
US20040031029A1 (en) 2002-08-06 2004-02-12 Kyu-Woong Lee Methods and systems for automatically updating software components in a network
US20040042477A1 (en) 2002-08-30 2004-03-04 Nabil Bitar Buffer management based on buffer sharing across ports and per-port minimum buffer guarantee
US6735679B1 (en) 1998-07-08 2004-05-11 Broadcom Corporation Apparatus and method for optimizing access to memory
US20040105384A1 (en) 2002-11-28 2004-06-03 International Business Machines Corporation Event-driven flow control for a very high-speed switching node
US20040123220A1 (en) 2002-12-18 2004-06-24 Johnson Erik J. Framer
US20040165588A1 (en) 2002-06-11 2004-08-26 Pandya Ashish A. Distributed network security system and a hardware processor therefor
US20040213156A1 (en) 2003-04-25 2004-10-28 Alcatel Ip Networks, Inc. Assigning packet queue priority
US6836483B1 (en) 1998-06-24 2004-12-28 Research Investment Network, Inc. Message system for asynchronous transfer
US20050013251A1 (en) 2003-07-18 2005-01-20 Hsuan-Wen Wang Flow control hub having scoreboard memory
US20050041590A1 (en) 2003-08-22 2005-02-24 Joseph Olakangil Equal-cost source-resolved routing system and method
CN1589551A (en) 2001-09-24 2005-03-02 艾利森公司 System and method for processing packets
US20050060428A1 (en) 2003-09-11 2005-03-17 International Business Machines Corporation Apparatus and method for caching lookups based upon TCP traffic flow characteristics
US20050078651A1 (en) 2003-08-16 2005-04-14 Samsung Electronics Co., Ltd. Method and apparatus for assigning scheduling for uplink packet transmission in a mobile communication system
US20050086353A1 (en) 1999-09-20 2005-04-21 Kabushiki Kaisha Toshiba Fast and adaptive packet processing device and method using digest information of input packet
US20050108518A1 (en) 2003-06-10 2005-05-19 Pandya Ashish A. Runtime adaptable security processor
US20050120173A1 (en) 2003-11-27 2005-06-02 Nobuyuki Minowa Device and method for performing information processing using plurality of processors
US20050129059A1 (en) 2003-12-03 2005-06-16 Zhangzhen Jiang Method of implementing PSEUDO wire emulation edge-to-edge protocol
US20050135399A1 (en) 2003-11-10 2005-06-23 Baden Eric A. Field processor for a network device
US20050149823A1 (en) 2003-12-10 2005-07-07 Samsung Electronics Co., Ltd. Apparatus and method for generating checksum
US20050198531A1 (en) 2004-03-02 2005-09-08 Marufa Kaniz Two parallel engines for high speed transmit IPSEC processing
US6948099B1 (en) 1999-07-30 2005-09-20 Intel Corporation Re-loading operating systems
US20050243852A1 (en) 2004-05-03 2005-11-03 Bitar Nabil N Variable packet-size backplanes for switching and routing systems
US6976149B1 (en) 2001-02-22 2005-12-13 Cisco Technology, Inc. Mapping technique for computing addresses in a memory of an intermediate network node
US6980552B1 (en) 2000-02-14 2005-12-27 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US20060002386A1 (en) 2004-06-30 2006-01-05 Zarlink Semiconductor Inc. Combined pipelined classification and address search method and apparatus for switching environments
US20060072480A1 (en) 2004-09-29 2006-04-06 Manasi Deval Method to provide high availability in network elements using distributed architectures
US20060092857A1 (en) 2004-11-01 2006-05-04 Lucent Technologies Inc. Softrouter dynamic binding protocol
US7046685B1 (en) 1998-12-15 2006-05-16 Fujitsu Limited Scheduling control system and switch
US20060114914A1 (en) 2004-11-30 2006-06-01 Broadcom Corporation Pipeline architecture of a network device
US20060114895A1 (en) 2004-11-30 2006-06-01 Broadcom Corporation CPU transmission of unmodified packets
US20060117126A1 (en) 2001-07-30 2006-06-01 Cisco Technology, Inc. Processing unit for efficiently determining a packet's destination in a packet-switched network
US7062571B1 (en) 2000-06-30 2006-06-13 Cisco Technology, Inc. Efficient IP load-balancing traffic distribution using ternary CAMs
US20060174242A1 (en) 2005-02-01 2006-08-03 Microsoft Corporation Publishing the status of and updating firmware components
US20060277346A1 (en) 2003-10-06 2006-12-07 David Doak Port adapter for high-bandwidth bus
US20070008985A1 (en) 2005-06-30 2007-01-11 Sridhar Lakshmanamurthy Method and apparatus to support efficient check-point and role-back operations for flow-controlled queues in network devices
US20070050426A1 (en) 2005-06-20 2007-03-01 Dubal Scott P Platform with management agent to receive software updates
US20070055664A1 (en) 2005-09-05 2007-03-08 Cisco Technology, Inc. Pipeline sequential regular expression matching
US7203740B1 (en) 1999-12-22 2007-04-10 Intel Corporation Method and apparatus for allowing proprietary forwarding elements to interoperate with standard control elements in an open architecture for network devices
US20070104102A1 (en) 2005-11-10 2007-05-10 Broadcom Corporation Buffer management and flow control mechanism including packet-based dynamic thresholding
US20070104211A1 (en) 2005-11-10 2007-05-10 Broadcom Corporation Interleaved processing of dropped packets in a network device
US20070153796A1 (en) 2005-12-30 2007-07-05 Intel Corporation Packet processing utilizing cached metadata to support forwarding and non-forwarding operations on parallel paths
US20070195773A1 (en) 2006-02-21 2007-08-23 Tatar Mohammed I Pipelined packet switching and queuing architecture
US20070208876A1 (en) 2002-05-06 2007-09-06 Davis Ian E Method and apparatus for efficiently processing data packets in a computer network
US20070230493A1 (en) 2006-03-31 2007-10-04 Qualcomm Incorporated Memory management for high speed media access control
US20070236734A1 (en) 2006-04-05 2007-10-11 Sharp Kabushiki Kaisha Image processing apparatus
US20070280277A1 (en) 2006-05-30 2007-12-06 Martin Lund Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
US20080082792A1 (en) 2006-10-03 2008-04-03 Vincent Melanie Emanuelle Luci Register renaming in a data processing system
US20080130670A1 (en) 2006-12-05 2008-06-05 Samsung Electronics Co. Ltd. Method and apparatus for managing a buffer in a communication system
US7389462B1 (en) 2003-02-14 2008-06-17 Istor Networks, Inc. System and methods for high rate hardware-accelerated network protocol processing
US20080144662A1 (en) 2006-12-14 2008-06-19 Sun Microsystems, Inc. Method and system for offloaded transport layer protocol switching
US20080175449A1 (en) 2007-01-19 2008-07-24 Wison Technology Corp. Fingerprint-based network authentication method and system thereof
US20080285571A1 (en) 2005-10-07 2008-11-20 Ambalavanar Arulambalam Media Data Processing Using Distinct Elements for Streaming and Control Processes
US20090006605A1 (en) 2007-06-26 2009-01-01 International Business Machines Corporation Extended write combining using a write continuation hint flag
CN101352012A (en) 2005-10-07 2009-01-21 安吉尔系统公司 Media data processing using distinct elements for streaming and control processes
US7492714B1 (en) 2003-02-04 2009-02-17 Pmc-Sierra, Inc. Method and apparatus for packet grooming and aggregation
US20090096797A1 (en) 2007-10-11 2009-04-16 Qualcomm Incorporated Demand based power control in a graphics processing unit
US20090106523A1 (en) 2007-10-18 2009-04-23 Cisco Technology Inc. Translation look-aside buffer with variable page sizes
US7539777B1 (en) 2002-10-25 2009-05-26 Cisco Technology, Inc. Method and system for network time protocol forwarding
US20090180475A1 (en) 2008-01-10 2009-07-16 Fujitsu Limited Packet forwarding apparatus and controlling method
US7633880B2 (en) 2004-01-05 2009-12-15 Samsung Electronics Co., Ltd. Access network device for managing queue corresponding to real time multimedia traffic characteristics and method thereof
US20100085891A1 (en) 2006-12-19 2010-04-08 Andreas Kind Apparatus and method for analysing a network
US20100128735A1 (en) 2006-01-30 2010-05-27 Juniper Networks, Inc. Processing of partial frames and partial superframes
US20100135158A1 (en) 2008-12-01 2010-06-03 Razoom, Inc. Flow State Aware QoS Management Without User Signalling
US20100140364A1 (en) 2008-12-10 2010-06-10 Honeywell International, Inc. User interface for building controller
US20100145475A1 (en) 2008-12-10 2010-06-10 Honeywell International, Inc. Building appliance controller with safety feature
US20100150164A1 (en) 2006-06-23 2010-06-17 Juniper Networks, Inc. Flow-based queuing of network traffic
US20100182920A1 (en) 2009-01-21 2010-07-22 Fujitsu Limited Apparatus and method for controlling data communication
US20100191951A1 (en) 2009-01-26 2010-07-29 Assa Abloy Ab Provisioned firmware updates using object identifiers
US20100228733A1 (en) 2008-11-12 2010-09-09 Collective Media, Inc. Method and System For Semantic Distance Measurement
US20100238812A1 (en) 2009-03-23 2010-09-23 Cisco Technology, Inc. Operating MPLS label switched paths and MPLS pseudowire in loopback mode
US7826470B1 (en) 2004-10-19 2010-11-02 Broadcom Corp. Network interface device with flow-oriented bus interface
US7889750B1 (en) 2004-04-28 2011-02-15 Extreme Networks, Inc. Method of extending default fixed number of processing cycles in pipelined packet processor architecture
US7904642B1 (en) 2007-02-08 2011-03-08 Netlogic Microsystems, Inc. Method for combining and storing access control lists
US7961734B2 (en) 2008-09-30 2011-06-14 Juniper Networks, Inc. Methods and apparatus related to packet classification associated with a multi-stage switch
US20110149960A1 (en) 2009-12-17 2011-06-23 Media Patents, S.L. Method and apparatus for filtering multicast packets
US8077611B2 (en) 2006-07-27 2011-12-13 Cisco Technology, Inc. Multilevel coupled policer
US8094659B1 (en) 2007-07-09 2012-01-10 Marvell Israel (M.I.S.L) Ltd. Policy-based virtual routing and forwarding (VRF) assignment
US20120033550A1 (en) 2010-08-06 2012-02-09 Alaxala Networks Corporation Packet relay device and congestion control method
US20120159235A1 (en) 2010-12-20 2012-06-21 Josephine Suganthi Systems and Methods for Implementing Connection Mirroring in a Multi-Core System
US20120173661A1 (en) 2011-01-04 2012-07-05 Cisco Technology, Inc. System and method for exchanging information in a mobile wireless network environment
US20120170585A1 (en) 2010-12-29 2012-07-05 Juniper Networks, Inc. Methods and apparatus for standard protocol validation mechanisms deployed over a switch fabric system
US20120177047A1 (en) 2011-01-06 2012-07-12 Amir Roitshtein Network device with a programmable core
US20120284438A1 (en) 2011-05-06 2012-11-08 Xcelemor, Inc. Computing system with data and control planes and method of operation thereof
US20130003556A1 (en) 2011-06-28 2013-01-03 Xelerated Ab Scheduling packets in a packet-processing pipeline
US20130028265A1 (en) 2010-04-23 2013-01-31 Luigi Ronchetti Update of a cumulative residence time of a packet in a packet-switched communication network
US20130100951A1 (en) 2010-06-23 2013-04-25 Nec Corporation Communication system, control apparatus, node controlling method and node controlling program
US20130108264A1 (en) 2011-11-01 2013-05-02 Plexxi Inc. Hierarchy of control in a data center network
US20130124491A1 (en) 2011-11-11 2013-05-16 Gerald Pepper Efficient Pipelined Binary Search
US20130166703A1 (en) 2011-12-27 2013-06-27 Michael P. Hammer System And Method For Management Of Network-Based Services
US20130163426A1 (en) 2011-12-22 2013-06-27 Ludovic Beliveau Forwarding element for flexible and extensible flow processing in software-defined networks
US20130163427A1 (en) 2011-12-22 2013-06-27 Ludovic Beliveau System for flexible and extensible flow processing in software-defined networks
US20130163475A1 (en) 2011-12-22 2013-06-27 Ludovic Beliveau Controller for flexible and extensible flow processing in software-defined networks
WO2013101024A1 (en) 2011-12-29 2013-07-04 Intel Corporation Imaging task pipeline acceleration
US8514855B1 (en) 2010-05-04 2013-08-20 Sandia Corporation Extensible packet processing architecture
US20130227051A1 (en) 2012-01-10 2013-08-29 Edgecast Networks, Inc. Multi-Layer Multi-Hit Caching for Long Tail Content
US20130227519A1 (en) 2012-02-27 2013-08-29 Joel John Maleport Methods and systems for parsing data objects
US20130290622A1 (en) 2012-04-27 2013-10-31 Suddha Sekhar Dey Tcam action updates
US20130315054A1 (en) 2012-05-24 2013-11-28 Marvell World Trade Ltd. Flexible queues in a network switch
US20130318107A1 (en) 2012-05-23 2013-11-28 International Business Machines Corporation Generating data feed specific parser circuits
US20130346814A1 (en) 2012-06-21 2013-12-26 Timothy Zadigian Jtag-based programming and debug
US8638793B1 (en) 2009-04-06 2014-01-28 Marvell Israel (M.I.S.L) Ltd. Enhanced parsing and classification in a packet processor
US20140040527A1 (en) 2011-04-21 2014-02-06 Ineda Systems Pvt. Ltd Optimized multi-root input output virtualization aware switch
US20140033489A1 (en) 2002-04-23 2014-02-06 Piedek Technical Laboratory Method for manufacturing quartz crystal resonator, quartz crystal unit and quartz crystal oscillator
US20140043974A1 (en) 2012-08-07 2014-02-13 Broadcom Corporation Low-latency switching
US20140082302A1 (en) 2012-09-14 2014-03-20 Xerox Corporation Systems and methods for employing an electronically-readable monitoring module associated with a customer replaceable component to update a non-volatile memory in an image forming device
US8693374B1 (en) 2012-12-18 2014-04-08 Juniper Networks, Inc. Centralized control of an aggregation network with a reduced control plane
US20140115666A1 (en) 2011-06-10 2014-04-24 Koninklijke Philips N.V. Secure protocol execution in a network
US20140115571A1 (en) 2012-10-23 2014-04-24 Asus Technology Pte Ltd. Electronic device, non-transient readable medium and method thereof
US8738860B1 (en) 2010-10-25 2014-05-27 Tilera Corporation Computing in parallel processing environments
US20140181232A1 (en) 2012-12-20 2014-06-26 Oracle International Corporation Distributed queue pair state on a host channel adapter
US20140181818A1 (en) 2011-09-07 2014-06-26 Amazon Technologies, Inc. Optimization of packet processing by delaying a processor from entering an idle state
US20140204943A1 (en) 2013-01-24 2014-07-24 Douglas A. Palmer Systems and methods for packet routing
US8798047B1 (en) 2011-08-29 2014-08-05 Qlogic, Corporation Methods and systems for processing network information
US20140233568A1 (en) 2012-03-19 2014-08-21 Intel Corporation Techniques for packet management in an input/output virtualization system
US20140241359A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing vliw action unit with or-multi-ported instruction memory
US20140241358A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with a vliw action engine
US20140241361A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with configurable memory allocation
US20140244966A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with stateful actions
US20140241362A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with configurable bit allocation
US20140269432A1 (en) 2013-03-15 2014-09-18 Cisco Technology, Inc. vPC AUTO CONFIGURATION
US20140301192A1 (en) 2013-04-05 2014-10-09 Futurewei Technologies, Inc. Software Defined Networking (SDN) Controller Orchestration and Network Virtualization for Data Center Interconnection
US20140321473A1 (en) 2013-04-26 2014-10-30 Mediatek Inc. Active output buffer controller for controlling packet data output of main buffer in network device and related method
US20140321476A1 (en) 2013-04-26 2014-10-30 Mediatek Inc. Packet output controller and method for dequeuing multiple packets from one scheduled output queue and/or using over-scheduling to schedule output queues
US20140328344A1 (en) 2013-02-01 2014-11-06 Texas Instruments Incorporated Packet processing match and action pipeline structure with dependency calculation removing false dependencies
US20140328180A1 (en) 2012-11-08 2014-11-06 Texas Instruments Incorporated Structure for implementing openflow all group buckets using egress flow table entries
US20140334489A1 (en) 2012-11-08 2014-11-13 Texas Instruments Incorporated Openflow match and action pipeline structure
US20150003259A1 (en) 2012-01-30 2015-01-01 Nec Corporation Network system and method of managing topology
US20150010000A1 (en) 2013-07-08 2015-01-08 Nicira, Inc. Hybrid Packet Processing
US20150009796A1 (en) 2013-07-08 2015-01-08 Nicira, Inc. Reconciliation of Network State Across Physical Domains
US20150020060A1 (en) 2011-11-11 2015-01-15 Wyse Technology L.L.C. Robust firmware update with recovery logic
US20150023147A1 (en) 2013-07-17 2015-01-22 Kt Corporation Methods for managing transaction in software defined network
US20150043589A1 (en) 2013-08-09 2015-02-12 Futurewei Technologies, Inc. Extending OpenFlow to Support Packet Encapsulation for Transport over Software-Defined Networks
US8971338B2 (en) 2012-01-09 2015-03-03 Telefonaktiebolaget L M Ericsson (Publ) Expanding network functionalities for openflow based split-architecture networks
US20150081833A1 (en) 2013-09-15 2015-03-19 Nicira, Inc. Dynamically Generating Flows with Wildcard Fields
US20150092539A1 (en) 2013-09-30 2015-04-02 Cisco Technology, Inc. Data-plane driven fast protection mechanism for mpls pseudowire services
US20150109913A1 (en) 2013-10-18 2015-04-23 Fujitsu Limited Packet processing apparatus, packet processing method, and non-transitory computer-readable storage medium
US20150110114A1 (en) 2013-10-17 2015-04-23 Marvell Israel (M.I.S.L) Ltd. Processing Concurrency in a Network Device
US20150121355A1 (en) 2013-10-28 2015-04-30 International Business Machines Corporation Unified update tool for multi-protocol network adapter
US20150131666A1 (en) 2013-11-08 2015-05-14 Electronics And Telecommunications Research Institute Apparatus and method for transmitting packet
US20150131667A1 (en) 2013-11-14 2015-05-14 Electronics And Telecommunications Research Institute Sdn-based network device with extended function and method of processing packet in the same device
US20150142932A1 (en) 2013-11-18 2015-05-21 Tellabs Oy Network element and a controller for managing the network element
US20150142991A1 (en) 2011-04-21 2015-05-21 Efficiency3 Corp. Electronic hub appliances used for collecting, storing, and processing potentially massive periodic data streams indicative of real-time or other measuring parameters
US20150146527A1 (en) 2013-11-26 2015-05-28 Broadcom Corporation System, Method and Apparatus for Network Congestion Management and Network Resource Isolation
US9049271B1 (en) 2009-07-16 2015-06-02 Teradici Corporation Switch-initiated congestion management method
US9049153B2 (en) 2010-07-06 2015-06-02 Nicira, Inc. Logical packet processing pipeline that retains state information to effectuate efficient processing of packets
US20150156288A1 (en) 2013-12-04 2015-06-04 Mediatek Inc. Parser for parsing header in packet and related packet processing apparatus
US9055114B1 (en) 2011-12-22 2015-06-09 Juniper Networks, Inc. Packet parsing and control packet classification
US9055004B2 (en) 2012-09-18 2015-06-09 Cisco Technology, Inc. Scalable low latency multi-protocol networking device
US20150172198A1 (en) 2013-12-18 2015-06-18 Marvell Israel (M.I.S.L) Ltd. Methods and network device for oversubscription handling
US20150178395A1 (en) 2013-12-20 2015-06-25 Zumur, LLC System and method for idempotent interactive disparate object discovery, retrieval and display
US20150180769A1 (en) 2013-12-20 2015-06-25 Alcatel-Lucent Usa Inc. Scale-up of sdn control plane using virtual switch based overlay
US20150195206A1 (en) 2008-06-24 2015-07-09 Intel Corporation Packet switching
US20150194215A1 (en) 2014-01-09 2015-07-09 Netronome Systems, Inc. Dedicated egress fast path for non-matching packets in an openflow switch
US20150222560A1 (en) 2014-02-05 2015-08-06 Verizon Patent And Licensing Inc. Capacity management based on backlog information
US9112818B1 (en) 2010-02-05 2015-08-18 Marvell Israel (M.I.S.L) Ltd. Enhanced tail dropping in a switch
US9124644B2 (en) 2013-07-14 2015-09-01 Netronome Systems, Inc. Script-controlled egress packet modifier
US20150249572A1 (en) 2014-03-03 2015-09-03 Futurewei Technologies, Inc. Software-Defined Network Control Using Functional Objects
US20150256465A1 (en) 2014-03-04 2015-09-10 Futurewei Technologies, Inc. Software-Defined Network Control Using Control Macros
US20150271011A1 (en) 2014-03-21 2015-09-24 Nicira, Inc. Dynamic routing for logical routers
US20150281125A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Caching of service decisions
US20150319086A1 (en) 2014-04-30 2015-11-05 Broadcom Corporation System for Accelerated Network Route Update
US20150363522A1 (en) 2013-01-31 2015-12-17 Hewlett-Packard Development Company, L.P. Network switch simulation
US20150381418A1 (en) 2014-06-27 2015-12-31 iPhotonix Remote Orchestration of Virtual Machine Updates
US20150381495A1 (en) 2014-06-30 2015-12-31 Nicira, Inc. Methods and systems for providing multi-tenancy support for single root i/o virtualization
US20160006654A1 (en) 2014-07-07 2016-01-07 Cisco Technology, Inc. Bi-directional flow stickiness in a network environment
US20160014073A1 (en) 2014-07-11 2016-01-14 VMware, Inc. Methods and apparatus to configure hardware management systems for use in virtual server rack deployments for virtual computing environments
US20160019161A1 (en) 2013-03-12 2016-01-21 Hewlett-Packard Development Company, L.P. Programmable address mapping and memory access operations
US9276846B2 (en) 2013-03-15 2016-03-01 Cavium, Inc. Packet extraction optimization in a network processor
US20160094460A1 (en) 2014-09-30 2016-03-31 Vmware, Inc. Packet Key Parser for Flow-Based Forwarding Elements
US20160139892A1 (en) 2014-11-14 2016-05-19 Xpliant, Inc. Parser engine programming tool for programmable network devices
US20160149784A1 (en) 2014-11-20 2016-05-26 Telefonaktiebolaget L M Ericsson (Publ) Passive Performance Measurement for Inline Service Chaining
US20160173371A1 (en) 2014-12-11 2016-06-16 Brocade Communications Systems, Inc. Multilayered distributed router architecture
US20160173383A1 (en) 2014-12-11 2016-06-16 Futurewei Technologies, Inc. Method and apparatus for priority flow and congestion control in ethernet network
US20160188313A1 (en) 2014-12-27 2016-06-30 Scott P. Dubal Technologies for reprogramming network interface cards over a network
US20160191361A1 (en) 2014-12-31 2016-06-30 Nicira, Inc. System for aggregating statistics associated with interfaces
US20160191370A1 (en) 2014-12-29 2016-06-30 Juniper Networks, Inc. Network topology optimization
US20160191306A1 (en) 2014-12-27 2016-06-30 Iosif Gasparakis Programmable protocol parser for nic classification and queue assignments
US20160191384A1 (en) 2014-12-24 2016-06-30 Nicira, Inc. Batch Processing of Packets
US20160191406A1 (en) 2013-07-31 2016-06-30 Zte Corporation Method and Device for Implementing QoS in OpenFlow Network
US20160197852A1 (en) 2013-12-30 2016-07-07 Cavium, Inc. Protocol independent programmable switch (pips) software defined data center networks
US20160212012A1 (en) 2013-08-30 2016-07-21 Clearpath Networks, Inc. System and method of network functions virtualization of network services within and across clouds
US20160234067A1 (en) 2015-02-10 2016-08-11 Alcatel-Lucent Canada Inc. Method and system for identifying an outgoing interface using openflow protocol
US20160232019A1 (en) 2015-02-09 2016-08-11 Broadcom Corporation Network Interface Controller with Integrated Network Flow Processing
US20160234097A1 (en) 2013-08-12 2016-08-11 Hangzhou H3C Technologies Co., Ltd. Packet forwarding in software defined networking
US20160234102A1 (en) 2015-02-10 2016-08-11 Alcatel-Lucent Canada Inc. Method and system for inserting an openflow flow entry into a flow table using openflow protocol
US20160234103A1 (en) 2015-02-10 2016-08-11 Alcatel-Lucent Canada Inc. Method and system for inserting an openflow flow entry into a flow table using openflow protocol
US20160241459A1 (en) 2013-10-26 2016-08-18 Huawei Technologies Co.,Ltd. Method for acquiring, by sdn switch, exact flow entry, and sdn switch, controller, and system
US9450817B1 (en) 2013-03-15 2016-09-20 Juniper Networks, Inc. Software defined network controller
US20160301601A1 (en) 2015-04-09 2016-10-13 Telefonaktiebolaget L M Ericsson (Publ) Method and system for traffic pattern generation in a software-defined networking (sdn) system
US20160315866A1 (en) 2015-04-27 2016-10-27 Telefonaktiebolaget L M Ericsson (Publ) Service based intelligent packet-in mechanism for openflow switches
US20160323243A1 (en) 2015-05-01 2016-11-03 Cirius Messaging Inc. Data leak protection system and processing methods thereof
US20160330128A1 (en) 2013-12-30 2016-11-10 Sanechips Technology Co., Ltd. Queue scheduling method and device, and computer storage medium
US20160337329A1 (en) 2015-05-11 2016-11-17 Kapil Sood Technologies for secure bootstrapping of virtual network functions
US20160342510A1 (en) 2012-01-17 2016-11-24 Google Inc. Remote management of data planes and configuration of networking devices
US20160344629A1 (en) 2015-05-22 2016-11-24 Gray Research LLC Directional two-dimensional router and interconnection network for field programmable gate arrays, and other circuits and applications of the router and network
US20160357534A1 (en) 2015-06-03 2016-12-08 The Mathworks, Inc. Data type reassignment
US20160359685A1 (en) 2015-06-04 2016-12-08 Cisco Technology, Inc. Method and apparatus for computing cell density based rareness for use in anomaly detection
US20170005951A1 (en) 2015-07-02 2017-01-05 Arista Networks, Inc. Network data processor having per-input port virtual output queues
US20170013452A1 (en) 2014-04-29 2017-01-12 Hewlett-Packard Development Company, L.P. Network re-convergence point
US20170019329A1 (en) 2015-07-15 2017-01-19 Argela-USA, Inc. Method for forwarding rule hopping based secure communication
US20170019302A1 (en) 2015-07-13 2017-01-19 Telefonaktiebolaget L M Ericsson (Publ) Analytics-driven dynamic network design and configuration
US20170034082A1 (en) 2015-07-31 2017-02-02 Nicira, Inc. Managed Forwarding Element With Conjunctive Match Flow Entries
US20170041209A1 (en) 2015-08-03 2017-02-09 Telefonaktiebolaget L M Ericsson (Publ) Method and system for path monitoring in a software-defined networking (sdn) system
US20170048144A1 (en) 2015-08-13 2017-02-16 Futurewei Technologies, Inc. Congestion Avoidance Traffic Steering (CATS) in Datacenter Networks
US20170053012A1 (en) 2015-08-17 2017-02-23 Mellanox Technologies Tlv Ltd. High-performance bloom filter array
US20170063690A1 (en) 2015-08-26 2017-03-02 Barefoot Networks, Inc. Packet header field extraction
US20170064047A1 (en) 2015-08-26 2017-03-02 Barefoot Networks, Inc. Configuring a switch for extracting packet header fields
US20170070416A1 (en) 2015-09-04 2017-03-09 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for modifying forwarding states in a network device of a software defined network
US20170075692A1 (en) 2015-09-11 2017-03-16 Qualcomm Incorporated Selective flushing of instructions in an instruction pipeline in a processor back to an execution-resolved target address, in response to a precise interrupt
US20170085477A1 (en) 2014-05-30 2017-03-23 Huawei Technologies Co., Ltd. Packet Edit Processing Method and Related Device
US20170085479A1 (en) 2014-02-19 2017-03-23 Nec Corporation Network control method, network system, apparatus, and program
US20170085414A1 (en) 2014-05-27 2017-03-23 Telecom Italia S.P.A. System and method for network apparatus management
US20170091258A1 (en) 2015-09-30 2017-03-30 Nicira, Inc. Packet Processing Rule Versioning
US20170093986A1 (en) 2015-09-24 2017-03-30 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20170093987A1 (en) 2015-09-24 2017-03-30 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20170093707A1 (en) 2015-09-24 2017-03-30 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20170111275A1 (en) 2014-06-30 2017-04-20 Huawei Technologies Co., Ltd. Data processing method executed by network apparatus, and related device
US20170118041A1 (en) 2015-10-21 2017-04-27 Brocade Communications Systems, Inc. Distributed rule provisioning in an extended bridge
US20170118042A1 (en) 2015-10-21 2017-04-27 Brocade Communications Systems, Inc. High availability for distributed network services in an extended bridge
US20170126588A1 (en) 2014-07-25 2017-05-04 Telefonaktiebolaget Lm Ericsson (Publ) Packet Processing in an OpenFlow Switch
US20170134282A1 (en) 2015-11-10 2017-05-11 Ciena Corporation Per queue per service differentiation for dropping packets in weighted random early detection
US20170134310A1 (en) 2015-11-05 2017-05-11 Dell Products, L.P. Dynamic allocation of queue depths for virtual functions in a converged infrastructure
US20170142000A1 (en) 2014-08-11 2017-05-18 Huawei Technologies Co., Ltd. Packet control method, switch, and controller
US20170149632A1 (en) 2013-11-26 2017-05-25 Telefonaktiebolaget Lm Ericsson (Publ) A method and system of supporting service chaining in a data network
US20170180273A1 (en) 2015-12-22 2017-06-22 Daniel Daly Accelerated network packet processing
US20170195229A1 (en) 2015-12-30 2017-07-06 Argela Yazilim ve Bilisim Teknolojileri San. ve Tic. A.S. System, method and article of manufacture for using control plane for critical data communications in software-defined networks
US20170208015A1 (en) 2014-06-04 2017-07-20 Lantiq Beteiligungs-GmbH & Co. KG Data Packet Processing System on a Chip
US20170222881A1 (en) 2016-01-28 2017-08-03 Arista Networks, Inc. Network Data Stream Tracer
US20170220499A1 (en) 2016-01-04 2017-08-03 Gray Research LLC Massively parallel computer, accelerated computing clusters, and two-dimensional router and interconnection network for field programmable gate arrays, and applications
US20170223575A1 (en) 2016-01-29 2017-08-03 Arista Networks, Inc. System and method of a pause watchdog
US20170251077A1 (en) 2016-02-26 2017-08-31 Arista Networks, Inc. Per-input port, per-control plane network data traffic class control plane policing
US9755932B1 (en) 2014-09-26 2017-09-05 Juniper Networks, Inc. Monitoring packet residence time and correlating packet residence time to input sources
US20170264571A1 (en) 2016-03-08 2017-09-14 Mellanox Technologies Tlv Ltd. Flexible buffer allocation in a network switch
EP3229424A1 (en) 2014-12-03 2017-10-11 Sanechips Technology Co., Ltd. Improved wred-based congestion control method and device
US9838268B1 (en) 2014-06-27 2017-12-05 Juniper Networks, Inc. Distributed, adaptive controller for multi-domain networks
US20180006945A1 (en) 2016-07-01 2018-01-04 Mario Flajslik Technologies for adaptive routing using throughput estimation
US20180006950A1 (en) 2016-07-01 2018-01-04 Intel Corporation Technologies for adaptive routing using aggregated congestion information
US9888033B1 (en) 2014-06-19 2018-02-06 Sonus Networks, Inc. Methods and apparatus for detecting and/or dealing with denial of service attacks
US9891898B1 (en) 2015-06-04 2018-02-13 Netronome Systems, Inc. Low-level programming language plugin to augment high-level programming language setup of an SDN switch
US20180054385A1 (en) 2016-08-17 2018-02-22 Cisco Technology, Inc. Re-configurable lookup pipeline architecture for packet forwarding
US20180115478A1 (en) 2016-10-20 2018-04-26 Gatesair, Inc. Extended time reference generation
US9960956B1 (en) 2014-10-15 2018-05-01 The United States Of America, As Represented By The Secretary Of The Navy Network monitoring method using phantom nodes
US20180124183A1 (en) 2016-11-03 2018-05-03 Futurewei Technologies, Inc. Method and Apparatus for Stateful Control of Forwarding Elements
US20180191640A1 (en) 2015-06-30 2018-07-05 Hewlett Packard Enterprise Development Lp Action references
US10044646B1 (en) 2014-12-02 2018-08-07 Adtran, Inc. Systems and methods for efficiently storing packet data in network switches
US20180262424A1 (en) 2015-01-12 2018-09-13 Telefonaktiebolaget Lm Ericsson (Publ) Methods and Modules for Managing Packets in a Software Defined Network
US10091137B2 (en) 2017-01-30 2018-10-02 Cavium, Inc. Apparatus and method for scalable and flexible wildcard matching in a network switch
US20180287819A1 (en) 2017-03-28 2018-10-04 Marvell World Trade Ltd. Flexible processor of a port extender device
US10135734B1 (en) 2015-12-28 2018-11-20 Amazon Technologies, Inc. Pipelined evaluations for algorithmic forwarding route lookup
US20180375755A1 (en) 2016-01-05 2018-12-27 Telefonaktiebolaget Lm Ericsson (Publ) Mechanism to detect control plane loops in a software defined networking (sdn) network
US10291555B2 (en) 2015-11-17 2019-05-14 Telefonaktiebolaget Lm Ericsson (Publ) Service based intelligent packet-in buffering mechanism for openflow switches by having variable buffer timeouts
US10341242B2 (en) 2016-12-13 2019-07-02 Oracle International Corporation System and method for providing a programmable packet classification framework for use in a network device
US10412018B1 (en) 2017-03-21 2019-09-10 Barefoot Networks, Inc. Hierarchical queue scheduler
US10419366B1 (en) 2017-01-31 2019-09-17 Barefoot Networks, Inc. Mechanism for communicating to remote control plane from forwarding element
US10419242B1 (en) 2015-06-04 2019-09-17 Netronome Systems, Inc. Low-level programming language plugin to augment high-level programming language setup of an SDN switch
US10686735B1 (en) 2017-04-23 2020-06-16 Barefoot Networks, Inc. Packet reconstruction at deparser
US20200228433A1 (en) 2019-01-15 2020-07-16 Fujitsu Limited Computer-readable recording medium including monitoring program, programmable device, and monitoring method
US20200244576A1 (en) 2019-01-29 2020-07-30 Cisco Technology, Inc. Supporting asynchronous packet operations in a deterministic network
US20200280518A1 (en) 2020-01-28 2020-09-03 Intel Corporation Congestion management techniques
US20200280428A1 (en) 2019-10-18 2020-09-03 Intel Corporation Configuration scheme for link establishment
US20220091992A1 (en) 2020-09-23 2022-03-24 Intel Corporation Device, system and method to provide line level tagging of data at a processor cache

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9276851B1 (en) * 2011-12-20 2016-03-01 Marvell Israel (M.I.S.L.) Ltd. Parser and modifier for processing network packets
US11503141B1 (en) 2017-07-23 2022-11-15 Barefoot Networks, Inc. Stateful processing unit with min/max capability

Patent Citations (339)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6127900U (en) 1984-07-23 1986-02-19 株式会社 富永製作所 Refueling device
US5243596A (en) 1992-03-18 1993-09-07 Fischer & Porter Company Network architecture suitable for multicasting and resource locking
US5642483A (en) 1993-07-30 1997-06-24 Nec Corporation Method for efficiently broadcast messages to all concerned users by limiting the number of messages that can be sent at one time
US5784003A (en) 1996-03-25 1998-07-21 I-Cube, Inc. Network switch with broadcast support
US6442172B1 (en) 1996-07-11 2002-08-27 Alcatel Internetworking, Inc. Input buffering and queue status-based output control for a digital traffic switch
US20140140342A1 (en) 1998-06-15 2014-05-22 Charles E. Narad Pipeline for handling network packets
US6157955A (en) 1998-06-15 2000-12-05 Intel Corporation Packet processing system including a policy engine having a classification unit
US9294386B2 (en) 1998-06-15 2016-03-22 Intel Corporation Apparatus and computer program product for handling network packets using a pipeline of elements
US6836483B1 (en) 1998-06-24 2004-12-28 Research Investment Network, Inc. Message system for asynchronous transfer
US20010043611A1 (en) 1998-07-08 2001-11-22 Shiri Kadambi High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
US6735679B1 (en) 1998-07-08 2004-05-11 Broadcom Corporation Apparatus and method for optimizing access to memory
US20030107996A1 (en) 1998-11-19 2003-06-12 Black Alistair D. Fibre channel arbitrated loop bufferless switch circuitry to increase bandwidth without significant increase in cost
US7046685B1 (en) 1998-12-15 2006-05-16 Fujitsu Limited Scheduling control system and switch
US6453360B1 (en) 1999-03-01 2002-09-17 Sun Microsystems, Inc. High performance network interface
US6948099B1 (en) 1999-07-30 2005-09-20 Intel Corporation Re-loading operating systems
US20050086353A1 (en) 1999-09-20 2005-04-21 Kabushiki Kaisha Toshiba Fast and adaptive packet processing device and method using digest information of input packet
US7203740B1 (en) 1999-12-22 2007-04-10 Intel Corporation Method and apparatus for allowing proprietary forwarding elements to interoperate with standard control elements in an open architecture for network devices
US20020001356A1 (en) 1999-12-28 2002-01-03 Kishan Shenoi Clock recovery and detection of rapid phase transients
US6980552B1 (en) 2000-02-14 2005-12-27 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US20060039374A1 (en) 2000-02-14 2006-02-23 David Belz Pipelined packet switching and queuing architecture
US20060050690A1 (en) 2000-02-14 2006-03-09 Epps Garry P Pipelined packet switching and queuing architecture
US7177276B1 (en) 2000-02-14 2007-02-13 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US7643486B2 (en) 2000-02-14 2010-01-05 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US20030147401A1 (en) 2000-05-10 2003-08-07 Jukka Kyronaho Resource allocation in packet network
US7062571B1 (en) 2000-06-30 2006-06-13 Cisco Technology, Inc. Efficient IP load-balancing traffic distribution using ternary CAMs
US20020136163A1 (en) 2000-11-24 2002-09-26 Matsushita Electric Industrial Co., Ltd. Apparatus and method for flow control
US20030046414A1 (en) 2001-01-25 2003-03-06 Crescent Networks, Inc. Operation of a multiplicity of time sorted queues with reduced memory
US6976149B1 (en) 2001-02-22 2005-12-13 Cisco Technology, Inc. Mapping technique for computing addresses in a memory of an intermediate network node
US20020172210A1 (en) 2001-05-18 2002-11-21 Gilbert Wolrich Network device switch
US20030009466A1 (en) 2001-06-21 2003-01-09 Ta John D. C. Search engine with pipeline structure
US20060117126A1 (en) 2001-07-30 2006-06-01 Cisco Technology, Inc. Processing unit for efficiently determining a packet's destination in a packet-switched network
US20030046429A1 (en) 2001-08-30 2003-03-06 Sonksen Bradley Stephen Static data item processing
US20030043825A1 (en) 2001-09-05 2003-03-06 Andreas Magnussen Hash-based data frame distribution for web switches
CN1589551A (en) 2001-09-24 2005-03-02 艾利森公司 System and method for processing packets
US20030063345A1 (en) 2001-10-01 2003-04-03 Dan Fossum Wayside user communications over optical supervisory channel
US20030118022A1 (en) 2001-12-21 2003-06-26 Chip Engines Reconfigurable data packet header processor
US20030154358A1 (en) 2002-02-08 2003-08-14 Samsung Electronics Co., Ltd. Apparatus and method for dispatching very long instruction word having variable length
US20030167373A1 (en) 2002-03-01 2003-09-04 Derek Winters Method and system for reducing storage requirements for program code in a communication device
US20140033489A1 (en) 2002-04-23 2014-02-06 Piedek Technical Laboratory Method for manufacturing quartz crystal resonator, quartz crystal unit and quartz crystal oscillator
US20070208876A1 (en) 2002-05-06 2007-09-06 Davis Ian E Method and apparatus for efficiently processing data packets in a computer network
US20030219026A1 (en) 2002-05-23 2003-11-27 Yea-Li Sun Method and multi-queue packet scheduling system for managing network packet traffic with minimum performance guarantees and maximum service rate control
US20040165588A1 (en) 2002-06-11 2004-08-26 Pandya Ashish A. Distributed network security system and a hardware processor therefor
US20040024894A1 (en) 2002-08-02 2004-02-05 Osman Fazil Ismet High data rate stateful protocol processing
US20040031029A1 (en) 2002-08-06 2004-02-12 Kyu-Woong Lee Methods and systems for automatically updating software components in a network
US20040042477A1 (en) 2002-08-30 2004-03-04 Nabil Bitar Buffer management based on buffer sharing across ports and per-port minimum buffer guarantee
US7539777B1 (en) 2002-10-25 2009-05-26 Cisco Technology, Inc. Method and system for network time protocol forwarding
US20040105384A1 (en) 2002-11-28 2004-06-03 International Business Machines Corporation Event-driven flow control for a very high-speed switching node
US20040123220A1 (en) 2002-12-18 2004-06-24 Johnson Erik J. Framer
US7492714B1 (en) 2003-02-04 2009-02-17 Pmc-Sierra, Inc. Method and apparatus for packet grooming and aggregation
US7389462B1 (en) 2003-02-14 2008-06-17 Istor Networks, Inc. System and methods for high rate hardware-accelerated network protocol processing
US20040213156A1 (en) 2003-04-25 2004-10-28 Alcatel Ip Networks, Inc. Assigning packet queue priority
US20050108518A1 (en) 2003-06-10 2005-05-19 Pandya Ashish A. Runtime adaptable security processor
US20050013251A1 (en) 2003-07-18 2005-01-20 Hsuan-Wen Wang Flow control hub having scoreboard memory
US20050078651A1 (en) 2003-08-16 2005-04-14 Samsung Electronics Co., Ltd. Method and apparatus for assigning scheduling for uplink packet transmission in a mobile communication system
US20050041590A1 (en) 2003-08-22 2005-02-24 Joseph Olakangil Equal-cost source-resolved routing system and method
US20050060428A1 (en) 2003-09-11 2005-03-17 International Business Machines Corporation Apparatus and method for caching lookups based upon TCP traffic flow characteristics
US20060277346A1 (en) 2003-10-06 2006-12-07 David Doak Port adapter for high-bandwidth bus
US20050135399A1 (en) 2003-11-10 2005-06-23 Baden Eric A. Field processor for a network device
US20050120173A1 (en) 2003-11-27 2005-06-02 Nobuyuki Minowa Device and method for performing information processing using plurality of processors
US20050129059A1 (en) 2003-12-03 2005-06-16 Zhangzhen Jiang Method of implementing PSEUDO wire emulation edge-to-edge protocol
US20050149823A1 (en) 2003-12-10 2005-07-07 Samsung Electronics Co., Ltd. Apparatus and method for generating checksum
US7633880B2 (en) 2004-01-05 2009-12-15 Samsung Electronics Co., Ltd. Access network device for managing queue corresponding to real time multimedia traffic characteristics and method thereof
US20050198531A1 (en) 2004-03-02 2005-09-08 Marufa Kaniz Two parallel engines for high speed transmit IPSEC processing
US7889750B1 (en) 2004-04-28 2011-02-15 Extreme Networks, Inc. Method of extending default fixed number of processing cycles in pipelined packet processor architecture
US20050243852A1 (en) 2004-05-03 2005-11-03 Bitar Nabil N Variable packet-size backplanes for switching and routing systems
US20060002386A1 (en) 2004-06-30 2006-01-05 Zarlink Semiconductor Inc. Combined pipelined classification and address search method and apparatus for switching environments
US20060072480A1 (en) 2004-09-29 2006-04-06 Manasi Deval Method to provide high availability in network elements using distributed architectures
US20100312941A1 (en) 2004-10-19 2010-12-09 Eliezer Aloni Network interface device with flow-oriented bus interface
US7826470B1 (en) 2004-10-19 2010-11-02 Broadcom Corp. Network interface device with flow-oriented bus interface
US8155135B2 (en) 2004-10-19 2012-04-10 Broadcom Corporation Network interface device with flow-oriented bus interface
US20060092857A1 (en) 2004-11-01 2006-05-04 Lucent Technologies Inc. Softrouter dynamic binding protocol
US20060114914A1 (en) 2004-11-30 2006-06-01 Broadcom Corporation Pipeline architecture of a network device
US20060114895A1 (en) 2004-11-30 2006-06-01 Broadcom Corporation CPU transmission of unmodified packets
US20060174242A1 (en) 2005-02-01 2006-08-03 Microsoft Corporation Publishing the status of and updating firmware components
US7873959B2 (en) 2005-02-01 2011-01-18 Microsoft Corporation Publishing the status of and updating firmware components
US20070050426A1 (en) 2005-06-20 2007-03-01 Dubal Scott P Platform with management agent to receive software updates
US20070008985A1 (en) 2005-06-30 2007-01-11 Sridhar Lakshmanamurthy Method and apparatus to support efficient check-point and role-back operations for flow-controlled queues in network devices
US20070055664A1 (en) 2005-09-05 2007-03-08 Cisco Technology, Inc. Pipeline sequential regular expression matching
US7499941B2 (en) 2005-09-05 2009-03-03 Cisco Technology, Inc. Pipeline regular expression matching
US20080285571A1 (en) 2005-10-07 2008-11-20 Ambalavanar Arulambalam Media Data Processing Using Distinct Elements for Streaming and Control Processes
CN101352012A (en) 2005-10-07 2009-01-21 安吉尔系统公司 Media data processing using distinct elements for streaming and control processes
US20070104102A1 (en) 2005-11-10 2007-05-10 Broadcom Corporation Buffer management and flow control mechanism including packet-based dynamic thresholding
US20070104211A1 (en) 2005-11-10 2007-05-10 Broadcom Corporation Interleaved processing of dropped packets in a network device
US20070153796A1 (en) 2005-12-30 2007-07-05 Intel Corporation Packet processing utilizing cached metadata to support forwarding and non-forwarding operations on parallel paths
US20100128735A1 (en) 2006-01-30 2010-05-27 Juniper Networks, Inc. Processing of partial frames and partial superframes
US20070195761A1 (en) 2006-02-21 2007-08-23 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US20070195773A1 (en) 2006-02-21 2007-08-23 Tatar Mohammed I Pipelined packet switching and queuing architecture
US20070230493A1 (en) 2006-03-31 2007-10-04 Qualcomm Incorporated Memory management for high speed media access control
US20070236734A1 (en) 2006-04-05 2007-10-11 Sharp Kabushiki Kaisha Image processing apparatus
US7872774B2 (en) 2006-04-05 2011-01-18 Sharp Kabushiki Kaisha Image processing apparatus having an energization switching unit and control information updating unit
US20070280277A1 (en) 2006-05-30 2007-12-06 Martin Lund Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
US20100150164A1 (en) 2006-06-23 2010-06-17 Juniper Networks, Inc. Flow-based queuing of network traffic
US8077611B2 (en) 2006-07-27 2011-12-13 Cisco Technology, Inc. Multilevel coupled policer
US20080082792A1 (en) 2006-10-03 2008-04-03 Vincent Melanie Emanuelle Luci Register renaming in a data processing system
US20080130670A1 (en) 2006-12-05 2008-06-05 Samsung Electronics Co. Ltd. Method and apparatus for managing a buffer in a communication system
US20080144662A1 (en) 2006-12-14 2008-06-19 Sun Microsystems, Inc. Method and system for offloaded transport layer protocol switching
US20100085891A1 (en) 2006-12-19 2010-04-08 Andreas Kind Apparatus and method for analysing a network
US20080175449A1 (en) 2007-01-19 2008-07-24 Wison Technology Corp. Fingerprint-based network authentication method and system thereof
US7904642B1 (en) 2007-02-08 2011-03-08 Netlogic Microsystems, Inc. Method for combining and storing access control lists
US20090006605A1 (en) 2007-06-26 2009-01-01 International Business Machines Corporation Extended write combining using a write continuation hint flag
US8094659B1 (en) 2007-07-09 2012-01-10 Marvell Israel (M.I.S.L) Ltd. Policy-based virtual routing and forwarding (VRF) assignment
US20090096797A1 (en) 2007-10-11 2009-04-16 Qualcomm Incorporated Demand based power control in a graphics processing unit
US20090106523A1 (en) 2007-10-18 2009-04-23 Cisco Technology Inc. Translation look-aside buffer with variable page sizes
US20090180475A1 (en) 2008-01-10 2009-07-16 Fujitsu Limited Packet forwarding apparatus and controlling method
US20150195206A1 (en) 2008-06-24 2015-07-09 Intel Corporation Packet switching
US7961734B2 (en) 2008-09-30 2011-06-14 Juniper Networks, Inc. Methods and apparatus related to packet classification associated with a multi-stage switch
US20100228733A1 (en) 2008-11-12 2010-09-09 Collective Media, Inc. Method and System For Semantic Distance Measurement
US20100135158A1 (en) 2008-12-01 2010-06-03 Razoom, Inc. Flow State Aware QoS Management Without User Signalling
US20100145475A1 (en) 2008-12-10 2010-06-10 Honeywell International, Inc. Building appliance controller with safety feature
US20100140364A1 (en) 2008-12-10 2010-06-10 Honeywell International, Inc. User interface for building controller
US20100182920A1 (en) 2009-01-21 2010-07-22 Fujitsu Limited Apparatus and method for controlling data communication
US20100191951A1 (en) 2009-01-26 2010-07-29 Assa Abloy Ab Provisioned firmware updates using object identifiers
US8527613B2 (en) 2009-01-26 2013-09-03 Assa Abloy Ab Provisioned firmware updates using object identifiers
US20100238812A1 (en) 2009-03-23 2010-09-23 Cisco Technology, Inc. Operating MPLS label switched paths and MPLS pseudowire in loopback mode
US8638793B1 (en) 2009-04-06 2014-01-28 Marvell Israel (M.I.S.L) Ltd. Enhanced parsing and classification in a packet processor
US9049271B1 (en) 2009-07-16 2015-06-02 Teradici Corporation Switch-initiated congestion management method
US20110149960A1 (en) 2009-12-17 2011-06-23 Media Patents, S.L. Method and apparatus for filtering multicast packets
US9112818B1 (en) 2010-02-05 2015-08-18 Marvell Israel (M.I.S.L) Ltd. Enhanced tail dropping in a switch
US9686209B1 (en) 2010-02-05 2017-06-20 Marvell Israel (M.I.S.L) Ltd. Method and apparatus for storing packets in a network device
US20130028265A1 (en) 2010-04-23 2013-01-31 Luigi Ronchetti Update of a cumulative residence time of a packet in a packet-switched communication network
US8514855B1 (en) 2010-05-04 2013-08-20 Sandia Corporation Extensible packet processing architecture
US20130100951A1 (en) 2010-06-23 2013-04-25 Nec Corporation Communication system, control apparatus, node controlling method and node controlling program
US9049153B2 (en) 2010-07-06 2015-06-02 Nicira, Inc. Logical packet processing pipeline that retains state information to effectuate efficient processing of packets
US20120033550A1 (en) 2010-08-06 2012-02-09 Alaxala Networks Corporation Packet relay device and congestion control method
US8593955B2 (en) 2010-08-06 2013-11-26 Alaxala Networks Corporation Packet relay device and congestion control method
US8738860B1 (en) 2010-10-25 2014-05-27 Tilera Corporation Computing in parallel processing environments
US20120159235A1 (en) 2010-12-20 2012-06-21 Josephine Suganthi Systems and Methods for Implementing Connection Mirroring in a Multi-Core System
US20120170585A1 (en) 2010-12-29 2012-07-05 Juniper Networks, Inc. Methods and apparatus for standard protocol validation mechanisms deployed over a switch fabric system
US20120173661A1 (en) 2011-01-04 2012-07-05 Cisco Technology, Inc. System and method for exchanging information in a mobile wireless network environment
US20120177047A1 (en) 2011-01-06 2012-07-12 Amir Roitshtein Network device with a programmable core
US20140040527A1 (en) 2011-04-21 2014-02-06 Ineda Systems Pvt. Ltd Optimized multi-root input output virtualization aware switch
US20150142991A1 (en) 2011-04-21 2015-05-21 Efficiency3 Corp. Electronic hub appliances used for collecting, storing, and processing potentially massive periodic data streams indicative of real-time or other measuring parameters
US20120284438A1 (en) 2011-05-06 2012-11-08 Xcelemor, Inc. Computing system with data and control planes and method of operation thereof
US20140115666A1 (en) 2011-06-10 2014-04-24 Koninklijke Philips N.V. Secure protocol execution in a network
US20130003556A1 (en) 2011-06-28 2013-01-03 Xelerated Ab Scheduling packets in a packet-processing pipeline
US8798047B1 (en) 2011-08-29 2014-08-05 Qlogic, Corporation Methods and systems for processing network information
US20140181818A1 (en) 2011-09-07 2014-06-26 Amazon Technologies, Inc. Optimization of packet processing by delaying a processor from entering an idle state
US20130108264A1 (en) 2011-11-01 2013-05-02 Plexxi Inc. Hierarchy of control in a data center network
US20150020060A1 (en) 2011-11-11 2015-01-15 Wyse Technology L.L.C. Robust firmware update with recovery logic
US20130124491A1 (en) 2011-11-11 2013-05-16 Gerald Pepper Efficient Pipelined Binary Search
US9213537B2 (en) 2011-11-11 2015-12-15 Wyse Technology L.L.C. Robust firmware update with recovery logic
US20130163426A1 (en) 2011-12-22 2013-06-27 Ludovic Beliveau Forwarding element for flexible and extensible flow processing in software-defined networks
US9055114B1 (en) 2011-12-22 2015-06-09 Juniper Networks, Inc. Packet parsing and control packet classification
US20130163427A1 (en) 2011-12-22 2013-06-27 Ludovic Beliveau System for flexible and extensible flow processing in software-defined networks
US20130163475A1 (en) 2011-12-22 2013-06-27 Ludovic Beliveau Controller for flexible and extensible flow processing in software-defined networks
US20130166703A1 (en) 2011-12-27 2013-06-27 Michael P. Hammer System And Method For Management Of Network-Based Services
WO2013101024A1 (en) 2011-12-29 2013-07-04 Intel Corporation Imaging task pipeline acceleration
US8971338B2 (en) 2012-01-09 2015-03-03 Telefonaktiebolaget L M Ericsson (Publ) Expanding network functionalities for openflow based split-architecture networks
US20130227051A1 (en) 2012-01-10 2013-08-29 Edgecast Networks, Inc. Multi-Layer Multi-Hit Caching for Long Tail Content
US20160342510A1 (en) 2012-01-17 2016-11-24 Google Inc. Remote management of data planes and configuration of networking devices
US9467363B2 (en) 2012-01-30 2016-10-11 Nec Corporation Network system and method of managing topology
US20150003259A1 (en) 2012-01-30 2015-01-01 Nec Corporation Network system and method of managing topology
US20130227519A1 (en) 2012-02-27 2013-08-29 Joel John Maleport Methods and systems for parsing data objects
US20140233568A1 (en) 2012-03-19 2014-08-21 Intel Corporation Techniques for packet management in an input/output virtualization system
US20130290622A1 (en) 2012-04-27 2013-10-31 Suddha Sekhar Dey Tcam action updates
US20130318107A1 (en) 2012-05-23 2013-11-28 International Business Machines Corporation Generating data feed specific parser circuits
US8788512B2 (en) 2012-05-23 2014-07-22 International Business Machines Corporation Generating data feed specific parser circuits
US20130315054A1 (en) 2012-05-24 2013-11-28 Marvell World Trade Ltd. Flexible queues in a network switch
US20130346814A1 (en) 2012-06-21 2013-12-26 Timothy Zadigian Jtag-based programming and debug
US20140043974A1 (en) 2012-08-07 2014-02-13 Broadcom Corporation Low-latency switching
US20140082302A1 (en) 2012-09-14 2014-03-20 Xerox Corporation Systems and methods for employing an electronically-readable monitoring module associated with a customer replaceable component to update a non-volatile memory in an image forming device
US9055004B2 (en) 2012-09-18 2015-06-09 Cisco Technology, Inc. Scalable low latency multi-protocol networking device
US20140115571A1 (en) 2012-10-23 2014-04-24 Asus Technology Pte Ltd. Electronic device, non-transient readable medium and method thereof
US20140328180A1 (en) 2012-11-08 2014-11-06 Texas Instruments Incorporated Structure for implementing openflow all group buckets using egress flow table entries
US20140334489A1 (en) 2012-11-08 2014-11-13 Texas Instruments Incorporated Openflow match and action pipeline structure
US20160330127A1 (en) 2012-11-08 2016-11-10 Texas Instruments Incorporated Structure for Implementing Openflow All Group Buckets Using Egress Flow Table Entries
US8693374B1 (en) 2012-12-18 2014-04-08 Juniper Networks, Inc. Centralized control of an aggregation network with a reduced control plane
US20140181232A1 (en) 2012-12-20 2014-06-26 Oracle International Corporation Distributed queue pair state on a host channel adapter
US20140204943A1 (en) 2013-01-24 2014-07-24 Douglas A. Palmer Systems and methods for packet routing
US20150363522A1 (en) 2013-01-31 2015-12-17 Hewlett-Packard Development Company, L.P. Network switch simulation
US20140328344A1 (en) 2013-02-01 2014-11-06 Texas Instruments Incorporated Packet processing match and action pipeline structure with dependency calculation removing false dependencies
US20140241359A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing vliw action unit with or-multi-ported instruction memory
US9712439B2 (en) 2013-02-28 2017-07-18 Texas Instruments Incorporated Packet processing match and action unit with configurable memory allocation
US20140241358A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with a vliw action engine
US9544231B2 (en) 2013-02-28 2017-01-10 Texas Instruments Incorporated Packet processing VLIW action unit with OR-multi-ported instruction memory
US20170289034A1 (en) 2013-02-28 2017-10-05 Texas Instruments Incorporated Packet Processing Match and Action Unit with Configurable Memory Allocation
US10009276B2 (en) 2013-02-28 2018-06-26 Texas Instruments Incorporated Packet processing match and action unit with a VLIW action engine
US20140241361A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with configurable memory allocation
US20140241362A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with configurable bit allocation
US20140244966A1 (en) 2013-02-28 2014-08-28 Texas Instruments Incorporated Packet processing match and action unit with stateful actions
US20160156557A1 (en) 2013-02-28 2016-06-02 Texas Instruments Incorporated Packet Processing VLIW Action Unit with Or-Multi-Ported Instruction Memory
US9258224B2 (en) 2013-02-28 2016-02-09 Texas Instruments Incorporated Packet processing VLIW action unit with or-multi-ported instruction memory
US20160019161A1 (en) 2013-03-12 2016-01-21 Hewlett-Packard Development Company, L.P. Programmable address mapping and memory access operations
US20140269432A1 (en) 2013-03-15 2014-09-18 Cisco Technology, Inc. vPC AUTO CONFIGURATION
US9450817B1 (en) 2013-03-15 2016-09-20 Juniper Networks, Inc. Software defined network controller
US9276846B2 (en) 2013-03-15 2016-03-01 Cavium, Inc. Packet extraction optimization in a network processor
US20140301192A1 (en) 2013-04-05 2014-10-09 Futurewei Technologies, Inc. Software Defined Networking (SDN) Controller Orchestration and Network Virtualization for Data Center Interconnection
US20140321473A1 (en) 2013-04-26 2014-10-30 Mediatek Inc. Active output buffer controller for controlling packet data output of main buffer in network device and related method
US20140321476A1 (en) 2013-04-26 2014-10-30 Mediatek Inc. Packet output controller and method for dequeuing multiple packets from one scheduled output queue and/or using over-scheduling to schedule output queues
US20150010000A1 (en) 2013-07-08 2015-01-08 Nicira, Inc. Hybrid Packet Processing
US20150009796A1 (en) 2013-07-08 2015-01-08 Nicira, Inc. Reconciliation of Network State Across Physical Domains
US20170142011A1 (en) 2013-07-08 2017-05-18 Nicira, Inc. Hybrid Packet Processing
US9124644B2 (en) 2013-07-14 2015-09-01 Netronome Systems, Inc. Script-controlled egress packet modifier
US20150023147A1 (en) 2013-07-17 2015-01-22 Kt Corporation Methods for managing transaction in software defined network
US20160191406A1 (en) 2013-07-31 2016-06-30 Zte Corporation Method and Device for Implementing QoS in OpenFlow Network
US20150043589A1 (en) 2013-08-09 2015-02-12 Futurewei Technologies, Inc. Extending OpenFlow to Support Packet Encapsulation for Transport over Software-Defined Networks
US20160234097A1 (en) 2013-08-12 2016-08-11 Hangzhou H3C Technologies Co., Ltd. Packet forwarding in software defined networking
US20160212012A1 (en) 2013-08-30 2016-07-21 Clearpath Networks, Inc. System and method of network functions virtualization of network services within and across clouds
US20150081833A1 (en) 2013-09-15 2015-03-19 Nicira, Inc. Dynamically Generating Flows with Wildcard Fields
US20150092539A1 (en) 2013-09-30 2015-04-02 Cisco Technology, Inc. Data-plane driven fast protection mechanism for mpls pseudowire services
US20150110114A1 (en) 2013-10-17 2015-04-23 Marvell Israel (M.I.S.L) Ltd. Processing Concurrency in a Network Device
JP2015080175A (en) 2013-10-18 2015-04-23 Fujitsu Limited Device, method and program for packet processing
US9590925B2 (en) 2013-10-18 2017-03-07 Fujitsu Limited Packet processing apparatus, packet processing method, and non-transitory computer-readable storage medium
US20150109913A1 (en) 2013-10-18 2015-04-23 Fujitsu Limited Packet processing apparatus, packet processing method, and non-transitory computer-readable storage medium
JP6127900B2 (en) 2013-10-18 2017-05-17 Fujitsu Limited Packet processing apparatus, packet processing method, and packet processing program
US20160241459A1 (en) 2013-10-26 2016-08-18 Huawei Technologies Co.,Ltd. Method for acquiring, by sdn switch, exact flow entry, and sdn switch, controller, and system
US20150121355A1 (en) 2013-10-28 2015-04-30 International Business Machines Corporation Unified update tool for multi-protocol network adapter
US20160188320A1 (en) 2013-10-28 2016-06-30 International Business Machines Corporation Unified update tool for multi-protocol network adapter
US9298446B2 (en) 2013-10-28 2016-03-29 International Business Machines Corporation Unified update tool for multi-protocol network adapter
US20150131666A1 (en) 2013-11-08 2015-05-14 Electronics And Telecommunications Research Institute Apparatus and method for transmitting packet
US20150131667A1 (en) 2013-11-14 2015-05-14 Electronics And Telecommunications Research Institute Sdn-based network device with extended function and method of processing packet in the same device
US20150142932A1 (en) 2013-11-18 2015-05-21 Tellabs Oy Network element and a controller for managing the network element
US20150146527A1 (en) 2013-11-26 2015-05-28 Broadcom Corporation System, Method and Apparatus for Network Congestion Management and Network Resource Isolation
US20170149632A1 (en) 2013-11-26 2017-05-25 Telefonaktiebolaget Lm Ericsson (Publ) A method and system of supporting service chaining in a data network
US20150156288A1 (en) 2013-12-04 2015-06-04 Mediatek Inc. Parser for parsing header in packet and related packet processing apparatus
US20150172198A1 (en) 2013-12-18 2015-06-18 Marvell Israel (M.I.S.L) Ltd. Methods and network device for oversubscription handling
US20150178395A1 (en) 2013-12-20 2015-06-25 Zumur, LLC System and method for idempotent interactive disparate object discovery, retrieval and display
US20150180769A1 (en) 2013-12-20 2015-06-25 Alcatel-Lucent Usa Inc. Scale-up of sdn control plane using virtual switch based overlay
US20160330128A1 (en) 2013-12-30 2016-11-10 Sanechips Technology Co., Ltd. Queue scheduling method and device, and computer storage medium
US20160197852A1 (en) 2013-12-30 2016-07-07 Cavium, Inc. Protocol independent programmable switch (pips) software defined data center networks
US20150194215A1 (en) 2014-01-09 2015-07-09 Netronome Systems, Inc. Dedicated egress fast path for non-matching packets in an openflow switch
US20150222560A1 (en) 2014-02-05 2015-08-06 Verizon Patent And Licensing Inc. Capacity management based on backlog information
US20170085479A1 (en) 2014-02-19 2017-03-23 Nec Corporation Network control method, network system, apparatus, and program
US20150249572A1 (en) 2014-03-03 2015-09-03 Futurewei Technologies, Inc. Software-Defined Network Control Using Functional Objects
US20150256465A1 (en) 2014-03-04 2015-09-10 Futurewei Technologies, Inc. Software-Defined Network Control Using Control Macros
US20150271011A1 (en) 2014-03-21 2015-09-24 Nicira, Inc. Dynamic routing for logical routers
US20150281125A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Caching of service decisions
US20170013452A1 (en) 2014-04-29 2017-01-12 Hewlett-Packard Development Company, L.P. Network re-convergence point
US20150319086A1 (en) 2014-04-30 2015-11-05 Broadcom Corporation System for Accelerated Network Route Update
US10892939B2 (en) 2014-05-27 2021-01-12 Telecom Italia S.P.A. System and method for network apparatus management
US20170085414A1 (en) 2014-05-27 2017-03-23 Telecom Italia S.P.A. System and method for network apparatus management
US20170085477A1 (en) 2014-05-30 2017-03-23 Huawei Technologies Co., Ltd. Packet Edit Processing Method and Related Device
US20170208015A1 (en) 2014-06-04 2017-07-20 Lantiq Beteiligungs-GmbH & Co. KG Data Packet Processing System on a Chip
US9888033B1 (en) 2014-06-19 2018-02-06 Sonus Networks, Inc. Methods and apparatus for detecting and/or dealing with denial of service attacks
US20180103060A1 (en) 2014-06-19 2018-04-12 Sonus Networks, Inc. Methods and apparatus for detecting and/or dealing with denial of service attacks
US9838268B1 (en) 2014-06-27 2017-12-05 Juniper Networks, Inc. Distributed, adaptive controller for multi-domain networks
US20150381418A1 (en) 2014-06-27 2015-12-31 iPhotonix Remote Orchestration of Virtual Machine Updates
US20150381495A1 (en) 2014-06-30 2015-12-31 Nicira, Inc. Methods and systems for providing multi-tenancy support for single root i/o virtualization
US20170111275A1 (en) 2014-06-30 2017-04-20 Huawei Technologies Co., Ltd. Data processing method executed by network apparatus, and related device
US20160006654A1 (en) 2014-07-07 2016-01-07 Cisco Technology, Inc. Bi-directional flow stickiness in a network environment
US20160014073A1 (en) 2014-07-11 2016-01-14 VMware, Inc. Methods and apparatus to configure hardware management systems for use in virtual server rack deployments for virtual computing environments
US20170126588A1 (en) 2014-07-25 2017-05-04 Telefonaktiebolaget Lm Ericsson (Publ) Packet Processing in an OpenFlow Switch
US20170142000A1 (en) 2014-08-11 2017-05-18 Huawei Technologies Co., Ltd. Packet control method, switch, and controller
US9755932B1 (en) 2014-09-26 2017-09-05 Juniper Networks, Inc. Monitoring packet residence time and correlating packet residence time to input sources
US20160094460A1 (en) 2014-09-30 2016-03-31 Vmware, Inc. Packet Key Parser for Flow-Based Forwarding Elements
US9960956B1 (en) 2014-10-15 2018-05-01 The United States Of America, As Represented By The Secretary Of The Navy Network monitoring method using phantom nodes
US20160139892A1 (en) 2014-11-14 2016-05-19 Xpliant, Inc. Parser engine programming tool for programmable network devices
US20160149784A1 (en) 2014-11-20 2016-05-26 Telefonaktiebolaget L M Ericsson (Publ) Passive Performance Measurement for Inline Service Chaining
US10044646B1 (en) 2014-12-02 2018-08-07 Adtran, Inc. Systems and methods for efficiently storing packet data in network switches
EP3229424A1 (en) 2014-12-03 2017-10-11 Sanechips Technology Co., Ltd. Improved WRED-based congestion control method and device
US20160173383A1 (en) 2014-12-11 2016-06-16 Futurewei Technologies, Inc. Method and apparatus for priority flow and congestion control in ethernet network
US20160173371A1 (en) 2014-12-11 2016-06-16 Brocade Communications Systems, Inc. Multilayered distributed router architecture
US20160191384A1 (en) 2014-12-24 2016-06-30 Nicira, Inc. Batch Processing of Packets
US20200084093A1 (en) 2014-12-27 2020-03-12 Intel Corporation Programmable Protocol Parser For NIC Classification And Queue Assignments
US10361914B2 (en) 2014-12-27 2019-07-23 Intel Corporation Programmable protocol parser for NIC classification and queue assignments
US10015048B2 (en) 2014-12-27 2018-07-03 Intel Corporation Programmable protocol parser for NIC classification and queue assignments
US20160188313A1 (en) 2014-12-27 2016-06-30 Scott P. Dubal Technologies for reprogramming network interface cards over a network
US20160191306A1 (en) 2014-12-27 2016-06-30 Iosif Gasparakis Programmable protocol parser for nic classification and queue assignments
US20200021486A1 (en) 2014-12-27 2020-01-16 Intel Corporation Programmable Protocol Parser For NIC Classification And Queue Assignments
US20190394086A1 (en) 2014-12-27 2019-12-26 Intel Corporation Programmable protocol parser for nic classification and queue assignments
US20180316549A1 (en) 2014-12-27 2018-11-01 Intel Corporation Programmable protocol parser for nic classification and queue assignments
US20160191370A1 (en) 2014-12-29 2016-06-30 Juniper Networks, Inc. Network topology optimization
US20160191361A1 (en) 2014-12-31 2016-06-30 Nicira, Inc. System for aggregating statistics associated with interfaces
US20180262424A1 (en) 2015-01-12 2018-09-13 Telefonaktiebolaget Lm Ericsson (Publ) Methods and Modules for Managing Packets in a Software Defined Network
US20160232019A1 (en) 2015-02-09 2016-08-11 Broadcom Corporation Network Interface Controller with Integrated Network Flow Processing
US20160234102A1 (en) 2015-02-10 2016-08-11 Alcatel-Lucent Canada Inc. Method and system for inserting an openflow flow entry into a flow table using openflow protocol
US20160234103A1 (en) 2015-02-10 2016-08-11 Alcatel-Lucent Canada Inc. Method and system for inserting an openflow flow entry into a flow table using openflow protocol
US20160234067A1 (en) 2015-02-10 2016-08-11 Alcatel-Lucent Canada Inc. Method and system for identifying an outgoing interface using openflow protocol
US20160301601A1 (en) 2015-04-09 2016-10-13 Telefonaktiebolaget L M Ericsson (Publ) Method and system for traffic pattern generation in a software-defined networking (sdn) system
US20160315866A1 (en) 2015-04-27 2016-10-27 Telefonaktiebolaget L M Ericsson (Publ) Service based intelligent packet-in mechanism for openflow switches
US20160323243A1 (en) 2015-05-01 2016-11-03 Cirius Messaging Inc. Data leak protection system and processing methods thereof
US20160337329A1 (en) 2015-05-11 2016-11-17 Kapil Sood Technologies for secure bootstrapping of virtual network functions
US20160344629A1 (en) 2015-05-22 2016-11-24 Gray Research LLC Directional two-dimensional router and interconnection network for field programmable gate arrays, and other circuits and applications of the router and network
US20160357534A1 (en) 2015-06-03 2016-12-08 The Mathworks, Inc. Data type reassignment
US10419242B1 (en) 2015-06-04 2019-09-17 Netronome Systems, Inc. Low-level programming language plugin to augment high-level programming language setup of an SDN switch
US20160359685A1 (en) 2015-06-04 2016-12-08 Cisco Technology, Inc. Method and apparatus for computing cell density based rareness for use in anomaly detection
US9891898B1 (en) 2015-06-04 2018-02-13 Netronome Systems, Inc. Low-level programming language plugin to augment high-level programming language setup of an SDN switch
US20180191640A1 (en) 2015-06-30 2018-07-05 Hewlett Packard Enterprise Development Lp Action references
US20170005951A1 (en) 2015-07-02 2017-01-05 Arista Networks, Inc. Network data processor having per-input port virtual output queues
US20170019302A1 (en) 2015-07-13 2017-01-19 Telefonaktiebolaget L M Ericsson (Publ) Analytics-driven dynamic network design and configuration
US20170019329A1 (en) 2015-07-15 2017-01-19 Argela-USA, Inc. Method for forwarding rule hopping based secure communication
US20170034082A1 (en) 2015-07-31 2017-02-02 Nicira, Inc. Managed Forwarding Element With Conjunctive Match Flow Entries
US20170041209A1 (en) 2015-08-03 2017-02-09 Telefonaktiebolaget L M Ericsson (Publ) Method and system for path monitoring in a software-defined networking (sdn) system
US9692690B2 (en) 2015-08-03 2017-06-27 Telefonaktiebolaget Lm Ericsson (Publ) Method and system for path monitoring in a software-defined networking (SDN) system
US20170048144A1 (en) 2015-08-13 2017-02-16 Futurewei Technologies, Inc. Congestion Avoidance Traffic Steering (CATS) in Datacenter Networks
US20170053012A1 (en) 2015-08-17 2017-02-23 Mellanox Technologies Tlv Ltd. High-performance bloom filter array
US9825862B2 (en) 2015-08-26 2017-11-21 Barefoot Networks, Inc. Packet header field extraction
US20200076737A1 (en) 2015-08-26 2020-03-05 Barefoot Networks, Inc. Packet header field extraction
US20170064047A1 (en) 2015-08-26 2017-03-02 Barefoot Networks, Inc. Configuring a switch for extracting packet header fields
US20200099617A1 (en) 2015-08-26 2020-03-26 Barefoot Networks, Inc. Packet header field extraction
US9826071B2 (en) 2015-08-26 2017-11-21 Barefoot Networks, Inc. Configuring a switch for extracting packet header fields
US10225381B1 (en) 2015-08-26 2019-03-05 Barefoot Networks, Inc. Configuring a switch for extracting packet header fields
US10432527B1 (en) 2015-08-26 2019-10-01 Barefoot Networks, Inc. Packet header field extraction
US20200099619A1 (en) 2015-08-26 2020-03-26 Barefoot Networks, Inc. Packet header field extraction
US20170063690A1 (en) 2015-08-26 2017-03-02 Barefoot Networks, Inc. Packet header field extraction
US20200099618A1 (en) 2015-08-26 2020-03-26 Barefoot Networks, Inc. Packet header field extraction
US20170070416A1 (en) 2015-09-04 2017-03-09 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for modifying forwarding states in a network device of a software defined network
US20170075692A1 (en) 2015-09-11 2017-03-16 Qualcomm Incorporated Selective flushing of instructions in an instruction pipeline in a processor back to an execution-resolved target address, in response to a precise interrupt
US20170093987A1 (en) 2015-09-24 2017-03-30 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US9923816B2 (en) 2015-09-24 2018-03-20 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US9912610B2 (en) 2015-09-24 2018-03-06 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20180234355A1 (en) 2015-09-24 2018-08-16 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20180234340A1 (en) 2015-09-24 2018-08-16 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20170093986A1 (en) 2015-09-24 2017-03-30 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20170093707A1 (en) 2015-09-24 2017-03-30 Barefoot Networks, Inc. Data-plane stateful processing units in packet processing pipelines
US20170091258A1 (en) 2015-09-30 2017-03-30 Nicira, Inc. Packet Processing Rule Versioning
US20170118041A1 (en) 2015-10-21 2017-04-27 Brocade Communications Systems, Inc. Distributed rule provisioning in an extended bridge
US20170118042A1 (en) 2015-10-21 2017-04-27 Brocade Communications Systems, Inc. High availability for distributed network services in an extended bridge
US20170134310A1 (en) 2015-11-05 2017-05-11 Dell Products, L.P. Dynamic allocation of queue depths for virtual functions in a converged infrastructure
US20170134282A1 (en) 2015-11-10 2017-05-11 Ciena Corporation Per queue per service differentiation for dropping packets in weighted random early detection
US10291555B2 (en) 2015-11-17 2019-05-14 Telefonaktiebolaget Lm Ericsson (Publ) Service based intelligent packet-in buffering mechanism for openflow switches by having variable buffer timeouts
US9912774B2 (en) 2015-12-22 2018-03-06 Intel Corporation Accelerated network packet processing
US20170180273A1 (en) 2015-12-22 2017-06-22 Daniel Daly Accelerated network packet processing
WO2017112165A1 (en) 2015-12-22 2017-06-29 Intel Corporation Accelerated network packet processing
US10135734B1 (en) 2015-12-28 2018-11-20 Amazon Technologies, Inc. Pipelined evaluations for algorithmic forwarding route lookup
US20170195229A1 (en) 2015-12-30 2017-07-06 Argela Yazilim ve Bilisim Teknolojileri San. ve Tic. A.S. System, method and article of manufacture for using control plane for critical data communications in software-defined networks
US20170220499A1 (en) 2016-01-04 2017-08-03 Gray Research LLC Massively parallel computer, accelerated computing clusters, and two-dimensional router and interconnection network for field programmable gate arrays, and applications
US20180375755A1 (en) 2016-01-05 2018-12-27 Telefonaktiebolaget Lm Ericsson (Publ) Mechanism to detect control plane loops in a software defined networking (sdn) network
US20170222881A1 (en) 2016-01-28 2017-08-03 Arista Networks, Inc. Network Data Stream Tracer
US20170223575A1 (en) 2016-01-29 2017-08-03 Arista Networks, Inc. System and method of a pause watchdog
US20170251077A1 (en) 2016-02-26 2017-08-31 Arista Networks, Inc. Per-input port, per-control plane network data traffic class control plane policing
US20170264571A1 (en) 2016-03-08 2017-09-14 Mellanox Technologies Tlv Ltd. Flexible buffer allocation in a network switch
US20180006950A1 (en) 2016-07-01 2018-01-04 Intel Corporation Technologies for adaptive routing using aggregated congestion information
US20180006945A1 (en) 2016-07-01 2018-01-04 Mario Flajslik Technologies for adaptive routing using throughput estimation
US20180054385A1 (en) 2016-08-17 2018-02-22 Cisco Technology, Inc. Re-configurable lookup pipeline architecture for packet forwarding
US20180115478A1 (en) 2016-10-20 2018-04-26 Gatesair, Inc. Extended time reference generation
US20180124183A1 (en) 2016-11-03 2018-05-03 Futurewei Technologies, Inc. Method and Apparatus for Stateful Control of Forwarding Elements
US10341242B2 (en) 2016-12-13 2019-07-02 Oracle International Corporation System and method for providing a programmable packet classification framework for use in a network device
US10091137B2 (en) 2017-01-30 2018-10-02 Cavium, Inc. Apparatus and method for scalable and flexible wildcard matching in a network switch
US10419366B1 (en) 2017-01-31 2019-09-17 Barefoot Networks, Inc. Mechanism for communicating to remote control plane from forwarding element
US20200007473A1 (en) 2017-01-31 2020-01-02 Barefoot Networks, Inc. Mechanism for communicating to remote control plane from forwarding element
US10412018B1 (en) 2017-03-21 2019-09-10 Barefoot Networks, Inc. Hierarchical queue scheduler
US20180287819A1 (en) 2017-03-28 2018-10-04 Marvell World Trade Ltd. Flexible processor of a port extender device
US10686735B1 (en) 2017-04-23 2020-06-16 Barefoot Networks, Inc. Packet reconstruction at deparser
US20200228433A1 (en) 2019-01-15 2020-07-16 Fujitsu Limited Computer-readable recording medium including monitoring program, programmable device, and monitoring method
US20200244576A1 (en) 2019-01-29 2020-07-30 Cisco Technology, Inc. Supporting asynchronous packet operations in a deterministic network
US20200280428A1 (en) 2019-10-18 2020-09-03 Intel Corporation Configuration scheme for link establishment
US20200280518A1 (en) 2020-01-28 2020-09-03 Intel Corporation Congestion management techniques
US20220091992A1 (en) 2020-09-23 2022-03-24 Intel Corporation Device, system and method to provide line level tagging of data at a processor cache

Non-Patent Citations (137)

* Cited by examiner, † Cited by third party
Title
"Selection of Cyclic Redundancy Code and Checksum Algorithms to Ensure Critical Data Integrity", Federal Aviation Administration William J. Hughes Technical Center Aviation Research Division Atlantic City International Airport New Jersey 08405, DOT/FAA/TC-14/49, Mar. 2015, 111 pages.
Advisory Action for U.S. Appl. No. 15/835,238, dated Nov. 22, 2019, 3 pages.
Advisory Action for U.S. Appl. No. 15/888,054, dated Feb. 10, 2020.
Arashloo, Mina Tahmasbi, et al., "SNAP: Stateful Network-Wide Abstractions for Packet Processing", SIGCOMM '16, Aug. 22-26, 2016, 27 pages, ACM, Florianopolis, Brazil.
Bosshart, P., et al., "Forwarding Metamorphosis: Fast Programmable Match-Action Processing in Hardware for SDN", SIGCOMM'13, Aug. 12-16, 2013, 12 pages, ACM, Hong Kong, China.
Final Office Action for U.S. Appl. No. 15/784,191, dated Feb. 26, 2021.
Final Office Action for U.S. Appl. No. 15/784,191, dated May 7, 2020, 13 pages.
Final Office Action for U.S. Appl. No. 15/784,192, dated Jun. 1, 2020, 14 pages.
Final Office Action for U.S. Appl. No. 15/888,050, dated Dec. 12, 2019.
Final Office Action for U.S. Appl. No. 15/888,054, dated Sep. 18, 2019.
Final Office Action for U.S. Appl. No. 16/695,044, dated Dec. 20, 2021.
Final Office Action for U.S. Appl. No. 16/804,960, dated Apr. 14, 2022.
Final Office Action for U.S. Appl. No. 17/134,110 dated Oct. 25, 2022.
International Preliminary Report on Patentability for PCT Application No. PCT/US2016/062511, dated Jun. 26, 2018.
International Search Report and Written Opinion for PCT Application No. PCT/US2016/062511, dated Feb. 28, 2017.
Kaufmann, A., et al., "High Performance Packet Processing with FlexNIC", ASPLOS'16, Apr. 2-6, 2016, 15 pages, ACM, Atlanta, GA, USA.
Moshref, Masoud, et al., "Flow-level State Transition as a New Switch Primitive for SDN", HotSDN'14, Aug. 22, 2014, 6 pages, ACM, Chicago, IL, USA.
Non-Final Office Action for U.S. Appl. No. 16/569,554, dated Feb. 19, 2021.
Notice of Allowance for Chinese Patent Application No. 201680075637.4, dated Apr. 1, 2022.
Notice of Allowance for U.S. Appl. No. 14/583,664, dated Feb. 28, 2018.
Notice of Allowance for U.S. Appl. No. 14/836,850, dated Jun. 20, 2017.
Notice of Allowance for U.S. Appl. No. 14/836,855, dated Jun. 30, 2017.
Notice of Allowance for U.S. Appl. No. 14/977,810, dated Oct. 20, 2017.
Notice of Allowance for U.S. Appl. No. 15/678,549, dated Apr. 8, 2020.
Notice of Allowance for U.S. Appl. No. 15/678,549, dated Dec. 27, 2019.
Notice of Allowance for U.S. Appl. No. 15/678,556, dated Feb. 4, 2020.
Notice of Allowance for U.S. Appl. No. 15/729,555, dated May 2, 2019.
Notice of Allowance for U.S. Appl. No. 15/729,593, dated Nov. 15, 2018.
Notice of Allowance for U.S. Appl. No. 15/784,190, dated May 10, 2019, 20 pages.
Notice of Allowance for U.S. Appl. No. 15/784,191, dated Apr. 19, 2019, 7 pages.
Notice of Allowance for U.S. Appl. No. 15/784,191, dated Aug. 21, 2019, 8 pages.
Notice of Allowance for U.S. Appl. No. 15/784,191, dated Aug. 31, 2021.
Notice of Allowance for U.S. Appl. No. 15/784,191, dated May 5, 2021.
Notice of Allowance for U.S. Appl. No. 15/784,192, dated Jun. 30, 2021.
Notice of Allowance for U.S. Appl. No. 15/784,192, dated Mar. 17, 2021.
Notice of Allowance for U.S. Appl. No. 15/784,192, dated Sep. 30, 2021.
Notice of Allowance for U.S. Appl. No. 15/835,233, dated Jul. 3, 2019, 8 pages.
Notice of Allowance for U.S. Appl. No. 15/835,233, dated Oct. 29, 2019.
Notice of Allowance for U.S. Appl. No. 15/835,235, dated Apr. 24, 2020.
Notice of Allowance for U.S. Appl. No. 15/835,235, dated Aug. 20, 2019, 16 pages.
Notice of Allowance for U.S. Appl. No. 15/835,238, dated Sep. 30, 2020.
Notice of Allowance for U.S. Appl. No. 15/835,239, dated Nov. 13, 2019, 10 pages.
Notice of Allowance for U.S. Appl. No. 15/835,242, dated Jul. 1, 2019, 7 pages.
Notice of Allowance for U.S. Appl. No. 15/835,242, dated Jun. 24, 2020.
Notice of Allowance for U.S. Appl. No. 15/835,247 dated Dec. 29, 2021.
Notice of Allowance for U.S. Appl. No. 15/835,247, dated Apr. 7, 2022.
Notice of Allowance for U.S. Appl. No. 15/835,247, dated Jul. 29, 2022.
Notice of Allowance for U.S. Appl. No. 15/835,249, dated Jul. 25, 2019.
Notice of Allowance for U.S. Appl. No. 15/835,250, dated Jul. 25, 2019, 17 pages.
Notice of Allowance for U.S. Appl. No. 15/878,966, dated May 15, 2019.
Notice of Allowance for U.S. Appl. No. 16/026,318, dated Mar. 12, 2019.
Notice of Allowance for U.S. Appl. No. 16/460,798, dated May 27, 2021.
Notice of Allowance for U.S. Appl. No. 16/519,873, dated Aug. 30, 2021.
Notice of Allowance for U.S. Appl. No. 16/519,873, dated Dec. 3, 2021.
Notice of Allowance for U.S. Appl. No. 16/519,873, dated Mar. 17, 2022.
Notice of Allowance for U.S. Appl. No. 16/573,847, dated Apr. 14, 2022.
Notice of Allowance for U.S. Appl. No. 16/573,847, dated Dec. 15, 2021.
Notice of Allowance for U.S. Appl. No. 16/582,798, dated Aug. 27, 2021.
Notice of Allowance for U.S. Appl. No. 16/582,798, dated Dec. 1, 2021.
Notice of Allowance for U.S. Appl. No. 16/582,798, dated Mar. 22, 2022.
Notice of Allowance for U.S. Appl. No. 16/687,271, dated Aug. 30, 2021.
Notice of Allowance for U.S. Appl. No. 16/687,271, dated Dec. 1, 2021.
Notice of Allowance for U.S. Appl. No. 16/687,271, dated Mar. 22, 2022.
Notice of Allowance for U.S. Appl. No. 16/695,044 dated Apr. 27, 2022.
Notice of Allowance for U.S. Appl. No. 16/695,049, dated Apr. 28, 2022.
Notice of Allowance for U.S. Appl. No. 16/695,049, dated Jan. 5, 2022.
Notice of Allowance for U.S. Appl. No. 16/789,339, dated Jul. 29, 2021.
Notice of Allowance for U.S. Appl. No. 16/879,704, dated Apr. 26, 2022.
Notice of Allowance for U.S. Appl. No. 17/318,890, dated Jun. 8, 2022.
Notice of Allowance for U.S. Appl. No. 17/318,890, dated Mar. 3, 2022.
Notice of Allowance for U.S. Appl. No. 17/867,508, dated Nov. 14, 2022.
Office Action for Chinese Patent Application No. 201680075637.4, dated Mar. 2, 2021.
Office Action for Chinese Patent Application No. 201680075637.4, dated Sep. 23, 2021.
Office Action for U.S. Appl. No. 14/583,664, dated Feb. 27, 2017.
Office Action for U.S. Appl. No. 14/583,664, dated Jul. 28, 2016.
Office Action for U.S. Appl. No. 14/583,664, dated Oct. 18, 2017.
Office Action for U.S. Appl. No. 14/863,961, dated Jun. 16, 2017.
Office Action for U.S. Appl. No. 14/864,032, dated Feb. 14, 2017.
Office Action for U.S. Appl. No. 14/977,810, dated Jun. 29, 2017.
Office Action for U.S. Appl. No. 15/678,549, dated Feb. 26, 2019.
Office Action for U.S. Appl. No. 15/678,549, dated Jul. 30, 2019.
Office Action for U.S. Appl. No. 15/678,556, dated Jun. 19, 2019.
Office Action for U.S. Appl. No. 15/678,565, dated Jun. 13, 2019.
Office Action for U.S. Appl. No. 15/729,593, dated Aug. 10, 2018.
Office Action for U.S. Appl. No. 15/784,191, dated Aug. 26, 2020, 14 pages.
Office Action for U.S. Appl. No. 15/784,191, dated Dec. 19, 2018, 11 pages.
Office Action for U.S. Appl. No. 15/784,191, dated Jan. 24, 2020, 12 pages.
Office Action for U.S. Appl. No. 15/784,192, dated Sep. 19, 2019, 14 pages.
Office Action for U.S. Appl. No. 15/835,233, dated Feb. 8, 2019, 17 pages.
Office Action for U.S. Appl. No. 15/835,235, dated Feb. 25, 2019, 24 pages.
Office Action for U.S. Appl. No. 15/835,238, dated Dec. 11, 2019.
Office Action for U.S. Appl. No. 15/835,238, dated Feb. 7, 2019.
Office Action for U.S. Appl. No. 15/835,238, dated Jun. 19, 2019.
Office Action for U.S. Appl. No. 15/835,238, dated Jun. 5, 2020.
Office Action for U.S. Appl. No. 15/835,239, dated Feb. 7, 2019, 20 pages.
Office Action for U.S. Appl. No. 15/835,239, dated Jun. 19, 2019, 18 pages.
Office Action for U.S. Appl. No. 15/835,242, dated Oct. 18, 2019, 8 pages.
Office Action for U.S. Appl. No. 15/835,247, dated Dec. 31, 2018, 18 pages.
Office Action for U.S. Appl. No. 15/835,247, dated Jul. 10, 2019.
Office Action for U.S. Appl. No. 15/835,249, dated Dec. 31, 2018.
Office Action for U.S. Appl. No. 15/835,250, dated Apr. 4, 2019, 16 pages.
Office Action for U.S. Appl. No. 15/878,966, dated Jan. 11, 2019.
Office Action for U.S. Appl. No. 15/888,050, dated Jun. 11, 2019.
Office Action for U.S. Appl. No. 15/888,054, dated Mar. 11, 2019.
Office Action for U.S. Appl. No. 16/026,318, dated Sep. 20, 2018.
Office Action for U.S. Appl. No. 16/288,074, dated Mar. 5, 2020.
Office Action for U.S. Appl. No. 16/288,074, dated Oct. 7, 2020.
Office Action for U.S. Appl. No. 16/519,873, dated Jun. 11, 2021.
Office Action for U.S. Appl. No. 16/569,554, dated Aug. 18, 2020.
Office Action for U.S. Appl. No. 16/569,554, dated Jul. 2, 2021.
Office Action for U.S. Appl. No. 16/569,554, dated Mar. 14, 2022.
Office Action for U.S. Appl. No. 16/569,554, dated Sep. 27, 2021.
Office Action for U.S. Appl. No. 16/573,847 dated Jan. 6, 2021.
Office Action for U.S. Appl. No. 16/573,847, dated Aug. 2, 2021.
Office Action for U.S. Appl. No. 16/582,798, dated Jun. 24, 2021.
Office Action for U.S. Appl. No. 16/687,271, dated Jun. 24, 2021.
Office Action for U.S. Appl. No. 16/695,044, dated Jul. 8, 2021.
Office Action for U.S. Appl. No. 16/695,049, dated Jul. 21, 2021.
Office Action for U.S. Appl. No. 16/695,053 dated Aug. 4, 2021.
Office Action for U.S. Appl. No. 16/695,053 dated Jan. 5, 2022.
Office Action for U.S. Appl. No. 16/695,053, dated May 11, 2022.
Office Action for U.S. Appl. No. 16/695,053, dated Oct. 14, 2022.
Office Action for U.S. Appl. No. 16/804,960, dated Aug. 19, 2021.
Office Action for U.S. Appl. No. 16/804,960, dated Dec. 13, 2021.
Office Action for U.S. Appl. No. 16/804,960, dated May 12, 2021.
Office Action for U.S. Appl. No. 17/134,110, dated Dec. 22, 2022.
Office Action for U.S. Appl. No. 17/134,110, dated Jun. 24, 2022.
Office Action for U.S. Appl. No. 17/484,004, dated Oct. 27, 2022.
Office Action for U.S. Appl. No. 17/859,722, dated Oct. 26, 2022.
Office Action for U.S. Appl. No. 16/460,798, dated Nov. 18, 2020.
Office Action in Chinese Patent Application No. 201680075637.4, dated Jan. 5, 2022.
Sivaraman, A., et al., "Towards Programmable Packet Scheduling", HotNets'15, Nov. 16-17, 2015, 7 pages, ACM, Philadelphia, PA, USA.
Sivaraman, Anirudh, et al., "Packet Transactions: A Programming Model for Data-Plane Algorithms at Hardware Speed", arXiv:1512.05023v1, Dec. 16, 2015, 22 pages.
Sivaraman, Anirudh, et al., "Packet Transactions: High-level Programming for Line-Rate Switches", arXiv:1512.05023v2, Jan. 30, 2016, 16 pages.
Sivaraman, Anirudh, et al., "Packet Transactions: High-level Programming for Line-Rate Switches", SIGCOMM'16, Aug. 22-26, 2016, 14 pages, ACM, Florianopolis, Brazil.
Sivaraman, Anirudh, et al., "Programmable Packet Scheduling at Line Rate", SIGCOMM'16, Aug. 22-26, 2016, 14 pages, ACM, Florianopolis, Brazil.
Song, "Protocol-Oblivious Forwarding: Unleash the Power of SDN through a Future-Proof Forwarding Plane", Huawei Technologies, USA, 6 pages.

Also Published As

Publication number Publication date
US11362967B2 (en) 2022-06-14
US10771387B1 (en) 2020-09-08
US10594630B1 (en) 2020-03-17
US20220029935A1 (en) 2022-01-27
US20230300087A1 (en) 2023-09-21
US20200259765A1 (en) 2020-08-13

Similar Documents

Publication Publication Date Title
US11700212B2 (en) Expansion of packet data within processing pipeline
US11425058B2 (en) Generation of descriptive data for packet fields
US10819633B2 (en) Data-plane stateful processing units in packet processing pipelines
US10511532B2 (en) Algorithmic longest prefix matching in programmable switch
US10454833B1 (en) Pipeline chaining
US20210105220A1 (en) Queue scheduler control via packet data
US9912610B2 (en) Data-plane stateful processing units in packet processing pipelines
US11811902B2 (en) Resilient hashing for forwarding packets
US10523764B2 (en) Data-plane stateful processing units in packet processing pipelines
US11080252B1 (en) Proxy hash table
US10516626B1 (en) Generating configuration data and API for programming a forwarding element
US11929944B2 (en) Network forwarding element with key-value processing in the data plane
US7499941B2 (en) Pipeline regular expression matching
US20180337860A1 (en) Fast adjusting load balancer
US20150067273A1 (en) Computation hardware with high-bandwidth memory interface
US20150193233A1 (en) Using a single-instruction processor to process messages
US10146468B2 (en) Addressless merge command with data item identifier
US9846662B2 (en) Chained CPP command
US9781062B2 (en) Using annotations to extract parameters from messages
US20210243137A1 (en) Multiplexed resource allocation architecture
US10949199B1 (en) Copying packet data to mirror buffer

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STCF Information on status: patent grant

Free format text: PATENTED CASE