WO2011064597A1 - Processing network traffic - Google Patents

Processing network traffic

Info

Publication number
WO2011064597A1
WO2011064597A1 (PCT/GB2010/051979)
Authority
WO
WIPO (PCT)
Prior art keywords
network data
engine
network
data
metadata
Prior art date
Application number
PCT/GB2010/051979
Other languages
French (fr)
Inventor
Mark Arwyn Bennett
Richard John Wilding
Gordon Campbell Friend
Original Assignee
Bae Systems Plc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB0920857A external-priority patent/GB0920857D0/en
Priority claimed from EP09275115A external-priority patent/EP2328315A1/en
Application filed by Bae Systems Plc filed Critical Bae Systems Plc
Priority to EP10785505A priority Critical patent/EP2507966A1/en
Priority to US13/512,491 priority patent/US8923159B2/en
Priority to AU2010322819A priority patent/AU2010322819B2/en
Publication of WO2011064597A1 publication Critical patent/WO2011064597A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0209 Architectural arrangements, e.g. perimeter networks or demilitarized zones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/02 Capturing of monitoring data
    • H04L43/026 Capturing of monitoring data using flow identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic

Definitions

  • the present invention relates to processing network traffic.
  • Detica's DCI-10 platform includes a hardware layer that provides hardware traffic inspection for examining every byte of every packet. Network packets that contain patterns of interest are passed to software for further processing.
  • the software layer of the DCI-10 platform provides a flexible API (application programming interface) that enables a developer to dynamically indicate to the hardware layer which traffic is of interest and should therefore be passed to the software layer.
  • Software modules produced by the developer are executed against the traffic via a software processing framework.
  • This class of analysis can be characterised by the requirement to track state, such as traffic events and statistics, across much, if not all, of the traffic. In many cases it is necessary to correlate such state not just in terms of a flow but also in consideration of peer communication, and potentially the network host to which each packet relates.
  • Existing platforms lack native hardware support to provide stateful correlation of traffic and so it is necessary to have large numbers of packets processed by software to achieve this functionality. It is also necessary to have software examine the state that is correlated for each of these packets so that traffic of interest may be identified and processed by the software processing modules.
  • Embodiments of the present invention are intended to address at least some of the problems outlined above.
  • a system adapted to process network traffic including:
  • At least one processing engine configured to receive network data being transferred over a network and generate metadata relating to the data
  • At least one rule engine configured to receive and process the metadata to generate an output
  • At least one selection engine configured to receive and process the rule engine output to determine whether the network data is to be processed by a further component and/or whether the network data is to continue to be transferred over the network.
  • the at least one processing engine, the at least one rule engine and the at least one selection engine will normally be implemented in system hardware or firmware.
  • the further component will normally be implemented by software executing on another (remote) processor.
  • the at least one selection engine may be configured to combine the outputs of a plurality of the rule engines.
  • the metadata generated by the at least one processing engine may include data identifying a flow, peer communication, destination host and/or source host associated with the network data, e.g. by extraction from an IP header of the packet. Alternatively, another part of the network data may be used to generate the metadata, e.g. custom headers associated with the network data.
  • the metadata may include data identifying at least one pattern and/or regular expression found in the network data.
  • the metadata may include statistical data regarding the network data.
  • the processing engine may generate metadata indicating at least one category for the network data. The category may relate to a source (port and/or IP address) and/or a destination (port and/or IP address) of the network data. The category may specify whether the network data is associated with a particular flow, peer communication or a host.
  • At least one said rule engine may be configured to count, or monitor for, events relating to network data in a said category.
  • the events may comprise a pattern match and/or a threshold comparison.
  • the events may comprise events occurring within a flow, peer communication, or data relating to a particular host associated with the network data.
  • the system may include at least one memory component, which may store state data relating to the network data.
  • the system may include a delay path for delaying transfer of the network data whilst the network data is processed by the further component.
  • the delay path may be used to retrieve selected network data previously transmitted, e.g. based on more recent events.
  • the processes executed by the at least one rule engine may be configured by a developer to control, and/or interact with, functionality implemented in firmware components.
  • the network data may comprise an IP packet.
  • the firmware may comprise an FPGA onboard a processing blade.
  • a method of processing network traffic including:
  • the at least one processing engine, the at least one rule engine and the at least one selection engine are implemented in system hardware or firmware, and the further component is implemented by software executing on another processor.
  • Figure 1 is a schematic high-level block diagram of an embodiment of the processing system
  • Figure 2 is a more detailed schematic block diagram of the system
  • Figure 3 is a block diagram of a Rule Engine of the system
  • Figure 4 is a block diagram of a hardware implementation of the system
  • Figure 5 shows flow of data through functional blocks in an example system.
  • the system receives network traffic 102, e.g. IP packets transmitted over the internet from a source device 104 to a destination device 106.
  • the system can be inline in a network, or may be deployed from a network "tap" that is configured to send the system a copy of all or some of the network traffic.
  • the hardware includes firmware components that have been configured with criteria 108 created by a developer using an analysis application 110. The criteria define traffic that is of interest for further analysis.
  • the system 100 applies the criteria to the incoming network data 102 and generates metadata 112 relating to the network data which is transferred to the analysis application 110.
  • the application 110 can process the metadata and/or associated network traffic and, depending on the result of that processing may allow the network data to continue 114 to its original destination (if the system is inline in the network) and/or carry out another function, e.g. set an alert that suspicious activity has been detected.
  • Figure 1 is illustrative of an example only and variations are possible.
  • an application separate to the analysis application 110 may be used to generate and edit the criteria 108, and the various components may be located on board physically distinct devices.
  • the first step in the logical flow of a packet through the system can be the packet being presented to at least one processing engine 202. It will be appreciated that other types of engines may also be included in alternative embodiments.
  • Each of the processing engines generates metadata relating to the packet, which can include details of what, if any, patterns specified by the criteria 108 have been matched and/or statistics regarding the flow, peer and host with which the packet is associated.
  • the packet and metadata can be buffered in a delay path 204.
  • the metadata produced by the processing engines is presented to at least one rule engine 206, along with state data relating to the flow, peer and host membership of the packet.
  • Each Rule Engine may be configured to perform simple operations such as a threshold comparison.
  • the results from the engines are then passed to at least one selection engine 208.
  • the Selection Engine combines the results received from the Rule Engines to determine whether the current packet should be processed by software 110 and/or egressed for collection by follow-on storage/processing systems.
  • One purpose of the delay path 204 is to allow the software application 110 time to process network data transferred to it.
  • the delay path can also allow retrieval of packets that were previously transferred (within the capacity of the buffer) for further processing.
  • the Selection Engine can also receive packets that are evicted from the delay path and, based on recovered control flags, determines if these packets should be processed in software 110 and/or egressed for collection. Packets are processed in software by the applications defined by the developer. The packets are presented together with the metadata generated for the particular packet and any application-specific state data.
  • the system can track state records for statistics and traffic correlation, e.g. using engine state memory 207 and/or state memory 212.
  • a new state entry is allocated when a packet arrives relating to a flow, peer or host that is not currently being tracked.
  • State records are retired for reuse after a configurable period of inactivity or, in the case of a flow, when the system can identify that the flow has been terminated.
  • Software is notified when a state record is retired.
  • Software will also be notified if there are no state records available when a new one needs to be allocated for an incoming packet.
  • the system can be configured to handle such an event by either dropping the packet or passing it directly to software (assuming sufficient bandwidth is available).
  • the system may be direction agnostic when resolving the state record for a given flow, peer or host. This allows traffic events to and from a network entity to be correlated and acted upon.
  • each of the processing engines 202 generates metadata regarding the packet being processed.
  • the metadata can include an indication of a flow, peer and hosts associated with the packet.
  • a number of membership categories can be computed for each packet. In the example four such categories are generated for each packet as follows: flows, peers, hosts(A) and hosts(B).
  • the above categories are generated based on (potentially masked) fields from the packet header. It is possible to configure these four categories with alternative field and masking criteria so that network packets may be correlated on other categories, such as class B or class C domain.
  • the processing engines can also perform pattern matching, regular expressions and packet header inspection functions. Each pattern match can be mapped to one or more pattern groups. The group information is presented to the rule engines allowing multiple patterns to be processed by each engine. The processing engine is also presented with fields from the packet header allowing matches against particular traffic ports, IP addresses and other relevant header fields. Regular expressions are supported by means of the combination of pattern match groups and the Boolean selection capability of the Rule Engines. A regular expression can often be translated into several selectors which are typically too weak to be the sole basis of packet selection. A stronger selection criterion can be realised by using the Rule Engines to combine these weak selectors together. The full regular expression can then be validated by software 110. The processing engines can also update state and generate metadata fields for each of the packet categories (flow, peer and host). Examples of metadata statistics include the timestamp of the packet, byte and packet counts, packet and data rates and duration. Specific examples are given in the table below:
  • Flow count: the number of flows observed for this entity. Statistics can be maintained with consideration to directionality. For example, when tracking a particular flow, packets from A to B will update the same state data as packets from B to A, but with the former updating the In counters and the latter updating the Out counters (or vice-versa).
  • the statistics fields can be generated in a compressed format to ensure that the resultant system bandwidth is achievable. The system can permit easy transformation to a standard number representation.
  • Rule Engines 206 are configured by the framework in response to API calls invoked by the developer.
  • Figure 3 is an overview of an example Rule Engine configured to take input from the packet header fields or the statistics metadata. It will be appreciated that the components shown are merely one example of how the Rule Engine functionality described herein can be implemented.
  • the Rule Engine can perform greater than, less than and equivalence integer operations against a threshold, with a specified bitwise mask.
  • the Rule Engine can also receive input from pattern match group information and perform an equivalence check of this data with a specified bitwise mask.
  • Each Rule Engine can be associated with a dedicated per-flow, peer or host state memory which can be loaded with a configured value and updated by the engine. This state can be used to count flow, peer or host events (such as pattern matches, packets or bytes). Additionally, each Rule Engine can have a per-flow, peer or host flag which may be controlled by the software 110.
  • the result of the numerical comparison, the pattern match check, the state memory and the software-controlled flag can be fed into a lookup table that is configured at a per-engine level to specify how the state memory should be updated and what Boolean result should be expressed for the Rule Engine.
  • the examples above could be based on the evaluation of a metadata field, such as the packet count compared to a predefined threshold.
  • the lookup table can also be presented with a bit field that is under software control. This allows software to influence each Rule Engine for a given flow, peer or host; for example, to reset the Rule Engine counter, or to disable a particular Rule Engine.
  • the Selection Engine(s) 208 combine the results of multiple Rule Engines to determine if a particular packet and associated metadata should be forwarded to the software 110 and/or egressed to the Ethernet port for collection by downstream systems.
  • the control of how the results from different Rule Engines are combined is configured by the software 110. This further extends the functionality that is provided by the hardware, for example selection of packets occurring on a specific TCP port that also contain a pattern match, or selection of flows which exceed a specified data rate and don't contain a specified set of patterns.
  • the Selection Engine can be presented with packets that emerge from the delay path 204. The selection flags for these packets are recovered from state memory 212 and are examined to determine what, if any, forward processing should be performed for the recovered packet.
  • the software 110 is used to configure the system hardware Engines by means of an API in the example system.
  • the API can take several forms, but in the embodiment described herein it is based on the C++ programming language, which advantageously allows standard software engineering practice to be used.
  • the API allows code produced by the developer to work with the system hardware.
  • the compiled code runs on the software blade and makes low level calls to the hardware in order to configure the Rule Engines and the other components.
  • system components including the processing engine(s) 202, the rule engine(s) 206 and the selection engine(s) 208 are implemented by means of configurable hardware/firmware onboard the system 100, whilst the software application 110 is executed by an off-board processor.
  • the system framework provides hardware acceleration of key functions and the lightweight API can be employed to offload traffic processing criteria to hardware. This allows a higher level of performance to be achieved for a given amount of software processing resource. It is anticipated that several software modules will be running concurrently on the platform during operation. Each module can perform a number of activities as follows, for example:
  • a developer can use relevant standard C++ techniques and practices for the particular task in hand.
  • a processing flow examines elements of the metadata, the state record and the packet content in order to determine what further processing tasks should be performed on the traffic.
  • the developer can make API calls to offload elements of this examination step to the hardware so that overall system performance is improved.
  • the skilled person will be capable of designing and implementing a suitable API including necessary initialisation and processing methods that will be called by the processing framework.
  • Software processing of packets can be performed on multiple processing blades, with each blade providing multiple processing cores.
  • When several packets for a particular flow, peer or host are being processed in parallel there is a race condition with regard to the integrity of the state data associated with the flow, peer and host of which the packets are a member.
  • These issues can be addressed in a number of ways, with varying complexity and yielding a range of efficiency/processor-utilisation trade-offs.
  • a basic load balancing scheme is proposed.
  • the system hardware will support load balancing at a per IP address or per state level, and duplication of packet payload and metadata (when state-based load balancing results in a packet being routed to multiple processors).
  • Figure 4 illustrates an example implementation of the system 100 based on an IBM BladeCenter H platform.
  • 10Gbps duplex capability can be achieved in a 2 blade solution with each blade consisting of identical hardware but running different firmware.
  • the BladeCenter offers high capacity power and cooling, 44Gbit/s of network connectivity per slot, as well as a wide range of high-performance processing blades for the software components to run on.
  • blocks labelled 402 comprise DIMM; 404 comprise CAM; 406 comprise SRAM and 408 comprise FPGA.
  • Components in outline 410 comprise the ingress/protocol finder; 412 comprise the delay buffer; 414 comprise the IPQ lookup; 416 comprise the statistics block; 418 comprise the Rule Engines and 420 comprise the pattern search/Rule Engines. The functions performed by these items will be described below in more detail (with reference to the data flow of Figure 5).
  • the flow, peer and host contexts will need to be identified for each packet. It is necessary to consider two host contexts for each packet: one relating to the source IP address and one relating to the destination IP address. Given the high packet rates that must be catered for, it may be unfeasible to regard the two host memberships as part of the same category (because this would require two accesses from the same category memory). However, in a live network it is generally reasonable to assume that the routing algorithms are efficient, therefore a source IP address in one direction (A->B) should only be seen as a destination IP address in the reverse direction (B->A). This allows the hosts category to be broken into two, hosts at link end A and hosts at link end B. By making this assumption there are now four categories: flows, peers, hosts(A) and hosts(B), each of which can be handled separately by the hardware. The mapping of application "Host" based rules to these two categories will be handled by the software API.
  • each entity (a flow, peer or host) can be handled equally. This allows significant reuse of both hardware and firmware elements. To help avoid confusion in relation to this, a set of terms is defined here:
  • Category - a type of characteristic, e.g. flows, hosts and peers are each a different category
  • Figure 5 shows the flow of data through the functional blocks on an example board implementation of the system, with main resource allocation assignments against each.
  • the Protocol Finder 602 can be implemented in an FPGA and identifies the standard 5-tuple packet fields discussed above, as well as populating metadata with packet length.
  • the Finder splits the packet up, with the metadata, including the extracted 5-tuple, going into a Statistics block 606, while the full packet is sent into a delay buffer 612 and a pattern scanner 608. This separation alleviates a number of the dataflow bottlenecks and simplifies some of the logic.
  • each packet can be associated with 4 states.
  • Each state can be either a pre-existing state where the context has already been "seen" or a new state where this is the first appearance of the context. For instance, a TCP/IP packet will be associated with a flow state, 2 host states and a peer state.
  • On arrival at the statistics block 606 the packet will have all 4 of its state IDs, along with metadata, attached. Each of the 4 categories of state will be handled separately, as they are mutually exclusive.
  • Two activities are performed in the statistics block: state maintenance and statistics production.
  • the state maintenance involves a READ-MODIFY-WRITE cycle on the state memory, while the statistics production requires transforms to be performed to ready the values for inspection.
  • State held for tracking and generating statistics is based on fixed time slicing using simple addition and comparison. This allows the state maintenance to be achieved in a few clock cycles. A number of different options were considered for achieving the pattern scan functionality performed by block 608.
  • One example mechanism uses a pair of Ternary CAMs in conjunction with SRAMs to perform a lookup.
  • the results along with the packet metadata are presented to the rule engines 610.
  • In the example there are 4 banks of rule engines 610, one for each of the categories identified above.
  • Each bank of engines has context state held in an associated memory, and may use a caching scheme to guarantee coherency.
  • the delay buffer 612 serves three main purposes. The first is as an intermediate store of the packets as they arrive to allow the statistics, pattern scanning and finally rule engines to decide whether a packet should be passed to software. The second purpose is to reduce system bandwidths by not requiring packet data to be moved around the processing elements any more than necessary. As such only the metadata, statistics and decisions need to be sent back from the rule engines. Thirdly, the delay buffer allows a certain amount of time for the software applications to decide whether or not a flow is wanted.
  • the delay buffer can be implemented using multiple memories, allowing independent delay paths to be created. These can be connected up in any suitable manner, e.g. paired, with the first pair providing a relatively short delay to allow for hardware and firmware processing latencies. The second pair can be used to provide a longer delay to enable longer term software latencies to be accommodated.
  • the IPQ lookup block 614 is made available at the end of the delay buffer 612 to allow particular IP tuples to be extracted, either as part of simple rules, or in response to some software-based decision.
  • the egress control block 616 provides buffering and load balancing across the software processing blades.
  • the data that is egressed can consist of the following:
  • Packet payload
  • Traffic selection based on flow, peer and host statistics: the developer can request packets for a flow, peer or host when one of the computed statistics for it meets a specified criteria, for example the collection of host traffic, or of a flow, when the data rate drops below a certain value. API calls are mapped to hardware rule engines; a rule engine can be configured to compare fields from metadata so that packets for the particular network entity (flow, peer or host) are to be processed in software and/or egressed for collection.
  • Stateful correlation of events: persistent state data tracks when certain traffic events occur (ie pattern matches or a statistic field exceeding a threshold). The hardware maintains state for each of the flows, peers and hosts being tracked.
  • Sampling of traffic from a network entity: the API provides simple calls to enable sampling of packets from a flow, peer or host. The hardware tracks statistics for each flow, peer and host, and a rule engine can be configured accordingly.
  • the embodiments of the system described above can reduce the amount of software processing that must be performed for each packet. Common and expensive analysis tasks are offloaded into dedicated hardware, thereby reducing the number of packets that must be processed by software to achieve the required functionality.
  • the developer can employ hardware rules to identify flow, peers and hosts of interest based on the related state. Software processing of packets for an uninteresting flow, peer or host can be suppressed indefinitely or until a notable event occurs. Further, packet content and traffic statistics can be relayed to software periodically (i.e. sampling). The statistics can be computed from all packets, but the software needs only read these results periodically. Efficiently distributing packets amongst the available software processing resources means that they only receive the data they actually require.
  • the embodiments can deploy existing analysis techniques on live network traffic at high speed. Additionally, they provide an environment for rapid development of new applications and analyses.
  • the API allows the development of traffic processing software modules in C++, for example, using standard software engineering practices and tools.
  • the software modules can be deployed cost effectively and efficiently using COTS processors by means of hardware acceleration of common and expensive analysis tasks (including support for stateful flow-based processing).
  • the sophisticated selection of packets can reduce the volume of traffic that must be processed by software to achieve a particular design goal, and efficiently distributing packets to multiple software processing cores across multiple processing blades (or cards) also improves efficiency.
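By way of illustration only, the direction-agnostic state resolution described above (where a source address seen in one direction is resolved to the same record as the corresponding destination address in the reverse direction) can be sketched in software. The field and function names below are assumptions for the sketch, not the patent's actual implementation:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative packet header fields (names are assumptions).
struct PacketHeader {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

// Peer key: the unordered pair of IP addresses, so both directions of a
// conversation resolve to the same peer state record.
uint64_t peerKey(const PacketHeader& h) {
    uint32_t lo = std::min(h.src_ip, h.dst_ip);
    uint32_t hi = std::max(h.src_ip, h.dst_ip);
    return (uint64_t(hi) << 32) | lo;
}

// Flow key: the unordered pair of IP:port endpoints plus protocol, so
// A->B and B->A packets update the same flow state.
uint64_t flowKey(const PacketHeader& h) {
    uint64_t a = (uint64_t(h.src_ip) << 16) | h.src_port;
    uint64_t b = (uint64_t(h.dst_ip) << 16) | h.dst_port;
    if (a > b) std::swap(a, b);
    // Toy combining step; the real system indexes a hardware state memory.
    return (a * 1000003u) ^ b ^ h.protocol;
}
```

In hardware the equivalent step selects a state record per category (flow, peer, host) rather than computing a hash in software; the sketch only shows the direction-agnostic keying idea.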

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A system (100) adapted to process network traffic including at least one processing engine (202) configured to receive network data (104) being transferred over a network and generate metadata relating to the data. The system further includes at least one rule engine (206) configured to receive and process the metadata to generate an output, and at least one selection engine (208) configured to receive and process the rule engine output to determine whether the network data is to be processed by a further component (110) and/or whether the network data is to continue to be transferred over the network. The at least one processing engine, the at least one rule engine and the at least one selection engine are implemented in system hardware or firmware, and the further component is implemented by software executing on another processor.

Description

PROCESSING NETWORK TRAFFIC
The present invention relates to processing network traffic.
It is desirable to monitor and process network traffic for many reasons, such as to detect malicious activity. One known system designed for this purpose is Detica's DCI-10 platform. This includes a hardware layer that provides hardware traffic inspection for examining every byte of every packet. Network packets that contain patterns of interest are passed to software for further processing. The software layer of the DCI-10 platform provides a flexible API (application programming interface) that enables a developer to dynamically indicate to the hardware layer which traffic is of interest and should therefore be passed to the software layer. Software modules produced by the developer are executed against the traffic via a software processing framework.
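The division of labour just described, in which cheap hardware matching pre-selects traffic and software performs the expensive validation, can be outlined as follows. This is an illustrative sketch only (the names are hypothetical and this is not the DCI-10 API): several weak substring selectors stand in for hardware pattern-match groups, and the full regular expression is evaluated in software only for packets that pass the pre-filter.

```cpp
#include <regex>
#include <string>
#include <vector>

// Hypothetical pre-filter: each "weak selector" is a cheap substring
// test standing in for a hardware pattern-match group; all selectors
// must hit (an AND combination) for the packet to be passed on.
bool weakPreFilter(const std::string& payload,
                   const std::vector<std::string>& selectors) {
    for (const auto& s : selectors)
        if (payload.find(s) == std::string::npos)
            return false;
    return true;
}

// Full (expensive) validation, performed in software only for the
// small fraction of traffic that passes the pre-filter.
bool fullMatch(const std::string& payload, const std::regex& re) {
    return std::regex_search(payload, re);
}
```

A packet would be handed to `fullMatch` only when `weakPreFilter` passes, so the software layer sees far fewer packets than the wire carries.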
However, there is an increasing need to employ a more powerful set of analytics tools against network traffic. This is partly in response to the increased complexity of network threats meaning that malicious traffic must be identified based on behaviour rather than purely by technology or target identifiers. It is also partly due to the fact that developers have produced improved techniques and tools (e.g. to better identify anomalous traffic or suspicious hosts) but currently lack a means to deploy them at high traffic rates.
This class of analysis can be characterised by the requirement to track state, such as traffic events and statistics, across much, if not all, of the traffic. In many cases it is necessary to correlate such state not just in terms of a flow but also in consideration of peer communication, and potentially the network host to which each packet relates. Existing platforms lack native hardware support to provide stateful correlation of traffic and so it is necessary to have large numbers of packets processed by software to achieve this functionality. It is also necessary to have software examine the state that is correlated for each of these packets so that traffic of interest may be identified and processed by the software processing modules. These processing requirements, in addition to that of the actual software module, amount to a high software processing load which is prohibitive at high speeds even with modern multi-core processors.
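To make the per-packet correlation burden concrete, a minimal software model of the state tracking described above might look like the following. This is an illustration of the workload the invention offloads to hardware, not part of the claimed implementation; field names are assumptions.

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative per-entity state record, following the statistics the
// description mentions: packet and byte counts plus an event count.
struct EntityState {
    uint64_t packets = 0;
    uint64_t bytes   = 0;
    uint32_t events  = 0;  // e.g. pattern matches seen for this entity
};

// One table per category (flow, peer, host), keyed by a state key.
using StateTable = std::unordered_map<uint64_t, EntityState>;

// Pure-software correlation must run this for every packet against
// several tables; that per-packet cost is what becomes prohibitive
// at high line rates.
EntityState& update(StateTable& table, uint64_t key,
                    uint32_t packetLen, bool eventSeen) {
    EntityState& s = table[key];  // allocates a new record on first sight
    s.packets += 1;
    s.bytes   += packetLen;
    if (eventSeen) s.events += 1;
    return s;
}
```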
Embodiments of the present invention are intended to address at least some of the problems outlined above.
According to a first aspect of the present invention there is provided a system adapted to process network traffic, the system including:
at least one processing engine configured to receive network data being transferred over a network and generate metadata relating to the data;
at least one rule engine configured to receive and process the metadata to generate an output, and
at least one selection engine configured to receive and process the rule engine output to determine whether the network data is to be processed by a further component and/or whether the network data is to continue to be transferred over the network.
The at least one processing engine, the at least one rule engine and the at least one selection engine will normally be implemented in system hardware or firmware. The further component will normally be implemented by software executing on another (remote) processor.
The at least one selection engine may be configured to combine the outputs of a plurality of the rule engines.
The metadata generated by the at least one processing engine may include data identifying a flow, peer communication, destination host and/or source host associated with the network data, e.g. by extraction from an IP header of the packet. Alternatively, another part of the network data may be used to generate the metadata, e.g. custom headers associated with the network data. The metadata may include data identifying at least one pattern and/or regular expression found in the network data. The metadata may include statistical data regarding the network data. The processing engine may generate metadata indicating at least one category for the network data. The category may relate to a source (port and/or IP address) and/or a destination (port and/or IP address) of the network data. The category may specify whether the network data is associated with a particular flow, peer communication or a host. At least one said rule engine (and/or at least one said selection engine) may be configured to count, or monitor for, events relating to network data in a said category. The events may comprise a pattern match and/or a threshold comparison. The events may comprise events occurring within a flow, peer communication, or data relating to a particular host associated with the network data.
The system may include at least one memory component, which may store state data relating to the network data.
The system may include a delay path for delaying transfer of the network data whilst the network data is processed by the further component. The delay path may be used to retrieve selected network data previously transmitted, e.g. based on more recent events.
The processes executed by the at least one rule engine (and/or the selection and processing engines) may be configured by a developer to control, and/or interact with, functionality implemented in firmware components.
The network data may comprise an IP packet.
The firmware may comprise an FPGA onboard a processing blade.
According to another aspect of the present invention there is provided a method of processing network traffic, the method including:
using at least one processing engine to receive network data being transferred over a network and generate metadata relating to the data;
using at least one rule engine to receive and process the metadata to generate an output, and
using at least one selection engine to receive and process the rule engine output to determine whether the network data is to be processed by a further component and/or whether the network data is to continue to be transferred over the network,
wherein the at least one processing engine, the at least one rule engine and the at least one selection engine are implemented in system hardware or firmware, and the further component is implemented by software executing on another processor.
According to further aspects of the present invention there is provided a computer program configured to execute at least some of the processes described herein.
Whilst the invention has been described above, it extends to any inventive combination of features set out above or in the following description. Although illustrative embodiments of the invention are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to these precise embodiments. As such, many modifications and variations will be apparent to practitioners skilled in the art. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, the invention extends to such specific combinations not already described.
The invention may be performed in various ways, and, by way of example only, embodiments thereof will now be described, reference being made to the accompanying drawings in which:
Figure 1 is a schematic high-level block diagram of an embodiment of the processing system;
Figure 2 is a more detailed schematic block diagram of the system;
Figure 3 is a block diagram of a Rule Engine of the system;
Figure 4 is a block diagram of a hardware implementation of the system, and
Figure 5 shows the flow of data through functional blocks in an example system.
Referring to Figure 1, an example of a system 100 adapted to process network traffic is shown. The system receives network traffic 102, e.g. IP packets transmitted over the internet from a source device 104 to a destination device 106. The system can be inline in a network, or may be deployed from a network "tap" that is configured to send the system a copy of all or some of the network traffic. The hardware includes firmware components that have been configured with criteria 108 created by a developer using an analysis application 110. The criteria define traffic that is of interest for further analysis. The system 100 applies the criteria to the incoming network data 102 and generates metadata 112 relating to the network data, which is transferred to the analysis application 110. The application 110 can process the metadata and/or associated network traffic and, depending on the result of that processing, may allow the network data to continue 114 to its original destination (if the system is inline in the network) and/or carry out another function, e.g. set an alert that suspicious activity has been detected.
It will be appreciated that Figure 1 is illustrative of an example only and variations are possible. For instance, an application separate from the analysis application 110 may be used to generate and edit the criteria 108, and the various components may be located on board physically distinct devices.
Figure 2 details the functional blocks of the system 100. The first step in the logical flow of a packet through the system can be the packet being presented to at least one processing engine 202. It will be appreciated that other types of engines may also be included in alternative embodiments. Each of the processing engines generates metadata relating to the packet, which can include details of what, if any, patterns specified by the criteria 108 have been matched and/or statistics regarding the flow, peer and host with which the packet is associated. The packet and metadata can be buffered in a delay path 204. The metadata produced by the processing engines is presented to at least one rule engine 206, along with state data relating to the flow, peer and host membership of the packet. Each Rule Engine may be configured to perform simple operations, such as a threshold comparison.
The results from the engines are then passed to at least one selection engine 208. The Selection Engine combines the results received from the Rule Engines to determine whether the current packet should be processed by software 110 and/or egressed for collection by follow-on storage/processing systems. One purpose of the delay path 204 is to allow the software application 110 time to process network data transferred to it. The delay path can also allow retrieval of packets that were previously transferred (within the capacity of the buffer) for further processing. Thus, the Selection Engine can also receive packets that are evicted from the delay path and, based on recovered control flags, determine if these packets should be processed in software 110 and/or egressed for collection. Packets are processed in software by the applications defined by the developer. The packets are presented together with the metadata generated for the particular packet and any application-specific state data.
The system can track state records for statistics and traffic correlation, e.g. using engine state memory 207 and/or state memory 212. A new state entry is allocated when a packet arrives relating to a flow, peer or host that is not currently being tracked. State records are retired for reuse after a configurable period of inactivity or, in the case of a flow, when the system can identify that the flow has been terminated. Software is notified when a state record is retired. Software will also be notified if there are no state records available when a new one needs to be allocated for an incoming packet. The system can be configured to handle such an event by either dropping the packet or passing it directly to software (assuming sufficient bandwidth is available). The system may be direction agnostic when resolving the state record for a given flow, peer or host. This allows traffic events to and from a network entity to be correlated and acted upon.
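By way of a purely non-limiting illustration, the allocation and retirement behaviour described above might be sketched in C++ as follows. The type and member names (StateTable, touch, retire_idle) are invented for this sketch; the real system implements the mechanism in hardware/firmware, and would additionally notify software on each retirement.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

// Illustrative state-record tracking: a record is allocated on first sight
// of an entity key (flow, peer or host) and retired after inactivity.
struct StateRecord {
    uint64_t last_seen = 0;   // timestamp of most recent packet
    uint64_t packets   = 0;   // packets observed for this entity
};

class StateTable {
public:
    explicit StateTable(uint64_t idle_timeout) : idle_timeout_(idle_timeout) {}

    // Called for every packet: allocates a record on first appearance,
    // otherwise updates the existing one. Returns true if newly allocated.
    bool touch(const std::string& key, uint64_t now) {
        auto [it, inserted] = records_.try_emplace(key);
        it->second.last_seen = now;
        it->second.packets++;
        return inserted;
    }

    // Retire records idle for longer than the configured timeout, freeing
    // their slots for reuse; returns how many were retired.
    std::size_t retire_idle(uint64_t now) {
        std::size_t retired = 0;
        for (auto it = records_.begin(); it != records_.end();) {
            if (now - it->second.last_seen > idle_timeout_) {
                it = records_.erase(it);
                ++retired;
            } else {
                ++it;
            }
        }
        return retired;
    }

    std::size_t size() const { return records_.size(); }

private:
    uint64_t idle_timeout_;
    std::unordered_map<std::string, StateRecord> records_;
};
```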
In more detail, each of the processing engines 202 generates metadata regarding the packet being processed. The metadata can include an indication of a flow, peer and hosts associated with the packet. For stateful correlation of traffic statistics and traffic events, a number of membership categories can be computed for each packet. In the example four such categories are generated for each packet as follows:
• As a packet in a flow as defined by the 5-tuple: source IP, destination IP, source port, destination port and protocol.
• As a packet between two peers as defined by the 2-tuple: source IP, destination IP
• As a packet coming from a particular host as defined by the source IP address
• As a packet going to a particular host as defined by the destination IP address
The above categories are generated based on (potentially masked) fields from the packet header. It is possible to configure these four categories with alternative field and masking criteria so that network packets may be correlated on other categories, such as class B or class C domain.
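As an illustration only, the four membership categories might be computed from (possibly masked) header fields as sketched below in C++. The struct and function names are assumptions for the sketch, not part of the disclosure; ordering the peer 2-tuple is one way of making the key direction agnostic, and a coarser mask (e.g. 0xFFFF0000) yields class-B correlation.

```cpp
#include <cstdint>
#include <tuple>

// Assumed header-field layout for the sketch.
struct Header {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

using FlowKey = std::tuple<uint32_t, uint32_t, uint16_t, uint16_t, uint8_t>;
using PeerKey = std::tuple<uint32_t, uint32_t>;

// Flow: the full 5-tuple (source IP, destination IP, ports, protocol).
FlowKey flow_key(const Header& h, uint32_t ip_mask = 0xFFFFFFFFu) {
    return {h.src_ip & ip_mask, h.dst_ip & ip_mask,
            h.src_port, h.dst_port, h.protocol};
}

// Peer: the 2-tuple of source and destination IP. Ordering the pair makes
// the key direction agnostic, so A->B and B->A resolve to one record.
PeerKey peer_key(const Header& h, uint32_t ip_mask = 0xFFFFFFFFu) {
    uint32_t a = h.src_ip & ip_mask, b = h.dst_ip & ip_mask;
    return (a < b) ? PeerKey{a, b} : PeerKey{b, a};
}

// Host categories: source and destination IP respectively; a coarser mask
// correlates at, e.g., class-B granularity instead of per-host.
uint32_t src_host_key(const Header& h, uint32_t m = 0xFFFFFFFFu) { return h.src_ip & m; }
uint32_t dst_host_key(const Header& h, uint32_t m = 0xFFFFFFFFu) { return h.dst_ip & m; }
```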
The processing engines can also perform pattern matching, regular expression and packet header inspection functions. Each pattern match can be mapped to one or more pattern groups. The group information is presented to the rule engines, allowing multiple patterns to be processed by each engine. The processing engine is also presented with fields from the packet header, allowing matches against particular traffic ports, IP addresses and other relevant header fields. Regular expressions are supported by means of the combination of pattern match groups and the Boolean selection capability of the Rule Engines. A regular expression can often be translated into several selectors which are typically too weak to be the sole basis of packet selection. A stronger selection criterion can be realised by using the Rule Engines to combine these weak selectors together. The full regular expression can then be validated by software 110. The processing engines can also update state and generate metadata fields for each of the packet categories (flow, peer and host). Examples of metadata statistics include the timestamp of the packet, byte and packet counts, packet and data rates, and duration. Specific examples are given in the table below:
Category: Packet
    Timestamp - Timestamp applied to the packet at ingress to the board

Category: Entity (Flow, peer or host)
    Byte count (in) - Number of bytes seen to the entity
    Byte count (out) - Number of bytes seen from the entity
    Packet count (in) - Number of packets seen to the entity
    Packet count (out) - Number of packets seen from the entity
    Byte count (in), over the past minute - To the entity
    Byte count (out), over the past minute - From the entity
    Packet count (in), over the past minute - To the entity
    Packet count (out), over the past minute - From the entity
    Duration - The time that the entity has been tracked for (not applicable to flows)
    Time since last packet (in) - To the entity
    Time since last packet (out) - From the entity
    Flow count - Number of flows observed for this entity

Statistics can be maintained with consideration to directionality. For example, when tracking a particular flow, packets from A to B will update the same state data as packets from B to A, but with the former updating the In counters and the latter updating the Out counters (or vice versa). The statistics fields can be generated in a compressed format to ensure that the resultant system bandwidth is achievable. The system can permit easy transformation to a standard number representation.
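A purely illustrative C++ sketch of the direction-aware update just described follows; the field and function names are invented for the sketch, and the real system performs this in firmware rather than software.

```cpp
#include <cstdint>

// Per-entity statistics record: "in" counters track packets towards the
// entity, "out" counters track packets from it.
struct EntityStats {
    uint64_t bytes_in = 0, bytes_out = 0;
    uint64_t pkts_in  = 0, pkts_out  = 0;
    uint64_t last_seen_in = 0, last_seen_out = 0;
};

enum class Direction { ToEntity, FromEntity };

// One update per packet: both directions of the communication resolve to
// the same record, but update different counters.
void update_stats(EntityStats& s, Direction d, uint32_t len, uint64_t now) {
    if (d == Direction::ToEntity) {
        s.bytes_in += len; s.pkts_in++; s.last_seen_in = now;
    } else {
        s.bytes_out += len; s.pkts_out++; s.last_seen_out = now;
    }
}
```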
The Rule Engines 206 are configured by the framework in response to API calls invoked by the developer. Figure 3 is an overview of an example Rule Engine configured to take input from the packet header fields or the statistics metadata. It will be appreciated that the components shown are merely one example of how the Rule Engine functionality described herein can be implemented.
The Rule Engine can perform greater-than, less-than and equivalence integer operations against a threshold, with a specified bitwise mask. The Rule Engine can also receive input from pattern match group information and perform an equivalence check of this data with a specified bitwise mask. Each Rule Engine can be associated with a dedicated per-flow, peer or host state memory which can be loaded with a configured value and updated by the engine. This state can be used to count flow, peer or host events (such as pattern matches, packets or bytes). Additionally, each Rule Engine can have a per-flow, peer or host flag which may be controlled by the software 110.
The result of the numerical comparison, the pattern match check, the state memory and the software-controlled flag can be fed into a lookup table that is configured at a per-engine level to specify how the state memory should be updated and what Boolean result should be expressed for the Rule Engine. The flexibility of this approach allows a variety of functionalities to be realised. Examples include:
• To generate a positive result whenever a pattern is matched;
• To generate a positive result whenever a pattern is matched, and for the subsequent n packets in a flow, peer or host;
• To output a positive result until a pattern is matched. A negative result is then output for all subsequent packets in the flow, peer or host.
• To output a positive result after n occurrences of a particular pattern have matched for a flow, peer or host.
Rather than considering a pattern match, the examples above could be based on the evaluation of a metadata field, such as the packet count compared to a predefined threshold. The lookup table can also be presented with a bit field that is under software control. This allows software to influence each Rule Engine for a given flow, peer or host; for example, to reset the Rule Engine counter, or to disable a particular Rule Engine.
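One of the behaviours listed above (fire after n occurrences of a pattern for an entity) might be sketched in C++ as follows, purely for illustration. The per-entity counter stands in for the engine's state memory and the software-controlled flag disables the engine for that entity; all names are invented for the sketch.

```cpp
#include <cstdint>

// Per-engine configuration: which pattern groups to watch, and how many
// occurrences are needed before the result goes positive.
struct RuleEngineConfig {
    uint32_t pattern_group_mask;
    uint32_t fire_after;
};

// Per-flow/peer/host state for this engine.
struct EntityRuleState {
    uint32_t count = 0;        // stands in for the engine's state memory
    bool sw_enabled = true;    // software-controlled flag
};

// Evaluates the engine for one packet, updating the entity's state, and
// returns the engine's Boolean result.
bool evaluate_rule(const RuleEngineConfig& cfg, EntityRuleState& st,
                   uint32_t matched_groups) {
    if (!st.sw_enabled) return false;
    if (matched_groups & cfg.pattern_group_mask) st.count++;
    return st.count >= cfg.fire_after;
}
```

In the real system the mapping from the comparison results and state to the output and state update is held in a configurable lookup table, so the same hardware can realise each of the listed behaviours.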
The Selection Engine(s) 208 combine the results of multiple Rule Engines to determine if a particular packet and associated metadata should be forwarded to the software 110 and/or egressed to the Ethernet port for collection by downstream systems. The control of how the results from different Rule Engines are combined is configured by the software 110. This further extends the functionality that is provided by the hardware; for example, selection of packets occurring on a specific TCP port that also contain a pattern match, or selection of flows which exceed a specified data rate and do not contain a specified set of patterns. Additionally, the Selection Engine can be presented with packets that emerge from the delay path 204. The selection flags for these packets are recovered from state memory 212 and are examined to determine what, if any, forward processing should be performed for the recovered packet.
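The combination step can be sketched as simple Boolean logic over a bit vector of rule-engine results; the example criterion below mirrors the TCP-port-plus-pattern case mentioned above. The rule bit positions are assumptions for the sketch.

```cpp
#include <cstdint>

// Assumed bit positions for three rule engines' results.
constexpr uint32_t RULE_TCP_PORT = 1u << 0;  // fired on a specific TCP port
constexpr uint32_t RULE_PATTERN  = 1u << 1;  // fired on a pattern match
constexpr uint32_t RULE_EXCLUDED = 1u << 2;  // fired on an excluded pattern set

// Select packets matching the port rule AND the pattern rule, but NOT the
// excluded-pattern rule.
bool select_packet(uint32_t fired_rules) {
    bool port_and_pattern =
        (fired_rules & RULE_TCP_PORT) && (fired_rules & RULE_PATTERN);
    bool excluded = (fired_rules & RULE_EXCLUDED) != 0;
    return port_and_pattern && !excluded;
}
```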
The software 110 is used to configure the system hardware Engines by means of an API in the example system. The API can take several forms, but in the embodiment described herein it is based on the C++ programming language, which advantageously allows standard software engineering practice to be used. The API allows code produced by the developer to work with the system hardware. The compiled code runs on the software blade and makes low-level calls to the hardware in order to configure the Rule Engines and the other components. Thus, system components including the processing engine(s) 202, the rule engine(s) 206 and the selection engine(s) 208 are implemented by means of configurable hardware/firmware onboard the system 100, whilst the software application 110 is executed by an off-board processor. The system framework provides hardware acceleration of key functions and the lightweight API can be employed to offload traffic processing criteria to hardware. This allows a higher level of performance to be achieved for a given amount of software processing resource. It is anticipated that several software modules will be running concurrently on the platform during operation. Each module can perform a number of activities, for example:
• Initialization
o Registration of static packet processing/collect criteria via the API
o Allocation and initialisation of application state
o Other application specific initialization tasks
• Packet processing, including the following sub-tasks:
o General application-specific logic
o Access to packet metadata (including statistics, other hardware acceleration results, and flow, peer and host state) - via the API
o Suppression of passing of unwanted data (for a flow, peer or host) to software - via the API
o Dynamic registration/deregistration of packet processing/collect criteria - via the API
A developer can use relevant standard C++ techniques and practices for the particular task in hand. Typically, a processing flow examines elements of the metadata, the state record and the packet content in order to determine what further processing tasks should be performed on the traffic. The developer can make API calls to offload elements of this examination step to the hardware so that overall system performance is improved.
The skilled person will be capable of designing and implementing a suitable API including necessary initialisation and processing methods that will be called by the processing framework.
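Purely as an illustration of the module structure a developer might write against such an API, a minimal sketch follows. The class and method names (TrafficModule, initialise, process_packet) and the criterion string are invented here; the disclosure does not define the API surface.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Assumed shape of the metadata presented to a module with each packet.
struct PacketMetadata {
    uint64_t byte_count_in = 0;
    bool pattern_matched = false;
};

class TrafficModule {
public:
    // Initialisation: register static processing/collect criteria.
    void initialise(std::vector<std::string>& registry) {
        registry.push_back("pattern-group:example-signatures");  // illustrative
    }

    // Per-packet processing: returns false to suppress further packets for
    // this flow/peer/host, mirroring the API's suppression mechanism.
    bool process_packet(const PacketMetadata& md) {
        if (!md.pattern_matched && md.byte_count_in < 1024) {
            return false;  // uninteresting entity: offload suppression
        }
        ++packets_processed_;
        return true;
    }

    uint64_t packets_processed() const { return packets_processed_; }

private:
    uint64_t packets_processed_ = 0;
};
```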
Software processing of packets can be performed on multiple processing blades, with each blade providing multiple processing cores. When several packets for a particular flow, peer or host are being processed in parallel, there is a race condition with regard to the integrity of the state data associated with the flow, peer and host of which the packets are a member. These issues can be addressed in a number of ways, with varying complexity and yielding a range of efficiency/processor utilisation trade-offs. In one embodiment a basic load balancing scheme is proposed. The system hardware will support load balancing at a per-IP-address or per-state level, and duplication of packet payload and metadata (when state-based load balancing results in a packet being routed to multiple processors).
An implication of this approach is that processing of a particular flow, peer or host will be tied to a specific processor or processor blade. The suitability of this scheme will therefore vary significantly depending on the traffic characteristics of a particular link and the applications that are deployed. The skilled person will appreciate that it may be possible to implement more efficient load balancing (or packet distribution) schemes, but this can require additional hardware resource and more sophisticated software management.
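The state-based scheme can be sketched as hashing an entity's context ID to a core index, so that all packets of a given flow, peer or host reach the same processor and per-entity state races are avoided. This is an illustrative sketch only; the function name is invented.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Tie processing of one entity (identified by its context ID) to one core:
// the same ID always hashes to the same core index.
std::size_t assign_core(uint64_t context_id, std::size_t num_cores) {
    return std::hash<uint64_t>{}(context_id) % num_cores;
}
```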
Figure 4 illustrates an example implementation of the system 100 based on an IBM BladeCenter H platform. A 10 Gbps duplex capability can be achieved in a two-blade solution, with each blade consisting of identical hardware but running different firmware. The BladeCenter offers high-capacity power and cooling, 44 Gbit/s of network connectivity per slot, as well as a wide range of high-performance processing blades for the software components to run on.
In the example blade design and functional allocation of Figure 4, blocks labelled 402 comprise DIMM; 404 comprise CAM; 406 comprise SRAM and 408 comprise FPGA. Components in outline 410 comprise the ingress/protocol finder; 412 comprise the delay buffer; 414 comprise the IPQ lookup; 416 comprise the statistics block; 418 comprise the Rule Engines and 420 comprise the pattern search/Rule Engines. The functions performed by these items will be described below in more detail (with reference to the data flow of Figure 5).
The flow, peer and host contexts will need to be identified for each packet. It is necessary to consider two host contexts for each packet: one relating to the source IP address and one relating to the destination IP address. Given the high packet rates that must be catered for, it may be unfeasible to regard the two host memberships as part of the same category (because this would require two accesses from the same category memory). However, in a live network it is generally reasonable to assume that the routing algorithms are efficient, therefore a source IP address in one direction (A->B) should only be seen as a destination IP address in the reverse direction (B->A). This allows the hosts category to be broken into two, hosts at link end A and hosts at link end B. By making this assumption there are now four categories: flows, peers, hosts(A) and hosts(B), each of which can be handled separately by the hardware. The mapping of application "Host" based rules to these two categories will be handled by the software API.
As part of the hardware/firmware design, each entity (a flow, peer or host) can be handled equally. This allows significant reuse of both hardware and firmware elements and to help avoid confusion in relation to this a set of terms is defined here:
• Category - a type of characteristic, e.g. flows, hosts and peers are each a different category • Context - For each individual entity in a particular category, there exists a context, within which all processing relating to that entity is performed. Each context is assigned a unique ID within its category when it is first encountered
• State - For each context a set of information describing that context's current state is stored in memory
Figure 5 shows the flow of data through the functional blocks on an example board implementation of the system, with main resource allocation assignments against each. The Protocol Finder 602 can be implemented in an FPGA and identifies the standard 5-tuple packet fields discussed above, as well as populating metadata with packet length. The Finder splits the packet up, with the metadata, including the extracted 5-tuple, going into a Statistics block 606, while the full packet is sent into a delay buffer 612 and a pattern scanner 608. This separation alleviates a number of the dataflow bottlenecks and simplifies some of the logic.
In order to maintain statistics across the different categories of traffic, each packet can be associated with 4 states. Each state can be either a pre-existing state where the context has already been "seen" or a new state where this is the first appearance of the context. For instance, a TCP/IP packet will be associated with a flow state, 2 host states and a peer state.
On arrival at the statistics block 606 the packet will have all 4 of its state IDs, along with metadata, attached. Each of the 4 categories of state will be handled separately, as they are mutually exclusive. Two activities are performed in the statistics block: state maintenance and statistics production. The state maintenance involves a READ-MODIFY-WRITE cycle on the state memory, while the statistics production requires transforms to be performed to ready the values for inspection. State held for tracking and generating statistics is based on fixed time slicing using simple addition and comparison. This allows the state maintenance to be achieved in a few clock cycles. A number of different options were considered for achieving the pattern scan functionality performed by block 608. One example mechanism uses a pair of Ternary CAMs in conjunction with SRAMs to perform a lookup.
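The fixed-time-slice technique for "over the past minute" counters can be illustrated as below: when a packet arrives in a new slice, the current slice's count rolls into a previous-slice field and resets, so only an addition and a comparison are needed per packet. This is a software sketch of the idea with invented names, not the firmware implementation.

```cpp
#include <cstdint>

// Two-slice counter approximating a rolling "past minute" total.
struct SlicedCounter {
    uint64_t slice_start = 0;   // start time of the current slice
    uint64_t current = 0;       // bytes seen in the current slice
    uint64_t previous = 0;      // bytes seen in the completed previous slice
};

void add_bytes(SlicedCounter& c, uint64_t now, uint64_t len, uint64_t slice_len) {
    if (now - c.slice_start >= slice_len) {          // comparison: slice expired?
        // If more than one whole slice has elapsed, the previous slice saw nothing.
        c.previous = (now - c.slice_start >= 2 * slice_len) ? 0 : c.current;
        c.current = 0;
        c.slice_start = now - (now - c.slice_start) % slice_len;
    }
    c.current += len;                                // simple addition
}

// Approximate recent total combining the two slices.
uint64_t recent_bytes(const SlicedCounter& c) { return c.previous + c.current; }
```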
Once the statistics have been calculated and the packet has been searched for any target patterns, the results along with the packet metadata are presented to the rule engines 610. In the example there are 4 banks of rule engines, one for each of the categories identified above. Each bank of engines has context state held in an associated memory, and may use a caching scheme to guarantee coherency.
The delay buffer 612 serves three main purposes. The first is as an intermediate store of the packets as they arrive to allow the statistics, pattern scanning and finally rule engines to decide whether a packet should be passed to software. The second purpose is to reduce system bandwidths by not requiring packet data to be moved around the processing elements any more than necessary. As such only the metadata, statistics and decisions need to be sent back from the rule engines. Thirdly, the delay buffer allows a certain amount of time for the software applications to decide whether or not a flow is wanted.
The delay buffer can be implemented using multiple memories, allowing independent delay paths to be created. These can be connected up in any suitable manner, e.g. paired, with the first pair providing a relatively short delay to allow for hardware and firmware processing latencies. The second pair can be used to provide a longer delay to enable longer term software latencies to be accommodated.
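The delay-path behaviour can be pictured as a fixed-depth buffer that holds packets while the engines and software decide their fate, returning the evicted packet so its recovered flags can be examined. A minimal software sketch follows; the class is illustrative only, since the real delay buffer is memory-backed firmware.

```cpp
#include <cstddef>
#include <deque>
#include <string>
#include <utility>

// Fixed-depth delay buffer: pushing into a full buffer evicts the oldest
// packet, which the Selection Engine would then examine against its
// recovered control flags.
class DelayBuffer {
public:
    explicit DelayBuffer(std::size_t depth) : depth_(depth) {}

    // Returns {evicted packet, true} if the push caused an eviction.
    std::pair<std::string, bool> push(std::string pkt) {
        buf_.push_back(std::move(pkt));
        if (buf_.size() > depth_) {
            std::string evicted = std::move(buf_.front());
            buf_.pop_front();
            return {evicted, true};
        }
        return {std::string{}, false};
    }

    std::size_t size() const { return buf_.size(); }

private:
    std::size_t depth_;
    std::deque<std::string> buf_;
};
```

Chaining two such buffers, one shallow and one deep, mirrors the paired arrangement described above for hardware latencies versus longer software latencies.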
The IPQ lookup block 614 is made available at the end of the delay buffer 612 to allow particular IP tuples to be extracted, either as part of simple rules, or in response to some software-based decision.
The egress control block 616 provides buffering and load balancing across the software processing blades. The data that is egressed can consist of the following:
• Packet payload
• Computed statistics
• Pattern matches (including positions)
• Result vector indicating which rule engines 'fired'
A number of example workflows that illustrate use of the system are given below:

Capability: Traffic selection based on flow, peer and host statistics
Description: The developer can request a flow, peer or host when one of the computed statistics for it meets a specified criterion. For example, host traffic could be collected when the number of flows connected to the host exceeds a statically defined threshold. A further example is the collection of a flow when the data rate drops below a certain value.
Support provided by hardware: API calls are mapped to hardware rule engines. A rule engine can be configured to compare fields from metadata against per-flow, peer and host state thresholds. The output of a rule engine specifies that the packets for the particular network entity (flow, peer or host) are to be processed in software and/or egressed for collection by a downstream system.

Capability: Stateful correlation of events
Description: Persistent state data tracks when certain traffic events occur (i.e. pattern matches or a statistic field exceeding a specified threshold). This state record can be updated over multiple packets for a particular flow, peer or host. Both directions of a flow, peer or host communication are resolved to the same state data.
Support provided by hardware: The hardware maintains state for each of the flows, peers and hosts being tracked. Software can update certain aspects of this state to control software processing of particular network entities. Boolean logic allows multiple statistics fields and traffic events to be combined.

Capability: Negative selection
Description: For simple criteria the hardware can deselect traffic without software intervention. For more complex criteria (which require software to determine if a flow, peer or host should be deselected) the API provides a mechanism to instruct the hardware to suppress passing of further packets to software.
Support provided by hardware: Combinatorial logic allows the results from several rule engines to be combined in a manner that effectively disables all rules for a particular network entity and application.

Capability: Sampling of traffic from a network entity
Description: The API provides simple calls to enable sampling of packets from a flow, peer or host. Sampling can be controlled for a given number of packets from a network event, for example the first n packets of a flow, or the next m bytes of traffic to a host after a specific pattern is identified in the traffic.
Support provided by hardware: The hardware tracks statistics for each flow, peer and host. A rule engine can be configured to fire based on the value of the byte or packet count in these statistic records.
The embodiments of the system described above can reduce the amount of software processing that must be performed for each packet. Common and expensive analysis tasks are offloaded into dedicated hardware, thereby reducing the number of packets that must be processed by software to achieve the required functionality. The developer can employ hardware rules to identify flows, peers and hosts of interest based on the related state. Software processing of packets for an uninteresting flow, peer or host can be suppressed indefinitely or until a notable event occurs. Further, packet content and traffic statistics can be relayed to software periodically (i.e. sampling). The statistics can be computed from all packets, but the software need only read these results periodically. Efficiently distributing packets amongst the available software processing resources means that they receive only the data they actually require. The embodiments can deploy existing analysis techniques on live network traffic at high speed. Additionally, they provide an environment for rapid development of new applications and analyses. The API allows the development of traffic processing software modules in C++, for example, using standard software engineering practices and tools. The software modules can be deployed cost-effectively and efficiently using COTS processors by means of hardware acceleration of common and expensive analysis tasks (including support for stateful flow-based processing). The sophisticated selection of packets can reduce the volume of traffic that must be processed by software to achieve a particular design goal, and efficiently distributing packets to multiple software processing cores across multiple processing blades (or cards) also improves efficiency.

Claims

1. A system (100) adapted to process network traffic, the system including: at least one processing engine (202) configured to receive network data (104) being transferred over a network and to generate metadata relating to the network data; at least one rule engine (206) configured to receive and process the metadata to generate an output, and
at least one selection engine (208) configured to receive and process the rule engine output to determine whether the network data is to be processed by a further component (110) and/or whether the network data is to continue to be transferred over the network,
wherein the at least one processing engine, the at least one rule engine and the at least one selection engine are implemented in system hardware or firmware, and the further component is implemented by software executing on another processor.
2. A system according to claim 1, wherein the at least one selection engine (208) is configured to combine the outputs of a plurality of the rule engines (206).
3. A system according to claim 1 or 2, wherein the metadata generated by the at least one processing engine (202) includes data identifying a flow, peer communication, destination host and/or source host associated with the network data (104).
4. A system according to claim 3, wherein the metadata includes data identifying at least one pattern and/or regular expression found in the network data (104).
5. A system according to claim 3 or 4, wherein the metadata includes statistical data relating to the network data (104).
6. A system according to claim 1 , wherein the processing engine (202) generates metadata indicating at least one category for the network data (104).
7. A system according to claim 6, wherein the category relates to a source (port and/or IP address) and/or a destination (port and/or IP address) of the network data (104).
8. A system according to claim 7, wherein at least one said rule engine (206) (and/or at least one said selection engine (208)) is configured to count, or monitor for, events relating to the network data in a said category.
9. A system according to claim 8, wherein the events comprise a pattern match and/or a threshold comparison.
10. A system according to claim 9, further including at least one memory component (212) configured to store state data relating to the categorised network data (104).
11. A system according to claim 1, further including a delay path (204) for delaying transfer of the network data (104) whilst the network data is processed by the further component (110).
12. A system according to claim 11, wherein the delay path (204) is used to retrieve selected network data previously transmitted.
13. A system according to any one of the preceding claims, wherein the network data (104) comprises an IP packet.
14. A system according to any one of the preceding claims, wherein the system hardware or firmware comprises an FPGA onboard a processing blade.
15. A method of processing network traffic, the method including:
using at least one processing engine (202) to receive network data (104) being transferred over a network and generate metadata relating to the data; using at least one rule engine (206) to receive and process the metadata to generate an output, and
using at least one selection engine (208) to receive and process the rule engine output to determine whether the network data is to be processed by a further component (110) and/or whether the network data is to continue to be transferred over the network,
wherein the at least one processing engine, the at least one rule engine and the at least one selection engine are implemented in system hardware or firmware, and the further component is implemented by software executing on another processor.
PCT/GB2010/051979 2009-11-30 2010-11-29 Processing network traffic WO2011064597A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP10785505A EP2507966A1 (en) 2009-11-30 2010-11-29 Processing network traffic
US13/512,491 US8923159B2 (en) 2009-11-30 2010-11-29 Processing network traffic
AU2010322819A AU2010322819B2 (en) 2009-11-30 2010-11-29 Processing network traffic

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB0920857A GB0920857D0 (en) 2009-11-30 2009-11-30 Processing network traffic
EP09275115.5 2009-11-30
GB0920857.0 2009-11-30
EP09275115A EP2328315A1 (en) 2009-11-30 2009-11-30 Processing network traffic

Publications (1)

Publication Number Publication Date
WO2011064597A1 true WO2011064597A1 (en) 2011-06-03

Family

ID=43446810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2010/051979 WO2011064597A1 (en) 2009-11-30 2010-11-29 Processing network traffic

Country Status (4)

Country Link
US (1) US8923159B2 (en)
EP (1) EP2507966A1 (en)
AU (1) AU2010322819B2 (en)
WO (1) WO2011064597A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9455598B1 (en) * 2011-06-20 2016-09-27 Broadcom Corporation Programmable micro-core processors for packet parsing
US9244798B1 (en) 2011-06-20 2016-01-26 Broadcom Corporation Programmable micro-core processors for packet parsing with packet ordering
US9736041B2 (en) * 2013-08-13 2017-08-15 Nec Corporation Transparent software-defined network management
US10936713B2 (en) * 2015-12-17 2021-03-02 The Charles Stark Draper Laboratory, Inc. Techniques for metadata processing
US10235176B2 (en) 2015-12-17 2019-03-19 The Charles Stark Draper Laboratory, Inc. Techniques for metadata processing
US10382358B1 (en) 2016-06-13 2019-08-13 Amazon Technologies. Inc. Multi-tiered data processing service
TW201935306A (en) 2018-02-02 2019-09-01 美商多佛微系統公司 Systems and methods for policy linking and/or loading for secure initialization
TWI794405B (en) 2018-02-02 2023-03-01 美商查爾斯塔克德拉普實驗室公司 Systems and methods for policy execution processing
EP3788488A1 (en) 2018-04-30 2021-03-10 Dover Microsystems, Inc. Systems and methods for checking safety properties
TW202022678A (en) 2018-11-06 2020-06-16 美商多佛微系統公司 Systems and methods for stalling host processor
US11841956B2 (en) 2018-12-18 2023-12-12 Dover Microsystems, Inc. Systems and methods for data lifecycle protection
US11463340B2 (en) * 2020-12-31 2022-10-04 Forescout Technologies, Inc. Configurable network traffic parser
US11777832B2 (en) * 2021-12-21 2023-10-03 Forescout Technologies, Inc. Iterative development of protocol parsers

Citations (1)

Publication number Priority date Publication date Assignee Title
US20080201772A1 (en) * 2007-02-15 2008-08-21 Maxim Mondaeev Method and Apparatus for Deep Packet Inspection for Network Intrusion Detection

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US20030074458A1 (en) 2001-09-18 2003-04-17 Gokhale Maya B. Hybrid hardware/software packet filter
US6788687B2 (en) * 2001-10-30 2004-09-07 Qualcomm Incorporated Method and apparatus for scheduling packet data transmissions in a wireless communication system
GB2407464B (en) 2002-09-06 2005-12-14 O2Micro Inc VPN and firewall integrated system
US7525958B2 (en) 2004-04-08 2009-04-28 Intel Corporation Apparatus and method for two-stage packet classification using most specific filter matching and transport level sharing
US8027267B2 (en) * 2007-11-06 2011-09-27 Avaya Inc Network condition capture and reproduction
US20090323529A1 (en) * 2008-06-27 2009-12-31 Ericsson Inc. Apparatus with network traffic scheduler and method
US20110137733A1 (en) * 2009-12-08 2011-06-09 Mpire Corporation Methods for capturing and reporting metrics regarding ad placement

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US20080201772A1 (en) * 2007-02-15 2008-08-21 Maxim Mondaeev Method and Apparatus for Deep Packet Inspection for Network Intrusion Detection

Also Published As

Publication number Publication date
US8923159B2 (en) 2014-12-30
US20120236756A1 (en) 2012-09-20
AU2010322819B2 (en) 2014-11-27
EP2507966A1 (en) 2012-10-10
AU2010322819A1 (en) 2012-06-21

Similar Documents

Publication Publication Date Title
AU2010322819B2 (en) Processing network traffic
US10516612B2 (en) System and method for identification of large-data flows
US9832122B2 (en) System and method for identification of large-data flows
US8005012B1 (en) Traffic analysis of data flows
US11343187B2 (en) Quantitative exact match distance in network flows
US10616101B1 (en) Forwarding element with flow learning circuit in its data plane
US20130250948A1 (en) Lookup cluster complex
US9356844B2 (en) Efficient application recognition in network traffic
Ha et al. Suspicious flow forwarding for multiple intrusion detection systems on software-defined networks
US8555374B2 (en) High performance packet processing using a general purpose processor
KR20130085919A (en) System and method for integrating line-rate application recognition in a switch asic
Tanyingyong et al. Using hardware classification to improve pc-based openflow switching
KR101679573B1 (en) Method and apparatus for service traffic security using dimm channel distribution multicore processing system
CN113518130B (en) Packet burst load balancing method and system based on multi-core processor
CN105827629A (en) Software definition safety guiding device under cloud computing environment and implementation method thereof
US20230362131A1 (en) Systems and methods for monitoring and securing networks using a shared buffer
WO2013139678A1 (en) A method and a system for network traffic monitoring
Chen et al. A streaming-based network monitoring and threat detection system
Yamaki et al. Data prediction for response flows in packet processing cache
EP2328315A1 (en) Processing network traffic
CN110046286A (en) Method and apparatus for search engine caching
Al-Dalky et al. Accelerating snort NIDS using NetFPGA-based Bloom filter
Krishnan et al. Cloudsdn: Enabling sdn framework for security and threat analytics in cloud networks
Chang et al. Hash-based OpenFlow packet classification on heterogeneous system architecture
Tharaka et al. Runtime rule-reconfigurable high throughput NIPS on FPGA

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10785505

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010322819

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 13512491

Country of ref document: US

Ref document number: 2010785505

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2010322819

Country of ref document: AU

Date of ref document: 20101129

Kind code of ref document: A