WO2015164370A1 - A method and system for deep packet inspection in software defined networks - Google Patents

A method and system for deep packet inspection in software defined networks

Info

Publication number
WO2015164370A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
instruction
flow
bytes
tcp
Prior art date
Application number
PCT/US2015/026869
Other languages
French (fr)
Inventor
Yossi Barsheshet
Simhon DOCTORI
Ronen Solomon
Original Assignee
Orckit-Corrigent Ltd.
M&B IP Analysts, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=54333087&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=WO2015164370(A1) "Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Orckit-Corrigent Ltd., M&B IP Analysts, LLC filed Critical Orckit-Corrigent Ltd.
Priority to EP24162914.6A priority Critical patent/EP4362403A2/en
Priority to EP15783292.4A priority patent/EP3135005B1/en
Priority to EP19205064.9A priority patent/EP3618358B1/en
Priority to US15/126,288 priority patent/US10652111B2/en
Publication of WO2015164370A1 publication Critical patent/WO2015164370A1/en
Priority to US16/865,361 priority patent/US20200259726A1/en
Priority to US17/734,147 priority patent/US20220263735A1/en
Priority to US17/734,148 priority patent/US20220263736A1/en
Priority to US18/119,881 priority patent/US20230216756A1/en
Priority to US18/119,883 priority patent/US20230216757A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/64Hybrid switching systems
    • H04L12/6418Hybrid transport
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/02Capturing of monitoring data
    • H04L43/028Capturing of monitoring data by filtering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/02Capturing of monitoring data
    • H04L43/026Capturing of monitoring data using flow identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2483Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/70Virtual switches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/161Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/645Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/645Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
    • H04L45/655Interaction between route computation entities and forwarding entities, e.g. for route determination or for flow table update

Definitions

  • This disclosure generally relates to techniques for deep packet inspection (DPI), and particularly for DPI of traffic in cloud-based networks utilizing software defined networks.
  • Deep packet inspection (DPI) technology is a form of network packet scanning that allows specific data patterns to be extracted from a data communication channel. Extracted data patterns can then be used by various applications, such as security and data analytics applications. DPI is currently performed across various networks, such as internal networks, Internet service providers (ISPs), and public networks provided to customers. Typically, the DPI is performed by dedicated engines installed in such networks.
  • Software defined networking (SDN) is a relatively new type of networking architecture that provides centralized management of network nodes rather than the distributed architecture utilized by conventional networks.
  • SDN is promoted by the Open Networking Foundation (ONF).
  • The leading communication standard that currently defines communication between the central controller (e.g., an SDN controller) and the network nodes (e.g., vSwitches) is the OpenFlow™ standard.
  • the decoupling may also allow the data plane and the control plane to operate on different hardware, in different runtime environments, and/or operate using different models.
  • The network intelligence is logically centralized in the central controller, which uses the OpenFlow protocol to configure the network nodes and to control application data traffic flows.
  • The OpenFlow protocol allows the addition of programmability to network nodes for the purpose of packet-processing operations under the control of the central controller.
  • OpenFlow does not support any mechanism to allow DPI of packets through the various networking layers as defined by the OSI model.
  • the current OpenFlow specification defines a mechanism to parse and extract only packet headers, in layer-2 through layer-4, from packets flowing via the network nodes.
  • The OpenFlow specification does not define or suggest any mechanism to extract non-generic, uncommon, and/or arbitrary data patterns contained in layer-4 to layer-7 fields.
  • The OpenFlow specification does not define or suggest any mechanism to inspect or to extract content from packets belonging to a specific flow or session. This is a major limitation, as such inspection is needed, for example, for the detection of security threats.
  • Certain embodiments disclosed herein include a method for deep packet inspection (DPI) in a software defined network (SDN), wherein the method is performed by a central controller of the SDN.
  • the method comprises: configuring a plurality of network nodes operable in the SDN with at least one probe instruction; receiving from a network node a first packet of a flow, wherein the first packet matches the at least one probe instruction, wherein the first packet includes a first sequence number; receiving from a network node a second packet of the flow, wherein the second packet matches the at least one probe instruction, wherein the second packet includes a second sequence number, wherein the second packet is a response of the first packet; computing a mask value respective of at least the first and second sequence numbers, wherein the mask value indicates which bytes to be mirrored from subsequent packets belonging to the same flow, wherein the mirrored bytes are inspected; generating at least one mirror instruction based on at least the mask value; and configuring the plurality of network nodes with at least one mirror instruction.
  • Certain embodiments disclosed herein include a system for deep packet inspection (DPI) in a software defined network (SDN), wherein the method is performed by a central controller of the SDN.
  • The system comprises: a processor; a memory connected to the processor and configured to contain a plurality of instructions that when executed by the processor configure the system to: set a plurality of network nodes operable in the SDN with at least one probe instruction; receive from a network node a first packet of a flow, wherein the first packet matches the at least one probe instruction, wherein the first packet includes a first sequence number; receive from a network node a second packet of the flow, wherein the second packet matches the at least one probe instruction, wherein the second packet includes a second sequence number, wherein the second packet is a response of the first packet; compute a mask value respective of at least the first and second sequence numbers, wherein the mask value indicates which bytes to be mirrored from subsequent packets belonging to the same flow, wherein the mirrored bytes are inspected; generate at least one mirror instruction based on at least the mask value; and configure the plurality of network nodes with the at least one mirror instruction.
  • Figure 1 is a schematic diagram of a network system utilized to describe the various disclosed embodiments.
  • Figure 2 is a schematic diagram of a flow table stored in a central controller.
  • Figure 3 is a schematic diagram of a system utilized for describing the process of flow detection as performed by a central controller and a network node according to one embodiment.
  • Figure 4 is a schematic diagram of a system utilized for describing the process of flow termination as performed by a central controller and a network node according to one embodiment.
  • Figure 5 is a data structure depicting the organization of flows according to one embodiment.
  • Figure 6 is a flowchart illustrating the operation of the central controller according to one embodiment.
  • Fig. 1 is an exemplary and non-limiting diagram of a network system 100 utilized to describe the various disclosed embodiments.
  • The network system 100 includes a software defined network (SDN) 110 (not shown) containing a central controller 111 and a plurality of network nodes 112.
  • The network nodes 112 communicate with the central controller 111 using, for example, an OpenFlow protocol.
  • The central controller 111 can configure the network nodes 112 to perform certain data path operations.
  • The SDN 110 can be implemented in wide area networks (WANs), local area networks (LANs), the Internet, metropolitan area networks (MANs), ISP backbones, datacenters, inter-datacenter networks, and the like.
  • Each network node 112 in the SDN may be a router, a switch, a bridge, and so on.
  • The central controller 111 provides inspected data (such as application metadata) to a plurality of application servers (collectively referred to as application servers 120, merely for simplicity purposes).
  • An application server 120 executes, for example, security applications (e.g., firewall, intrusion detection, etc.), data analytics applications, and so on.
  • A plurality of client devices 130 communicate with a plurality of destination servers (collectively referred to as destination servers 140, merely for simplicity purposes) connected over the network 110.
  • A client device 130 may be, for example, a smart phone, a tablet computer, a personal computer, a laptop computer, a wearable computing device, and the like.
  • The destination servers 140 are accessed by the devices 130 and may be, for example, web servers.
  • The central controller 111 is configured to perform deep packet inspection on designated packets from designated flows or TCP sessions. To this end, the central controller 111 is further configured to instruct each of the network nodes 112 which of the packets and/or sessions should be directed to the controller 111 for packet inspection.
  • Each network node 112 is configured to determine whether an incoming packet requires inspection. The determination is performed based on a set of instructions provided by the controller 111. A packet that requires inspection is either redirected to the controller 111, or mirrored and a copy thereof sent to the controller 111. It should be noted that traffic flows that are inspected are not affected by the operation of the network node 112. In an embodiment, each network node 112 is configured to extract and send only the portion of a packet's data that contains meaningful information.
  • The set of instructions that the controller 111 configures each of the network nodes 112 with includes "probe instructions", "mirroring instructions", and "termination instructions." According to some exemplary and non-limiting embodiments, the probe instructions include:
  • The termination instructions include:
  • The TCP FLAG SYN, TCP FLAG ACK, TCP FLAG FIN, and TCP FLAG RST are fields in a TCP packet's header that can be analyzed by the network nodes 112. That is, each node 112 is configured to receive an incoming packet (either a request from a client device 130 or a response from a server 140), analyze the packet's header, and perform the action (redirect the packet to the controller 111 or send it to the destination server 140) respective of the value of the TCP flag.
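The flag-based dispatch described above can be sketched as follows. This is a hypothetical illustration of the node-side check: the flag constants mirror the standard TCP header bits, while the instruction list and function names are invented for the example.

```python
# Hypothetical node-side check of the probe/termination instructions;
# connection setup and teardown packets are redirected to the controller 111.
TCP_FLAG_FIN = 0x01
TCP_FLAG_SYN = 0x02
TCP_FLAG_RST = 0x04
TCP_FLAG_ACK = 0x10

PROBE_FLAG_VALUES = [
    TCP_FLAG_SYN,                  # client-to-server connection request
    TCP_FLAG_SYN | TCP_FLAG_ACK,   # server response
    TCP_FLAG_FIN,
    TCP_FLAG_FIN | TCP_FLAG_ACK,
    TCP_FLAG_RST,
]

def probe_action(tcp_flags: int) -> str:
    """Return 'redirect' when the packet's flags match a probe or
    termination instruction; otherwise forward toward the destination."""
    if tcp_flags in PROBE_FLAG_VALUES:
        return "redirect"
    return "forward"
```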
  • The controller 111 also configures each of the network nodes 112 with mirroring instructions with a mirror action of X number of bytes within a packet.
  • The mirrored bytes are sent to the controller 111 to perform the DPI analysis.
  • The set of mirroring instructions has the following format:
  • The values V1 through V7 are determined by the controller 111 per network node or for all nodes 112.
  • The values of the TCP sequence and TCP sequence mask are computed by the controller 111 as discussed in detail below.
  • New type-length-value (TLV) structures may be utilized with the OpenFlow protocol standard as defined, for example, in the OpenFlow 1.3.3 specification published by the Open Networking Foundation on September 27, 2013, or the OpenFlow 1.4.0 specification published on October 14, 2013, for parsing and identifying arbitrary fields within a packet.
  • the TLV structures disclosed herein include:
  • TCP_FLG_OXM_HEADER (0x80FE, 2, 1).
  • This TLV structure allows identification of the TCP header flags.
  • The '0x80FE' value represents a unique vendor identification (ID).
  • The '1' value is the 1-byte total length that stores the TCP flags header.
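As a rough illustration, the (0x80FE, 2, 1) triple can be packed into the 4-byte OXM TLV header layout used by OpenFlow 1.3 (16-bit class, 7-bit field, 1-bit has-mask flag, 8-bit payload length). This is a simplified sketch: a real experimenter OXM in OpenFlow 1.3 also carries a 32-bit experimenter ID after the header, which is omitted here.

```python
import struct

def oxm_header(oxm_class: int, oxm_field: int, length: int, has_mask: bool = False) -> bytes:
    """Pack a 4-byte OXM TLV header: 16-bit class, 7-bit field,
    1-bit has-mask flag, and 8-bit payload length."""
    return struct.pack("!HBB", oxm_class, (oxm_field << 1) | int(has_mask), length)

# TCP_FLG_OXM_HEADER (0x80FE, 2, 1): vendor class 0x80FE, field 2,
# and a 1-byte payload holding the TCP header flags.
TCP_FLG_OXM_HEADER = oxm_header(0x80FE, 2, 1)
```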
  • The central controller 111 also maintains a flow table having a structure 200 as illustrated in the exemplary and non-limiting Fig. 2.
  • the flow table 200 contains two main fields KEY 210 and DATA 220.
  • the KEY field 210 holds information with respect to the addresses/port numbers of a client device 130 and a destination server 140.
  • the DATA field 220 contains information with respect to a TCP flow, such as a flow ID, a request (client to server) sequence number M, a response (server to client) sequence number N, a flow state (e.g., ACK, FIN), a creation timestamp, a client to server hit counter, server to client hit counter Y [bytes], client to server data buffer, server to client buffer, and an aging bit.
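A minimal sketch of this flow-table structure, with the KEY 210 and DATA 220 fields modeled as Python dataclasses (the attribute names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """KEY 210: addresses and port numbers of a client device 130
    and a destination server 140."""
    client_ip: str
    client_port: int
    server_ip: str
    server_port: int

@dataclass
class FlowData:
    """DATA 220: per-flow information tracked by the controller."""
    flow_id: int
    seq_client_to_server: int = 0   # request sequence number M
    seq_server_to_client: int = 0   # response sequence number N
    state: str = "SYN"              # flow state, e.g., SYN, SYN/ACK, ACK, FIN
    created: float = 0.0            # creation timestamp
    hits_c2s_bytes: int = 0         # client-to-server hit counter X [bytes]
    hits_s2c_bytes: int = 0         # server-to-client hit counter Y [bytes]
    buf_c2s: bytes = b""            # client-to-server data buffer
    buf_s2c: bytes = b""            # server-to-client data buffer
    aging: bool = False             # aging bit

# The flow table maps each KEY to its DATA.
flow_table = {}
flow_table[FlowKey("10.0.0.1", 49152, "203.0.113.5", 80)] = FlowData(flow_id=1)
```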
  • Fig. 3 shows an exemplary and non-limiting schematic diagram of a system 300 for describing the process of flow detection as performed by the central controller 111 and a network node 112 according to one embodiment.
  • The central controller 111 includes a DPI flow detection module 311, a DPI engine 312, a memory 313, and a processing unit 314.
  • The DPI engine 312 is configured to inspect a packet or a number of bytes to provide application metadata as required by an application executed by an application server 120.
  • The DPI flow detection module 311 is configured to detect all TCP flows and maintain them in the flow table (e.g., table 200). The module 311 is also configured to generate and provide the network nodes with the required instructions to monitor, redirect, and mirror packets. The DPI flow detection module 311 executes certain functions including, but not limited to, flow management, computing sequence masks, and TCP flow analysis. These functions are discussed in detail below.
  • The network node 112 includes a probe flow module 321, a memory 322, and a processing unit 323.
  • The probe flow module 321 is configured to redirect any new TCP connection state initiation packets to the DPI flow detection module 311, as well as to extract several packets from each detected TCP flow and mirror them to the flow detection module 311.
  • The probe flow module 321 executes functions and/or implements logic to intercept TCP flags, redirect packets, and count sequence numbers.
  • Both processing units 314 and 323 use instructions stored in the memories 313 and 322, respectively, to execute tasks generally performed by the central controllers of SDNs, as well as to control and enable the operation of the behavioral network intelligence processes disclosed herein.
  • the processing unit (314, 323) may include one or more processors.
  • The one or more processors may be implemented with any combination of general-purpose microprocessors, multi-core processors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information.
  • the memories 313 and 322 may be implemented using any form of a non-transitory computer readable medium.
  • A packet arrives from a client (e.g., client 130, Fig. 1) at a port (not shown) of the network node 112.
  • The probe flow module 321 redirects the packet to the controller 111, and in particular to the module 311.
  • The module 311 traps the packet, creates a new flow-id in the flow table (e.g., table 200), and marks the flow-id's state as 'SYN'.
  • The flow table is saved in the memory 313.
  • The initial sequence number from the client to a destination server equals M and is saved in the flow table as well.
  • The packet is sent to the node 112 for further processing.
  • The response is received at the port of the node 112.
  • The response packet is sent to the module 311 in the controller 111.
  • The module 311 traps the packet, searches for a pre-allocated corresponding flow-id in the flow table, and updates the respective state to 'SYN/ACK'.
  • The module 311 also stores the initial sequence number of a packet from the server to the client as equal to N. This creates a new bi-directional flow-id with the M and N sequence numbers identified, and the sequence mask logic can be calculated respective thereof.
  • The DPI flow detection module 311 implements or executes a sequence mask logic that computes a mask for the initially trapped sequence numbers (M and N) to be used for a new flow to be configured into the node 112.
  • The computed mask is used to define new mirroring instructions that allow mirroring of a number of bytes from the TCP session in both directions.
  • The computed mask value specifies which bytes, respective of the correct sequence number, would be required to be mirrored from the TCP session.
  • The computed value is placed in a mask field defined by the OpenFlow protocol.
  • TCP_DATA_SIZE_DPI specifies the number of bytes the node 112 would be required to mirror from the TCP session.
  • A different value of the TCP_DATA_SIZE_DPI may be set for the upstream and downstream traffic. For example, fewer bytes may be mirrored for upstream traffic than for downstream traffic; thus, the TCP_DATA_SIZE_DPI value for upstream traffic would be smaller than that for downstream traffic.
  • The mask is defined such that a '0' in a given bit position indicates a "don't care" match for the same bit in the corresponding field, whereas a '1' means match the bit exactly.
  • All data packets containing a sequence number in the range of {0xf46d5c34 to 0xf46d9c34} will be mirrored to the controller 111.
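One possible realization of the sequence mask logic, assuming the mask wildcards low-order sequence-number bits until the aligned block containing the initial sequence number also covers TCP_DATA_SIZE_DPI bytes beyond it (the match may therefore over-approximate the exact range). The function names are invented for the sketch.

```python
def sequence_mask(initial_seq: int, data_size: int):
    """Return (match_value, mask) for the 32-bit TCP sequence field.
    A '1' bit in the mask means match the bit exactly; a '0' bit means
    don't care. Low-order bits are wildcarded until the aligned block
    containing initial_seq also covers initial_seq + data_size."""
    k = (max(data_size, 1) - 1).bit_length()
    while (initial_seq & ~((1 << k) - 1)) + (1 << k) < initial_seq + data_size:
        k += 1  # the range crosses a block boundary; widen the wildcard
    mask = (0xFFFFFFFF << k) & 0xFFFFFFFF
    return initial_seq & mask, mask

def seq_matches(seq: int, match_value: int, mask: int) -> bool:
    """The node-side test: mirror the packet when its sequence number
    agrees with the match value on every exact-match bit."""
    return (seq & mask) == match_value
```

With an initial sequence number of 0xf46d5c34 and a 0x4000-byte window, this yields a mask of 0xffff0000 and a match value of 0xf46d0000, which covers the stated range {0xf46d5c34 to 0xf46d9c34} (and, due to the power-of-two alignment, some sequence numbers below it).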
  • The module 311, using a TCP flow analysis logic, creates the mirroring instructions related to the client and server traffic.
  • One instruction identifies the client-to-server flow traffic, including the OXM_OF_TCP_SEQ to identify the initial sequence number of the flow with the mask_M computed.
  • The action of the flow is to mirror all packets that the instruction applies to, which will result in the TCP_DATA_SIZE_DPI number of bytes from the client-to-server direction being mirrored to the controller 111.
  • The second instruction identifies the server-to-client flow traffic, including the OXM_OF_TCP_SEQ to identify the initial sequence number of the flow with the mask_N.
  • The action is to mirror all packets that the instruction applies to, which will result in the TCP_DATA_SIZE_DPI number of bytes from the server-to-client direction being mirrored to the controller 111 for further analysis.
  • The mask_N and mask_M are computed using the sequence numbers N and M, respectively, using the process discussed above.
  • The mirroring instructions include:
  • The processed packet is sent back to the node 112 for further processing.
  • A set of mirroring instructions generated respective of the computed mask value is sent to the node 112.
  • Packets arrive from either the client device or a destination server with a sequence number that matches the mirroring instructions and are mirrored to the central controller 111 for buffering and for analysis by the DPI engine 312. It should be noted that each instruction hit increments a counter: the Client-to-Server hit counter X [bytes] or the Server-to-Client hit counter Y [bytes].
  • the various fields of the flow table are shown in Fig. 2.
  • Fig. 4 shows an exemplary and non-limiting diagram of a system 400 for describing the process of flow termination as performed by the central controller 111 and a network node 112 according to one embodiment.
  • The various modules of the controller 111 and node 112 are discussed with reference to Fig. 3.
  • The module 311 follows a termination of a TCP flow and is responsible for removing the exiting flow from the flow table. In addition, the module 311 disables or removes the mirroring instructions from the node 112. According to one embodiment, the module 311 configures the node 112 with a set of termination instructions. Examples of such instructions are provided above.
  • The value matches one of the termination instructions; thus, at S402, the packet is sent to the central controller 111.
  • The module 311 traps the packet and marks the corresponding flow-id in the flow table to update the state to FIN. Then, the packet is sent back to the network node.
  • The audit mechanism implemented by the module 311 scans the flow table at predefined time intervals for all flows whose respective state is any one of FIN, FIN/ACK, FIN/FIN/ACK, or RST.
  • the flows are removed from the probe flow module 321 and the flow table.
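The audit pass might look like the following sketch; the flow-table entries and the callback that withdraws mirroring instructions from the node are simplified stand-ins for the OpenFlow messages involved.

```python
# States that mark a flow as terminated and eligible for removal.
TERMINATED_STATES = {"FIN", "FIN/ACK", "FIN/FIN/ACK", "RST"}

def audit_flow_table(flow_table, remove_mirroring):
    """One audit pass over the flow table: delete every flow whose state
    marks TCP termination and invoke `remove_mirroring` (a hypothetical
    stand-in for withdrawing the node's mirroring rules).
    Returns the list of removed flow ids."""
    removed = []
    for key, data in list(flow_table.items()):
        if data["state"] in TERMINATED_STATES:
            remove_mirroring(data["flow_id"])
            del flow_table[key]
            removed.append(data["flow_id"])
    return removed
```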
  • Each network node 112 is populated with one or more probe tables generated by the central controller 111.
  • Fig. 5 shows a non-limiting and exemplary data structure 500 depicting the organization of the flows to allow functionality of both the probe flow detection module 321 and the probe sequence counter 324.
  • The data structure 500, which may be in the form of a table, is updated with a general instruction to match all traffic types with instruction 501 to go to a probe table 510.
  • The instruction 501 is set to the highest priority, unless the controller 111 requires preprocessing of other instructions. All packets matching the instruction 501 are processed in the probe table 510.
  • The probe table 510 is populated with medium-priority probe and termination instructions 511 to detect all SYN, SYN/ACK, FIN, and FIN/ACK packets, which are the TCP connection initiation packets.
  • The instructions 511 allow the module 311 to update the flow table and, as a consequence, create new instructions for mirroring N bytes from each TCP connection setup.
  • The probe table 510 is also populated with highest-priority instructions 512; these are two bi-directional instructions per flow-id that match a number of tuple flow headers, including the TCP sequence number as calculated by the sequence mask logic.
  • The instructions 512 are to send the packet to the central controller 111 and also to perform a go-to-table <next table ID> action.
  • The instructions 512 will also cause the packet to continue regular switch processing.
  • Each of these bi-directional instructions 512 will cause the node to copy several bytes from the TCP stream to the TCP flow analysis logic, to be stored for further DPI engine metadata analysis.
  • The final instruction 513 placed in the probe table 510 is at the lowest priority, to catch all remaining traffic and proceed with the regular switch functionality. All traffic which corresponds neither to the TCP initiation packets nor to a specific detected flow with its corresponding TCP sequence number shall continue regular processing.
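The three-tier priority scheme of the probe table 510 could be sketched as follows. The numeric priority values and entry format are assumptions; only the ordering (instructions 512 above 511 above 513) is taken from the text.

```python
# Assumed numeric priorities; only their relative ordering is from the text.
PRIO_MIRROR = 200      # instructions 512: per-flow sequence-mask mirroring
PRIO_PROBE = 100       # instructions 511: TCP setup/teardown probes
PRIO_CATCH_ALL = 0     # instruction 513: default, continue regular switching

def build_probe_table(flows):
    """Build probe table 510 entries from a {flow_id: (value, mask)}
    mapping; entry fields are illustrative, not OpenFlow wire format."""
    entries = [{"prio": PRIO_CATCH_ALL, "match": "any", "action": "continue"}]
    for flag in ("SYN", "SYN/ACK", "FIN", "FIN/ACK"):
        entries.append({"prio": PRIO_PROBE,
                        "match": "tcp_flags=" + flag,
                        "action": "redirect-to-controller"})
    for flow_id, (value, mask) in flows.items():
        # the full design uses two bi-directional instructions per
        # flow-id; a single direction is shown for brevity
        entries.append({"prio": PRIO_MIRROR,
                        "match": "tcp_seq&%#x==%#x" % (mask, value),
                        "action": "mirror-to-controller,goto-next-table"})
    # higher-priority entries are consulted first
    return sorted(entries, key=lambda e: -e["prio"])
```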
  • Fig. 6 shows an exemplary and non-limiting flowchart 600 illustrating the operation of the central controller 111 according to one embodiment.
  • All network nodes 112 are configured with a set of probe instructions utilized to instruct each node 112 to redirect a TCP packet having at least a flag value as designated in each probe instruction. Examples for probe instructions are provided above.
  • a first TCP packet with at least one TCP FLAG SYN value equal to 1 is received. This packet may have a sequence number M and may be sent from a client device 130.
  • a second TCP packet with at least one TCP FLAG ACK value equal to 1 is received. This packet may have a sequence number N and may be sent from a destination server 140 in response to the first TCP packet.
  • the flow table is updated with the respective flow ID and the state of the first and second packets.
  • a mask value is computed.
  • the mask value is utilized to determine which bytes from the flow respective of the sequence numbers N and M should be mirrored by the nodes.
  • An embodiment for computing the mask value is provided above.
  • A set of mirroring instructions is generated using the mask value and sent to the network nodes.
  • Each such instruction defines the packets (designed at least by a specific source/destination IP addresses, and TCP sequences), the number of bytes, and the bytes that should be mirrored.
  • The received mirror bytes are inspected using a DPI engine in the controller 111.
  • the flow table is updated with the number of the received mirror bytes.
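The controller-side steps above can be condensed into a sketch like this one, where `configure_node` is a hypothetical callback standing in for pushing the generated mirroring instructions to the network nodes, and the mask computation is elided.

```python
def handle_probe_packet(flow_table, pkt, configure_node):
    """Condensed controller-side handling of the Fig. 6 steps: trap the
    SYN to record sequence number M, trap the SYN/ACK to record N, then
    push one mirroring instruction per direction. The packet and flow
    representations are simplified for illustration."""
    key = (pkt["src"], pkt["dst"])
    if pkt["flags"] == "SYN":
        # first packet of the flow, from the client device
        flow_table[key] = {"M": pkt["seq"], "state": "SYN"}
    elif pkt["flags"] == "SYN/ACK":
        # response from the destination server; look up the reverse key
        entry = flow_table[(pkt["dst"], pkt["src"])]
        entry["N"] = pkt["seq"]
        entry["state"] = "SYN/ACK"
        configure_node([("mirror", "client-to-server", entry["M"]),
                        ("mirror", "server-to-client", entry["N"])])
```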
  • the various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs"), a memory, and input/output interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method for deep packet inspection (DPI) in a software defined network (SDN). The method includes configuring a plurality of network nodes operable in the SDN with at least one probe instruction; receiving from a network node a first packet of a flow, the first packet matches the at least one probe instruction and includes a first sequence number; receiving from a network node a second packet of the flow, the second packet matches the at least one probe instruction and includes a second sequence number, the second packet is a response of the first packet; computing a mask value respective of at least the first and second sequence numbers indicating which bytes to be mirrored from subsequent packets belonging to the same flow; generating at least one mirror instruction based on at least the mask value; and configuring the plurality of network nodes with at least one mirror instruction.

Description

A METHOD AND SYSTEM FOR DEEP PACKET INSPECTION IN SOFTWARE
DEFINED NETWORKS
CROSS REFERENCE TO RELATED APPLICATIONS
[001] This application claims the benefit of US provisional application No. 61/982,358 filed on April 22, 2014, the contents of which are herein incorporated by reference.
TECHNICAL FIELD
[002] This disclosure generally relates to techniques for deep packet inspection (DPI), and particularly for DPI of traffic in cloud-based networks utilizing software defined networks.
BACKGROUND
[003] Deep packet inspection (DPI) technology is a form of network packet scanning that allows specific data patterns to be extracted from a data communication channel. Extracted data patterns can then be used by various applications, such as security and data analytics applications. DPI is currently performed across various networks, such as internal networks, Internet service providers (ISPs), and public networks provided to customers. Typically, the DPI is performed by dedicated engines installed in such networks.
[004] Software defined networking (SDN) is a relatively new type of networking architecture that provides centralized management of network nodes rather than the distributed architecture utilized by conventional networks. SDN is promoted by the Open Networking Foundation (ONF). The leading communication standard that currently defines communication between the central controller (e.g., an SDN controller) and the network nodes (e.g., vSwitches) is the OpenFlow™ standard.
[005] Specifically, in SDN-based architectures the data forwarding (e.g., data plane) is typically decoupled from control decisions (e.g., control plane), such as routing, resources, and other management functionalities. The decoupling may also allow the data plane and the control plane to operate on different hardware, in different runtime environments, and/or operate using different models. As such, in an SDN network, the network intelligence is logically centralized in the central controller, which uses the OpenFlow protocol to configure the network nodes and to control application data traffic flows.
[006] Although the OpenFlow protocol allows the addition of programmability to network nodes for the purpose of packet-processing operations under the control of the central controller, OpenFlow does not support any mechanism to allow DPI of packets through the various networking layers as defined by the OSI model. Specifically, the current OpenFlow specification defines a mechanism to parse and extract only packet headers, in layer-2 through layer-4, from packets flowing via the network nodes. The OpenFlow specification does not define or suggest any mechanism to extract non-generic, uncommon, and/or arbitrary data patterns contained in layer-4 to layer-7 fields. In addition, the OpenFlow specification does not define or suggest any mechanism to inspect or to extract content from packets belonging to a specific flow or session. This is a major limitation, as such inspection is needed, for example, for the detection of security threats.
[007] The straightforward approach of routing all traffic from network nodes to the central controller introduces some significant drawbacks, such as increased end-to-end traffic delays between the client and the server; overwhelming the controller's capability to perform other networking functions; and a single point of failure for the re-routed traffic.

[008] Therefore, it would be advantageous to provide a solution that overcomes the deficiencies noted above and allows efficient DPI in SDNs.
SUMMARY
[009] A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term "some embodiments" may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.

[0010] Certain embodiments disclosed herein include a method for deep packet inspection (DPI) in a software defined network (SDN), wherein the method is performed by a central controller of the SDN. The method comprises: configuring a plurality of network nodes operable in the SDN with at least one probe instruction; receiving from a network node a first packet of a flow, wherein the first packet matches the at least one probe instruction and includes a first sequence number; receiving from a network node a second packet of the flow, wherein the second packet matches the at least one probe instruction and includes a second sequence number, and wherein the second packet is a response to the first packet; computing a mask value respective of at least the first and second sequence numbers, wherein the mask value indicates which bytes are to be mirrored from subsequent packets belonging to the same flow, and wherein the mirrored bytes are inspected; generating at least one mirror instruction based on at least the mask value; and configuring the plurality of network nodes with the at least one mirror instruction.
[0011] Certain embodiments disclosed herein include a system for deep packet inspection (DPI) in a software defined network (SDN), wherein the system operates as a central controller of the SDN. The system comprises: a processor; and a memory connected to the processor and configured to contain a plurality of instructions that, when executed by the processor, configure the system to: set a plurality of network nodes operable in the SDN with at least one probe instruction; receive from a network node a first packet of a flow, wherein the first packet matches the at least one probe instruction and includes a first sequence number; receive from a network node a second packet of the flow, wherein the second packet matches the at least one probe instruction and includes a second sequence number, and wherein the second packet is a response to the first packet; compute a mask value respective of at least the first and second sequence numbers, wherein the mask value indicates which bytes are to be mirrored from subsequent packets belonging to the same flow, and wherein the mirrored bytes are inspected; generate at least one mirror instruction based on at least the mask value; and configure the plurality of network nodes with the at least one mirror instruction.
BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
[0013] Figure 1 is a schematic diagram of a network system utilized to describe the various disclosed embodiments.
[0014] Figure 2 is a schematic diagram of a flow table stored in a central controller.
[0015] Figure 3 is a schematic diagram of a system utilized for describing the process of flow detection as performed by a central controller and a network node according to one embodiment.
[0016] Figure 4 is a schematic diagram of a system utilized for describing the process of flow termination as performed by a central controller and a network node according to one embodiment.
[0017] Figure 5 is a data structure depicting the organization of flows according to one embodiment.
[0018] Figure 6 is a flowchart illustrating the operation of the central controller according to one embodiment.
DETAILED DESCRIPTION
[0019] It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout several views.
[0020] Fig. 1 is an exemplary and non-limiting diagram of a network system 100 utilized to describe the various disclosed embodiments. The network system 100 includes a software defined network (SDN) 110 containing a central controller 111 and a plurality of network nodes 112. The network nodes 112 communicate with the central controller 111 using, for example, the OpenFlow protocol. The central controller 111 can configure the network nodes 112 to perform certain data path operations. The SDN 110 can be implemented in wide area networks (WANs), local area networks (LANs), the Internet, metropolitan area networks (MANs), ISP backbones, datacenters, inter-datacenter networks, and the like. Each network node 112 in the SDN may be a router, a switch, a bridge, and so on.
[0021] The central controller 111 provides inspected data (such as application metadata) to a plurality of application servers (collectively referred to as application servers 120, merely for simplicity purposes). An application server 120 executes, for example, security applications (e.g., firewall, intrusion detection, etc.), data analytics applications, and so on.
[0022] In the exemplary network system 100, a plurality of client devices (collectively referred to as client devices 130, merely for simplicity purposes) communicate with a plurality of destination servers (collectively referred to as destination servers 140, merely for simplicity purposes) connected over the network 110. A client device 130 may be, for example, a smart phone, a tablet computer, a personal computer, a laptop computer, a wearable computing device, and the like. The destination servers 140 are accessed by the devices 130 and may be, for example, web servers.
[0023] According to some embodiments, the central controller 111 is configured to perform deep packet inspection on designated packets from designated flows or TCP sessions. To this end, the central controller 111 is further configured to instruct each of the network nodes 112 which of the packets and/or sessions should be directed to the controller 111 for packet inspection.
[0024] According to some embodiments, each network node 112 is configured to determine if an incoming packet requires inspection or not. The determination is performed based on a set of instructions provided by the controller 111. A packet that requires inspection is either redirected to the controller 111 or mirrored, and a copy thereof is sent to the controller 111. It should be noted that traffic flows that are inspected are not affected by the operation of the network node 112. In an embodiment, each network node 112 is configured to extract and send only a portion of the packet data that contains meaningful information.

[0025] The set of instructions that the controller 111 configures each of the network nodes 112 with includes "probe instructions", "mirroring instructions", and "termination instructions." According to some exemplary and non-limiting embodiments, the probe instructions include:
If (TCP FLAG SYN=1) then (re-direct packet to central controller);
If (TCP FLAG SYN=1 and ACK=1) then (re-direct packet to central controller); and
If (TCP FLAG ACK=1) then (forward packet directly to a destination server).
The termination instructions include:
If (TCP FLAG FIN=1) then (re-direct packet to controller);
If (TCP FLAG FIN=1 and ACK=1) then (re-direct packet to controller); and
If (TCP FLAG RST=1) then (re-direct packet to controller).
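Taken together, the probe and termination instructions above reduce, on the node side, to a dispatch on the TCP flags octet. The following C sketch illustrates the intended behavior; the function and enum names are illustrative, not taken from the patent:

```c
#include <stdint.h>
#include <assert.h>

/* TCP flag bits as they appear in the TCP header flags octet. */
#define TCP_FIN 0x01
#define TCP_SYN 0x02
#define TCP_RST 0x04
#define TCP_ACK 0x10

typedef enum {
    ACT_REDIRECT_TO_CONTROLLER, /* probe or termination instruction hit */
    ACT_FORWARD_TO_DESTINATION  /* plain ACK: switch directly to the server */
} probe_action_t;

/* Applies the probe and termination instructions of paragraph [0025]
 * to the flags octet of an incoming TCP packet: SYN, SYN/ACK, FIN,
 * FIN/ACK, and RST packets go to the controller; a bare ACK is
 * forwarded directly to the destination server. */
static probe_action_t apply_probe_instructions(uint8_t flags)
{
    if (flags & (TCP_SYN | TCP_FIN | TCP_RST))
        return ACT_REDIRECT_TO_CONTROLLER;
    return ACT_FORWARD_TO_DESTINATION;
}
```

The dispatch is intentionally coarse: any packet carrying a connection-state flag is trapped, while established-flow data packets fall through to the mirroring instructions described next.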
[0026] The TCP FLAG SYN, TCP FLAG ACK, TCP FLAG FIN, and TCP FLAG RST are fields in a TCP packet's header that can be analyzed by the network nodes 112. That is, each node 112 is configured to receive an incoming packet (either a request from a client device 130 or a response from a server 140), analyze the packet's header, and perform the action (redirect the packet to the controller 111 or send it to the destination server 140) respective of the value of the TCP flag.
[0027] The controller 111 also configures each of the network nodes 112 with mirroring instructions, each with a mirror action of X number of bytes within a packet. The mirrored bytes are sent to the controller 111 to perform the DPI analysis. According to some exemplary embodiments, the mirroring instructions have the following format:
If (source IP Address = V1 and destination IP Address = V2 and source TCP port = V3 and destination TCP port = V4 and TCP sequence = V5 and TCP sequence mask = V6) then (mirror V7 bytes)

[0028] The values V1 through V7 are determined by the controller 111 per network node or for all nodes 112. The values of the TCP sequence and TCP sequence mask are computed by the controller 111, as discussed in detail below.
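As a rough node-side sketch, the six-tuple match above (values V1 through V7) can be modeled as follows. The struct and field names are illustrative assumptions; the key point is that the sequence number is compared only on the bits the mask marks as "care":

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative six-tuple mirroring rule (values V1..V7 of paragraph [0027]);
 * the field names are assumptions, not OpenFlow-defined identifiers. */
typedef struct {
    uint32_t src_ip, dst_ip;      /* V1, V2 */
    uint16_t src_port, dst_port;  /* V3, V4 */
    uint32_t tcp_seq;             /* V5 */
    uint32_t tcp_seq_mask;        /* V6 */
    uint32_t mirror_bytes;        /* V7: number of bytes to mirror */
} mirror_rule_t;

typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint32_t tcp_seq;
} pkt_hdr_t;

/* A packet matches when the exact-match fields agree and the sequence
 * number agrees on every bit the mask marks as "care" (1). */
static bool mirror_match(const mirror_rule_t *r, const pkt_hdr_t *p)
{
    return r->src_ip == p->src_ip && r->dst_ip == p->dst_ip &&
           r->src_port == p->src_port && r->dst_port == p->dst_port &&
           (p->tcp_seq & r->tcp_seq_mask) == (r->tcp_seq & r->tcp_seq_mask);
}
```

With a mask such as 0xFFFF0000, a whole window of consecutive sequence numbers matches a single rule, which is what lets one flow entry capture the first TCP_DATA_SIZE_DPI bytes of a session.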
[0029] In another embodiment, in order to allow a network node 112 to analyze TCP packets' headers and track flows, new type-length-value (TLV) structures are provided. The TLV structures may be utilized by the OpenFlow protocol standard as defined, for example, in the OpenFlow 1.3.3 specification published by the Open Networking Foundation on September 27, 2013, or the OpenFlow 1.4.0 specification published on October 14, 2013, for parsing and identifying arbitrary fields within a packet. According to non-limiting and exemplary embodiments, the TLV structures disclosed herein include:
1. TCP_FLG_OXM_HEADER (0x80FE, 2, 1). This TLV structure allows identification of the TCP header flags. The '0x80FE' value represents a unique vendor identification (ID), the value '2' represents a unique Type=2 value for the TLV, and the value '1' is a 1-byte total length that stores the TCP flags header.
2. TCP_SEQ_OXM_HEADER (0x80FE, 1, 4). This TLV structure allows identification of the TCP sequence number field. The '0x80FE' value represents a unique vendor ID, the value '1' represents a unique Type=1 value for this TLV, and the value '4' is a 4-byte total length that stores the TCP sequence number.
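The patent names only the three values (vendor ID, type, length) for each TLV. One plausible packing of those triples into a 32-bit OXM-style header word is sketched below; the bit layout is an assumption for illustration and is not taken from the patent or the OpenFlow specification:

```c
#include <stdint.h>
#include <assert.h>

/* One plausible packing of a (vendor ID, type, length) triple into a
 * 32-bit OXM-style header: 16-bit vendor class, 8-bit type, 8-bit length.
 * The layout is an illustrative assumption. */
static uint32_t oxm_header(uint16_t vendor, uint8_t type, uint8_t len)
{
    return ((uint32_t)vendor << 16) | ((uint32_t)type << 8) | len;
}

/* The two TLVs described above. */
#define TCP_FLG_OXM_HEADER oxm_header(0x80FE, 2, 1)
#define TCP_SEQ_OXM_HEADER oxm_header(0x80FE, 1, 4)
```

Whatever the concrete layout, the header must let a node distinguish the flags TLV (1 payload byte) from the sequence TLV (4 payload bytes) when parsing match fields.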
[0030] In order to track the flows, the central controller 111 also maintains a flow table having a structure 200 as illustrated in the exemplary and non-limiting Fig. 2. The flow table 200 contains two main fields: KEY 210 and DATA 220. The KEY field 210 holds information with respect to the addresses/port numbers of a client device 130 and a destination server 140. The DATA field 220 contains information with respect to a TCP flow, such as a flow ID, a request (client-to-server) sequence number M, a response (server-to-client) sequence number N, a flow state (e.g., ACK, FIN), a creation timestamp, a client-to-server hit counter X [bytes], a server-to-client hit counter Y [bytes], a client-to-server data buffer, a server-to-client data buffer, and an aging bit.

[0031] Fig. 3 shows an exemplary and non-limiting schematic diagram of a system 300 for describing the process of flow detection as performed by the central controller 111 and a network node 112 according to one embodiment. In an exemplary implementation, the central controller 111 includes a DPI flow detection module 311, a DPI engine 312, a memory 313, and a processing unit 314. The DPI engine 312 is configured to inspect a packet or a number of bytes to provide application metadata as required by an application executed by an application server 120.
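The flow table entry of Fig. 2 (KEY 210 and DATA 220) can be rendered as a pair of C structures; the field names below are illustrative renderings of the fields listed in paragraph [0030]:

```c
#include <stdint.h>
#include <assert.h>

/* KEY 210: addresses and port numbers of the client and destination server. */
typedef struct {
    uint32_t client_ip, server_ip;
    uint16_t client_port, server_port;
} flow_key_t;

/* DATA 220: per-flow state tracked by the controller. */
typedef struct {
    uint32_t flow_id;
    uint32_t seq_m;             /* request (client-to-server) sequence number M */
    uint32_t seq_n;             /* response (server-to-client) sequence number N */
    uint8_t  state;             /* e.g., SYN, SYN/ACK, FIN, RST */
    uint64_t created_ts;        /* creation timestamp */
    uint32_t c2s_hits_x;        /* client-to-server hit counter X [bytes] */
    uint32_t s2c_hits_y;        /* server-to-client hit counter Y [bytes] */
    uint8_t *c2s_buf, *s2c_buf; /* per-direction data buffers */
    uint8_t  aging;             /* aging bit */
} flow_data_t;

typedef struct { flow_key_t key; flow_data_t data; } flow_entry_t;
```

The KEY half identifies the bi-directional TCP session; the DATA half carries everything the audit, aging, and mask-computation logic described below needs.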
[0032] According to various embodiments discussed in detail above, the DPI flow detection module 311 is configured to detect all TCP flows and maintain them in the flow table (e.g., table 200). The module 311 is also configured to generate and provide the network nodes with the required instructions to monitor, redirect, and mirror packets. The DPI flow detection module 311 executes certain functions including, but not limited to, flow management, computing sequence masks, and TCP flow analysis. These functions are discussed in detail below.
[0033] In an exemplary implementation, the network node 112 includes a probe flow module 321, a memory 322, and a processing unit 323. The probe flow module 321 is configured to redirect any new TCP connection state initiation packets to the DPI flow detection module 311, as well as to extract several packets from each detected TCP flow and mirror them to the flow detection module 311. In an embodiment, the probe flow module 321 executes functions and/or implements logic to intercept TCP flags, redirect packets, and count sequence numbers.
[0034] Both processing units 314 and 323 use instructions stored in the memories 313 and 322, respectively, to execute tasks generally performed by central controllers of SDNs, as well as to control and enable the operation of the behavioral network intelligence processes disclosed herewith. In an embodiment, each processing unit (314, 323) may include one or more processors. The one or more processors may be implemented with any combination of general-purpose microprocessors, multi-core processors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information. The memories 313 and 322 may be implemented using any form of a non-transitory computer readable medium.
[0035] Prior to performing the flow detection process, the network node 112 is set with the probe instructions, such as those discussed above. Referring to Fig. 3, at S301, a packet arrives from a client (e.g., client 130, Fig. 1) at a port (not shown) of the network node 112. The packet is a TCP packet with a header including the following values: [TCP FLAG SYN=1, SEQUENCE = M].
[0036] As the header's value matches a redirect action, at S302, the probe flow module 321 redirects the packet to the controller 111, and in particular to the module 311.
[0037] In response, at S303, the module 311 traps the packet, creates a new flow-id in the flow table (e.g., table 200), and marks the flow-id's state as 'SYN'. The flow table is saved in the memory 313. The initial sequence number from the client to the destination server equals M and is saved in the flow table as well. Then, the packet is sent back to the node 112 for further processing.
[0038] At S304, a response packet arrives from a destination server (e.g., server 140, Fig. 1) with header values [TCP FLAG SYN=1, TCP FLAG ACK=1, SEQUENCE = N]. The response is received at a port of the node 112. At S305, as the header's values match a probe instruction, the response packet is sent to the module 311 in the controller 111.
[0039] In response, the module 311 traps the packet, searches for the pre-allocated corresponding flow-id in the flow table, and updates the respective state to 'SYN/ACK'. The module 311 also stores the initial sequence number of a packet from the server to the client as N. This creates a new bi-directional flow-id with both M and N sequence numbers identified, and the sequence mask logic can be calculated respective thereof.
[0040] According to various embodiments, the DPI flow detection module 311 implements or executes a sequence mask logic that computes a mask for the initially trapped sequence numbers (M and N) to be used for a new flow to be configured into the node 112. Specifically, the computed mask is used to define new mirroring instructions to allow mirroring of a number of bytes from the TCP session in both directions. The computed mask value specifies which bytes, respective of the correct sequence number, would be required to be mirrored from the TCP session. In an embodiment, the computed value is placed in a mask field defined by the OpenFlow protocol.

[0041] The following steps are taken to compute the mask value. First, a temporary mask value (temp_mask_val) is computed as follows: temp_mask_val = M XOR (M + TCP_DATA_SIZE_DPI);
The value TCP_DATA_SIZE_DPI specifies the number of bytes the node 112 would be required to mirror from the TCP session. In an embodiment, a different value of TCP_DATA_SIZE_DPI may be set for the upstream and downstream traffic. For example, fewer bytes may be mirrored for upstream traffic than for downstream traffic; thus, the TCP_DATA_SIZE_DPI value for upstream traffic would be smaller than that for downstream traffic. The temp_mask_val returns a number in which the most significant bit (MSB) set to one indicates the first bit of the mask. Then, a sequence MSB is computed as follows: seq_msb = (int32_t)msb32(temp_mask_val);
The 'msb32' function returns the MSB position of temp_mask_val. Finally, the mask value is computed as follows: mask = (int32_t)(0 - (0x1 << seq_msb)).
[0042] As an example, if the sequence number M is M=0xf46d5c34 and TCP_DATA_SIZE_DPI = 16384, then: temp_mask_val = 0xf46d5c34 XOR (0xf46d5c34 + 16384) = 0xc000
seq_msb = (int32_t)msb32(0xc000) = 16
mask = (int32_t)(0 - (0x1 << 16)) = 0xFFFF0000
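The mask derivation can be sketched in C as follows. This is a sketch under one reading of the patent: msb32() is assumed to return the 1-indexed position of the most significant set bit, which makes the worked example yield seq_msb = 16 and a mask whose low 16 bits are "don't care":

```c
#include <stdint.h>
#include <assert.h>

/* Assumed semantics: 1-indexed position of the most significant set bit
 * (msb32(0xc000) == 16); returns 0 for v == 0. */
static int msb32(uint32_t v)
{
    int pos = 0;
    while (v) { pos++; v >>= 1; }
    return pos;
}

/* Sequence-mask computation of paragraph [0041]: XOR the first and last
 * sequence numbers of the window to find which bits vary, then clear all
 * bits up to and including the highest varying bit. */
static uint32_t compute_seq_mask(uint32_t seq, uint32_t tcp_data_size_dpi)
{
    uint32_t temp_mask_val = seq ^ (seq + tcp_data_size_dpi);
    int seq_msb = msb32(temp_mask_val);   /* 16 for the worked example */
    if (seq_msb >= 32)                    /* entire word varies: match all */
        return 0;
    return (uint32_t)(0 - (UINT32_C(1) << seq_msb));
}
```

Under this reading, compute_seq_mask(0xf46d5c34, 16384) returns 0xFFFF0000, and both endpoints of the sequence window agree on every "care" bit of the mask, so the whole window matches one rule.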
[0043] The mask is defined such that a '0' in a given bit position indicates a "don't care" match for the same bit in the corresponding field, whereas a '1' means match the bit exactly. In the above example, all data packets containing a sequence number in the range of {0xf46d5c34 to 0xf46d9c34} will be mirrored to the controller 111.

[0044] Using the computed mask value, the module 311, using a TCP flow analysis logic (not shown), creates the mirroring instructions related to the client and server traffic. One instruction identifies the client-to-server flow traffic, including the OXM_OF_TCP_SEQ to identify the initial sequence number of the flow with the computed mask_M. The action of the flow is to mirror all packets to which the instruction applies, which will result in TCP_DATA_SIZE_DPI number of bytes from the client-to-server direction being mirrored to the controller 111. The second instruction identifies the server-to-client flow traffic, including the OXM_OF_TCP_SEQ to identify the initial sequence number of the flow with mask_N. The action is to mirror all packets to which the instruction applies, which will result in TCP_DATA_SIZE_DPI number of bytes from the server-to-client direction being mirrored to the controller 111 for further analysis. The mask_M and mask_N are computed from the sequence numbers M and N, respectively, using the process discussed above. The mirroring instructions follow the format shown above.
[0045] Referring back to Fig. 3, at S306, the processed packet is sent from the module 311 back to the node 112 for further processing. In an embodiment, a set of mirroring instructions generated respective of the computed mask value is sent to the node 112. At S307, a response TCP ACK packet with [TCP FLAG ACK=1] is received at a port of the node 112 and, based on the respective probe instruction, the packet is switched directly to the destination server 140.
[0046] In an embodiment, an audit mechanism scans the flow table every predefined time interval from the last timestamp and deletes all flows whose state is not SYN/ACK. Furthermore, an aging mechanism deletes all entries whose aging bit equals 1. The aging bit is initialized to 0 upon creation of a flow-id entry and is set to 1 in the first audit pass if the buffer length is 0. When a flow-id is deleted from the flow table, the flow-id is also removed from the tables maintained by the probe sequence counter 324.

[0047] At S308 and S309, packets arrive from either the client device or a destination server with sequence numbers that match the mirroring instructions, and are mirrored to the central controller 111 for buffering and for analysis by the DPI engine 312. It should be noted that each instruction hit increments a counter: the client-to-server hit counter X [bytes] or the server-to-client hit counter Y [bytes]. The flow table audit mechanism scans the flow table every predefined time interval and updates the mask to 0x00000000 and the ACTION to "no Action" for all entries whose client-to-server buffer length = TCP_DATA_SIZE_DPI or server-to-client buffer length = TCP_DATA_SIZE_DPI. The various fields of the flow table are shown in Fig. 2.
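The audit and aging rules of paragraph [0046] amount to a simple pass over the flow table; the sketch below uses illustrative field names and state encodings (the patent does not specify a concrete layout):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define FLOW_STATE_SYN_ACK 2  /* illustrative state encoding */

typedef struct {
    uint8_t  state;
    uint8_t  aging;    /* aging bit: 0 on creation */
    uint32_t buf_len;  /* buffered bytes for the flow */
    bool     in_use;
} audit_entry_t;

/* One audit pass, per paragraph [0046]: delete flows whose state never
 * reached SYN/ACK, delete flows whose aging bit is already set, and set
 * the aging bit of idle (empty-buffer) flows so a later pass removes them. */
static void audit_pass(audit_entry_t *tbl, int n)
{
    for (int i = 0; i < n; i++) {
        if (!tbl[i].in_use)
            continue;
        if (tbl[i].state != FLOW_STATE_SYN_ACK || tbl[i].aging == 1)
            tbl[i].in_use = false;   /* delete the flow entry */
        else if (tbl[i].buf_len == 0)
            tbl[i].aging = 1;        /* mark for deletion on the next pass */
    }
}
```

The two-pass aging scheme gives an idle flow one full audit interval to show activity before its entry is reclaimed.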
[0048] Fig. 4 shows an exemplary and non-limiting diagram of a system 400 for describing the process of flow termination as performed by the central controller 111 and a network node 112 according to one embodiment. The various modules of the controller 111 and the node 112 are discussed with reference to Fig. 3.
[0049] In the flow termination process, the module 311 follows the termination of a TCP flow and is responsible for removing the existing flow from the flow table. In addition, the module 311 disables or removes the mirroring instructions from the node 112. According to one embodiment, the module 311 configures the node 112 with a set of termination instructions. Examples of such instructions are provided above.
[0050] At S401, a packet arrives at the node 112 from a client 130 with a header including the value [TCP FLAG FIN=1]. The value matches one of the termination instructions; thus, at S402, the packet is sent to the central controller 111.
[0051] In response, at S403, the module 311 traps the packet and marks the corresponding flow-id in the flow table to update the state to FIN. Then, the packet is sent back to the network node 112.
[0052] At S404, a response packet from the destination server (e.g., server 140) with a header value containing [TCP FLAG FIN=1, ACK=1] is received at the node 112. As the value matches one of the termination instructions, at S405, the packet is sent to the central controller 111.
[0053] At S406, the module 311 traps the received packet and marks the corresponding flow-id in its flow table DB with state=FIN/FIN/ACK. Then, the packet is sent back to the network node 112. At S407, a response TCP ACK packet arrives from a client 130 with a header value containing [TCP FLAG ACK=1] and is switched directly to the server 140. If the response packet includes the header value [TCP FLAG RST=1], the module 311 marks the state of the respective flow-id in the flow table accordingly.
[0054] In an embodiment, the audit mechanism implemented by the module 311 scans the flow table every predefined time interval for all flows whose respective state is any one of FIN, FIN/ACK, FIN/FIN/ACK, or RST. These flows are removed from the probe flow module 321 and the flow table.
[0055] According to one embodiment, each network node 112 is populated with one or more probe tables generated by the central controller 111. Fig. 5 shows a non-limiting and exemplary data structure 500 depicting the organization of the flows to allow functionality of both the probe flow module 321 and the probe sequence counter 324.
[0056] The data structure 500, which may be in the form of a table, is updated with a general instruction 501 to match all traffic types and go to a probe table 510. The instruction 501 is set to the highest priority, unless the controller 111 requires preprocessing of other instructions. All packets matching the instruction 501 are processed in the probe table 510.
[0057] In an embodiment, the probe table 510 is populated with medium-priority probe and termination instructions 511 to detect all SYN, SYN/ACK, FIN, and FIN/ACK packets, which are the TCP connection initiation and termination packets. The instructions 511 allow the module 311 to update the flow table and, as a consequence, create new instructions for mirroring N bytes from each TCP connection setup.
[0058] The probe table 510 is also populated with highest-priority instructions 512; these are two bi-directional instructions per flow-id that match a Y-tuple of flow headers including the TCP sequence number as calculated by the sequence mask logic. The instructions 512 are to send the packet to the central controller 111 and also to perform go to table ID <next table ID>, which causes the packet to continue switch processing. Each of these bi-directional instructions 512 will cause the node to copy several bytes from the TCP stream to the TCP flow analysis logic, to be stored for further DPI engine metadata analysis.

[0059] The final instruction 513 placed in the probe table 510 is at the lowest priority, to catch all remaining traffic and proceed with the switch functionality. All traffic which corresponds to neither the TCP initiation packets nor a specific detected flow and the corresponding TCP sequence number shall continue regular processing.
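The three priority levels (instructions 511, 512, and 513) behave like a single OpenFlow-style table lookup in which the highest-priority matching entry wins. A compact sketch with mask-based matching follows; the rule layout and names are illustrative assumptions:

```c
#include <stdint.h>
#include <assert.h>

typedef enum {
    ACT_REDIRECT_TO_CONTROLLER,  /* instructions 511: connection-state probes */
    ACT_MIRROR_AND_CONTINUE,     /* instructions 512: per-flow sequence match */
    ACT_REGULAR_SWITCHING        /* instruction 513: lowest-priority catch-all */
} probe_table_action_t;

typedef struct {
    int      priority;               /* higher value wins */
    uint8_t  flags_mask, flags_val;  /* match if (flags & mask) == val */
    uint32_t seq_mask,   seq_val;    /* match if (seq & mask) == val   */
    probe_table_action_t action;
} probe_rule_t;

/* Returns the action of the highest-priority rule matching the packet's
 * TCP flags and sequence number; falls back to regular switching. */
static probe_table_action_t probe_table_lookup(const probe_rule_t *rules, int n,
                                               uint8_t flags, uint32_t seq)
{
    const probe_rule_t *best = 0;
    for (int i = 0; i < n; i++) {
        const probe_rule_t *r = &rules[i];
        if ((flags & r->flags_mask) == r->flags_val &&
            (seq & r->seq_mask) == r->seq_val &&
            (!best || r->priority > best->priority))
            best = r;
    }
    return best ? best->action : ACT_REGULAR_SWITCHING;
}
```

A real switch would match on the full Y-tuple rather than two fields, but the priority resolution shown here is the behavior paragraphs [0057]-[0059] rely on.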
[0060] Fig. 6 shows an exemplary and non-limiting flowchart 600 illustrating the operation of the central controller 111 according to one embodiment. At S610, all network nodes 112 are configured with a set of probe instructions utilized to instruct each node 112 to redirect a TCP packet having at least a flag value as designated in each probe instruction. Examples of probe instructions are provided above.
[0061] At S620, a first TCP packet with at least one TCP FLAG SYN value equal to 1 is received. This packet may have a sequence number M and may be sent from a client device 130. At S630, a second TCP packet with at least one TCP FLAG ACK value equal to 1 is received. This packet may have a sequence number N and may be sent from a destination server 140 in response to the first TCP packet. In an embodiment, the flow table is updated with the respective flow ID and the states of the first and second packets.
[0062] At S640, a mask value is computed using at least the sequence numbers of the first and second packets. The mask value is utilized to determine which bytes from the flow, respective of the sequence numbers M and N, should be mirrored by the nodes. An embodiment for computing the mask value is provided above.
[0063] At S650, a set of mirroring instructions is generated using the mask value and sent to the network nodes. Each such instruction defines the packets (designated at least by specific source/destination IP addresses and TCP sequences), the number of bytes, and the bytes that should be mirrored. At S660, the received mirrored bytes are inspected using a DPI engine in the controller 111. In addition, the flow table is updated with the number of the received mirrored bytes.
[0064] At S670, it is checked whether the inspection session should be terminated. The decision is based on the FIN and/or RST values of the TCP FLAG. As noted above, packets with TCP FLAG FIN=1 or TCP FLAG RST=1 are directed to the controller respective of the set of termination instructions. Some examples of the termination instructions are provided above. If S670 results in a 'No' answer, execution returns to S660; otherwise, execution continues with S680. At S680, the related existing flows are removed from the flow table. In addition, the nodes 112 are instructed not to perform the mirroring instructions provided at S650.
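The controller's view of a flow across S610 through S680 is a small state machine driven by the TCP flags. The sketch below uses the state names of the flow table; the encoding, simplified transition rules, and function names are illustrative:

```c
#include <stdint.h>
#include <assert.h>

/* Flow states tracked by the controller (simplified from the flow table). */
typedef enum { ST_NONE, ST_SYN, ST_SYN_ACK,
               ST_FIN, ST_FIN_FIN_ACK, ST_RST } flow_state_t;

#define F_FIN 0x01
#define F_SYN 0x02
#define F_RST 0x04
#define F_ACK 0x10

/* Advances the flow state from the flags of a trapped packet. */
static flow_state_t next_state(flow_state_t cur, uint8_t flags)
{
    if (flags & F_RST) return ST_RST;
    if ((flags & (F_SYN | F_ACK)) == F_SYN) return ST_SYN;
    if ((flags & (F_SYN | F_ACK)) == (F_SYN | F_ACK)) return ST_SYN_ACK;
    if ((flags & (F_FIN | F_ACK)) == F_FIN) return ST_FIN;
    if ((flags & (F_FIN | F_ACK)) == (F_FIN | F_ACK))
        return (cur == ST_FIN) ? ST_FIN_FIN_ACK : cur;
    return cur;  /* plain data/ACK packets do not change the state */
}

/* S670: states that end the inspection session. */
static int flow_terminated(flow_state_t s)
{
    return s == ST_FIN || s == ST_FIN_FIN_ACK || s == ST_RST;
}
```

Once flow_terminated() reports true, the controller removes the flow entries and withdraws the mirroring instructions, as at S680.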
[0065] The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPUs"), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
[0066] All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiments and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

CLAIMS

What is claimed is:
1. A method for deep packet inspection (DPI) in a software defined network (SDN), wherein the method is performed by a central controller of the SDN, comprising:
configuring a plurality of network nodes operable in the SDN with at least one probe instruction;
receiving from a network node a first packet of a flow, wherein the first packet matches the at least one probe instruction, wherein the first packet includes a first sequence number;
receiving from a network node a second packet of the flow, wherein the second packet matches the at least one probe instruction, wherein the second packet includes a second sequence number, wherein the second packet is a response to the first packet;

computing a mask value respective of at least the first and second sequence numbers, wherein the mask value indicates which bytes are to be mirrored from subsequent packets belonging to the same flow, wherein the mirrored bytes are inspected;
generating at least one mirror instruction based on at least the mask value; and configuring the plurality of network nodes with at least one mirror instruction.
2. The method of claim 1 , further comprising:
receiving mirrored bytes from a network node respective of the at least one mirror instruction; and
inspecting the mirrored bytes using a DPI engine.
3. The method of claim 1 , further comprising:
maintaining a flow table listing each flow inspected by the central controller; and

updating a status field in the flow table upon reception of any one of: the first packet, the second packet, and the mirrored bytes.
4. The method of claim 3, further comprising:
configuring the plurality of network nodes with at least one termination instruction;

removing all entries from the flow table for each flow matching the at least one termination instruction; and
disabling the at least one mirror instruction for each flow matching the at least one termination instruction.
5. The method of claim 1, wherein the at least one probe instruction is any one of: if (TCP FLAG SYN=1) then (re-direct packet to the central controller) and if (TCP FLAG SYN=1 and ACK=1) then (re-direct packet to central controller).
6. The method of claim 1, wherein the at least one mirror instruction is at least: if (source IP address = V1 and destination IP address = V2 and source TCP port = V3 and destination TCP port = V4 and TCP sequence = V5 and TCP sequence mask = V6) then (mirror V7 bytes).
7. The method of claim 4, wherein the at least one termination instruction is any one of: if (TCP FLAG FIN=1) then (re-direct packet to controller); if (TCP FLAG FIN=1 and ACK=1) then (re-direct packet to controller); and if (TCP FLAG RST=1) then (re-direct packet to controller).
8. The method of claim 1, wherein a number of bytes mirrored from each packet is a portion of the packet, wherein the bytes are mirrored from packets in sequence.
9. The method of claim 1, wherein the communication between the central controller and the plurality of network nodes is performed using the OpenFlow standard.
10. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute the computerized method according to claim 1.
11. A system for deep packet inspection (DPI) in a software defined network (SDN), wherein the system operates as a central controller of the SDN, comprising:
a processor; and
a memory connected to the processor and configured to contain a plurality of instructions that, when executed by the processor, configure the system to:
set a plurality of network nodes operable in the SDN with at least one probe instruction;
receive from a network node a first packet of a flow, wherein the first packet matches the at least one probe instruction, wherein the first packet includes a first sequence number;
receive from a network node a second packet of the flow, wherein the second packet matches the at least one probe instruction, wherein the second packet includes a second sequence number, wherein the second packet is a response to the first packet;
compute a mask value respective of at least the first and second sequence numbers, wherein the mask value indicates which bytes are to be mirrored from subsequent packets belonging to the same flow, wherein the mirrored bytes are inspected;
generate at least one mirror instruction based on at least the mask value; and
configure the plurality of network nodes with the at least one mirror instruction.
12. The system of claim 11, wherein the system is further configured to:
receive mirrored bytes from a network node respective of the at least one mirror instruction; and
inspect the mirrored bytes using a DPI engine.
13. The system of claim 11, wherein the system is further configured to:
maintain a flow table listing each flow inspected by the central controller; and
update a status field in the flow table upon reception of any one of: the first packet, the second packet, and the mirrored bytes.
14. The system of claim 13, wherein the system is further configured to:
configure the plurality of network nodes with at least one termination instruction;
remove all entries from the flow table for each flow matching the at least one termination instruction; and
disable the at least one mirror instruction for each flow matching the at least one termination instruction.
15. The system of claim 11, wherein the at least one probe instruction is any one of: if (TCP FLAG SYN=1) then (re-direct packet to the central controller) and if (TCP FLAG SYN=1 and ACK=1) then (re-direct packet to central controller).
16. The system of claim 11, wherein the at least one mirror instruction is at least: if (source IP address = V1 and destination IP address = V2 and source TCP port = V3 and destination TCP port = V4 and TCP sequence = V5 and TCP sequence mask = V6) then (mirror V7 bytes).
17. The system of claim 14, wherein the at least one termination instruction is any one of: if (TCP FLAG FIN=1) then (re-direct packet to controller); if (TCP FLAG FIN=1 and ACK=1) then (re-direct packet to controller); and if (TCP FLAG RST=1) then (re-direct packet to controller).
18. The system of claim 11, wherein a number of bytes mirrored from each packet is a portion of the packet, wherein the bytes are mirrored from packets in sequence.
19. The system of claim 11, wherein the communication between the central controller and the plurality of network nodes is performed using the OpenFlow standard.
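The controller behavior recited in the claims above — redirect TCP SYN and SYN-ACK packets, derive a sequence mask from the two sequence numbers, install a mirror rule, and tear down state on flow termination — can be sketched as follows. This is an illustrative sketch only: the names (`Controller`, `compute_mask`, `on_probe_packet`) are hypothetical, the patent does not disclose the exact mask formula (XOR is used here as a placeholder), and no real OpenFlow controller API is modeled.

```python
def compute_mask(first_seq, second_seq):
    """Derive a sequence-mask value from the SYN and SYN-ACK sequence
    numbers. Placeholder formula; the claims only require that the mask
    be computed respective of the two sequence numbers."""
    return (first_seq ^ second_seq) & 0xFFFFFFFF


class Controller:
    """Hypothetical central controller tracking probed flows (claims 1, 3)."""

    def __init__(self, mirror_bytes=128):
        self.flow_table = {}            # flow id -> per-flow state/status
        self.mirror_bytes = mirror_bytes  # portion of each packet to mirror

    def on_probe_packet(self, flow_id, packet):
        """Handle a packet redirected by a probe instruction (claim 5).
        Returns a mirror instruction once both handshake packets are seen."""
        entry = self.flow_table.setdefault(flow_id, {"status": "new"})
        if packet["flags"] == {"SYN"}:
            entry["first_seq"] = packet["seq"]
            entry["status"] = "syn_seen"
        elif packet["flags"] == {"SYN", "ACK"}:
            entry["second_seq"] = packet["seq"]
            entry["status"] = "established"
            mask = compute_mask(entry["first_seq"], entry["second_seq"])
            return self.make_mirror_instruction(flow_id, entry, mask)
        return None

    def make_mirror_instruction(self, flow_id, entry, mask):
        """Build a mirror instruction in the shape of claim 6: match on the
        5-tuple plus TCP sequence and mask, then mirror V7 bytes."""
        src_ip, dst_ip, src_port, dst_port = flow_id
        return {
            "match": {
                "src_ip": src_ip, "dst_ip": dst_ip,
                "src_port": src_port, "dst_port": dst_port,
                "tcp_seq": entry["first_seq"], "tcp_seq_mask": mask,
            },
            "action": f"mirror {self.mirror_bytes} bytes",
        }

    def on_termination_packet(self, flow_id):
        """FIN/FIN-ACK/RST redirected by a termination instruction (claim 7):
        remove the flow-table entry and disable mirroring (claim 4)."""
        self.flow_table.pop(flow_id, None)
```

A flow's SYN produces no instruction; the matching SYN-ACK yields the mirror rule that would then be pushed to the network nodes, and a later FIN or RST clears the entry.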
PCT/US2015/026869 2014-04-22 2015-04-21 A method and system for deep packet inspection in software defined networks WO2015164370A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
EP24162914.6A EP4362403A2 (en) 2014-04-22 2015-04-21 A method for deep packet inspection in software defined networks
EP15783292.4A EP3135005B1 (en) 2014-04-22 2015-04-21 A method and system for deep packet inspection in software defined networks
EP19205064.9A EP3618358B1 (en) 2014-04-22 2015-04-21 A method for deep packet inspection in software defined networks
US15/126,288 US10652111B2 (en) 2014-04-22 2015-04-21 Method and system for deep packet inspection in software defined networks
US16/865,361 US20200259726A1 (en) 2014-04-22 2020-05-03 Method and system for deep packet inspection in software defined networks
US17/734,147 US20220263735A1 (en) 2014-04-22 2022-05-02 Method and system for deep packet inspection in software defined networks
US17/734,148 US20220263736A1 (en) 2014-04-22 2022-05-02 Method and system for deep packet inspection in software defined networks
US18/119,881 US20230216756A1 (en) 2014-04-22 2023-03-10 Method and system for deep packet inspection in software defined networks
US18/119,883 US20230216757A1 (en) 2014-04-22 2023-03-10 Method and system for deep packet inspection in software defined networks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461982358P 2014-04-22 2014-04-22
US61/982,358 2014-04-22

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/126,288 A-371-Of-International US10652111B2 (en) 2014-04-22 2015-04-21 Method and system for deep packet inspection in software defined networks
US16/865,361 Continuation US20200259726A1 (en) 2014-04-22 2020-05-03 Method and system for deep packet inspection in software defined networks

Publications (1)

Publication Number Publication Date
WO2015164370A1 (en) 2015-10-29

Family

ID=54333087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/026869 WO2015164370A1 (en) 2014-04-22 2015-04-21 A method and system for deep packet inspection in software defined networks

Country Status (3)

Country Link
US (6) US10652111B2 (en)
EP (3) EP3135005B1 (en)
WO (1) WO2015164370A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076014A (en) * 2016-11-14 2018-05-25 南宁富桂精密工业有限公司 Network security defence method and SDN controllers
IL267453A (en) * 2016-12-30 2019-08-29 Bitdefender Netherlands B V System for preparing network traffic for fast analysis

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201605198A (en) * 2014-07-31 2016-02-01 萬國商業機器公司 Intelligent network management device and method of managing network
US10516689B2 (en) * 2015-12-15 2019-12-24 Flying Cloud Technologies, Inc. Distributed data surveillance in a community capture environment
US10848514B2 (en) * 2015-12-15 2020-11-24 Flying Cloud Technologies, Inc. Data surveillance for privileged assets on a computer network
US9979740B2 (en) * 2015-12-15 2018-05-22 Flying Cloud Technologies, Inc. Data surveillance system
US10542026B2 (en) * 2015-12-15 2020-01-21 Flying Cloud Technologies, Inc. Data surveillance system with contextual information
US10887330B2 (en) * 2015-12-15 2021-01-05 Flying Cloud Technologies, Inc. Data surveillance for privileged assets based on threat streams
US10523698B2 (en) * 2015-12-15 2019-12-31 Flying Cloud Technologies, Inc. Data surveillance system with patterns of centroid drift
US9584381B1 (en) 2016-10-10 2017-02-28 Extrahop Networks, Inc. Dynamic snapshot value by turn for continuous packet capture
US10476673B2 (en) 2017-03-22 2019-11-12 Extrahop Networks, Inc. Managing session secrets for continuous packet capture systems
US20180324061A1 (en) * 2017-05-03 2018-11-08 Extrahop Networks, Inc. Detecting network flow states for network traffic analysis
US11874845B2 (en) * 2017-06-28 2024-01-16 Fortinet, Inc. Centralized state database storing state information
US9967292B1 (en) 2017-10-25 2018-05-08 Extrahop Networks, Inc. Inline secret sharing
WO2019132744A1 (en) * 2017-12-27 2019-07-04 Telefonaktiebolaget Lm Ericsson (Publ) Apparatus and method for controlling communication between an edge cloud server and a plurality of clients via a radio access network
US10389574B1 (en) 2018-02-07 2019-08-20 Extrahop Networks, Inc. Ranking alerts based on network monitoring
US10038611B1 (en) 2018-02-08 2018-07-31 Extrahop Networks, Inc. Personalization of alerts based on network monitoring
US10270794B1 (en) 2018-02-09 2019-04-23 Extrahop Networks, Inc. Detection of denial of service attacks
JP7069399B2 (en) 2018-07-18 2022-05-17 ビットディフェンダー アイピーアール マネジメント リミテッド Systems and methods for reporting computer security incidents
US10411978B1 (en) 2018-08-09 2019-09-10 Extrahop Networks, Inc. Correlating causes and effects associated with network activity
US10594718B1 (en) 2018-08-21 2020-03-17 Extrahop Networks, Inc. Managing incident response operations based on monitored network activity
US11057501B2 (en) * 2018-12-31 2021-07-06 Fortinet, Inc. Increasing throughput density of TCP traffic on a hybrid data network having both wired and wireless connections by modifying TCP layer behavior over the wireless connection while maintaining TCP protocol
CN111404768A (en) * 2019-01-02 2020-07-10 中国移动通信有限公司研究院 DPI recognition realization method and equipment
US10965702B2 (en) * 2019-05-28 2021-03-30 Extrahop Networks, Inc. Detecting injection attacks using passive network monitoring
US11165814B2 (en) 2019-07-29 2021-11-02 Extrahop Networks, Inc. Modifying triage information based on network monitoring
US11388072B2 (en) 2019-08-05 2022-07-12 Extrahop Networks, Inc. Correlating network traffic that crosses opaque endpoints
US10742530B1 (en) 2019-08-05 2020-08-11 Extrahop Networks, Inc. Correlating network traffic that crosses opaque endpoints
US10742677B1 (en) 2019-09-04 2020-08-11 Extrahop Networks, Inc. Automatic determination of user roles and asset types based on network monitoring
US11165823B2 (en) 2019-12-17 2021-11-02 Extrahop Networks, Inc. Automated preemptive polymorphic deception
EP4218212A1 (en) 2020-09-23 2023-08-02 ExtraHop Networks, Inc. Monitoring encrypted network traffic
US11463466B2 (en) 2020-09-23 2022-10-04 Extrahop Networks, Inc. Monitoring encrypted network traffic
US11445340B2 (en) 2021-01-21 2022-09-13 Flying Cloud Technologies, Inc. Anomalous subject and device identification based on rolling baseline
US11349861B1 (en) 2021-06-18 2022-05-31 Extrahop Networks, Inc. Identifying network entities based on beaconing activity
US11296967B1 (en) 2021-09-23 2022-04-05 Extrahop Networks, Inc. Combining passive network analysis and active probing
US11470100B1 (en) 2022-03-21 2022-10-11 Flying Cloud Technologies, Inc. Data surveillance in a zero-trust network
US11843606B2 (en) 2022-03-30 2023-12-12 Extrahop Networks, Inc. Detecting abnormal data access based on data similarity

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100212006A1 (en) * 2009-02-13 2010-08-19 Alcatel-Lucent Peer-to-peer traffic management based on key presence in peer-to-peer data transfers
US20100208590A1 (en) * 2009-02-13 2010-08-19 Alcatel-Lucent Peer-to-peer traffic management based on key presence in peer-to-eer control transfers
US20110264802A1 (en) * 2009-02-13 2011-10-27 Alcatel-Lucent Optimized mirror for p2p identification
EP2672668A1 (en) * 2012-06-06 2013-12-11 Juniper Networks, Inc. Creating searchable and global database of user visible process traces

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7512980B2 (en) 2001-11-30 2009-03-31 Lancope, Inc. Packet sampling flow-based detection of network intrusions
DE102004016582A1 (en) * 2004-03-31 2005-10-27 Nec Europe Ltd. Procedures for monitoring and protecting a private network from attacks from a public network
US8346960B2 (en) * 2005-02-15 2013-01-01 At&T Intellectual Property Ii, L.P. Systems, methods, and devices for defending a network
WO2011102312A1 (en) 2010-02-16 2011-08-25 日本電気株式会社 Packet transfer device, communication system, processing rule update method and program
EP2582100A4 (en) * 2010-06-08 2016-10-12 Nec Corp Communication system, control apparatus, packet capture method and program
US8873398B2 (en) 2011-05-23 2014-10-28 Telefonaktiebolaget L M Ericsson (Publ) Implementing EPC in a cloud computer with openflow data plane
US8955093B2 (en) 2012-04-11 2015-02-10 Varmour Networks, Inc. Cooperative network security inspection
US8792347B2 (en) 2012-06-01 2014-07-29 Opera Software Ireland Limited Real-time network monitoring and subscriber identification with an on-demand appliance
US9647938B2 (en) 2012-06-11 2017-05-09 Radware, Ltd. Techniques for providing value-added services in SDN-based networks
US9063003B2 (en) 2012-06-11 2015-06-23 David M. Bergstein Radiation compensated thermometer
US9197548B2 (en) * 2012-08-15 2015-11-24 Dell Products L.P. Network switching system using software defined networking applications
US9178807B1 (en) 2012-09-20 2015-11-03 Wiretap Ventures, LLC Controller for software defined networks
US20140140211A1 (en) 2012-11-16 2014-05-22 Cisco Technology, Inc. Classification of traffic for application aware policies in a wireless network
US10389623B2 (en) * 2013-03-12 2019-08-20 Nec Corporation Packet data network, a method for operating a packet data network and a flow-based programmable network device
US20160197831A1 (en) * 2013-08-16 2016-07-07 Interdigital Patent Holdings, Inc. Method and apparatus for name resolution in software defined networking
CN104468253B (en) * 2013-09-23 2019-07-12 中兴通讯股份有限公司 A kind of deep-packet detection control method and device
US9112794B2 (en) * 2013-11-05 2015-08-18 International Business Machines Corporation Dynamic multipath forwarding in software defined data center networks
US9264400B1 (en) 2013-12-02 2016-02-16 Trend Micro Incorporated Software defined networking pipe for network traffic inspection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3135005A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076014A (en) * 2016-11-14 2018-05-25 南宁富桂精密工业有限公司 Network security defence method and SDN controllers
CN108076014B (en) * 2016-11-14 2020-11-17 南宁富桂精密工业有限公司 Network security defense method and SDN controller
IL267453A (en) * 2016-12-30 2019-08-29 Bitdefender Netherlands B V System for preparing network traffic for fast analysis
IL267453B2 (en) * 2016-12-30 2023-05-01 Bitdefender Netherlands B V System for preparing network traffic for fast analysis

Also Published As

Publication number Publication date
US20200259726A1 (en) 2020-08-13
US20230216756A1 (en) 2023-07-06
EP3618358B1 (en) 2024-05-29
EP3135005A1 (en) 2017-03-01
EP3135005A4 (en) 2017-12-20
US20230216757A1 (en) 2023-07-06
US10652111B2 (en) 2020-05-12
EP4362403A2 (en) 2024-05-01
US20220263735A1 (en) 2022-08-18
EP3135005B1 (en) 2019-12-18
US20220263736A1 (en) 2022-08-18
EP3618358A1 (en) 2020-03-04
US20170099196A1 (en) 2017-04-06

Similar Documents

Publication Publication Date Title
US20230216756A1 (en) Method and system for deep packet inspection in software defined networks
US7990847B1 (en) Method and system for managing servers in a server cluster
JP6598382B2 (en) Incremental application of resources for network traffic flows based on heuristics and business policies
Jero et al. Beads: Automated attack discovery in openflow-based sdn systems
CN108353068B (en) SDN controller assisted intrusion prevention system
Hu et al. Analysing performance issues of open-source intrusion detection systems in high-speed networks
US10263975B2 (en) Information processing device, method, and medium
US10574680B2 (en) Malware detection in distributed computer systems
US10855546B2 (en) Systems and methods for non-intrusive network performance monitoring
JP5494110B2 (en) Network communication path estimation method, communication path estimation program, and monitoring apparatus
US11122115B1 (en) Workload distribution in a data network
Sutton et al. Towards an SDN assisted IDS
JP5925287B1 (en) Information processing apparatus, method, and program
Gupta et al. Deep4r: Deep packet inspection in p4 using packet recirculation
JP2018137687A (en) Packet analyzing program, packet analyzer, and packet analyzing method
Zhang et al. Toward comprehensive network verification: Practices, challenges and beyond
JP4027213B2 (en) Intrusion detection device and method
US20150256469A1 (en) Determination method, device and storage medium
Gadallah et al. A seven-dimensional state flow traffic modelling for multi-controller Software-Defined Networks considering multiple switches
Nirasawa et al. Application switch using DPN for improving TCP based data center Applications
Lee et al. NetPiler: Detection of ineffective router configurations
Cuong et al. Hpofs: a high performance and secured openflow switch architecture for fpga.
He et al. Traffic steering of middlebox policy chain based on SDN
Polverini et al. Snoop through traffic counters to detect black holes in segment routing networks
Bonafiglia et al. Enforcement of dynamic HTTP policies on resource-constrained residential gateways

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15783292

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15126288

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2015783292

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015783292

Country of ref document: EP