US20060174058A1 - Recirculation buffer for semantic processor - Google Patents

Recirculation buffer for semantic processor

Info

Publication number
US20060174058A1
US20060174058A1 (application US11/376,512)
Authority
US
United States
Prior art keywords
data stream
buffer
parsing
packet
recirculation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/376,512
Inventor
Somsubhra Sikdar
Kevin Rowett
Rajesh Nair
Komal Rathi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Venture Lending and Leasing IV Inc
GigaFin Networks Inc
Original Assignee
Venture Lending and Leasing IV Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/351,030 (US7130987B2)
Priority claimed from US11/181,527 (US7415596B2)
Application filed by Venture Lending and Leasing IV Inc
Priority to US11/376,512
Publication of US20060174058A1
Assigned to MISTLETOE TECHNOLOGIES, INC. (assignment of assignors interest; see document for details). Assignors: NAIR, RAJESH; SIKDAR, SOMSUBHRA; ROWETT, KEVIN JEROME; RATHI, KOMAL
Assigned to VENTURE LENDING & LEASING IV, INC. (assignment of assignors interest; see document for details). Assignor: MISTLETOE TECHNOLOGIES, INC.
Assigned to GIGAFIN NETWORKS, INC. (change of name; see document for details). Assignor: MISTLETOE TECHNOLOGIES, INC.

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/742 Route cache; Operation thereof
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/19 Flow control; Congestion control at layers above the network layer
    • H04L47/193 Flow control; Congestion control at the transport layer, e.g. TCP related
    • H04L47/34 Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
    • H04L47/50 Queue scheduling
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L49/9031 Wraparound memory, e.g. overrun or underrun detection
    • H04L49/9057 Arrangements for supporting packet reassembly or resequencing
    • H04L49/9084 Reactions to storage capacity overflow
    • H04L49/9089 Reactions to storage capacity overflow by replacing packets in a storage arrangement, e.g. pushout
    • H04L49/9094 Arrangements for simultaneous transmit and receive, e.g. simultaneous reading/writing from/to the storage element
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 Parsing or analysis of headers

Definitions

  • a packet is a finite-length (generally several tens to several thousands of octets) digital transmission unit comprising one or more header fields and a data field.
  • the data field may contain virtually any type of digital data.
  • the header fields convey information (in different formats depending on the type of header and options) related to delivery and interpretation of the packet contents. This information may, e.g., identify the packet's source or destination, identify the protocol to be used to interpret the packet, identify the packet's place in a sequence of packets, provide an error correction checksum, or aid packet flow control.
  • the finite length of a packet can vary based on the type of network through which the packet is to be transmitted and the type of application used to present the data.
  • packet headers and their functions are arranged in an orderly fashion according to the open-systems interconnection (OSI) reference model.
  • This model partitions packet communications functions into layers, each layer performing specific functions in a manner that can be largely independent of the functions of the other layers. As such, each layer can prepend its own header to a packet, and regard all higher-layer headers as merely part of the data to be transmitted.
  • Layer 1, the physical layer, is concerned with transmission of a bit stream over a physical link.
  • Layer 2, the data link layer, provides mechanisms for the transfer of frames of data across a single physical link, typically using a link-layer header on each frame.
  • Layer 3, the network layer, provides network-wide packet delivery and switching functionality; the well-known Internet Protocol (IP) is a layer 3 protocol.
  • Layer 4, the transport layer, can provide mechanisms for end-to-end delivery of packets, such as end-to-end packet sequencing, flow control, and error recovery. Transmission Control Protocol (TCP), a reliable layer 4 protocol that ensures in-order delivery of an octet stream, and User Datagram Protocol (UDP), a simpler layer 4 protocol with no guaranteed delivery, are well-known examples of layer 4 implementations.
  • Layers 5, 6, and 7 are the session, presentation, and application layers, respectively.
  • IP packet fragmentation occurs in layer 3 , e.g., when the packets are routed to a Wide Area Network (WAN) from a Local Area Network (LAN).
  • the gateway transferring the packet from a LAN to a WAN fragments a long packet, including its higher-level headers, into multiple shorter IP packets.
  • the IP headers of the fragmented packet will contain information alerting a receiver that the higher-layer packet is fragmented, and provide a fragment offset to enable the receiver to properly sequence the fragmented packets.
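The fragmentation signaling just described can be made concrete. The following Python sketch (illustrative, not part of the patent) pulls the identification, more-fragments flag, and fragment offset out of an IPv4 header:

```python
import struct

def parse_ipv4_fragment_info(header: bytes):
    """Extract fragmentation fields from a 20-byte IPv4 header.

    Returns (more_fragments, fragment_offset_in_bytes, identification).
    """
    # Identification is bytes 4-5; flags and fragment offset share bytes 6-7.
    ident, flags_frag = struct.unpack("!HH", header[4:8])
    more_fragments = bool(flags_frag & 0x2000)  # MF bit signals more fragments
    offset = (flags_frag & 0x1FFF) * 8          # offset is in 8-octet units
    return more_fragments, offset, ident
```

A receiver uses the identification field to group fragments of the same packet, the MF bit to know whether more fragments follow, and the offset to sequence them.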
  • Some protocols such as the Internet Small Computer Systems Interface (iSCSI) protocol, allow aggregation of multiple headers/data payloads in a single packet.
  • An iSCSI packet contains a command descriptor block (CDB), which may be comprised of multiple iSCSI headers and iSCSI payload combinations.
  • Each of the iSCSI headers contains information about the length of the iSCSI header and its corresponding iSCSI payload so the receiver can properly sequence the digital data.
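The length-driven walk over aggregated header/payload combinations can be sketched as follows. This Python fragment is illustrative only: it uses a simplified 4-byte length prefix in place of the real 48-byte iSCSI Basic Header Segment, but it shows how per-header length fields let a receiver sequence the payloads:

```python
def split_aggregated_pdus(buf: bytes):
    """Split a buffer of back-to-back, length-prefixed header/payload pairs.

    Simplified model: each 4-byte big-endian length prefix gives the size of
    the payload that follows it. Real iSCSI carries the data-segment length
    inside a 48-byte header; only the walk itself is shown here.
    """
    pdus, pos = [], 0
    while pos < len(buf):
        length = int.from_bytes(buf[pos:pos + 4], "big")
        pdus.append(buf[pos + 4:pos + 4 + length])
        pos += 4 + length  # advance past this header and its payload
    return pdus
```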
  • when packets are used to transmit secure data over a network, many packets are encrypted before they are sent, which causes some headers to be encrypted as well.
  • the encryption of packets typically involves complex algorithms that code the entire packet except for layers needed to switch/route the packet. Therefore, many receivers need to enable the decryption and/or authentication of packets before they can determine the content of upper-layer headers.
  • FIG. 1 illustrates, in block form, a semantic processor useful with embodiments of the present invention
  • FIG. 2 contains a flow chart for the processing of received packets in the semantic processor with the recirculation buffer in FIG. 1 .
  • FIG. 3 illustrates another more detailed semantic processor implementation useful with embodiments of the present invention
  • FIG. 4 contains a flow chart of received IP fragmented packets in the semantic processor in FIG. 3 ;
  • FIG. 5 contains a flow chart of received encrypted and/or unauthenticated packets in the semantic processor in FIG. 3 ;
  • FIG. 6 illustrates yet another semantic processor implementation useful with embodiments of the present invention.
  • FIG. 7 contains a flow chart of received iSCSI packets through a TCP connection in the semantic processor in FIG. 6 .
  • FIG. 8 illustrates one possible implementation for port input buffer (PIB) useful with embodiments of the invention.
  • the present invention relates to digital semantic processors for data stream processing with a direct execution parser.
  • Many packets received through a network require decryption, authentication, sequencing or other processing, or any combination thereof, which complicates processing through the direct execution parser.
  • the addition of a recirculation buffer to the semantic processor enables parsing to be resumed for a packet that was only partially parsed on a previous pass through the direct execution parser, which allows for fast and efficient packet processing when single-pass parsing is difficult or impossible.
  • FIG. 1 shows a block diagram of a semantic processor 100 according to an embodiment of the invention.
  • the semantic processor 100 contains an input buffer 140 for buffering a packet data stream (e.g., the input stream) received through the input port 120 , a direct execution parser (DXP) 180 that controls the processing of packet data received at the input buffer 140 , a recirculation buffer 160 , a semantic processing unit 200 for processing segments of the packets or for performing other operations, and a memory subsystem 240 for storing and/or augmenting segments of the packets.
  • the input buffer 140 and recirculation buffer 160 are preferably first-in-first-out (FIFO) buffers.
  • the DXP 180 controls the processing of packets or frames within the input buffer 140 (e.g., the input stream) and the recirculation buffer 160 (e.g., the recirculation stream). Since the DXP 180 parses the input stream from input buffer 140 and the recirculation stream from the recirculation buffer 160 in a similar fashion, only the parsing of the input stream will be described below.
  • the DXP 180 maintains an internal parser stack (not shown) of terminal and non-terminal symbols, based on parsing of the current frame up to the current symbol. For instance, each symbol on the internal parser stack is capable of indicating to the DXP 180 a parsing state for the current input frame or packet.
  • When the symbol at the top of the parser stack is a terminal symbol, DXP 180 compares data at the head of the input stream to the terminal symbol and expects a match in order to continue.
  • When the symbol at the top of the parser stack is a non-terminal symbol, DXP 180 uses the non-terminal symbol and current input data to expand the grammar production on the stack.
  • DXP 180 instructs SPU 200 to process segments of the input stream or perform other operations.
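The terminal/non-terminal stack discipline described above is essentially that of a table-driven top-down parser. A minimal Python sketch follows; the function name, grammar encoding, and return convention are illustrative, not taken from the patent:

```python
def direct_execution_parse(input_symbols, productions, start):
    """Parse in the style of the DXP: a stack of terminal and non-terminal
    symbols drives consumption of the input stream.

    `productions` maps (non-terminal, lookahead) -> list of symbols.
    Returns True when the whole input is consumed with an empty stack.
    """
    stack = [start]
    pos = 0
    while stack:
        top = stack.pop()
        lookahead = input_symbols[pos] if pos < len(input_symbols) else None
        if (top, lookahead) in productions:
            # Non-terminal: expand the production on the stack.
            # Push right-to-left so symbols unwind left-to-right.
            stack.extend(reversed(productions[(top, lookahead)]))
        elif top == lookahead:
            pos += 1          # terminal: head of input matched, continue
        else:
            return False      # terminal mismatch or no production: parse error
    return pos == len(input_symbols)
```

For example, with the grammar S -> 'a' S | 'b', encoded as {("S", "a"): ["a", "S"], ("S", "b"): ["b"]}, the input "aab" parses while "aa" does not.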
  • the DXP 180 may parse the data in the input stream prior to receiving all of the data to be processed by the semantic processor 100 . For instance, when the data is packetized, the semantic processor 100 may begin to parse through the headers of the packet before the entire packet is received at input port 120 .
  • Semantic processor 100 uses at least three tables. Code segments for SPU 200 are stored in semantic code table (SCT) 150 . Complex grammatical production rules are stored in a production rule table (PRT) 190 . Production rule codes for retrieving those production rules are stored in a parser table (PT) 170 . The production rule codes in parser table 170 allow DXP 180 to detect whether, for a given production rule, a code segment from SCT 150 should be loaded and executed by SPU 200 .
  • Some embodiments of the invention contain many more elements than those shown in FIG. 1 , but these essential elements appear in every system or software embodiment. Thus, a description of the packet flow within the semantic processor 100 shown in FIG. 1 will be given before more complex embodiments are addressed.
  • FIG. 2 contains a flow chart 300 for the processing of received packets through the semantic processor 100 of FIG. 1 .
  • the flowchart 300 is used for illustrating a method of the invention.
  • a packet is received at the input buffer 140 through the input port 120 .
  • the DXP 180 begins to parse through the header of the packet within the input buffer 140 .
  • If the DXP 180 was able to completely parse through the header, then according to a next block 370, the DXP 180 calls a routine within the SPU 200 to process the packet payload. The semantic processor 100 then waits for a next packet to be received at the input buffer 140 through the input port 120.
  • If the DXP 180 had to cease parsing the header, then according to a next block 340, the DXP 180 calls a routine within the SPU 200 to manipulate the packet or wait for additional packets. Upon completion of the manipulation or the arrival of additional packets, the SPU 200 creates an adjusted packet.
  • the SPU 200 writes the adjusted packet (or a portion thereof) to the recirculation buffer 160 .
  • This can be accomplished by either enabling the recirculation buffer 160 with direct memory access to the memory subsystem 240 or by having the SPU 200 read the adjusted packet from the memory subsystem 240 and then write the adjusted packet to the recirculation buffer 160 .
  • a specialized header can be written to the recirculation buffer 160 . This specialized header directs the SPU 200 to process the adjusted packet without having to transfer the entire packet out of memory subsystem 240 .
  • the DXP 180 begins to parse through the header of the data within the recirculation buffer 160 . Execution is then returned to block 330 , where it is determined whether the DXP 180 was able to completely parse through the header. If the DXP 180 was able to completely parse through the header, then according to a next block 370 , the DXP 180 calls a routine within the SPU 200 to process the packet payload and the semantic processor 100 waits for a next packet to be received at the input buffer 140 through the input port 120 .
  • execution returns to block 340 where the DXP 180 calls a routine within the SPU 200 to manipulate the packet or wait for additional packets, thus creating an adjusted packet.
  • the SPU 200 then writes the adjusted packet to the recirculation buffer 160 , and the DXP 180 begins to parse through the header of the packet within the recirculation buffer 160 .
  • FIG. 3 shows another semantic processor embodiment 400 .
  • Semantic processor 400 includes memory subsystem 240 , which comprises an array machine-context data memory (AMCD) 430 for accessing data in dynamic random access memory (DRAM) 480 through a hashing function or content-addressable memory (CAM) lookup, a cryptography block 440 for encryption or decryption, and/or authentication of data, a context control block (CCB) cache 450 for caching context control blocks to and from DRAM 480 , a general cache 460 for caching data used in basic operations, and a streaming cache 470 for caching data streams as they are being written to and read from DRAM 480 .
  • the context control block cache 450 is preferably a software-controlled cache, i.e., the SPU 410 determines when a cache line is used and freed.
  • the SPU 410 is coupled with AMCD 430 , cryptography block 440 , CCB cache 450 , general cache 460 , and streaming cache 470 .
  • the SPU 410 loads microinstructions from semantic code table (SCT) 150 .
  • FIG. 4 contains a flow chart 500 for the processing of received Internet Protocol (IP)-fragmented packets through the semantic processor 400 of FIG. 3 .
  • the flowchart 500 is used for illustrating one method according to an embodiment of the invention.
  • the DXP 180 ceases parsing through the headers of the received packet because the packet is determined to be an IP-fragmented packet.
  • the DXP 180 completely parses through the IP header, but ceases to parse through any headers belonging to subsequent layers, such as TCP, UDP, iSCSI, etc.
  • the DXP 180 signals to the SPU 410 to load the appropriate microinstructions from the SCT 150 and read the received packet from the input buffer 140 .
  • the SPU 410 writes the received packet to DRAM 480 through the streaming cache 470 .
  • Although blocks 520 and 530 are shown as two separate steps, they can optionally be performed as one step, with the SPU 410 reading and writing the packet concurrently. This concurrent operation of reading and writing by the SPU 410 is known as SPU pipelining, where the SPU 410 acts as a conduit or pipeline for streaming data to be transferred between two blocks within the semantic processor 400.
  • the SPU 410 determines if a Context Control Block (CCB) has been allocated for the collection and sequencing of the correct IP packet fragments.
  • the CCB for collecting and sequencing the fragments corresponding to an IP-fragmented packet is stored in DRAM 480 .
  • the CCB contains pointers to the IP fragments in DRAM 480 , a bit mask for the IP-fragmented packets that have not arrived, and a timer value to force the semantic processor 400 to cease waiting for additional IP-fragmented packets after an allotted period of time and to release the data stored in the CCB within DRAM 480 .
  • the SPU 410 preferably determines if a CCB has been allocated by accessing the AMCD's 430 content-addressable memory (CAM) lookup function using the IP source address of the received IP-fragmented packet combined with the identification and protocol from the header of the received IP packet fragment as a key.
  • the IP fragment keys are stored in a separate CCB table within DRAM 480 and are accessed with the CAM by using the IP source address of the received IP-fragmented packet combined with the identification and protocol from the header of the received IP packet fragment. This optional addressing of the IP fragment keys avoids key overlap and sizing problems.
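The key construction and lookup described above might look as follows in a software model. A Python dict stands in for the AMCD's exact-match CAM, and the byte layout of the key is an assumption for illustration; only the fields named in the text (IP source address, identification, protocol) are used:

```python
def ccb_key(src_ip: bytes, ident: int, protocol: int) -> bytes:
    """Build a CAM key from the fields the text combines for the lookup."""
    return src_ip + ident.to_bytes(2, "big") + bytes([protocol])

class Amcd:
    """Software stand-in for the AMCD: exact-match key -> CCB handle."""
    def __init__(self):
        self._cam = {}

    def lookup(self, key):
        return self._cam.get(key)   # None means no CCB allocated yet

    def insert(self, key, ccb):
        self._cam[key] = ccb
```

The block-540/550 decision then reduces to: if `lookup` returns None, allocate a CCB and `insert` it under the same key.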
  • If the SPU 410 determines that a CCB has not been allocated for the collection and sequencing of fragments for a particular IP-fragmented packet, execution then proceeds to a block 550, where the SPU 410 allocates a CCB.
  • the SPU 410 preferably enters a key corresponding to the allocated CCB, the key comprising the IP source address of the received IP fragment and the identification and protocol from the header of the received IP-fragmented packet, into an IP fragment CCB table within the AMCD 430 , and starts the timer located in the CCB.
  • For the first fragment received for a given packet, the IP header is also saved to the CCB for later recirculation. For further fragments, the IP header need not be saved.
  • the SPU 410 stores a pointer to the IP-fragmented packet (minus its IP header) in DRAM 480 within the CCB, according to a next block 560 .
  • the pointers for the fragments can be arranged in the CCB as, e.g., a linked list.
  • the SPU 410 also updates the bit mask in the newly allocated CCB by marking the portion of the mask corresponding to the received fragment as received.
  • the SPU 410 determines if all of the IP fragments from the packet have been received. Preferably, this determination is accomplished by using the bit mask in the CCB.
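The bit-mask bookkeeping can be modeled in a few lines. In this Python sketch (the class name and layout are illustrative, not the patent's CCB format), `pending` holds a bit per fragment that has not yet arrived, mirroring the mask described in the text:

```python
class FragmentCCB:
    """Minimal stand-in for the per-packet Context Control Block: stored
    fragment payloads plus a bit mask of fragments that have not arrived."""

    def __init__(self, total_fragments):
        self.fragments = {}                            # index -> payload bytes
        self.pending = (1 << total_fragments) - 1      # one bit per missing fragment

    def add(self, index, payload):
        self.fragments[index] = payload
        self.pending &= ~(1 << index)                  # mark fragment as received

    def complete(self):
        return self.pending == 0                       # all fragments arrived?

    def reassemble(self):
        assert self.complete()
        return b"".join(self.fragments[i] for i in sorted(self.fragments))
```

Fragments may arrive in any order; `complete()` implements the block-570 check, after which the payloads are read out in the correct order for recirculation.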
  • If all of the fragments have not yet been received, the semantic processor 400 defers further processing on that fragmented packet until another fragment is received.
  • the SPU 410 resets the timer, reads the IP fragments from DRAM 480 in the correct order, and writes them to the recirculation buffer 160 for additional parsing and processing.
  • the SPU 410 writes only a specialized header and the first part of the reassembled IP packet (with the fragmentation bit unset) to the recirculation buffer 160 .
  • the specialized header enables the DXP 180 to direct the processing of the reassembled IP-fragmented packet stored in DRAM 480 without having to transfer all of the IP-fragmented packets to the recirculation buffer 160 .
  • the specialized header can consist of a designated non-terminal symbol that loads parser grammar for IP and a pointer to the CCB.
  • the parser can then parse the IP header normally and proceed to parse higher-layer (e.g., TCP) headers.
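One plausible byte encoding of such a specialized header, purely as an illustration (the patent does not specify a layout): one byte selecting the designated non-terminal, followed by eight bytes carrying the CCB pointer:

```python
import struct

# Hypothetical non-terminal code meaning "load the IP parser grammar".
NT_IP = 0x01

def make_specialized_header(nonterminal: int, ccb_ptr: int) -> bytes:
    """Pack a designated non-terminal plus a CCB pointer into 9 bytes."""
    return struct.pack("!BQ", nonterminal, ccb_ptr)

def read_specialized_header(hdr: bytes):
    """Recover (non-terminal, CCB pointer) from the packed header."""
    return struct.unpack("!BQ", hdr[:9])
```

The parser would push the recovered non-terminal onto its stack and hand the CCB pointer to the SPU, so the reassembled packet body never has to leave DRAM.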
  • DXP 180 decides to parse the data received at either the recirculation buffer 160 or the input buffer 140 through round robin arbitration.
  • a high level description of round robin arbitration will now be discussed with reference to a first and a second buffer for receiving packet data streams.
  • DXP 180 looks to the second buffer to determine if data is available to be parsed. If so, the data from the second buffer is parsed. If not, then DXP 180 looks back to the first buffer to determine if data is available to be parsed. DXP 180 continues this round robin arbitration until data is available to be parsed in either the first buffer or second buffer.
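The arbitration just described generalizes to any number of buffers. A minimal Python sketch (function name illustrative):

```python
def round_robin_next(buffers, last):
    """Pick the index of the next non-empty buffer, starting just after the
    buffer served last. Returns None if every buffer is empty."""
    n = len(buffers)
    for step in range(1, n + 1):
        idx = (last + step) % n
        if buffers[idx]:      # data available to be parsed in this buffer
            return idx
    return None               # keep polling until data arrives somewhere
```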
  • FIG. 5 contains a flow chart 600 for the processing of received packets in need of decryption and/or authentication through the semantic processor 400 of FIG. 3 .
  • the flowchart 600 is used for illustrating another method according to an embodiment of the invention.
  • the DXP 180 ceases parsing through the headers of the received packet because it is determined that the packet needs decryption and/or authentication. If DXP 180 begins to parse through the packet headers from the recirculation buffer 160 , preferably, the recirculation buffer 160 will only contain the aforementioned specialized header and the first part of the reassembled IP packet.
  • the DXP 180 signals to the SPU 410 to load the appropriate microinstructions from the SCT 150 and read the received packet from input buffer 140 or recirculation buffer 160 .
  • SPU 410 will read the packet fragments from DRAM 480 instead of the recirculation buffer 160 for data that has not already been placed in the recirculation buffer 160 .
  • the SPU 410 writes the received packet to cryptography block 440 , where the packet is authenticated, decrypted, or both.
  • decryption and authentication are performed in parallel within cryptography block 440 .
  • the cryptography block 440 enables the authentication, encryption, or decryption of a packet through the use of Triple Data Encryption Standard (T-DES), Advanced Encryption Standard (AES), Message Digest 5 (MD5), Secure Hash Algorithm 1 (SHA-1), Rivest Cipher 4 (RC4), and other algorithms.
  • the decrypted and/or authenticated packet is then written to SPU 410 and, according to a next block 640 , the SPU 410 writes the packet to the recirculation buffer 160 for further processing.
  • the cryptography block 440 contains a direct memory access engine that can read data from and write data to DRAM 480 .
  • SPU 410 can then read just the headers of the decrypted and/or authenticated packet from DRAM 480 and subsequently write them to the recirculation buffer 160 . Since the payload of the packet remains in DRAM 480 , semantic processor 400 saves processing time.
  • a specialized header can be written to the recirculation buffer to orient the parser and pass CCB information back to SPU 410 .
  • Multiple passes through the recirculation buffer 160 may be necessary when IP fragmentation and encryption/authentication are contained in a single packet received by the semantic processor 400 .
  • FIG. 6 shows yet another semantic processor embodiment.
  • Semantic processor 700 contains a semantic processing unit (SPU) cluster 410 containing a plurality of semantic processing units 410-1, 410-2, 410-n.
  • the SPU cluster 410 is coupled to the memory subsystem 240 , a SPU entry point (SEP) dispatcher 720 , the SCT 150 , port input buffer (PIB) 730 , port output buffer (POB) 750 , and a machine central processing unit (MCPU) 771 .
  • DXP 180 determines that a SPU task is to be launched at a specific point in parsing
  • DXP 180 signals SEP dispatcher 720 to load microinstructions from SCT 150 and allocate a SPU from the plurality of SPUs 410 - 1 to 410 -n within the SPU cluster 410 to perform the task.
  • the loaded microinstructions and task to be performed are then sent to the allocated SPU.
  • the allocated SPU executes the microinstructions and the data packet is processed accordingly.
  • the SPU can optionally load microinstructions from the SCT 150 directly when instructed by the SEP dispatcher 720 .
  • the PIB 730 contains at least one network interface input buffer, a recirculation buffer, and a Peripheral Component Interconnect (PCI-X) input buffer.
  • the POB 750 contains at least one network interface output buffer and a Peripheral Component Interconnect (PCI-X) output buffer.
  • the port block 740 contains one or more ports, each comprising a physical interface, e.g., an optical, electrical, or radio frequency driver/receiver pair for an Ethernet, Fibre Channel, 802.11x, Universal Serial Bus, Firewire, or other physical layer interface.
  • the number of ports within port block 740 corresponds to the number of network interface input buffers within the PIB 730 and the number of output buffers within the POB 750 .
  • the PCI-X interface 760 is coupled to a PCI-X input buffer within the PIB 730 , a PCI-X output buffer within the POB 750 , and an external PCI bus 780 .
  • the PCI bus 780 can connect to other PCI-capable components, such as disk drive, interfaces for additional network ports, etc.
  • the MCPU 771 is coupled with the SPU cluster 410 and memory subsystem 240 .
  • the MCPU 771 may perform any desired function for semantic processor 700 that can be reasonably accomplished with traditional software running on standard hardware. These functions are usually infrequent, non-time-critical functions that do not warrant inclusion in SCT 150 due to complexity.
  • the MCPU 771 also has the capability to communicate with the dispatcher in SPU cluster 410 in order to request that a SPU perform tasks on the MCPU's behalf.
  • the memory subsystem 240 further comprises a DRAM interface 790 that couples the cryptography block 440 , context control block cache 450 , general cache 460 , and streaming cache 470 to DRAM 480 and external DRAM 791 .
  • the AMCD 430 connects directly to an external TCAM 793 , which, in turn, is coupled to an external Static Random Access Memory (SRAM) 795 .
  • FIG. 7 contains a flow chart 800 for the processing of received Internet Small Computer Systems Interface (iSCSI) data through the semantic processor 700 of FIG. 6 .
  • the flowchart 800 is used for illustrating another method according to an embodiment of the invention.
  • an iSCSI connection having at least one Transmission Control Protocol (TCP) session is established between an initiator and the target semantic processor 700 for the transmission of iSCSI data.
  • the semantic processor 700 contains the appropriate grammar in the PT 170 and the PRT 190 and microcode in SCT 150 to establish a TCP session and then process the initial login and authentication of the iSCSI connection through the MCPU 771 .
  • one or more SPUs within the SPU cluster 410 organize and maintain state for the TCP session, including allocating a CCB in DRAM 480 for TCP reordering, window sizing constraints and a timer for ending the TCP session if no further TCP/iSCSI packets arrive from the initiator within the allotted time frame.
  • the TCP CCB contains a field for associating that CCB with an iSCSI CCB once an iSCSI connection is established by MCPU 771 .
  • semantic processor 700 waits for a TCP/iSCSI packet, corresponding to the TCP session established in block 810 , to arrive at the input buffer 140 of the PIB 730 . Since semantic processor 700 has a plurality of SPUs 410 - 1 to 410 -n available for processing input data, semantic processor 700 can receive and process multiple packets in parallel while waiting for the next TCP/iSCSI packet corresponding to the TCP session established in the block 810 .
  • a TCP/iSCSI packet is received at the input buffer 140 of the PIB 730 through the input port 120 of port block 740 , and the DXP 180 parses through the TCP header of the packet within the input buffer 140 .
  • the DXP 180 signals to the SEP dispatcher 720 to load the appropriate microinstructions from the SCT 150 , allocate a SPU from the SPU cluster 410 , and send to the allocated SPU microinstructions that, when executed, require the allocated SPU to read the received packet from the input buffer 140 and write the received packet to DRAM 480 through the streaming cache 470 .
  • the allocated SPU then uses the AMCD's 430 lookup function to locate the TCP CCB, stores the pointer to the location of the received packet in DRAM 480 to the TCP CCB, and restarts the timer in the TCP CCB.
  • the allocated SPU is then released and can be allocated for other processing as the DXP 180 determines.
  • the received TCP/iSCSI packet is reordered, if necessary, to ensure correct sequencing of payload data.
  • a TCP packet is deemed to be in proper order if all of the preceding packets have arrived.
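The in-order test and the release of held segments can be sketched as follows; sequence numbers are byte offsets, and the function name is illustrative:

```python
def tcp_release(initial_seq, arrivals):
    """Release TCP payloads in sequence order; hold segments that arrive early.

    `arrivals` is a list of (seq, payload) in arrival order; `initial_seq`
    is the sequence number of the first expected byte.
    """
    held, out = {}, []
    expected = initial_seq
    for seq, payload in arrivals:
        held[seq] = payload
        # A segment is "in proper order" once all preceding bytes are here;
        # each release may unblock segments received earlier.
        while expected in held:
            data = held.pop(expected)
            out.append(data)
            expected += len(data)
    return out
```

A real implementation would track this state in the TCP CCB rather than local variables, but the release condition is the same.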
  • the responsible SPU When the received packet is determined to be in the proper order, the responsible SPU signals the SEP dispatcher 720 to load microinstructions from the SCT 150 for iSCSI recirculation.
  • the allocated SPU combines the iSCSI header, the TCP connection ID from the TCP header and an iSCSI non-terminal to create a specialized iSCSI header.
  • the allocated SPU then writes the specialized iSCSI header to the recirculation buffer 160 within the PIB 730 .
  • the specialized iSCSI header can be sent to the recirculation buffer 160 with its corresponding iSCSI payload.
  • The specialized iSCSI header is parsed by the DXP 180, and semantic processor 700 processes the iSCSI payload.
  • According to a next decision block 870, it is determined whether there is another iSCSI header in the received TCP/iSCSI packet. If so, execution returns to block 850, where the second iSCSI header within the received TCP/iSCSI packet is used to process the second iSCSI payload.
  • If there is no further iSCSI header, block 870 returns execution to block 820, where semantic processor 700 waits for another TCP/iSCSI packet corresponding to the TCP session established in block 810.
  • the allocated SPU is then released and can be allocated for other processing as the DXP 180 determines.
  • Multiple segments of a packet may be passed through the recirculation buffer 160 at different times when any combination of encryption, authentication, IP fragmentation and iSCSI data processing is contained in a single packet received by the semantic processor 700.
  • FIG. 8 illustrates one possible implementation for port input buffer (PIB) 730 useful with embodiments of the invention.
  • The PIB 730 contains at least one network interface input buffer 140 (140_0 and 140_1 are shown), a recirculation buffer 160, and a Peripheral Component Interconnect (PCI-X) input buffer 140_2.
  • Input buffers 140_0 and 140_1 and PCI-X input buffer 140_2 are functionally the same as input buffer 140, but they receive input data from different inputs of port block 740 and from PCI-X interface 760, respectively.
  • Recirculation buffer 160 comprises a buffer 712 that receives recirculation data from SPU cluster 410 (FIG. 6), a control block 714 for controlling the recirculation data in buffer 712, a FIFO block 716 to allow DXP 180 (FIG. 6) FIFO access to the recirculation data in buffer 712, and a random access (RA) block 718 to allow a SPU within SPU cluster 410 random access to the recirculation data in buffer 712.
  • Recirculation buffer 160 transmits a Port ID to DXP 180, alerting DXP 180 that new data has arrived.
  • the Port ID that is transmitted is the first symbol within buffer 712 .
  • When DXP 180 decides to parse through the recirculation data, it sends a Control_DXP signal to recirculation buffer 160, asking for a certain amount of data from buffer 712 or to increment buffer 712's data pointer.
  • Upon receipt of a Control_DXP signal, control block 714 transmits a Data_DXP signal, containing data from buffer 712, to DXP 180 through FIFO block 716.
  • the control block 714 and FIFO block 716 add control characters into the recirculation data that is sent to DXP 180 using the Data_DXP signal.
  • the control characters are 1-bit status flags that are added at the beginning of each byte of data transferred and denote whether the byte of data is a terminal or non-terminal symbol.
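The per-byte flag scheme described above can be modeled in software. The sketch below is purely illustrative (the actual hardware carries the flag in-band on the data path); the names `tag_bytes`, `split_stream`, `TERMINAL`, and `NONTERMINAL` are invented for this example:

```python
# Illustrative model of the recirculation data path's per-byte status flags:
# each byte forwarded to the parser carries a 1-bit flag marking it as a
# terminal (ordinary data) symbol or a non-terminal (grammar) symbol.

TERMINAL, NONTERMINAL = 0, 1

def tag_bytes(data: bytes, nonterminal_positions: set) -> list:
    """Pair each byte with its status flag, as control block 714 and
    FIFO block 716 do when sending recirculation data to the DXP."""
    return [((NONTERMINAL if i in nonterminal_positions else TERMINAL), b)
            for i, b in enumerate(data)]

def split_stream(tagged: list):
    """Separate grammar symbols from payload bytes on the receiving side."""
    symbols = bytes(b for flag, b in tagged if flag == NONTERMINAL)
    payload = bytes(b for flag, b in tagged if flag == TERMINAL)
    return symbols, payload
```

A leading non-terminal followed by plain data bytes, as in a specialized header, would round-trip through `tag_bytes` and `split_stream` unchanged.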
  • When a SPU 410-1 within SPU cluster 410 receives a SPU entry point (SEP) from DXP 180 that requires it to access data within the recirculation stream, SPU 410-1 sends a Control_SPU signal to recirculation buffer 160, requesting the data at a certain location in buffer 712.
  • Upon receipt of a Control_SPU signal, control block 714 transmits a Data_SPU signal, containing data from buffer 712, to SPU 410-1 through RA block 718.

Abstract

A system and method comprising a buffer configured to receive a data stream, a parser configured to parse the data stream from the buffer, and one or more processing units configured to co-process the data stream from the buffer responsive to the parsing by the parser, and then provide at least a portion of the processed data stream back to the buffer for additional parsing by the parser.

Description

    REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of copending, commonly-assigned U.S. patent application Ser. No. 11/181,611, which claims priority from U.S. Provisional Application No. 60/591,663, filed Jul. 27, 2004, and of copending, commonly-assigned U.S. patent application Ser. No. 11/181,527, which is a continuation-in-part of copending, commonly-assigned U.S. patent application Ser. No. 10/351,030, filed Jan. 24, 2003, and claims priority from U.S. Provisional Application No. 60/591,978, filed Jul. 28, 2004. All of the above applications are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • In the data communications field, a packet is a finite-length (generally several tens to several thousands of octets) digital transmission unit comprising one or more header fields and a data field. The data field may contain virtually any type of digital data. The header fields convey information (in different formats depending on the type of header and options) related to delivery and interpretation of the packet contents. This information may, e.g., identify the packet's source or destination, identify the protocol to be used to interpret the packet, identify the packet's place in a sequence of packets, provide an error correction checksum, or aid packet flow control. The finite-length of a packet can vary based on the type of network that the packet is to be transmitted through and the type of application used to present the data.
  • Typically, packet headers and their functions are arranged in an orderly fashion according to the open-systems interconnection (OSI) reference model. This model partitions packet communications functions into layers, each layer performing specific functions in a manner that can be largely independent of the functions of the other layers. As such, each layer can prepend its own header to a packet and regard all higher-layer headers as merely part of the data to be transmitted. Layer 1, the physical layer, is concerned with transmission of a bit stream over a physical link. Layer 2, the data link layer, provides mechanisms for the transfer of frames of data across a single physical link, typically using a link-layer header on each frame. Layer 3, the network layer, provides network-wide packet delivery and switching functionality; the well-known Internet Protocol (IP) is a layer 3 protocol. Layer 4, the transport layer, can provide mechanisms for end-to-end delivery of packets, such as end-to-end packet sequencing, flow control, and error recovery; Transmission Control Protocol (TCP), a reliable layer 4 protocol that ensures in-order delivery of an octet stream, and User Datagram Protocol (UDP), a simpler layer 4 protocol with no guaranteed delivery, are well-known examples of layer 4 implementations. Layer 5 (the session layer), Layer 6 (the presentation layer), and Layer 7 (the application layer) perform higher-level functions such as communication session management, data formatting, data encryption, and data compression.
  • Not all packets follow the basic pattern of cascaded headers with a simple payload. For instance, packets can undergo IP fragmentation when transferred through a network and can arrive at a receiver out-of-order. IP packet fragmentation occurs in layer 3, e.g., when the packets are routed to a Wide Area Network (WAN) from a Local Area Network (LAN). In the typical case, where a LAN packet is 9000 bytes in length and a WAN packet is 1500 bytes in length, the gateway transferring the packet from a LAN to a WAN fragments a long packet, including its higher level headers, into approximately 6 shorter IP packets. The IP headers of the fragmented packet will contain information alerting a receiver that the higher-layer packet is fragmented, and provide a fragment offset to enable the receiver to properly sequence the fragmented packets.
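The fragmentation arithmetic above can be sketched in a few lines. The helper below is an illustrative model only: it assumes a fixed 20-byte IP header with no options, and the function name `fragment` is invented for this example.

```python
# Illustrative sketch of layer-3 fragmentation: a payload larger than the
# outgoing link's MTU is split into fragments, each tagged with a fragment
# offset (in 8-byte units, per the IP header format) and a more-fragments
# flag so the receiver can re-sequence them. Header handling is simplified.

IP_HEADER_LEN = 20

def fragment(payload_len: int, mtu: int):
    """Return (offset_in_8_byte_units, fragment_payload_len, more_fragments)
    tuples for an IP payload of payload_len bytes over a link with the
    given MTU."""
    max_frag = (mtu - IP_HEADER_LEN) // 8 * 8   # per-fragment payload, 8-byte aligned
    frags, offset = [], 0
    while offset < payload_len:
        length = min(max_frag, payload_len - offset)
        more = offset + length < payload_len
        frags.append((offset // 8, length, more))
        offset += length
    return frags
```

With a 1500-byte MTU each fragment carries 1480 payload bytes, so the 9000-byte LAN packet in the example (8980 payload bytes after its own header) yields six full fragments plus a short final one, consistent with the "approximately 6" figure above.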
  • Some protocols, such as the Internet Small Computer Systems Interface (iSCSI) protocol, allow aggregation of multiple headers/data payloads in a single packet. An iSCSI packet contains a command descriptor block (CDB), which may be comprised of multiple iSCSI headers and iSCSI payload combinations. Each of the iSCSI headers contains information about the length of the iSCSI header and its corresponding iSCSI payload so the receiver can properly sequence the digital data.
  • Since packets are used to transmit secure data over a network, many packets are encrypted before they are sent, which causes some headers to be encrypted as well. The encryption of packets typically involves complex algorithms that code the entire packet except for layers needed to switch/route the packet. Therefore, many receivers need to enable the decryption and/or authentication of packets before they can determine the content of upper-layer headers.
  • DESCRIPTION OF THE DRAWINGS
  • The invention may be best understood by reading the disclosure with reference to the drawings, wherein:
  • FIG. 1 illustrates, in block form, a semantic processor useful with embodiments of the present invention;
  • FIG. 2 contains a flow chart for the processing of received packets in the semantic processor with the recirculation buffer in FIG. 1;
  • FIG. 3 illustrates another more detailed semantic processor implementation useful with embodiments of the present invention;
  • FIG. 4 contains a flow chart for the processing of received IP-fragmented packets in the semantic processor in FIG. 3;
  • FIG. 5 contains a flow chart for the processing of received encrypted and/or unauthenticated packets in the semantic processor in FIG. 3;
  • FIG. 6 illustrates yet another semantic processor implementation useful with embodiments of the present invention;
  • FIG. 7 contains a flow chart for the processing of received iSCSI packets through a TCP connection in the semantic processor in FIG. 6; and
  • FIG. 8 illustrates one possible implementation for a port input buffer (PIB) useful with embodiments of the invention.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • The present invention relates to digital semantic processors for data stream processing with a direct execution parser. Many packets received through a network require decryption, authentication, sequencing or other processing, or any combination thereof, which complicates processing through the direct execution parser. The addition of a recirculation buffer to the semantic processor enables parsing to be resumed for a packet that was only partially parsed on a previous pass through the direct execution parser, which allows for fast and efficient packet processing when single-pass parsing is difficult or impossible. The invention is now described in more detail.
  • Reference will now be made in detail to preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The present invention is not limited to the illustrated embodiments, however, and the illustrated embodiments are introduced to provide easy and complete understanding of the spirit and scope of the present invention.
  • FIG. 1 shows a block diagram of a semantic processor 100 according to an embodiment of the invention. The semantic processor 100 contains an input buffer 140 for buffering a packet data stream (e.g., the input stream) received through the input port 120, a direct execution parser (DXP) 180 that controls the processing of packet data received at the input buffer 140, a recirculation buffer 160, a semantic processing unit 200 for processing segments of the packets or for performing other operations, and a memory subsystem 240 for storing and/or augmenting segments of the packets. The input buffer 140 and recirculation buffer 160 are preferably first-in-first-out (FIFO) buffers.
  • The DXP 180 controls the processing of packets or frames within the input buffer 140 (e.g., the input stream) and the recirculation buffer 160 (e.g., the recirculation stream). Since the DXP 180 parses the input stream from input buffer 140 and the recirculation stream from the recirculation buffer 160 in a similar fashion, only the parsing of the input stream will be described below.
  • The DXP 180 maintains an internal parser stack (not shown) of terminal and non-terminal symbols, based on parsing of the current frame up to the current symbol. For instance, each symbol on the internal parser stack is capable of indicating to the DXP 180 a parsing state for the current input frame or packet. When the symbol (or symbols) at the top of the parser stack is a terminal symbol, DXP 180 compares data at the head of the input stream to the terminal symbol and expects a match in order to continue. When the symbol at the top of the parser stack is a non-terminal symbol, DXP 180 uses the non-terminal symbol and current input data to expand the grammar production on the stack. As parsing continues, DXP 180 instructs SPU 200 to process segments of the input stream or perform other operations. The DXP 180 may parse the data in the input stream prior to receiving all of the data to be processed by the semantic processor 100. For instance, when the data is packetized, the semantic processor 100 may begin to parse through the headers of the packet before the entire packet is received at input port 120.
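The stack-driven parse loop described above can be modeled in miniature. The sketch below is a generic pushdown-style direct-execution parse loop with terminal matching and production expansion; the grammar, the `parse` function, and the convention that lowercase strings are terminals are all invented for illustration and do not reflect the actual parser table or production rule table formats:

```python
# Minimal model of a direct-execution parser: a stack of terminal and
# non-terminal symbols drives consumption of the input stream. A terminal
# at the top of the stack must match the input head; a non-terminal is
# expanded via a production table keyed by (non-terminal, next input symbol).

def parse(input_stream, productions, start):
    stack = [start]
    pos = 0
    while stack:
        top = stack.pop()
        if top.islower():                      # terminal: must match input head
            if pos < len(input_stream) and input_stream[pos] == top:
                pos += 1
            else:
                return False                   # parsing ceases
        else:                                  # non-terminal: expand production
            key = (top, input_stream[pos] if pos < len(input_stream) else None)
            if key not in productions:
                return False
            stack.extend(reversed(productions[key]))  # push RHS, leftmost on top
    return pos == len(input_stream)

# Toy grammar for illustration: S -> 'a' S | 'b'
PRODUCTIONS = {("S", "a"): ["a", "S"], ("S", "b"): ["b"]}
```

In the real device the production rule codes in PT 170 select entries in PRT 190, and certain productions additionally trigger SPU code segments from SCT 150; the toy table above collapses all of that into one dictionary.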
  • Semantic processor 100 uses at least three tables. Code segments for SPU 200 are stored in semantic code table (SCT) 150. Complex grammatical production rules are stored in a production rule table (PRT) 190. Production rule codes for retrieving those production rules are stored in a parser table (PT) 170. The production rule codes in parser table 170 allow DXP 180 to detect whether, for a given production rule, a code segment from SCT 150 should be loaded and executed by SPU 200.
  • Some embodiments of the invention contain many more elements than those shown in FIG. 1, but these essential elements appear in every system or software embodiment. Thus, a description of the packet flow within the semantic processor 100 shown in FIG. 1 will be given before more complex embodiments are addressed.
  • FIG. 2 contains a flow chart 300 for the processing of received packets through the semantic processor 100 of FIG. 1. The flowchart 300 is used for illustrating a method of the invention.
  • According to a block 310, a packet is received at the input buffer 140 through the input port 120. According to a next block 320, the DXP 180 begins to parse through the header of the packet within the input buffer 140. According to a decision block 330, it is determined whether the DXP 180 was able to completely parse through the header. In the case where the packet needs no additional manipulation or additional packets to enable the processing of the packet payload, the DXP 180 will completely parse through the header. In the case where the packet needs additional manipulation or additional packets to enable the processing of the packet payload, the DXP 180 will cease to parse the header.
  • If the DXP 180 was able to completely parse through the header, then according to a next block 370, the DXP 180 calls a routine within the SPU 200 to process the packet payload. The semantic processor 100 then waits for a next packet to be received at the input buffer 140 through the input port 120.
  • If the DXP 180 had to cease parsing the header, then according to a next block 340, the DXP 180 calls a routine within the SPU 200 to manipulate the packet or wait for additional packets. Upon completion of the manipulation or the arrival of additional packets, the SPU 200 creates an adjusted packet.
  • According to a next block 350, the SPU 200 writes the adjusted packet (or a portion thereof) to the recirculation buffer 160. This can be accomplished by either enabling the recirculation buffer 160 with direct memory access to the memory subsystem 240 or by having the SPU 200 read the adjusted packet from the memory subsystem 240 and then write the adjusted packet to the recirculation buffer 160. Optionally, to save processing time within the SPU 200, instead of the entire adjusted packet, a specialized header can be written to the recirculation buffer 160. This specialized header directs the SPU 200 to process the adjusted packet without having to transfer the entire packet out of memory subsystem 240.
  • According to a next block 360, the DXP 180 begins to parse through the header of the data within the recirculation buffer 160. Execution is then returned to block 330, where it is determined whether the DXP 180 was able to completely parse through the header. If the DXP 180 was able to completely parse through the header, then according to a next block 370, the DXP 180 calls a routine within the SPU 200 to process the packet payload and the semantic processor 100 waits for a next packet to be received at the input buffer 140 through the input port 120.
  • If the DXP 180 had to cease parsing the header, execution returns to block 340 where the DXP 180 calls a routine within the SPU 200 to manipulate the packet or wait for additional packets, thus creating an adjusted packet. The SPU 200 then writes the adjusted packet to the recirculation buffer 160, and the DXP 180 begins to parse through the header of the packet within the recirculation buffer 160.
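The blocks of FIG. 2 form a loop that can be summarized schematically. The Python below models only the control flow; `parse_header`, `adjust`, and `process_payload` are invented stand-ins for the DXP parsing, the SPU manipulation, and the payload processing, and the `max_passes` guard is an assumption added to keep the sketch well-behaved:

```python
def process_packet(packet, parse_header, adjust, process_payload, max_passes=8):
    """Schematic of FIG. 2: parse the header (block 330); if parsing cannot
    complete, the SPU creates an adjusted packet (blocks 340-350), which is
    recirculated for another parsing pass (block 360) until the header
    parses completely and the payload is processed (block 370)."""
    data = packet
    for _ in range(max_passes):
        if parse_header(data):            # block 330: fully parsed?
            return process_payload(data)  # block 370
        data = adjust(data)               # blocks 340-350: manipulate, recirculate
    raise RuntimeError("packet could not be fully parsed")
```

Each trip around the loop corresponds to one pass through the recirculation buffer 160, so a packet that is, say, both fragmented and encrypted simply takes more than one pass.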
  • FIG. 3 shows another semantic processor embodiment 400. Semantic processor 400 includes memory subsystem 240, which comprises an array machine-context data memory (AMCD) 430 for accessing data in dynamic random access memory (DRAM) 480 through a hashing function or content-addressable memory (CAM) lookup, a cryptography block 440 for encryption or decryption, and/or authentication of data, a context control block (CCB) cache 450 for caching context control blocks to and from DRAM 480, a general cache 460 for caching data used in basic operations, and a streaming cache 470 for caching data streams as they are being written to and read from DRAM 480. The context control block cache 450 is preferably a software-controlled cache, i.e., the SPU 410 determines when a cache line is used and freed.
  • The SPU 410 is coupled with AMCD 430, cryptography block 440, CCB cache 450, general cache 460, and streaming cache 470. When signaled by the DXP 180 to process a segment of data in memory subsystem 240 or received at input buffer 140 (FIG. 1), the SPU 410 loads microinstructions from semantic code table (SCT) 150. The loaded microinstructions are then executed in the SPU 410 and the segment of the packet is processed accordingly.
  • FIG. 4 contains a flow chart 500 for the processing of received Internet Protocol (IP)-fragmented packets through the semantic processor 400 of FIG. 3. The flowchart 500 is used for illustrating one method according to an embodiment of the invention.
  • Once a packet is received at the input buffer 140 through the input port 120 and the DXP 180 begins to parse through the headers of the packet within the input buffer 140, according to a block 510, the DXP 180 ceases parsing through the headers of the received packet because the packet is determined to be an IP-fragmented packet. Preferably, the DXP 180 completely parses through the IP header, but ceases to parse through any headers belonging to subsequent layers, such as TCP, UDP, iSCSI, etc.
  • According to a next block 520, the DXP 180 signals to the SPU 410 to load the appropriate microinstructions from the SCT 150 and read the received packet from the input buffer 140. According to a next block 530, the SPU 410 writes the received packet to DRAM 480 through the streaming cache 470. Although blocks 520 and 530 are shown as two separate steps, optionally, they can be performed as one step—with the SPU 410 reading and writing the packet concurrently. This concurrent operation of reading and writing by the SPU 410 is known as SPU pipelining, where the SPU 410 acts as a conduit or pipeline for streaming data to be transferred between two blocks within the semantic processor 400.
  • According to a next decision block 540, the SPU 410 determines if a Context Control Block (CCB) has been allocated for the collection and sequencing of the correct IP packet fragments. Preferably, the CCB for collecting and sequencing the fragments corresponding to an IP-fragmented packet is stored in DRAM 480. The CCB contains pointers to the IP fragments in DRAM 480, a bit mask for the IP-fragmented packets that have not arrived, and a timer value to force the semantic processor 400 to cease waiting for additional IP-fragmented packets after an allotted period of time and to release the data stored in the CCB within DRAM 480.
  • The SPU 410 preferably determines if a CCB has been allocated by accessing the AMCD's 430 content-addressable memory (CAM) lookup function using the IP source address of the received IP-fragmented packet combined with the identification and protocol from the header of the received IP packet fragment as a key. Optionally, the IP fragment keys are stored in a separate CCB table within DRAM 480 and are accessed with the CAM by using the IP source address of the received IP-fragmented packet combined with the identification and protocol from the header of the received IP packet fragment. This optional addressing of the IP fragment keys avoids key overlap and sizing problems.
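The lookup key described above combines the IP source address with the identification and protocol fields of the fragment. One way to sketch that in software is to pack the three fields into a single integer for an exact-match (CAM-style) lookup; the packed layout below is an assumption for illustration, not the AMCD's actual key format:

```python
# Illustrative construction of the IP-fragment CCB lookup key: the 32-bit
# source address, 16-bit identification field, and 8-bit protocol number
# together identify which fragmented packet a fragment belongs to.

def fragment_key(src_ip: int, ident: int, protocol: int) -> int:
    """Pack (source address, identification, protocol) into one integer
    suitable for an exact-match lookup."""
    return (src_ip << 24) | (ident << 8) | protocol

def unpack_key(key: int):
    """Recover the three fields from a packed key."""
    return key >> 24, (key >> 8) & 0xFFFF, key & 0xFF
```

Because the fields occupy disjoint bit ranges, two fragments from different sources, or with different identification or protocol values, can never collide on the same key.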
  • If the SPU 410 determines that a CCB has not been allocated for the collection and sequencing of fragments for a particular IP-fragmented packet, execution then proceeds to a block 550 where the SPU 410 allocates a CCB. The SPU 410 preferably enters a key corresponding to the allocated CCB, the key comprising the IP source address of the received IP fragment and the identification and protocol from the header of the received IP-fragmented packet, into an IP fragment CCB table within the AMCD 430, and starts the timer located in the CCB. When the first fragment for a given fragmented packet is received, the IP header is also saved to the CCB for later recirculation. For further fragments, the IP header need not be saved.
  • Once a CCB has been allocated for the collection and sequencing of IP-fragmented packet, the SPU 410 stores a pointer to the IP-fragmented packet (minus its IP header) in DRAM 480 within the CCB, according to a next block 560. The pointers for the fragments can be arranged in the CCB as, e.g., a linked list. Preferably, the SPU 410 also updates the bit mask in the newly allocated CCB by marking the portion of the mask corresponding to the received fragment as received.
  • According to a next decision block 570, the SPU 410 determines if all of the IP fragments from the packet have been received. Preferably, this determination is accomplished by using the bit mask in the CCB. A person of ordinary skill in the art can appreciate that there are multiple techniques readily available to implement the bit mask, or an equivalent tracking mechanism, for use with the invention.
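As the text notes, several techniques can realize the bit mask. One illustrative software sketch marks one bit per 8-byte offset unit as fragments arrive and learns the total length from the fragment whose more-fragments flag is clear; the class name and interface below are invented for this example:

```python
class FragmentMask:
    """Illustrative reassembly tracker in the spirit of the CCB bit mask:
    one bit per 8-byte offset unit of the original payload. The total
    length is known only once the fragment with more_fragments == False
    has arrived."""

    def __init__(self):
        self.mask = 0
        self.total_units = None

    def mark(self, offset_units: int, length: int, more_fragments: bool):
        """Record arrival of a fragment at the given 8-byte offset."""
        units = (length + 7) // 8
        self.mask |= ((1 << units) - 1) << offset_units
        if not more_fragments:
            self.total_units = offset_units + units

    def complete(self) -> bool:
        """True once every offset unit up to the known end has arrived."""
        if self.total_units is None:
            return False
        return self.mask == (1 << self.total_units) - 1
```

Because the mask is keyed by offset rather than arrival order, fragments may arrive out of order without affecting the completeness test.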
  • If all of the fragments have not been received for the IP-fragmented packet, then the semantic processor 400 defers further processing on that fragmented packet until another fragment is received.
  • If all of the IP fragments have been received, according to a next block 580, the SPU 410 resets the timer, reads the IP fragments from DRAM 480 in the correct order, and writes them to the recirculation buffer 160 for additional parsing and processing. Preferably, the SPU 410 writes only a specialized header and the first part of the reassembled IP packet (with the fragmentation bit unset) to the recirculation buffer 160. The specialized header enables the DXP 180 to direct the processing of the reassembled IP-fragmented packet stored in DRAM 480 without having to transfer all of the IP-fragmented packets to the recirculation buffer 160. The specialized header can consist of a designated non-terminal symbol that loads parser grammar for IP and a pointer to the CCB. The parser can then parse the IP header normally and proceed to parse higher-layer (e.g., TCP) headers.
  • In an embodiment of the invention, DXP 180 decides to parse the data received at either the recirculation buffer 160 or the input buffer 140 through round robin arbitration. A high level description of round robin arbitration will now be discussed with reference to a first and a second buffer for receiving packet data streams. After completing the parsing of a packet within the first buffer, DXP 180 looks to the second buffer to determine if data is available to be parsed. If so, the data from the second buffer is parsed. If not, then DXP 180 looks back to the first buffer to determine if data is available to be parsed. DXP 180 continues this round robin arbitration until data is available to be parsed in either the first buffer or second buffer.
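The arbitration just described can be modeled with two FIFO-like lists. The function below is an illustrative sketch of the alternation only (real arbitration is continuous and packet data is parsed, not merely dequeued); its name and interface are invented:

```python
def round_robin_parse(buffers):
    """Model of the round robin arbitration described above: after finishing
    a packet from one buffer, the parser looks to the other buffer; if it is
    empty, it looks back, alternating until data is available somewhere.
    `buffers` is a list of FIFO-like lists of packets; returns the packets
    in the order they would be serviced."""
    order = []
    current = 0
    while any(buffers):
        if buffers[current]:
            order.append(buffers[current].pop(0))
        current = (current + 1) % len(buffers)  # look to the other buffer next
    return order
```

With one packet waiting in the recirculation buffer and several in the input buffer, the recirculated packet is serviced after the first input packet rather than waiting for the input buffer to drain.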
  • FIG. 5 contains a flow chart 600 for the processing of received packets in need of decryption and/or authentication through the semantic processor 400 of FIG. 3. The flowchart 600 is used for illustrating another method according to an embodiment of the invention.
  • Once a packet is received at the input buffer 140 or the recirculation buffer 160 and the DXP 180 begins to parse through the headers of the received packet, according to a block 610, the DXP 180 ceases parsing through the headers of the received packet because it is determined that the packet needs decryption and/or authentication. If DXP 180 begins to parse through the packet headers from the recirculation buffer 160, preferably, the recirculation buffer 160 will only contain the aforementioned specialized header and the first part of the reassembled IP packet.
  • According to a next block 620, the DXP 180 signals to the SPU 410 to load the appropriate microinstructions from the SCT 150 and read the received packet from input buffer 140 or recirculation buffer 160. Preferably, SPU 410 will read the packet fragments from DRAM 480 instead of the recirculation buffer 160 for data that has not already been placed in the recirculation buffer 160.
  • According to a next block 630, the SPU 410 writes the received packet to cryptography block 440, where the packet is authenticated, decrypted, or both. In a preferred embodiment, decryption and authentication are performed in parallel within cryptography block 440. The cryptography block 440 enables the authentication, encryption, or decryption of a packet through the use of Triple Data Encryption Standard (T-DES), Advanced Encryption Standard (AES), Message Digest 5 (MD-5), Secure Hash Algorithm 1 (SHA-1), Rivest Cipher 4 (RC-4) algorithms, etc. Although blocks 620 and 630 are shown as two separate steps, optionally, they can be performed as one step with the SPU 410 reading and writing the packet concurrently.
  • The decrypted and/or authenticated packet is then written to SPU 410 and, according to a next block 640, the SPU 410 writes the packet to the recirculation buffer 160 for further processing. In a preferred embodiment, the cryptography block 440 contains a direct memory access engine that can read data from and write data to DRAM 480. By writing the decrypted and/or authenticated packet back to DRAM 480, SPU 410 can then read just the headers of the decrypted and/or authenticated packet from DRAM 480 and subsequently write them to the recirculation buffer 160. Since the payload of the packet remains in DRAM 480, semantic processor 400 saves processing time. As with IP fragmentation, a specialized header can be written to the recirculation buffer to orient the parser and pass CCB information back to SPU 410.
  • Multiple passes through the recirculation buffer 160 may be necessary when IP fragmentation and encryption/authentication are contained in a single packet received by the semantic processor 400.
  • FIG. 6 shows yet another semantic processor embodiment. Semantic processor 700 contains a semantic processing unit (SPU) cluster 410 containing a plurality of semantic processing units 410-1, 410-2, 410-n. Preferably, each of the SPUs 410-1 to 410-n is identical and has the same functionality. The SPU cluster 410 is coupled to the memory subsystem 240, a SPU entry point (SEP) dispatcher 720, the SCT 150, port input buffer (PIB) 730, port output buffer (POB) 750, and a machine central processing unit (MCPU) 771.
  • When DXP 180 determines that a SPU task is to be launched at a specific point in parsing, DXP 180 signals SEP dispatcher 720 to load microinstructions from SCT 150 and allocate a SPU from the plurality of SPUs 410-1 to 410-n within the SPU cluster 410 to perform the task. The loaded microinstructions and task to be performed are then sent to the allocated SPU. The allocated SPU then executes the microinstructions and the data packet is processed accordingly. The SPU can optionally load microinstructions from the SCT 150 directly when instructed by the SEP dispatcher 720.
  • The PIB 730 contains at least one network interface input buffer, a recirculation buffer, and a Peripheral Component Interconnect (PCI-X) input buffer. The POB 750 contains at least one network interface output buffer and a Peripheral Component Interconnect (PCI-X) output buffer. The port block 740 contains one or more ports, each comprising a physical interface, e.g., an optical, electrical, or radio frequency driver/receiver pair for an Ethernet, Fibre Channel, 802.11x, Universal Serial Bus, Firewire, or other physical layer interface. Preferably, the number of ports within port block 740 corresponds to the number of network interface input buffers within the PIB 730 and the number of output buffers within the POB 750.
  • The PCI-X interface 760 is coupled to a PCI-X input buffer within the PIB 730, a PCI-X output buffer within the POB 750, and an external PCI bus 780. The PCI bus 780 can connect to other PCI-capable components, such as disk drives, interfaces for additional network ports, etc.
  • The MCPU 771 is coupled with the SPU cluster 410 and memory subsystem 240. The MCPU 771 may perform any desired function for semantic processor 700 that can be reasonably accomplished with traditional software running on standard hardware. These functions are usually infrequent, non-time-critical functions that do not warrant inclusion in SCT 150 due to complexity. Preferably, the MCPU 771 also has the capability to communicate with the dispatcher in SPU cluster 410 in order to request that a SPU perform tasks on the MCPU's behalf.
  • In an embodiment of the invention, the memory subsystem 240 further comprises a DRAM interface 790 that couples the cryptography block 440, context control block cache 450, general cache 460, and streaming cache 470 to DRAM 480 and external DRAM 791. In this embodiment, the AMCD 430 connects directly to an external TCAM 793, which, in turn, is coupled to an external Static Random Access Memory (SRAM) 795.
  • FIG. 7 contains a flow chart 800 for the processing of received Internet Small Computer Systems Interface (iSCSI) data through the semantic processor 700 of FIG. 6. The flowchart 800 is used for illustrating another method according to an embodiment of the invention.
  • According to a block 810, an iSCSI connection having at least one Transmission Control Protocol (TCP) session is established between an initiator and the target semantic processor 700 for the transmission of iSCSI data. The semantic processor 700 contains the appropriate grammar in the PT 170 and the PRT 190 and microcode in SCT 150 to establish a TCP session and then process the initial login and authentication of the iSCSI connection through the MCPU 771. In one embodiment, one or more SPUs within the SPU cluster 410 organize and maintain state for the TCP session, including allocating a CCB in DRAM 480 for TCP reordering, window sizing constraints and a timer for ending the TCP session if no further TCP/iSCSI packets arrive from the initiator within the allotted time frame. The TCP CCB contains a field for associating that CCB with an iSCSI CCB once an iSCSI connection is established by MCPU 771.
  • After a TCP session is established with the initiator, according to a next block 820, semantic processor 700 waits for a TCP/iSCSI packet, corresponding to the TCP session established in block 810, to arrive at the input buffer 140 of the PIB 730. Since semantic processor 700 has a plurality of SPUs 410-1 to 410-n available for processing input data, semantic processor 700 can receive and process multiple packets in parallel while waiting for the next TCP/iSCSI packet corresponding to the TCP session established in the block 810.
  • A TCP/iSCSI packet is received at the input buffer 140 of the PIB 730 through the input port 120 of port block 740, and the DXP 180 parses through the TCP header of the packet within the input buffer 140. According to a next block 830, the DXP 180 signals to the SEP dispatcher 720 to load the appropriate microinstructions from the SCT 150, allocate a SPU from the SPU cluster 410, and send to the allocated SPU microinstructions that, when executed, require the allocated SPU to read the received packet from the input buffer 140 and write the received packet to DRAM 480 through the streaming cache 470. The allocated SPU then uses the AMCD's 430 lookup function to locate the TCP CCB, stores the pointer to the location of the received packet in DRAM 480 to the TCP CCB, and restarts the timer in the TCP CCB. The allocated SPU is then released and can be allocated for other processing as the DXP 180 determines.
  • According to a next block 840, the received TCP/iSCSI packet is reordered, if necessary, to ensure correct sequencing of payload data. As is well known in the art, a TCP packet is deemed to be in proper order if all of the preceding packets have arrived.
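The in-order test of block 840 can be sketched as follows. This is a simplified model, not the SPU microcode: sequence numbers here count whole packets rather than bytes as real TCP does, and the function and state names are invented for illustration:

```python
def deliver_in_order(state, seq, packet):
    """Buffer or release packets so the payload is consumed in sequence.

    A packet is in proper order when every preceding packet has arrived,
    i.e. its sequence number matches the next expected one. `state` is a
    dict with 'next' (next expected sequence number) and 'pending'
    (sequence number -> buffered packet). Returns the packets that are
    now deliverable in order (possibly none).
    """
    released = []
    state['pending'][seq] = packet
    # Drain the reorder buffer for as long as the next packet is present.
    while state['next'] in state['pending']:
        released.append(state['pending'].pop(state['next']))
        state['next'] += 1
    return released

state = {'next': 0, 'pending': {}}
deliver_in_order(state, 1, 'B')   # out of order: nothing released yet
deliver_in_order(state, 0, 'A')   # releases 'A', then the buffered 'B'
```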
  • When the received packet is determined to be in the proper order, the responsible SPU signals the SEP dispatcher 720 to load microinstructions from the SCT 150 for iSCSI recirculation. According to a next block 850, the allocated SPU combines the iSCSI header, the TCP connection ID from the TCP header and an iSCSI non-terminal to create a specialized iSCSI header. The allocated SPU then writes the specialized iSCSI header to the recirculation buffer 160 within the PIB 730. Optionally, the specialized iSCSI header can be sent to the recirculation buffer 160 with its corresponding iSCSI payload.
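The specialized iSCSI header of block 850 can be sketched as a simple concatenation. The symbol value, field widths, and byte order below are assumptions; the patent says only that the iSCSI header, the TCP connection ID from the TCP header, and an iSCSI non-terminal are combined:

```python
# Hypothetical grammar symbol telling the DXP to resume at the iSCSI
# non-terminal; the actual symbol value is not given in the patent.
ISCSI_NONTERMINAL = b'\x81'

def make_specialized_header(tcp_conn_id: int, iscsi_header: bytes) -> bytes:
    """Build a specialized iSCSI header for the recirculation buffer.

    Prefixing the iSCSI header with a parser non-terminal and the TCP
    connection ID lets the DXP resume parsing from the recirculation
    buffer with full connection context. Layout is illustrative.
    """
    return ISCSI_NONTERMINAL + tcp_conn_id.to_bytes(2, 'big') + iscsi_header

# An iSCSI Basic Header Segment is 48 bytes (RFC 3720); zeros stand in here.
hdr = make_specialized_header(7, bytes(48))
```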
  • According to a next block 860, the specialized iSCSI header is parsed and semantic processor 700 processes the iSCSI payload.
  • According to a next decision block 870, it is determined whether there is another iSCSI header in the received TCP/iSCSI packet. If YES, then execution returns to block 850, where the next iSCSI header within the received TCP/iSCSI packet is used to process its corresponding iSCSI payload. As is well known in the art, there can be multiple iSCSI headers and payloads in a single TCP/iSCSI packet, and thus there may be a plurality of packet segments sent through the recirculation buffer 160 and DXP 180 for any given iSCSI packet.
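Walking the multiple iSCSI headers and payloads within one TCP segment (the block 850-870 loop) can be sketched using the standard iSCSI PDU layout from RFC 3720; digests and error handling are omitted, and this models the framing logic rather than the patent's SPU microcode:

```python
def split_iscsi_pdus(tcp_payload: bytes):
    """Yield (header, data) for each iSCSI PDU packed in a TCP segment.

    Per RFC 3720: the Basic Header Segment (BHS) is 48 bytes; byte 4 is
    TotalAHSLength (in 4-byte words) and bytes 5-7 are DataSegmentLength;
    the data segment is padded to a 4-byte boundary. Header and data
    digests are ignored in this sketch.
    """
    off = 0
    while off + 48 <= len(tcp_payload):
        bhs = tcp_payload[off:off + 48]
        ahs_len = bhs[4] * 4
        data_len = int.from_bytes(bhs[5:8], 'big')
        padded = (data_len + 3) & ~3          # round up to 4-byte boundary
        header = tcp_payload[off:off + 48 + ahs_len]
        start = off + 48 + ahs_len
        yield header, tcp_payload[start:start + data_len]
        off = start + padded
```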
  • If NO, block 870 returns execution to the block 820, where semantic processor 700 waits for another TCP/iSCSI packet corresponding to the TCP session established in the block 810. The allocated SPU is then released and can be allocated for other processing as the DXP 180 determines.
  • As can be understood by a person skilled in the art, multiple segments of a packet may be passed through the recirculation buffer 160 at different times when any combination of encryption, authentication, IP fragmentation and iSCSI data processing is contained in a single packet received by the semantic processor 700.
  • FIG. 8 illustrates one possible implementation for port input buffer (PIB) 730 useful with embodiments of the invention. The PIB 730 contains at least one network interface input buffer 140 (140-0 and 140-1 are shown), a recirculation buffer 160, and a Peripheral Component Interconnect Extended (PCI-X) input buffer 140-2. Input buffers 140-0 and 140-1 and PCI-X input buffer 140-2 are functionally the same as input buffer 140, but they receive input data from different inputs of port block 740 and from PCI-X interface 760, respectively.
  • Recirculation buffer 160 comprises a buffer 712 that receives recirculation data from SPU Cluster 410 (FIG. 6), a control block 714 for controlling the recirculation data in buffer 712, a FIFO block 716 to give the DXP 180 (FIG. 6) FIFO access to the recirculation data in buffer 712, and a random access (RA) block 718 to allow a SPU within SPU Cluster 410 random access to the recirculation data in buffer 712. When recirculation data is received at buffer 712 from SPU Cluster 410, recirculation buffer 160 transmits a Port ID to DXP 180, alerting DXP 180 that new data has arrived. Preferably, the Port ID that is transmitted is the first symbol within buffer 712.
  • When DXP 180 decides to parse through the recirculation data, it sends a Control_DXP signal to recirculation buffer 160, either requesting a certain amount of data from buffer 712 or directing that the data pointer of buffer 712 be incremented. Upon receipt of a Control_DXP signal, control block 714 transmits a Data_DXP signal, containing data from buffer 712, to DXP 180 through FIFO block 716. In an embodiment of the invention, the control block 714 and FIFO block 716 add control characters into the recirculation data that is sent to DXP 180 using the Data_DXP signal. Preferably, the control characters are 1-bit status flags, added at the beginning of each byte of data transferred, that denote whether the byte of data is a terminal or non-terminal symbol.
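The per-byte status flags can be modeled as (flag, byte) pairs; in hardware this would be an extra bit accompanying each byte on the Data_DXP path, and the function name and positions argument here are illustrative assumptions:

```python
def tag_bytes(data: bytes, nonterminal_positions: set) -> list:
    """Attach the 1-bit status flag prepended to each byte sent to the DXP.

    Flag 1 marks a byte carrying a non-terminal symbol, 0 a terminal.
    Modeled as (flag, byte_value) tuples; the hardware would instead
    widen each byte to 9 bits.
    """
    return [((1 if i in nonterminal_positions else 0), b)
            for i, b in enumerate(data)]

# Byte 0 carries a (hypothetical) non-terminal symbol; the rest are
# ordinary terminal data bytes.
tagged = tag_bytes(b'\x81abc', {0})
```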
  • When a SPU 410-1 within SPU cluster 410 receives a SPU entry point (SEP) from DXP 180 that requires it to access data within the recirculation stream, the SPU 410-1 sends a Control_SPU signal to recirculation buffer 160 requesting the data at a certain location from buffer 712. Upon receipt of a Control_SPU signal, control block 714 transmits a Data_SPU signal, containing data from buffer 712, to SPU 410-1 through RA block 718.
  • One of ordinary skill in the art will recognize that the concepts taught herein can be tailored to a particular application in many other advantageous ways. In particular, those skilled in the art will recognize that the illustrated embodiments are but a few of the many alternative implementations that will become apparent upon reading this disclosure.
  • The preceding embodiments are exemplary. Although the specification may refer to “an”, “one”, “another”, or “some” embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment.

Claims (17)

1. A recirculation buffer system, comprising:
an input for receiving a partially parsed data stream from a semantic processor;
a buffer unit for buffering the partially parsed data stream; and
an output for outputting the partially parsed data stream to a data parsing unit for further parsing or back to the semantic processor for processing.
2. The recirculation buffer system according to claim 1 wherein the output includes a first interface for providing the partially parsed data stream to the data parsing unit, and a second interface for sending the buffered partially parsed data stream to one or more semantic processors responsive to the parsing by the data parsing unit.
3. The recirculation buffer system according to claim 2 wherein the first interface includes a First In-First Out (FIFO) access unit for sequential access to the partially parsed data stream by the data parsing unit and the second interface includes a random access buffer unit for random access to the partially parsed data stream by the semantic processors.
4. The recirculation buffer system of claim 1 including:
an input buffer for receiving a non-parsed data stream from a network interface; and
a recirculation buffer for receiving the partially parsed data stream from the semantic processor.
5. The recirculation buffer system of claim 1 wherein the partially parsed data stream is a Transmission Control Protocol (TCP) data stream that was reordered with the semantic processor responsive to an earlier parsing of the partially parsed data stream by the data parsing unit.
6. The recirculation buffer system of claim 1 wherein the partially parsed data stream within the buffer unit was decrypted or authenticated with the semantic processor responsive to an earlier parsing of the partially parsed data stream by the processing unit.
7. The recirculation buffer system of claim 1 wherein the partially parsed data stream in the buffer unit is a set of Internet Protocol (IP) fragmented data streams.
8. The recirculation buffer system of claim 1 wherein the partially parsed data stream includes a symbol identifying a parsing state for the partially parsed data.
9. A method comprising:
receiving a data stream;
parsing the data stream with a parsing unit to identify contents in the data stream, where one or more of the contents requires processing by a co-processor prior to completing the parsing;
outputting the data stream to the co-processor responsive to the parsing of the data stream by the parsing unit;
receiving back a processed data stream from the co-processor; and
parsing the processed data stream with the parsing unit.
10. The method according to claim 9 wherein the co-processor performs at least one of Transmission Control Protocol (TCP) reordering, cryptography operations, and Internet Protocol (IP) fragment reassembly.
11. The method according to claim 9 wherein the processed data stream from the co-processor contains a symbol that identifies a parsing state for the data stream.
12. A system comprising:
a buffer configured to receive a data stream;
a parser configured to parse the data stream from the buffer; and
one or more processing units configured to co-process the data stream from the buffer responsive to the parsing by the parser, and then provide at least a portion of the processed data stream back to the buffer for additional parsing by the parser.
13. The system according to claim 12 wherein a first processing unit receives the data stream from the buffer and a second processing unit sends the processed data stream back to the buffer.
14. The system according to claim 12 wherein the processing units provide the processed data stream back to the buffer responsive to another processed data stream.
15. The system according to claim 12 wherein the co-processing by the processing units includes at least one of Transmission Control Protocol (TCP) reordering, cryptography operations including at least one of encryption, decryption, and authentication, and Internet Protocol (IP) fragment reassembly.
16. The system according to claim 12 wherein the buffer includes an input buffer having an input for receiving the data stream from a network interface, and a recirculation buffer having an input to receive the processed data stream from the one or more processing units and an output connected to the parser and the processing units.
17. The system according to claim 12 wherein the parser identifies when the data stream requires co-processing to continue parsing and directs at least one of the processing units to co-process the data stream, the processing units then sending the processed data stream back to the buffer for subsequent parsing by the parser.
US11/376,512 2003-01-24 2006-03-14 Recirculation buffer for semantic processor Abandoned US20060174058A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/376,512 US20060174058A1 (en) 2003-01-24 2006-03-14 Recirculation buffer for semantic processor

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US10/351,030 US7130987B2 (en) 2003-01-24 2003-01-24 Reconfigurable semantic processor
US59166304P 2004-07-27 2004-07-27
US59197804P 2004-07-28 2004-07-28
US11/181,611 US7424571B2 (en) 2004-07-27 2005-07-13 Array machine context data memory
US11/181,527 US7415596B2 (en) 2003-01-24 2005-07-14 Parser table/production rule table configuration using CAM and SRAM
US11/376,512 US20060174058A1 (en) 2003-01-24 2006-03-14 Recirculation buffer for semantic processor

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US11/181,611 Continuation US7424571B2 (en) 2003-01-24 2005-07-13 Array machine context data memory
US11/181,527 Continuation US7415596B2 (en) 2003-01-24 2005-07-14 Parser table/production rule table configuration using CAM and SRAM

Publications (1)

Publication Number Publication Date
US20060174058A1 true US20060174058A1 (en) 2006-08-03

Family

ID=35733744

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/181,611 Expired - Fee Related US7424571B2 (en) 2003-01-24 2005-07-13 Array machine context data memory
US11/376,512 Abandoned US20060174058A1 (en) 2003-01-24 2006-03-14 Recirculation buffer for semantic processor

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/181,611 Expired - Fee Related US7424571B2 (en) 2003-01-24 2005-07-13 Array machine context data memory

Country Status (1)

Country Link
US (2) US7424571B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090225757A1 (en) * 2008-03-07 2009-09-10 Canon Kabushiki Kaisha Processing apparatus and method for processing ip packets
US8468546B2 (en) 2011-02-07 2013-06-18 International Business Machines Corporation Merging result from a parser in a network processor with result from an external coprocessor
US9088594B2 (en) 2011-02-07 2015-07-21 International Business Machines Corporation Providing to a parser and processors in a network processor access to an external coprocessor

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7451268B2 (en) * 2004-07-27 2008-11-11 Gigafin Networks, Inc. Arbiter for array machine context data memory
JP4527640B2 (en) * 2005-09-15 2010-08-18 株式会社ソニー・コンピュータエンタテインメント Data reading device
US7793032B2 (en) * 2007-07-11 2010-09-07 Commex Technologies, Ltd. Systems and methods for efficient handling of data traffic and processing within a processing device
JP2009104555A (en) * 2007-10-25 2009-05-14 Intel Corp Method and apparatus for preventing alteration of software agent operating in vt environment
US9124448B2 (en) * 2009-04-04 2015-09-01 Oracle International Corporation Method and system for implementing a best efforts resequencer
US20100254388A1 (en) * 2009-04-04 2010-10-07 Oracle International Corporation Method and system for applying expressions on message payloads for a resequencer
US8942258B2 (en) 2012-09-14 2015-01-27 International Business Machines Corporation Segmentation and reassembly of network packets for switched fabric networks
US8923299B2 (en) * 2012-09-14 2014-12-30 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Segmentation and reassembly of network packets
CN107273100B (en) * 2017-06-15 2021-06-08 华为技术有限公司 Data real-time processing and storing device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5781729A (en) * 1995-12-20 1998-07-14 Nb Networks System and method for general purpose network analysis
US5916305A (en) * 1996-11-05 1999-06-29 Shomiti Systems, Inc. Pattern recognition in data communications using predictive parsers
US20020172198A1 (en) * 2001-02-22 2002-11-21 Kovacevic Branko D. Method and system for high speed data retention
US20040215976A1 (en) * 2003-04-22 2004-10-28 Jain Hemant Kumar Method and apparatus for rate based denial of service attack detection and prevention
US20050021825A1 (en) * 2003-06-27 2005-01-27 Broadcom Corporation Internet protocol multicast replication
US6876653B2 (en) * 1998-07-08 2005-04-05 Broadcom Corporation Fast flexible filter processor based architecture for a network device
US6976096B1 (en) * 2001-06-02 2005-12-13 Redback Networks Inc. Method and apparatus for controlling the admission of data into a network element
US6985964B1 (en) * 1999-12-22 2006-01-10 Cisco Technology, Inc. Network processor system including a central processor and at least one peripheral processor
US6999457B2 (en) * 2000-03-29 2006-02-14 Juniper Networks, Inc. Arbiter circuit and method of carrying out arbitration

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193192A (en) * 1989-12-29 1993-03-09 Supercomputer Systems Limited Partnership Vectorized LR parsing of computer programs
US5487147A (en) * 1991-09-05 1996-01-23 International Business Machines Corporation Generation of error messages and error recovery for an LL(1) parser
US5805808A (en) * 1991-12-27 1998-09-08 Digital Equipment Corporation Real time parser for data packets in a communications network
US5581696A (en) * 1995-05-09 1996-12-03 Parasoft Corporation Method using a computer for automatically instrumenting a computer program for dynamic debugging
US6493761B1 (en) * 1995-12-20 2002-12-10 Nb Networks Systems and methods for data processing using a protocol parsing engine
US6034963A (en) * 1996-10-31 2000-03-07 Iready Corporation Multiple network protocol encoder/decoder and data processor
US6330659B1 (en) * 1997-11-06 2001-12-11 Iready Corporation Hardware accelerator for an object-oriented programming language
KR20010020250A (en) 1997-05-08 2001-03-15 코야마 리오 Hardware accelerator for an object-oriented programming language
US6122757A (en) * 1997-06-27 2000-09-19 Agilent Technologies, Inc Code generating system for improved pattern matching in a protocol analyzer
US5991539A (en) * 1997-09-08 1999-11-23 Lucent Technologies, Inc. Use of re-entrant subparsing to facilitate processing of complicated input data
US6145073A (en) * 1998-10-16 2000-11-07 Quintessence Architectures, Inc. Data flow integrated circuit architecture
US6356950B1 (en) * 1999-01-11 2002-03-12 Novilit, Inc. Method for encoding and decoding data according to a protocol specification
US6772413B2 (en) 1999-12-21 2004-08-03 Datapower Technology, Inc. Method and apparatus of data exchange using runtime code generator and translator
US6892237B1 (en) * 2000-03-28 2005-05-10 Cisco Technology, Inc. Method and apparatus for high-speed parsing of network messages
US7379475B2 (en) 2002-01-25 2008-05-27 Nvidia Corporation Communications processor
US8218555B2 (en) 2001-04-24 2012-07-10 Nvidia Corporation Gigabit ethernet adapter
US7127559B2 (en) * 2001-07-10 2006-10-24 Micron Technology, Inc. Caching of dynamic arrays
US6587750B2 (en) 2001-09-25 2003-07-01 Intuitive Surgical, Inc. Removable infinite roll master grip handle and touch sensor for robotic surgery
US7535913B2 (en) 2002-03-06 2009-05-19 Nvidia Corporation Gigabit ethernet adapter supporting the iSCSI and IPSEC protocols
US7251722B2 (en) * 2004-05-11 2007-07-31 Mistletoe Technologies, Inc. Semantic processor storage server architecture

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5781729A (en) * 1995-12-20 1998-07-14 Nb Networks System and method for general purpose network analysis
US5793954A (en) * 1995-12-20 1998-08-11 Nb Networks System and method for general purpose network analysis
US6266700B1 (en) * 1995-12-20 2001-07-24 Peter D. Baker Network filtering system
US5916305A (en) * 1996-11-05 1999-06-29 Shomiti Systems, Inc. Pattern recognition in data communications using predictive parsers
US6876653B2 (en) * 1998-07-08 2005-04-05 Broadcom Corporation Fast flexible filter processor based architecture for a network device
US6985964B1 (en) * 1999-12-22 2006-01-10 Cisco Technology, Inc. Network processor system including a central processor and at least one peripheral processor
US6999457B2 (en) * 2000-03-29 2006-02-14 Juniper Networks, Inc. Arbiter circuit and method of carrying out arbitration
US20020172198A1 (en) * 2001-02-22 2002-11-21 Kovacevic Branko D. Method and system for high speed data retention
US6976096B1 (en) * 2001-06-02 2005-12-13 Redback Networks Inc. Method and apparatus for controlling the admission of data into a network element
US20040215976A1 (en) * 2003-04-22 2004-10-28 Jain Hemant Kumar Method and apparatus for rate based denial of service attack detection and prevention
US20050021825A1 (en) * 2003-06-27 2005-01-27 Broadcom Corporation Internet protocol multicast replication

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090225757A1 (en) * 2008-03-07 2009-09-10 Canon Kabushiki Kaisha Processing apparatus and method for processing ip packets
US7969977B2 (en) * 2008-03-07 2011-06-28 Canon Kabushiki Kaisha Processing apparatus and method for processing IP packets
US8468546B2 (en) 2011-02-07 2013-06-18 International Business Machines Corporation Merging result from a parser in a network processor with result from an external coprocessor
US8949856B2 (en) 2011-02-07 2015-02-03 International Business Machines Corporation Merging result from a parser in a network processor with result from an external coprocessor
US9088594B2 (en) 2011-02-07 2015-07-21 International Business Machines Corporation Providing to a parser and processors in a network processor access to an external coprocessor

Also Published As

Publication number Publication date
US7424571B2 (en) 2008-09-09
US20060026378A1 (en) 2006-02-02

Similar Documents

Publication Publication Date Title
US20060174058A1 (en) Recirculation buffer for semantic processor
EP1791060B1 (en) Apparatus performing network processing functions
US6956853B1 (en) Receive processing with network protocol bypass
US7478223B2 (en) Symbol parsing architecture
US7924868B1 (en) Internet protocol (IP) router residing in a processor chipset
US9485178B2 (en) Packet coalescing
US7930349B2 (en) Method and apparatus for reducing host overhead in a socket server implementation
US8094670B1 (en) Method and apparatus for performing network processing functions
US6629125B2 (en) Storing a frame header
US7561573B2 (en) Network adaptor, communication system and communication method
JP4723586B2 (en) Packet queuing, scheduling, and ordering
US7142540B2 (en) Method and apparatus for zero-copy receive buffer management
US7290134B2 (en) Encapsulation mechanism for packet processing
US20050281281A1 (en) Port input buffer architecture
US20060227811A1 (en) TCP engine
US20020188839A1 (en) Method and system for high-speed processing IPSec security protocol packets
US20040057434A1 (en) Multi-data receive processing according to a data communication protocol
US20050135395A1 (en) Method and system for pre-pending layer 2 (L2) frame descriptors
KR100798926B1 (en) Apparatus and method for forwarding packet in packet switch system
US7188250B1 (en) Method and apparatus for performing network processing functions
US20080263171A1 (en) Peripheral device that DMAS the same data to different locations in a computer
US20070019661A1 (en) Packet output buffer for semantic processor
US7539204B2 (en) Data and context memory sharing
US20060026377A1 (en) Lookup interface for array machine context data memory
CN111031055B (en) IPsec acceleration device and implementation method

Legal Events

Date Code Title Description
AS Assignment

Owner name: MISTLETOE TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIKDAR, SOMSUBHRA;ROWETT, KEVIN JEROME;NAIR, RAJESH;AND OTHERS;REEL/FRAME:018508/0316;SIGNING DATES FROM 20060308 TO 20060310

AS Assignment

Owner name: VENTURE LENDING & LEASING IV, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MISTLETOE TECHNOLOGIES, INC.;REEL/FRAME:019524/0042

Effective date: 20060628

AS Assignment

Owner name: GIGAFIN NETWORKS, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:MISTLETOE TECHNOLOGIES, INC.;REEL/FRAME:021219/0979

Effective date: 20080708

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION