WO2006080898A1 - Improvements in and relating to data processing - Google Patents

Improvements in and relating to data processing

Info

Publication number
WO2006080898A1
WO2006080898A1 (PCT/SG2005/000021)
Authority
WO
WIPO (PCT)
Prior art keywords
instruction portion
data packet
memory
instruction
processing
Prior art date
Application number
PCT/SG2005/000021
Other languages
French (fr)
Inventor
Taro Kamiko
Ganesha Nayak
Yao Chye Lee
Jin Sze Sow
Original Assignee
Infineon Technologies Ag
Priority date
Filing date
Publication date
Application filed by Infineon Technologies Ag filed Critical Infineon Technologies Ag
Priority to PCT/SG2005/000021 priority Critical patent/WO2006080898A1/en
Publication of WO2006080898A1 publication Critical patent/WO2006080898A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22: Parsing or analysis of headers

Definitions

  • the header is sent to the cache controller piece by piece (typically in 1-byte or 2-byte pieces) because the physical interface is arranged to receive that amount of data at a time. This takes place while the entire data packet is being sent to the processor 205. Comparison of the received header with headers already stored in the memory can begin as soon as the first piece of the header is received; there is no need to wait until the entire header has been received. It should be understood that where the term "data packet" is used, this may comprise any type of data bundle, including a data packet or a data frame.
  • the cache memory is able to store headers of packets along with parsing, lookup and other relevant results to be used later.
  • the arrangement can bypass the actual frame or packet processing flow and simply read the results from the cache memory to decide what action to take on the frame or packet, for example whether to transmit or discard it.
  • the invention can increase the bandwidth because the average processing time for a frame or packet is reduced. This cannot be achieved by known arrangements, which process every frame or packet individually.
  • because the arrangement is able to deal with many different types of header, each of which may be stored in the cache memory, the arrangement may be used in a large number of different situations.
  • the invention is applicable to many different devices, using a variety of different defined network protocols.
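The piece-by-piece comparison described above can be illustrated as a prefix filter: as each piece of the header arrives, the set of stored headers that could still match shrinks, so matching need not wait for the full header. This is a minimal Python sketch; `narrow_candidates` is a hypothetical helper name, not taken from the patent, and a hardware cache controller would perform this in parallel rather than with a list scan.

```python
def narrow_candidates(stored_headers, received_so_far):
    """Keep only the stored headers consistent with the header bytes
    received so far; comparison can start before the header is complete."""
    return [h for h in stored_headers if h.startswith(received_so_far)]
```

For example, after one byte of an incoming header has arrived, only the cached headers beginning with that byte remain candidates; each further piece narrows the set until a single match (or a miss) is established.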

Abstract

There is provided a method and apparatus for receiving and processing data packets. Each data packet comprises an instruction portion, the instruction portion defining processing instructions for the data packet. The processing instructions may include mask data defining any sections of the instruction portion which are not to be used in comparison with other instruction portions. When a data packet is received, the instruction portion of the received data packet is compared with instruction portions stored in a memory, which stores already processed instruction portions together with the associated processing instructions. If the received instruction portion matches an instruction portion stored in the memory, the processing instructions can be used, thereby bypassing the processing of a packet whose instruction portion has already been processed. If the received instruction portion does not match an instruction portion stored in the memory, the data packet can be processed as usual and the instruction portion and processing instructions can be stored in the memory for later use.

Description

Improvements in and relating to Data Processing
Field of the Invention
The invention relates to a method and apparatus for receiving and processing data packets. In particular, the invention relates to a method and apparatus for receiving and processing data packets, each data packet including an instruction portion defining processing instructions for the data packet.
Background of the Invention
For network processing, switching, routing and interworking silicon devices, it is well known that the important considerations are chip size and power consumption, as increases in either obviously increase costs.
In known arrangements for receiving and processing data, the buffer memory must be big enough to store all the received data while processing it. In addition, the processing power has to be fast enough so that the buffer memory (in the software, firmware or hardware) does not overflow. One such known arrangement for receiving and processing data packets is shown schematically in Figure 1. The arrangement 101 includes a receiver 103 and a processor 105 connected to the receiver 103. The receiver 103 receives the incoming data packets. Each packet is sent to the processor 105 where it is processed and then dealt with appropriately.
In the arrangement of Figure 1, every single data packet has to be processed by the processor 105. Processing of each packet may require many processing cycles for parsing it, performing memory lookups (for example to detect destinations or exceptions), analyzing protocol types, performing security checks, editing the packet and so on. With the arrangement of Figure 1, as feature requirements increase, the chip size will increase since the chip will need more logic; and as bandwidth requirements increase, the chip size will also increase, since the memory will need to be big enough to store the increasing amount of incoming data and the chip will need more logic for faster processing.
Summary of the Invention
It is an object of the invention to provide an apparatus and method for receiving and processing data packets which mitigates or substantially overcomes the problems of the prior art arrangements described above.
According to the invention, there is provided a method for receiving and processing data packets, the method comprising the steps of: providing a processor for processing data packets, each data packet including an instruction portion defining processing instructions for the data packet; providing a memory for storing instruction portions of processed data packets and associated processing instructions; receiving a data packet, the data packet including an instruction portion; comparing the instruction portion of the received data packet with instruction portions stored in the memory; and if the instruction portion of the received data packet matches an instruction portion stored in the memory, sending the processing instructions associated with the instruction portion to the processor; or if the instruction portion of the received data packet does not match an instruction portion stored in the memory, processing the data packet and storing the instruction portion and associated processing instructions in the memory.
If the instruction portion of the received data packet matches one already stored in the memory, the processing instructions associated with the instruction portion are sent to the processor. Therefore, processing of that data packet can be bypassed, as the processor can simply read the processing instructions from the memory and process the data packet accordingly. On the other hand, if the instruction portion of the received data packet does not match one already stored in the memory, the data packet is processed in the usual way and the instruction portion and associated processing instructions are stored in the memory for later use.
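The hit-or-miss flow just described can be sketched in Python. This is an illustrative sketch only: `handle_packet` and `process_packet` are hypothetical names, the dictionary stands in for the memory of stored instruction portions, and a real device would use a hardware memory controller rather than a dictionary lookup.

```python
def handle_packet(packet, cache, process_packet):
    """Return the processing instructions for `packet`, bypassing full
    processing when its instruction portion matches a stored one."""
    header = packet["header"]
    if header in cache:
        # Match: reuse the instructions already stored in the memory.
        return cache[header]
    # No match: process the packet as normal, then store the
    # instruction portion and its instructions for later packets.
    instructions = process_packet(packet)
    cache[header] = instructions
    return instructions
```

A second packet carrying the same instruction portion is then served from the stored instructions without invoking the full processing path again.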
The instruction portion is preferably a header. The header may be a header defined in network standards e.g. an Ethernet header, ATM header, AAL5 header or IP/PPP header. Alternatively, the header may be defined by a user to define network flow, packet destination etc.
The term "processing instructions" should be interpreted broadly. Processing instructions might include the higher layer protocol or Quality of Service, the priority to assign to the data packet, whether to discard the data packet, where to transmit the data packet and so on.
In one embodiment, the processing instructions defined by the instruction portion include mask data, the mask data defining the section or sections of the instruction portion which are to be excluded when the instruction portion is compared with another instruction portion. In that embodiment, the step of comparing the instruction portion of the received data packet with instruction portions stored in the memory may comprise comparing the instruction portion, excluding any sections defined by the respective mask data, of the received data packet with instruction portions, excluding any sections defined by the respective mask data, stored in the memory.
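As a rough illustration of the masked comparison, the sketch below assumes a byte-wise mask in which 0xFF marks bytes that take part in the comparison and 0x00 marks bytes to be excluded; the patent does not specify the mask encoding, so this representation is an assumption.

```python
def masked_equal(header_a, header_b, mask):
    """Compare two instruction portions byte by byte, excluding the
    positions where `mask` is 0x00 (0xFF means the byte is compared)."""
    if len(header_a) != len(header_b):
        return False
    return all((a & m) == (b & m)
               for a, b, m in zip(header_a, header_b, mask))
```

With such a mask, two headers that differ only in a field known to vary per packet (a sequence number, say) would still match, so the stored processing instructions can be reused.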
The step of receiving a data packet may comprise a receiver receiving the data packet. The receiver is preferably connected to the processor. The receiver is preferably connected to the memory. The receiver may be directly or indirectly connected to the processor and/or the memory.
In one embodiment, the step of comparing the instruction portion of the received data packet with instruction portions stored in the memory comprises a memory controller comparing the instruction portion of the received data packet with instruction portions stored in the memory. The memory controller is preferably connected to the processor.
In that embodiment of the invention, the step of sending the processing instructions associated with the instruction portion to the processor, if the instruction portion of the received data packet matches an instruction portion stored in the memory, may comprise the memory controller sending the processing instructions associated with the instruction portion to the processor.
Preferably the memory controller is a cache controller. That is, the memory is preferably used as a cache memory.
In one embodiment of the invention, the step of processing the data packet, if the instruction portion of the received data packet does not match an instruction portion stored in the memory, comprises the processor processing the data packet.
According to the invention, there is also provided a method for processing data packets, each data packet including a header defining processing instructions for the data packet, the processing instructions including mask data defining any sections of the header which are to be excluded when the header is compared with another header, the method comprising the steps of: providing a processor for processing data packets; providing a memory for storing headers of processed data packets and associated processing instructions; receiving a data packet; comparing the header, excluding any sections defined by the respective mask data, of the received data packet with headers, excluding any sections defined by the respective mask data, stored in the memory; and if the header of the received data packet matches a header stored in the memory, sending the processing instructions associated with the header to the processor so that processing of the data packet in the processor can be bypassed; or if the header of the received data packet does not match a header stored in the memory, processing the data packet and storing the header and associated processing instructions in the memory.
According to the invention, there is also provided apparatus for receiving and processing data packets, the apparatus comprising: a receiver for receiving data packets, each data packet including an instruction portion defining processing instructions for the data packet; a processor for processing data packets; a memory for storing instruction portions of processed data packets and associated processing instructions; and a memory controller arranged to compare the instruction portion of a received data packet with instruction portions stored in the memory, wherein the memory controller is arranged, if the instruction portion of the received data packet matches an instruction portion stored in the memory, to send the processing instructions associated with the instruction portion to the processor, and the processor is arranged, if the instruction portion of the received data packet does not match an instruction portion stored in the memory, to process the data packet and store the instruction portion and associated processing instructions in the memory.
Thus, if the instruction portion of the received data packet matches one already stored in the memory, the processing instructions associated with the instruction portion are sent to the processor so processing of that data packet can be skipped. In that way, the processing is bypassed if the instruction portion of the received data packet has previously been processed.
On the other hand, if the instruction portion of the received data packet does not match one already stored in the memory, the processor processes the data packet in the usual way and the instruction portion and associated processing instructions are stored in the memory for later use.
The instruction portion is preferably a header. The header may be a header defined in network standards e.g. an Ethernet header, ATM header, AAL5 header or IP/PPP header. Alternatively, the header may be defined by a user to define network flow, packet destination etc.
The term "processing instructions" should be interpreted broadly. Processing instructions might include the higher layer protocol or Quality of Service, what priority to assign to the data packet, whether to discard the data packet, where to transmit the data packet and so on.
In one embodiment, the processing instructions include mask data, the mask data defining the section or sections of the instruction portion which are to be excluded when the instruction portion is compared with another instruction portion. In that embodiment, the memory controller may be arranged to compare the instruction portion, excluding any sections defined by the respective mask data, of the received data packet with instruction portions, excluding any sections defined by the respective mask data, stored in the memory.
Preferably the memory controller is a cache controller. That is, the memory is preferably used as a cache memory.
According to the invention, there is also provided apparatus for processing data packets, each data packet including a header defining processing instructions for the data packet, the processing instructions including mask data defining any sections of the header which are to be excluded when the header is compared with another header, the apparatus comprising: a receiver for receiving data packets; a processor for processing data packets; a memory for storing headers of processed data packets and associated processing instructions; and a memory controller arranged to compare the header, excluding any sections defined by the respective mask data, of a received data packet with headers, excluding any sections defined by the respective mask data, stored in the memory, wherein the memory controller is arranged, if the header of the received data packet matches a header stored in the memory, to send the processing instructions associated with the header to the processor so that processing of the data packet in the processor can be bypassed, and the processor is arranged, if the header of the received data packet does not match a header stored in the memory, to process the data packet and store the header and associated processing instructions in the memory.
Brief Description of the Drawings
A known arrangement has already been described with reference to Figure 1, which is a schematic diagram showing a known arrangement for receiving and processing data packets.
An exemplary embodiment of the invention will now be described with reference to Figure 2, which is a schematic diagram of an arrangement for receiving and processing data packets, according to an embodiment of the invention.
Detailed Description of Preferred Embodiment
Figure 2 is a schematic diagram of an embodiment of the invention. The arrangement 201 includes a receiver 203 and a processor 205, just as in known arrangements. The arrangement 201 additionally includes a memory in the form of a cache memory 207 and a memory controller in the form of a cache controller 209. The cache memory and the cache controller may or may not be integrated into the receiver and/or processor. The receiver is able to receive many types of protocol header. One protocol header may be received back to back with another protocol header, since it is usual for one complete set of data to be segmented into pieces and sent over a network, e.g. Ethernet or ATM. Some types of protocol header defined in network standards are ATM, Ethernet, AAL5 and IP/PPP. The term "protocol header" should be taken to include not only protocol headers defined in network standards, but also a field specified by a user, which can eventually uniquely determine network flow, destination, required processing etc. within the device.
The cache memory stores already processed headers and associated data as will be described in more detail below. The cache controller 209 is able to send control signals to the cache memory 207, to read data from the cache memory 207 and to write data to the cache memory 207.
The operation of the arrangement of Figure 2 will now be described.
For each packet received by the receiver 203, the entire packet will be sent to the processor 205 and the instruction portion in the form of the header will be sent to the cache controller 209. The cache controller 209 will then read data (arrow A) from the cache memory 207 and compare the received header with the headers read from the cache memory 207.
If there is a match between the received header and one already stored in the cache memory 207, the cache controller 209 will indicate to the processor 205 that there is a match and will send the processing instructions associated with that header to the processor (arrow B). The processing instructions include, for example, where to transmit the packet or whether to discard it, what priority is to be used in a transmit queue, and what the new data structure in the header or payload is. The processing instructions may also include mask data, which indicates the portions of the header that should be masked when the header is compared with another header, i.e. which portion(s) should not be involved in a comparison. The processor 205 can then use the supplied processing instructions to bypass processing for that packet. Thus, any header that has already been processed by the processor, and is thus stored in the cache memory, will not need to be processed again.
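The masked comparison described above can be illustrated with a minimal Python sketch. The function name and the representation of the mask data as a set of byte offsets to skip are illustrative assumptions, not details from the patent:

```python
def headers_match(received, stored, mask_offsets):
    """Compare two headers byte by byte, skipping masked positions.

    `mask_offsets` is a hypothetical encoding of the mask data: the
    byte offsets excluded from the comparison (e.g. a field that
    legitimately varies from packet to packet).
    """
    if len(received) != len(stored):
        return False
    return all(
        r == s
        for i, (r, s) in enumerate(zip(received, stored))
        if i not in mask_offsets
    )
```

With offset 1 masked, two headers that differ only in that byte still match, so a single cache entry can cover a whole family of otherwise identical headers.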
If, on the other hand, there is no match between the received header and one already stored in the cache memory 207 (either because there is a mismatch or because there is no data stored in the cache memory), the cache controller 209 will indicate this to the processor 205 (arrow C). The packet will then be processed as normal by the processor 205, and the processor will feed back the processing instructions (including the required mask pattern) to the cache controller 209 (arrow D). The cache controller will then update the cache memory 207 with the new data (arrow E), so that when the same header next arrives, processing can be bypassed.
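The hit path (arrows A and B) and the miss path (arrows C to E) together form a simple lookup-or-learn loop, which can be sketched as follows. The class and method names are illustrative; the entry layout (header bytes, mask offsets, instructions) is an assumption for the sketch:

```python
class CacheController:
    """Sketch of the cache controller's hit/miss flow."""

    def __init__(self):
        self.entries = []  # list of (header, mask_offsets, instructions)

    def lookup(self, header):
        # Arrow A: read entries from the cache memory and compare each
        # stored header against the received one, skipping masked offsets.
        for stored, mask, instructions in self.entries:
            if len(stored) == len(header) and all(
                h == s
                for i, (h, s) in enumerate(zip(header, stored))
                if i not in mask
            ):
                return instructions  # Arrow B: hit, processing is bypassed
        return None  # Arrow C: miss, the processor must do the full work

    def update(self, header, mask, instructions):
        # Arrows D and E: the processor feeds back its result and the
        # controller writes the new entry to the cache memory.
        self.entries.append((header, mask, instructions))
```

A first packet misses, is processed, and its result is written back; a later packet with a matching (possibly masked) header then hits and receives the stored instructions directly.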
Thus, as new types of header are processed, data is added to the cache memory so that processing the same header more than once is avoided. The size (width and depth) of the cache memory should be sufficiently large to cover all frequently used protocols and their headers. If, when the cache controller 209 tries to write new data to the cache memory (arrow E), the cache memory is found to be full, one of the existing entries is replaced by the latest data. Ideally, the replaced entry is the entry which is the least recently or frequently used.
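The replacement behaviour described above — evicting the least recently used entry when the cache is full — can be sketched with a bounded cache built on `collections.OrderedDict`. The class name and the use of raw header bytes as keys are assumptions for illustration (this sketch omits mask handling for brevity):

```python
from collections import OrderedDict

class BoundedHeaderCache:
    """Fixed-capacity header cache with least-recently-used eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # header bytes -> instructions

    def get(self, header):
        instructions = self.entries.get(header)
        if instructions is not None:
            self.entries.move_to_end(header)  # mark as most recently used
        return instructions

    def put(self, header, instructions):
        if header in self.entries:
            self.entries.move_to_end(header)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # replace the LRU entry
        self.entries[header] = instructions
```

Sizing the capacity to cover all frequently used protocols keeps the common headers resident, so only rarely seen headers ever trigger the full processing path.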
In practice, the header is sent to the cache controller piece by piece (typically in 1-byte or 2-byte pieces), because the physical interface is arranged to receive that amount of data at a time. This takes place while the entire data packet is being sent to the processor 205. Comparison of the received header with headers already stored in the memory can begin once the first piece of the header is received; there is no need to wait until the entire header has been received. It should be understood that where the term "data packet" is used, this may comprise any type of data bundle, including a data packet or a data frame.
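The incremental comparison can be sketched as a candidate-narrowing loop: each arriving piece eliminates stored headers that no longer match, so a miss can be declared before the full header has arrived. The function name and the list-of-pieces interface are illustrative assumptions (masking is omitted for brevity):

```python
def incremental_match(pieces, stored_headers):
    """Narrow down candidate stored headers as each 1- or 2-byte piece
    of the incoming header arrives; return the index of the matching
    stored header, or None."""
    candidates = list(range(len(stored_headers)))
    offset = 0
    for piece in pieces:
        # keep only the stored headers that agree with this piece
        candidates = [
            i for i in candidates
            if stored_headers[i][offset:offset + len(piece)] == piece
        ]
        offset += len(piece)
        if not candidates:
            return None  # early miss: no stored header can still match
    # a hit requires the whole stored header to have been compared
    return next(
        (i for i in candidates if len(stored_headers[i]) == offset), None
    )
```

Because mismatching candidates are discarded as soon as a piece disagrees, the comparison overlaps with reception instead of adding latency after it.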
Thus, the problems of the known arrangements are overcome. The cache memory is able to store headers of packets along with parsing, lookup and other relevant results to be used later. Once it has been detected that the received packet header is the same as one stored in the cache memory, the arrangement can bypass actual frame or packet processing flow and can simply read the results from the cache memory to decide what action to take on the frame or packet for example whether to transmit it or discard it. By bypassing the repeated processing flow, the invention can increase the bandwidth because the average processing time for a frame or packet is reduced. This cannot be achieved by known arrangements which process every frame or packet individually.
Because the arrangement is able to deal with many different types of header, each of which may be stored in the cache memory, the arrangement may be used in a large number of different situations. Thus, the invention is applicable to many different devices, using a variety of different defined network protocols.

Claims
1. A method for receiving and processing data packets, the method comprising the steps of:
providing a processor for processing data packets, each data packet including an instruction portion defining processing instructions for the data packet;
providing a memory for storing instruction portions of processed data packets and associated processing instructions;
receiving a data packet, the data packet including an instruction portion;
comparing the instruction portion of the received data packet with instruction portions stored in the memory; and
if the instruction portion of the received data packet matches an instruction portion stored in the memory, sending the processing instructions associated with the instruction portion to the processor; or
if the instruction portion of the received data packet does not match an instruction portion stored in the memory, processing the data packet and storing the instruction portion and associated processing instructions in the memory.
2. A method according to claim 1 wherein the instruction portion is a header.
3. A method according to claim 1 or claim 2 wherein the processing instructions include mask data defining the section or sections of the instruction portion which are to be excluded when the instruction portion is compared with another instruction portion.
4. A method according to claim 3 wherein the step of comparing the instruction portion of the received data packet with instruction portions stored in the memory comprises comparing the instruction portion, excluding any sections defined by the respective mask data, of the received data packet with instruction portions, excluding any sections defined by the respective mask data, stored in the memory.
5. A method according to any one of the preceding claims wherein the step of receiving a data packet comprises a receiver receiving the data packet.
6. A method according to claim 5 wherein the receiver is connected to the processor.
7. A method according to claim 5 or claim 6 wherein the receiver is connected to the memory.
8. A method according to any one of the preceding claims wherein the step of comparing the instruction portion of the received data packet with instruction portions stored in the memory comprises a memory controller comparing the instruction portion of the received data packet with instruction portions stored in the memory.
9. A method according to claim 8 wherein the memory controller is connected to the processor.
10. A method according to claim 8 or claim 9 wherein the step of sending the processing instructions associated with the instruction portion to the processor, if the instruction portion of the received data packet matches an instruction portion stored in the memory, comprises the memory controller sending the processing instructions associated with the instruction portion to the processor.
11. A method according to any one of claims 8 to 10 wherein the memory controller is a cache controller.
12. A method according to any one of the preceding claims wherein the step of processing the data packet, if the instruction portion of the received data packet does not match an instruction portion stored in the memory, comprises the processor processing the data packet.
13. Apparatus for receiving and processing data packets, the apparatus comprising:
a receiver for receiving data packets, each data packet including an instruction portion defining processing instructions for the data packet;
a processor for processing data packets;
a memory for storing instruction portions of processed data packets and associated processing instructions; and
a memory controller arranged to compare the instruction portion of a received data packet with instruction portions stored in the memory,
wherein the memory controller is arranged, if the instruction portion of the received data packet matches an instruction portion stored in the memory, to send the processing instructions associated with the instruction portion to the processor, and
the processor is arranged, if the instruction portion of the received data packet does not match an instruction portion stored in the memory, to process the data packet and store the instruction portion and associated processing instructions in the memory.
14. Apparatus according to claim 13 wherein the instruction portion is a header.
15. Apparatus according to claim 13 or claim 14 wherein the memory controller is a cache controller.
Application PCT/SG2005/000021 (filed 2005-01-26): Improvements in and relating to data processing, WO2006080898A1.

Published as WO2006080898A1 on 2006-08-03.




Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application.
122 EP: PCT application non-entry in European phase (ref document number 05704841; country of ref document: EP; kind code of ref document: A1).