CN106789733B - Device and method for improving large-scale network flow table searching efficiency

Info

Publication number
CN106789733B
Authority
CN
China
Prior art keywords
hash
index
module
node
memory
Prior art date
Legal status
Active
Application number
CN201611088812.1A
Other languages
Chinese (zh)
Other versions
CN106789733A (en)
Inventor
刘钧锴
王江为
暴宇
于睿
余勇
Current Assignee
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd filed Critical Beijing Ruian Technology Co Ltd
Priority to CN201611088812.1A priority Critical patent/CN106789733B/en
Publication of CN106789733A publication Critical patent/CN106789733A/en
Application granted granted Critical
Publication of CN106789733B publication Critical patent/CN106789733B/en

Classifications

    • H Electricity
    • H04 Electric communication technique
    • H04L Transmission of digital information, e.g. telegraphic communication
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/622 Queue service order
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6265 Queue scheduling for service slots or service orders based on past bandwidth allocation
    • H04L47/6275 Queue scheduling for service slots or service orders based on priority
    • H04L47/6295 Queue scheduling using multiple queues, one for each individual QoS, connection, flow or priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a device and a method for improving the searching efficiency of a large-scale network flow table. The device comprises: a Prepare module, which performs a first hash calculation on the five-tuple of each input data packet and divides the packets into 256 groups according to the hash value; a Lup_index module, which extracts one packet five-tuple from each of the 256 groups, performs a second hash calculation to obtain a DRAM read address, and sends read instructions to the memory in sequence through the Sdram_controller module to search for the hash index; a Fifo module, which caches the information of each five-tuple; a Wait_index module, which obtains the hash index from the memory; a Lup_node module, which takes the valid index address as the read address and sends a read instruction to the memory through the Sdram_controller module to search for the hash node; a Wait_node module, which receives the content of the hash node in the flow table returned by the memory; and a Write_node module, which updates the hash node in the flow table and outputs the result. The invention implements a hardware device using the existing resources of the programmable logic in the FPGA; the device improves the DRAM read rate and thereby improves the searching efficiency of the network flow table.

Description

Device and method for improving large-scale network flow table searching efficiency
Technical Field
The invention relates to the fields of Field-Programmable Gate Arrays (FPGAs), wired digital communication and network data flow management, and in particular to a device and a method for improving the searching efficiency of a large-scale network flow table.
Background
A network flow table is one way to implement network data flow management. Packets in the network that share the same five-tuple (source IP address, destination IP address, source port number, destination port number and protocol type) form a data flow; by establishing an entry for each data flow, all packets of that flow can be processed uniformly. For example, the packets of a given data flow can be counted, or all packets of a given data flow can be forwarded to the same port.
The number of data flows carried on a single 10G fiber line is now on the order of tens of millions. Establishing flows at this scale requires a large-capacity memory, i.e. a Dynamic Random Access Memory (DRAM). Lookup uses a hash scheme, as shown in fig. 1: a hash is calculated over the five-tuple, and the hash value is used as an address to read the entry in the DRAM. If the five-tuple in the entry matches the five-tuple being searched, the lookup is correct, the packet is processed according to the content of the flow table, and the flow table content is updated; if the five-tuple in the flow table does not match the searched five-tuple, a hash conflict is indicated and an entry needs to be established (i.e. flow establishment).
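For reference, this per-packet hash lookup can be sketched in C as below. The entry layout, the table size and the hash function (a simple field mix) are illustrative assumptions only; the patent does not fix any of them.

#include <stdint.h>
#include <stdbool.h>

/* Illustrative five-tuple and flow-table entry; field widths are assumptions. */
struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct flow_entry {
    bool              valid;
    struct five_tuple key;
    uint64_t          pkt_count;   /* example per-flow statistic */
};

#define TABLE_SIZE (1u << 16)                      /* assumed table size */
static struct flow_entry flow_table[TABLE_SIZE];   /* stands in for the DRAM table */

/* Placeholder fixed-width hash over the five fields. */
static uint32_t hash_tuple(const struct five_tuple *t)
{
    uint32_t h = t->src_ip;
    h = h * 31u + t->dst_ip;
    h = h * 31u + (((uint32_t)t->src_port << 16) | t->dst_port);
    h = h * 31u + t->proto;
    return h % TABLE_SIZE;
}

static bool tuple_eq(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* One lookup: a hit updates the entry; an empty slot or mismatch leads to flow establishment. */
void lookup_packet(const struct five_tuple *t)
{
    struct flow_entry *e = &flow_table[hash_tuple(t)];
    if (e->valid && tuple_eq(&e->key, t)) {
        e->pkt_count++;                            /* lookup correct: update the flow entry */
    } else {
        e->valid = true;                           /* hash conflict or new flow: establish entry */
        e->key = *t;
        e->pkt_count = 1;
    }
}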
At present, a CPU or an FPGA is mostly used to look up the flow table, and the five-tuple of one packet must complete its lookup before the next packet can be looked up, as shown in fig. 2. The resulting DRAM reads and writes are random, with no regularity. Because there is a delay when switching between DRAM read and write operations, such random lookups greatly reduce DRAM read efficiency, and with a limited number of DRAM controllers the processing capacity of the board is reduced.
Disclosure of Invention
The invention aims to provide a device and a method for improving the searching efficiency of a large-scale network flow table, which implement a hardware device using the existing resources of the programmable logic in an FPGA (Field-Programmable Gate Array), improve the DRAM (Dynamic Random Access Memory) read rate, and thereby improve the searching efficiency of the network flow table.
To achieve the above purpose, the invention adopts the following technical scheme:
a device for improving the searching efficiency of a large-scale network flow table comprises a memory controller, a grouping module, an index processing module, a searching module and an updating module;
the memory controller is used for controlling the lookup and return of the hash index, the lookup and return of the hash node information and the update of the hash node in the flow table;
the grouping module is used for carrying out first hash calculation on five-tuple of an input data packet and dividing the five-tuple into 256 groups of data packets according to the generated 8bit hash value, wherein the hash value of each group of data packets is the same;
the index processing module is used for extracting a data packet quintuple from each of the 256 groups of data packets to carry out second hash calculation to obtain a DRAM read address with the fixed width of 29 bits, caching information of each quintuple, sending a read instruction to a memory through the memory controller in sequence, and searching and receiving a hash index;
the searching module is used for sending a reading instruction to the memory by the effective index address through the memory controller, searching and receiving the content of the hash node in the flow table;
and the updating module is used for updating the hash node in the flow table.
Further, searching for the hash index means looking up, with the hash value as the address, the index table of hash node addresses stored in the memory.
Further, a valid index is one whose valid bit in the hash index table is 1, that is, a hash node already exists in the memory.
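A minimal sketch of the index-table entry implied by these two clauses is given below; the patent only states that the index table holds hash node addresses and a valid bit, so the 31-bit address field and the bit-field layout are assumptions.

#include <stdint.h>
#include <stdbool.h>

/* Assumed layout of one hash-index entry: a valid flag plus the DRAM address
 * of the hash node it points to; the real field widths are not specified. */
struct hash_index_entry {
    uint32_t node_addr : 31;   /* DRAM address of the hash node */
    uint32_t valid     : 1;    /* 1: a hash node exists in memory; 0: flow establishment needed */
};

/* A returned index is usable only when its valid bit is 1. */
static inline bool index_is_valid(struct hash_index_entry e)
{
    return e.valid == 1;
}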
Furthermore, the index processing module comprises an index searching module, a first-in first-out module and an index receiving module;
the index searching module extracts one packet five-tuple from each of the 256 groups, performs the second hash calculation to obtain a DRAM read address with a fixed width of 29 bits, and sends read instructions to the memory in sequence through the memory controller to search for the hash index;
the first-in first-out module is used for caching the information of each five-tuple;
and the index receiving module receives the hash index result returned by the memory.
Furthermore, the searching module comprises a searching node module, a first-in first-out module and a receiving node module;
the searching node module sends a read instruction to the memory through the memory controller according to the index address, and searches for the content of the hash node in the flow table;
the first-in first-out module is used for caching the five-tuple information corresponding to each hash node in the flow table;
and the receiving node module receives the content of the hash node in the flow table returned by the memory.
A method for improving the searching efficiency of a large-scale network flow table comprises the following steps:
1) performing a first hash calculation on the five-tuple of each input data packet, dividing the packets into 256 groups according to the generated 8-bit hash value, and storing them;
2) extracting one packet five-tuple from each of the 256 groups and performing a second hash calculation to obtain a DRAM read address with a fixed width of 29 bits;
3) sending read instructions to the memory in sequence through the memory controller, searching for the hash index, and caching the information of each five-tuple;
4) obtaining the hash index from the memory, sending a read instruction to the memory through the memory controller with the valid index address as the read address, and searching for the hash node in the memory;
5) if the five-tuple in the hash node is consistent with the cached five-tuple information, the lookup is correct;
6) updating the hash node in the flow table and outputting the result.
Further, in step 3), searching for the hash index means looking up, with the hash value as the address, the index table of hash node addresses stored in the memory.
Further, in step 4), a valid index is one whose valid bit in the hash index table is 1, that is, a hash node exists in the memory; if the valid bit is 0, the hash index is invalid and a flow-establishing operation is performed.
Further, in step 5), if the five-tuples are inconsistent, it is determined whether a hash collision exists; if so, the next hash node is searched, otherwise a flow-establishing operation is performed.
Furthermore, the flow-establishing operation is establishing a hash node.
The invention has the beneficial effects that: the invention provides a device and a method for improving the searching efficiency of a large-scale network flow table, which implement a hardware device using the existing resources of the FPGA programmable logic without changing the hardware of the original system, and thereby improve the searching efficiency of the network flow table. By hashing the five-tuple, the invention divides the input data packets into 256 groups according to the 8-bit hash value obtained from the first hash calculation, and each time performs the flow table lookup as a batch over 256 data packets with different hash values, as shown in fig. 3. When these 256 data packets have been processed, the next round of flow table lookups for 256 data packets is initiated. Thus each round of flow table lookup performs only one read-write switch, the read-write delay is spread over 256 data packets, the average delay per data packet is reduced to 1/256, and the searching efficiency of the network flow table is greatly improved.
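The 1/256 figure follows from straightforward amortization. Writing t_sw for the DRAM read-write switching delay and t_rd for the per-entry access time (symbols introduced here only for illustration), the approximate per-packet cost of the two schemes is:

T_{\mathrm{serial}} \approx t_{\mathrm{sw}} + t_{\mathrm{rd}}, \qquad
T_{\mathrm{batched}} \approx \frac{t_{\mathrm{sw}}}{256} + t_{\mathrm{rd}}

so the switching delay seen by each packet in the batched scheme drops to 1/256 of its serial value, while the access time itself is unchanged.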
Drawings
Fig. 1 is a data flow chart of lookup of a flow table in a hash manner.
Fig. 2 is a schematic diagram of an operation sequence of a conventional lookup flow table.
Fig. 3 is a schematic diagram of the operation sequence of the flow table lookup according to the present invention.
Fig. 4 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of the structure of a device not using the present invention.
FIG. 6 is a schematic diagram of the structure of the device using the present invention.
FIG. 7 is a flowchart of a method according to an embodiment of the invention.
FIG. 8 is a schematic diagram of the Prepare module of FIG. 6.
FIG. 9 is a flow chart of the write process of the Prepare module of FIG. 6.
FIG. 10 is a flowchart illustrating the read-out of the Prepare module of FIG. 6.
Detailed Description
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
The invention provides a device for improving the searching efficiency of a large-scale network flow table. As shown in fig. 4, the device comprises a memory controller, shown in the figure as the Sdram_controller module; a grouping module, shown as the Prepare module; an index processing module, comprising an index searching module, a first-in first-out module and an index receiving module, shown as the Lup_index module, the Fifo module and the Wait_index module respectively; a searching module, comprising a searching node module, a first-in first-out module and a receiving node module, shown as the Lup_node module, the Fifo module and the Wait_node module respectively; and an updating module, shown as the Write_node module in the figure;
the Sdram_controller module is used for controlling the lookup and return of the hash index, the lookup and return of the hash node information, and the update of the hash node in the flow table;
the Prepare module performs the first hash calculation on the five-tuple of each input data packet, divides the packets into 256 groups (256 data streams) according to the obtained 8-bit hash value and stores them, then reads the five-tuples out in order of group number and sends them to the downstream module for lookup, the packets within each group having the same hash value;
the Lup_index module performs the second hash calculation on the packet five-tuple extracted from each of the 256 groups to obtain a DRAM read address with a fixed width of 29 bits, and sends read instructions to the memory through the memory controller to search for the hash index, where searching for the hash index means looking up, with the hash value as the address, the index table of hash node addresses stored in the memory;
the Fifo module is used for caching the information of each five-tuple;
the Wait_index module is used for receiving the hash index result returned by the memory;
the Lup_node module sends a read instruction to the memory through the memory controller according to the index address, and searches for the content of the hash node in the flow table;
the Wait_node module is used for receiving the content of the hash node in the flow table returned by the memory;
and the Write_node module is used for updating the hash node in the flow table.
The hash calculation used in the invention means a calculation whose result is a numerical value with a uniform bit width.
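One common way to obtain such fixed-width values in FPGA designs is a CRC over the five-tuple bytes, truncated to the required width. The patent does not name its hash functions, so the CRC-32 below and the 8-bit/29-bit truncations are purely illustrative stand-ins for the first and second hash calculations.

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected, polynomial 0xEDB88320) over an arbitrary buffer.
 * Any hash whose result has a uniform bit width would serve equally well. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}

/* Illustrative truncations to the widths used in this design:
 * an 8-bit group hash and a 29-bit DRAM read address. */
static inline uint8_t  group_hash_8(uint32_t h) { return (uint8_t)(h & 0xFFu); }
static inline uint32_t addr_hash_29(uint32_t h) { return h & ((1u << 29) - 1u); }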
Referring to fig. 5 and 6: fig. 5 is a schematic diagram of a device structure not using the present invention, and fig. 6 is a schematic diagram of a device structure using the present invention. As can be seen from these figures, without changing the hardware of the original system, the network flow table lookup efficiency is improved by using the existing resources of the FPGA programmable logic device (i.e., by adding the device of the present invention to the programmable logic).
The present invention further provides a method for improving the searching efficiency of a large-scale network flow table. As shown in fig. 7, the method includes the following steps:
1) the Prepare module performs the first hash calculation on the five-tuple of each input data packet, divides the packets into 256 groups according to the obtained 8-bit hash value and stores them, and, when the downstream module is idle, sends the 256 cached five-tuples with different hash values to the downstream Lup_index module in order of group number;
2) the Lup_index module performs the second hash calculation on the 256 five-tuples to obtain DRAM read addresses with a fixed width of 29 bits, sends read instructions to the memory in sequence through the memory controller to search for the hash indexes, and then places the five-tuples into the Fifo module for caching, where searching for a hash index means looking up, with the hash value as the address, the index table of hash node addresses stored in the memory;
3) the Wait_index module judges whether the hash index obtained from the memory is valid, and if it is valid, sends the index address and the five-tuple in the Fifo module to the Lup_node module; a hash index is valid when its valid bit in the hash index table is 1, i.e. a hash node already exists in the memory; if the valid bit is 0, the hash index is invalid and a flow-establishing operation is performed, i.e. a hash node is established;
4) the Lup_node module takes the index address as the read address, sends a read instruction to the memory through the memory controller, and stores the five-tuple in the Fifo module;
5) the Wait_node module compares the five-tuple in the hash node returned by the memory with the five-tuple in the Fifo module; if they are consistent, the lookup is correct, the address and content of the hash node are sent to the Write_node module, the hash node in the flow table is updated, and the result is output; if the lookup is incorrect, it is determined whether a hash conflict exists; if so, the next hash node is looked up, and if not, a flow-establishing operation is performed, i.e. a hash node is established.
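Read together, steps 1) to 5) amount to the batched round sketched below. This is a software-style rendering only: the helpers (second_hash, dram_read_index, dram_read_node, dram_write_node, establish_flow) are hypothetical stand-ins for the Sdram_controller interface and the flow-building logic, and the sequential loops stand in for FPGA stages that actually run concurrently. The point it illustrates is that all index reads of a round are issued before the node reads and the node updates, so the read-write switching happens once per round rather than once per packet.

#include <stdint.h>
#include <stdbool.h>

#define GROUPS 256

/* Minimal stand-in types; the real on-chip and DRAM formats are internal to the design. */
struct five_tuple { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; uint8_t proto; };
struct hash_index { uint32_t node_addr; bool valid; };
struct hash_node  { struct five_tuple key; uint64_t stats; };

/* Hypothetical interface to the Sdram_controller and the flow-building logic. */
extern uint32_t          second_hash(const struct five_tuple *t);   /* 29-bit DRAM read address */
extern struct hash_index dram_read_index(uint32_t addr);
extern struct hash_node  dram_read_node(uint32_t addr);
extern void              dram_write_node(uint32_t addr, const struct hash_node *n);
extern void              establish_flow(const struct five_tuple *t);

static bool tuple_eq(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* One round: the Prepare module has handed over at most one five-tuple per 8-bit group. */
void lookup_round(const struct five_tuple tuple[GROUPS], const bool present[GROUPS])
{
    struct hash_index idx[GROUPS];
    struct hash_node  node[GROUPS];
    bool have_node[GROUPS];

    /* Phase 1 - Lup_index / Wait_index: read the hash index of every group. */
    for (int g = 0; g < GROUPS; g++) {
        have_node[g] = false;
        if (!present[g])
            continue;
        idx[g] = dram_read_index(second_hash(&tuple[g]));
        if (idx[g].valid)
            have_node[g] = true;
        else
            establish_flow(&tuple[g]);                 /* invalid index: flow establishment */
    }

    /* Phase 2 - Lup_node / Wait_node: read the hash node of every valid index. */
    for (int g = 0; g < GROUPS; g++)
        if (have_node[g])
            node[g] = dram_read_node(idx[g].node_addr);

    /* Phase 3 - Write_node: compare five-tuples, then update or establish the flow. */
    for (int g = 0; g < GROUPS; g++) {
        if (!have_node[g])
            continue;
        if (tuple_eq(&node[g].key, &tuple[g])) {
            node[g].stats++;                           /* lookup correct: update the node */
            dram_write_node(idx[g].node_addr, &node[g]);
        } else {
            establish_flow(&tuple[g]);                 /* mismatch: the real design may first walk further nodes */
        }
    }
}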
Referring to fig. 8, the figure is a schematic structural diagram of the Prepare module in fig. 6, where the Prepare module is composed of a plurality of sub-functional modules, and includes:
ram: a dual-port ram inside the programmable logic device;
HashX_waddr: write address register for the portion of the ram corresponding to hash value X;
HashX_raddr: read address register for the portion of the ram corresponding to hash value X;
cnt: accumulation counter;
ram_wr_addr: write address of the entire ram;
ram_rd_addr: read address of the entire ram.
Referring to fig. 9, which is the write flow chart of the Prepare module in fig. 6: first, an 8-bit hash is calculated over the five-tuple; next, the obtained hash value and the write address register corresponding to that hash value are combined into the ram write address; finally, the five-tuple is written into the ram and the write address register corresponding to the hash value is incremented by 1.
Referring to fig. 10, which is the read flow chart of the Prepare module in fig. 6: when the downstream module is idle, the value of the accumulation counter cnt and the read address register of the part of the ram corresponding to cnt are combined into the ram read address, and it is checked whether five-tuple data are present at that address. If data are present, the data are output, the read address register of the part of the ram corresponding to cnt is incremented by 1, and cnt is incremented; if no data are present, only the accumulation counter cnt is incremented. The round of operation stops when the accumulation counter reaches 256. Both flows are sketched in C below.
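The sketch models the dual-port ram as an array addressed by {hash value, per-group offset}. The per-group depth, the occupancy flags, and the helper names first_hash and send_to_lup_index are assumptions added only to make the address composition concrete; the real module is an FPGA circuit, not sequential code.

#include <stdint.h>

#define GROUPS      256                /* one partition per 8-bit hash value */
#define DEPTH_BITS  6                  /* assumed per-group depth: 64 five-tuples */
#define DEPTH       (1u << DEPTH_BITS)

struct five_tuple { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; uint8_t proto; };

static struct five_tuple ram[GROUPS * DEPTH];   /* dual-port ram modelled as an array */
static uint8_t           occupied[GROUPS * DEPTH];
static uint32_t          hash_waddr[GROUPS];    /* HashX_waddr registers */
static uint32_t          hash_raddr[GROUPS];    /* HashX_raddr registers */

extern uint8_t first_hash(const struct five_tuple *t);        /* 8-bit first hash, unspecified */
extern void    send_to_lup_index(const struct five_tuple *t); /* hand-off to the downstream module */

/* Write flow (fig. 9): hash, compose {hash, HashX_waddr} into ram_wr_addr, store, advance. */
void prepare_write(const struct five_tuple *t)
{
    uint8_t  h    = first_hash(t);
    uint32_t addr = ((uint32_t)h << DEPTH_BITS) | (hash_waddr[h] & (DEPTH - 1u));  /* ram_wr_addr */

    ram[addr]      = *t;
    occupied[addr] = 1;
    hash_waddr[h]++;                   /* the next write to this group lands one slot further */
}

/* Read flow (fig. 10): when the downstream module is idle, walk the 256 groups once,
 * emitting at most one five-tuple per group; empty groups only advance the counter. */
void prepare_read_round(void)
{
    for (uint32_t cnt = 0; cnt < GROUPS; cnt++) {                                /* accumulation counter */
        uint32_t addr = (cnt << DEPTH_BITS) | (hash_raddr[cnt] & (DEPTH - 1u));  /* ram_rd_addr */
        if (occupied[addr]) {
            send_to_lup_index(&ram[addr]);
            occupied[addr] = 0;
            hash_raddr[cnt]++;         /* the next read of this group moves to the following slot */
        }
        /* if the slot is empty, only cnt advances (the loop increment) */
    }
}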
Table 1: comparison of the processing capacity of a device not using the invention with that of a device using the invention
The effect of the invention was evaluated with a network tester by running packet-sending tests on equipment with and without the fast flow table lookup method. The evaluation index is the maximum processing capacity, i.e. the maximum input bandwidth that can be processed without packet loss. The data packets sent by the tester to the device using the invention are identical to those sent to the device not using the invention, and the maximum input bandwidth of the device is 100 Gbps; the results are shown in Table 1. The experimental results show that, under a variety of traffic environments, the device adopting the fast flow table lookup provided by the invention shows a clear performance improvement and increases the flow table lookup speed.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and a person skilled in the art can make modifications or equivalent substitutions to the technical solution of the present invention without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (10)

1. A device for improving the searching efficiency of a large-scale network flow table comprises a memory controller, a grouping module, an index processing module, a searching module and an updating module;
the memory controller is used for controlling the lookup and return of the hash index, the lookup and return of the hash node information, and the update of the hash node in the flow table;
the grouping module is used for performing a first hash calculation on the five-tuple of each input data packet, dividing the packets into 256 groups according to the generated 8-bit hash value, wherein the packets in each group have the same hash value, and sending the 256 cached five-tuples with different hash values to the downstream index processing module in order of group number;
the index processing module is used for extracting one packet five-tuple from each of the 256 groups, performing a second hash calculation to obtain a DRAM read address with a fixed width of 29 bits, caching the information of each five-tuple, sending read instructions to the memory in sequence through the memory controller, and searching for and receiving the hash index;
the searching module is used for sending a read instruction to the memory through the memory controller with the valid index address as the read address, and searching for and receiving the content of the hash node in the flow table;
and the updating module is used for updating the hash node in the flow table.
2. The apparatus as claimed in claim 1, wherein searching for the hash index means looking up, with the hash value as the address, the index table of hash node addresses stored in the memory.
3. The apparatus of claim 1, wherein a valid index is one whose valid bit in the hash index table is 1, that is, a hash node exists in the memory.
4. The apparatus of claim 1, wherein the index processing module comprises an index searching module, a first-in first-out module and an index receiving module;
the index searching module extracts one packet five-tuple from each of the 256 groups, performs the second hash calculation to obtain a DRAM read address with a fixed width of 29 bits, and sends read instructions to the memory in sequence through the memory controller to search for the hash index;
the first-in first-out module is used for caching the information of each five-tuple;
and the index receiving module receives the hash index result returned by the memory.
5. The apparatus of claim 1, wherein the searching module comprises a searching node module, a first-in first-out module and a receiving node module;
the searching node module sends a read instruction to the memory through the memory controller according to the index address, and searches for the content of the hash node in the flow table;
the first-in first-out module is used for caching the five-tuple information corresponding to each hash node in the flow table;
and the receiving node module receives the content of the hash node in the flow table returned by the memory.
6. A method for improving the searching efficiency of a large-scale network flow table comprises the following steps:
1) performing a first hash calculation on the five-tuple of each input data packet, dividing the packets into 256 groups according to the generated 8-bit hash value and storing them, and outputting the 256 cached five-tuples with different hash values in order of group number;
2) extracting one packet five-tuple from each of the 256 groups and performing a second hash calculation to obtain a DRAM read address with a fixed width of 29 bits;
3) sending read instructions to the memory in sequence through the memory controller, searching for the hash index, and caching the information of each five-tuple;
4) obtaining the hash index from the memory, sending a read instruction to the memory through the memory controller with the valid index address as the read address, and searching for the hash node in the memory;
5) if the five-tuple in the hash node is consistent with the cached five-tuple information, the lookup is correct;
6) updating the hash node in the flow table and outputting the result.
7. The method as claimed in claim 6, wherein, in step 3), searching for the hash index means looking up, with the hash value as the address, the index table of hash node addresses stored in the memory.
8. The method as claimed in claim 6, wherein, in step 4), a valid index is one whose valid bit in the hash index table is 1, that is, a hash node exists in the memory; if the valid bit is 0, the hash index is invalid and a flow-establishing operation is performed.
9. The method as claimed in claim 6, wherein, in step 5), if the five-tuples are inconsistent, it is determined whether a hash collision exists; if so, the next hash node is searched, otherwise a flow-establishing operation is performed.
10. A method according to claim 8 or 9, wherein the flow establishment operation is establishing a hash node.
CN201611088812.1A 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency Active CN106789733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611088812.1A CN106789733B (en) 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency

Publications (2)

Publication Number Publication Date
CN106789733A CN106789733A (en) 2017-05-31
CN106789733B 2019-12-20

Family

ID=58915231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611088812.1A Active CN106789733B (en) 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency

Country Status (1)

Country Link
CN (1) CN106789733B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248939B (en) * 2017-05-26 2020-07-31 中国人民解放军理工大学 Network flow high-speed correlation method based on hash memory
CN107707479B (en) * 2017-10-31 2021-08-31 北京锐安科技有限公司 Five-tuple rule searching method and device
CN113448996B (en) * 2021-06-11 2022-09-09 成都三零嘉微电子有限公司 High-speed searching method for IPSec security policy database
CN113259006B (en) * 2021-07-14 2021-11-26 北京国科天迅科技有限公司 Optical fiber network communication system, method and device
CN113595822B (en) * 2021-07-26 2024-03-22 北京恒光信息技术股份有限公司 Data packet management method, system and device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540723A (en) * 2009-04-20 2009-09-23 杭州华三通信技术有限公司 Flow stream searching method and device
CN102938000A (en) * 2012-12-06 2013-02-20 武汉烽火网络有限责任公司 Unlocked flow table routing lookup algorithm adopting high-speed parallel execution manner
CN103401777A (en) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Parallel search method and system of Openflow
KR20160025323A (en) * 2014-08-27 2016-03-08 한국전자통신연구원 Network device and Method for processing a packet in the same device
CN105591914A (en) * 2014-10-21 2016-05-18 中兴通讯股份有限公司 Openflow flow table look-up method and device
CN105357128A (en) * 2015-10-30 2016-02-24 迈普通信技术股份有限公司 Stream table creating and querying method
CN105224692A (en) * 2015-11-03 2016-01-06 武汉烽火网络有限责任公司 Support the system and method for the SDN multilevel flow table parallel search of polycaryon processor
CN106059957A (en) * 2016-05-18 2016-10-26 中国科学院信息工程研究所 Flow table rapid searching method and system under high-concurrency network environment

Also Published As

Publication number Publication date
CN106789733A (en) 2017-05-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant