CN106789733A - Device and method for improving large-scale network flow table searching efficiency - Google Patents

Device and method for improving large-scale network flow table searching efficiency

Info

Publication number
CN106789733A
Authority
CN
China
Prior art keywords
hash
memory
module
address
tuple
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611088812.1A
Other languages
Chinese (zh)
Other versions
CN106789733B (en)
Inventor
刘钧锴
王江为
暴宇
于睿
余勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd filed Critical Beijing Ruian Technology Co Ltd
Priority to CN201611088812.1A priority Critical patent/CN106789733B/en
Publication of CN106789733A publication Critical patent/CN106789733A/en
Application granted granted Critical
Publication of CN106789733B publication Critical patent/CN106789733B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/622 Queue service order
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6265 Queue scheduling characterised by scheduling criteria for service slots or service orders past bandwidth allocation
    • H04L47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H04L47/6295 Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority

Abstract

The present invention provides a device and method for improving the lookup efficiency of large-scale network flow tables. The device includes: a Prepare module, which performs a first hash calculation on the five-tuple of each input packet and divides the packets into 256 groups according to the hash value; a Lup_index module, which extracts one packet five-tuple from each of the 256 groups, performs a second hash calculation to obtain a DRAM read address, issues read commands to memory in sequence through the Sdram_controller module, and looks up the hash index; a Fifo module, which buffers each five-tuple; a Wait_index module, which receives the hash index from memory; a Lup_node module, which uses each valid index address as a read address and issues a read command to memory through the Sdram_controller module to look up the hash node; a Wait_node module, which receives the contents of the hash node returned from memory; and a Write_node module, which updates the hash node in the flow table and outputs the result. The invention implements this hardware device with the existing resources of the programmable logic inside an FPGA, which increases the effective DRAM read rate and thereby improves the lookup efficiency of the network flow table.

Description

Device and method for improving large-scale network flow table searching efficiency
Technical field
The present invention relates to the fields of FPGAs (Field-Programmable Gate Arrays), wired digital communication and network data flow management, and in particular to a device and method for improving the lookup efficiency of large-scale network flow tables.
Background technology
A network flow table is one way of managing network data flows. Packets in a network that share the same five-tuple (source IP address, destination IP address, source port number, destination port number and protocol type) form one data flow. By creating a table entry for each data flow, all packets of that flow can be processed uniformly: for example, counting the number of packets of a given flow, or forwarding all packets of a given flow to the same port.
A single 10G fibre link in today's networks can carry millions of concurrent data flows. Building a flow table for data flows at this scale requires a large memory, i.e. dynamic random access memory (DRAM). Lookup uses hashing, as shown in Fig. 1: a hash is computed over the five-tuple, and the hash value is used as an address to look up an entry in DRAM. If the five-tuple stored in the entry matches the five-tuple being looked up, the lookup is correct; the packet is then processed according to the flow table contents and the entry is updated. If the five-tuple in the entry does not match the looked-up five-tuple, a hash collision has occurred and a new entry must be created (flow creation).
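The hash-based lookup just described can be summarized in a minimal Python sketch. This is illustrative only: the dict standing in for the DRAM flow table, the MD5-based flow_hash helper and the FlowEntry record are assumptions made for the example, not structures defined by the patent.

    from dataclasses import dataclass
    import hashlib

    @dataclass
    class FlowEntry:
        five_tuple: tuple          # (src_ip, dst_ip, src_port, dst_port, proto)
        packet_count: int = 0

    # The DRAM flow table is modeled as a dict of hash address -> chained entries.
    flow_table = {}

    def flow_hash(five_tuple, bits=29):
        """Hash a five-tuple down to a fixed-width table address."""
        digest = hashlib.md5(repr(five_tuple).encode()).digest()
        return int.from_bytes(digest[:8], "big") & ((1 << bits) - 1)

    def lookup_or_create(five_tuple):
        addr = flow_hash(five_tuple)
        chain = flow_table.setdefault(addr, [])
        for entry in chain:                 # walk the entries stored at this address
            if entry.five_tuple == five_tuple:
                entry.packet_count += 1     # hit: process/update the flow entry
                return entry
        # no entry, or a hash collision with a different five-tuple:
        # create a new entry ("flow creation")
        entry = FlowEntry(five_tuple, packet_count=1)
        chain.append(entry)
        return entry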
At present, flow tables are mostly looked up by a CPU or FPGA one packet at a time: the lookup for one five-tuple must finish before the next one starts, as shown in Fig. 2. The resulting DRAM accesses are essentially random, with no regular pattern. Because DRAM incurs latency when switching between read and write operations, random lookups drastically reduce its effective read rate, and with a limited number of DRAM controllers this lowers the processing capacity of the board.
Summary of the invention
An object of the present invention is to provide a device and method for improving the lookup efficiency of large-scale network flow tables, implemented as a hardware device using the existing resources of the programmable logic inside an FPGA; the device increases the effective DRAM read rate and thereby improves the lookup efficiency of the network flow table.
To achieve the above object, the present invention adopts the following technical solution:
A device for improving the lookup efficiency of large-scale network flow tables, comprising a memory controller, a grouping module, an index processing module, a lookup module and an update module;
the memory controller is used to control the lookup and return of hash indexes, the lookup and return of hash node information, and the updating of hash nodes in the flow table;
the grouping module is used to perform a first hash calculation on the five-tuple of each input packet and to divide the packets into 256 groups according to the resulting 8-bit hash value, the hash values of the packets within each group being identical;
the index processing module is used to extract one packet five-tuple from each of the 256 groups, perform a second hash calculation on it to obtain a fixed-width 29-bit DRAM read address, buffer each five-tuple, issue read commands to memory in sequence through the memory controller, and look up and receive the hash indexes (a sketch of this two-level hashing follows this list);
the lookup module is used to issue a read command to memory through the memory controller using each valid index address, and to look up and receive the contents of the hash node in the flow table;
the update module is used to update the hash nodes in the flow table.
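A minimal Python sketch of the two-level hashing performed by the grouping module and the index processing module follows. The CRC32-based hash functions are illustrative stand-ins; the patent does not specify which hash algorithms are used, only the 8-bit and 29-bit output widths.

    import zlib

    def group_hash(five_tuple):
        """First hash: an 8-bit value selecting one of the 256 packet groups."""
        return zlib.crc32(repr(five_tuple).encode()) & 0xFF

    def dram_index_address(five_tuple):
        """Second hash: a fixed-width 29-bit DRAM read address for the hash index."""
        return zlib.crc32(repr(five_tuple).encode()[::-1]) & ((1 << 29) - 1)

    def group_packets(five_tuples):
        """Grouping module: partition incoming five-tuples into 256 groups."""
        groups = [[] for _ in range(256)]
        for ft in five_tuples:
            groups[group_hash(ft)].append(ft)
        return groups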
Further, looking up a hash index means using the hash value as an address to look up the index table of hash node addresses stored in memory.
Further, a valid index is one whose valid bit in the hash index table is 1, indicating that a hash node exists in memory.
Further, the index processing module comprises a lookup index module, a FIFO module and a receive index module;
the lookup index module extracts one packet five-tuple from each of the 256 groups, performs the second hash calculation to obtain a fixed-width 29-bit DRAM read address, issues read commands to memory in sequence through the memory controller, and looks up the hash index;
the FIFO module buffers each five-tuple;
the receive index module receives the hash index results returned from memory.
Further, the lookup module comprises a lookup node module, a FIFO module and a receive node module;
the lookup node module issues a read command to memory through the memory controller using the index address, to look up the contents of the hash node in the flow table;
the FIFO module buffers the five-tuple information of each flow-table hash node;
the receive node module receives the contents of the hash node returned from memory.
A method for improving the lookup efficiency of large-scale network flow tables, comprising the steps of:
1) performing a first hash calculation on the five-tuple of each input packet, dividing the packets into 256 groups according to the resulting 8-bit hash value, and storing them;
2) extracting one packet five-tuple from each of the 256 groups and performing a second hash calculation to obtain a fixed-width 29-bit DRAM read address;
3) issuing read commands to memory in sequence through the memory controller, looking up the hash indexes, and buffering each five-tuple;
4) receiving the hash indexes from memory, using each valid index address as a read address to issue a read command to memory through the memory controller, and looking up the hash node in memory;
5) if the five-tuple in the hash node matches the buffered five-tuple, the lookup is correct;
6) updating the hash node in the flow table and outputting the result.
Further, looking up a hash index in step 3) means using the hash value as an address to look up the index table of hash node addresses stored in memory.
Further, a valid index in step 4) is one whose valid bit in the hash index table is 1, indicating that a hash node exists in memory; if the valid bit is 0, the hash index is invalid and a flow-creation operation is performed.
Further, in step 5), if the five-tuples do not match, it is checked whether a hash collision has occurred; if so, the next hash node is searched, otherwise a flow-creation operation is performed (this control flow is sketched after these clauses).
Further, the flow-creation operation simply creates a new hash node.
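Steps 3) to 6) and the clauses above amount to the control flow sketched below in Python. The chained node layout (each hash node holding a five-tuple, its flow data and a next-node address) is an assumption made for the example; the patent only states that on a collision the next hash node is searched and that flow creation adds a new node.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HashNode:
        five_tuple: tuple
        packets: int = 0
        next_addr: Optional[int] = None    # next node in the chain on a hash collision

    # index_table[hash_value] -> (valid_bit, node_address); memory[address] -> HashNode
    index_table = {}
    memory = {}

    def _create_node(five_tuple):
        """Flow creation: allocate a new hash node and return its address."""
        addr = len(memory)
        memory[addr] = HashNode(five_tuple, packets=0)
        return addr

    def lookup(five_tuple, hash_value):
        valid, node_addr = index_table.get(hash_value, (0, 0))
        if not valid:                              # valid bit 0: no node yet
            node_addr = _create_node(five_tuple)
            index_table[hash_value] = (1, node_addr)
            return memory[node_addr]
        node = memory[node_addr]
        while node.five_tuple != five_tuple:       # hash collision: follow the chain
            if node.next_addr is None:             # end of chain: create a new node
                node.next_addr = _create_node(five_tuple)
                return memory[node.next_addr]
            node = memory[node.next_addr]
        node.packets += 1                          # five-tuples match: lookup correct
        return node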
The beneficial effects of the present invention are as follows. The invention provides a device and method for improving the lookup efficiency of large-scale network flow tables which, without any change to the original system hardware, implements a hardware device using the existing resources of the FPGA's programmable logic and thereby speeds up flow table lookup. By computing a hash value over each five-tuple, the input packets are divided into 256 groups according to the 8-bit value produced by the first hash calculation, and each pass performs a unified flow table lookup for 256 packets with different hash values, as shown in Fig. 3. Only after all 256 packets have been processed is the next batch of 256 lookups started. In this way each batch of lookups incurs only one read-write switch, whose delay is shared across 256 packets; the average delay per packet is reduced to 1/256 of the original, greatly improving the lookup efficiency of the network flow table.
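The claimed gain can be illustrated with a back-of-the-envelope calculation in Python. The 50 ns turnaround penalty below is an assumed placeholder chosen for the example, not a figure from the patent.

    # Assumed DRAM read/write turnaround penalty per direction switch (illustrative).
    t_switch_ns = 50.0

    per_packet_unbatched = t_switch_ns        # one switch per individual lookup
    per_packet_batched = t_switch_ns / 256    # one switch shared by a batch of 256

    print(f"unbatched: {per_packet_unbatched:.3f} ns of turnaround per packet")
    print(f"batched:   {per_packet_batched:.3f} ns of turnaround per packet")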
Brief description of the drawings
Fig. 1 is the data flow diagram of flow table lookup using hashing.
Fig. 2 is a schematic diagram of the existing flow table lookup order.
Fig. 3 is a schematic diagram of the flow table lookup order of the present invention.
Fig. 4 is a structural diagram of the device of one embodiment of the present invention.
Fig. 5 is a schematic diagram of a system without the device structure of the present invention.
Fig. 6 is a schematic diagram of a system using the device structure of the present invention.
Fig. 7 is a flow chart of the method of one embodiment of the present invention.
Fig. 8 is a structural diagram of the Prepare module in Fig. 6.
Fig. 9 is the write flow chart of the Prepare module in Fig. 6.
Fig. 10 is the read flow chart of the Prepare module in Fig. 6.
Specific embodiments
To make the above features and advantages of the present invention more apparent and easier to understand, embodiments are described in detail below with reference to the accompanying drawings.
The present invention provides a device for improving the lookup efficiency of large-scale network flow tables. As shown in Fig. 4, the device includes a memory controller (the Sdram_controller module in the figure); a grouping module (the Prepare module in the figure); an index processing module comprising a lookup index module, a FIFO module and a receive index module (the Lup_index, Fifo and Wait_index modules in the figure, respectively); a lookup module comprising a lookup node module, a FIFO module and a receive node module (the Lup_node, Fifo and Wait_node modules in the figure, respectively); and an update module (the Write_node module in the figure).
The Sdram_controller module controls the lookup and return of hash indexes, the lookup and return of hash node information, and the updating of hash nodes in the flow table.
The Prepare module performs the first hash calculation on the five-tuple of each input packet, divides the packets into 256 groups (i.e. 256 data streams) according to the resulting 8-bit hash value and stores them, then reads the five-tuples out in group order and sends them to the downstream modules for lookup; the hash values of the packets within a group are identical.
The Lup_index module performs the second hash calculation on each five-tuple extracted from the 256 groups to obtain a fixed-width 29-bit DRAM read address and issues a read command to memory through the memory controller to look up the hash index; looking up a hash index means using the hash value as an address to look up the index table of hash node addresses stored in memory.
The Fifo module buffers each five-tuple.
The Wait_index module receives the hash index results returned from memory.
The Lup_node module issues a read command to memory through the memory controller using the index address, to look up the contents of the hash node in the flow table.
The Wait_node module receives the contents of the hash node returned from memory.
The Write_node module updates the hash nodes in the flow table.
The hash calculations used in the present invention produce results of a uniform bit width.
Refer to Fig. 5 and Fig. 6, which show the system without and with the device structure of the present invention, respectively. As can be seen from Fig. 5 and Fig. 6, without any change to the original system hardware, the existing resources of the FPGA's programmable logic are used (the device of the present invention is added inside the programmable logic device) to improve the lookup efficiency of network flow tables.
The present invention also provides a method for improving the lookup efficiency of large-scale network flow tables. As shown in Fig. 7, the method comprises the following steps:
1) The Prepare module performs the first hash calculation on the five-tuple of each input packet and divides the packets into 256 groups according to the resulting 8-bit hash value, storing them; when the downstream modules are idle, the 256 buffered five-tuples with different hash values are sent in group order to the downstream Lup_index module.
2) The Lup_index module performs the second hash calculation on the 256 five-tuples to obtain fixed-width 29-bit DRAM read addresses, issues read commands to memory in sequence through the memory controller to look up the hash indexes, and then places the five-tuples into the Fifo module for buffering; looking up a hash index means using the hash value as an address to look up the index table of hash node addresses stored in memory.
3) The Wait_index module checks whether each hash index received from memory is valid; if valid, the index address and the five-tuple from the Fifo module are sent to the Lup_node module. A hash index is valid when its valid bit in the hash index table is 1, i.e. a hash node exists in memory; if the valid bit is 0, the hash index is invalid and a flow-creation operation is performed, i.e. a new hash node is created.
4) The Lup_node module uses the index address as a read address to issue a read command to memory through the memory controller, and stores the five-tuple in the Fifo module.
5) The Wait_node module compares the five-tuple in the hash node returned from memory with the five-tuple in the Fifo module. If they match, the lookup is correct and the hash node address and contents are sent to the Write_node module, which updates the hash node in the flow table and outputs the result; if the lookup is incorrect, it is checked whether a hash collision has occurred, in which case the next hash node is searched, and otherwise a flow-creation operation is performed, i.e. a new hash node is created.
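Taken together, the embodiment processes a batch of up to 256 packets in phases, so the DRAM sees long runs of same-direction accesses instead of interleaved single lookups (cf. Fig. 3). The Python model below renders only that ordering; the in-memory dicts stand in for the Sdram_controller/DRAM interface, and collision chaining is omitted for brevity.

    from dataclasses import dataclass

    @dataclass
    class Node:
        five_tuple: tuple
        packets: int = 0

    # Toy DRAM model: index_table maps a 29-bit address to (valid, node_address),
    # node_memory maps node addresses to Node objects.
    index_table = {}
    node_memory = {}

    def dram_index_address(ft):
        return hash(ft) & ((1 << 29) - 1)      # stand-in for the second hash

    def create_flow(ft):
        addr = len(node_memory)
        node_memory[addr] = Node(ft, packets=1)
        index_table[dram_index_address(ft)] = (1, addr)

    def process_batch(batch):
        """One round over up to 256 five-tuples, phase-ordered as in Fig. 3."""
        # Phase 1 (Lup_index/Wait_index): issue all index reads back to back.
        idx = [index_table.get(dram_index_address(ft), (0, 0)) for ft in batch]
        # Phase 2 (Lup_node/Wait_node): read every valid hash node.
        nodes = [node_memory[a] if v else None for v, a in idx]
        # Phase 3 (Write_node): update matching nodes, create flows otherwise.
        for ft, node in zip(batch, nodes):
            if node is not None and node.five_tuple == ft:
                node.packets += 1
            else:
                create_flow(ft)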
Refer to Fig. 8, which shows the structure of the Prepare module in Fig. 6. The Prepare module is composed of several sub-function blocks:
Ram: a dual-port RAM inside the programmable logic device;
HashX_waddr: the write address register of the RAM region corresponding to hash value X;
HashX_raddr: the read address register of the RAM region corresponding to hash value X;
Cnt: the summary counter;
Ram_wr_addr: the write address of the whole RAM;
Ram_rd_addr: the read address of the whole RAM.
Refer to Fig. 9, which shows the write flow of the Prepare module in Fig. 6. First, an 8-bit hash is computed over the five-tuple; next, the hash value and the write address register corresponding to that hash value are combined into the RAM write address; finally, the five-tuple is written to the RAM and the write address register corresponding to the hash value is incremented by 1.
Refer to Fig. 10, which shows the read flow of the Prepare module in Fig. 6. When the downstream modules are idle, the value of the summary counter cnt and the read address register of the corresponding RAM region are combined into the RAM read address, and it is checked whether five-tuple data is present in the RAM at that address. If there is data, the data is output, the read address register of the RAM region corresponding to cnt is incremented by 1, and the summary counter cnt is incremented; if there is no data, only the summary counter cnt is incremented. When the summary counter reaches 256, this round of operation stops.
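The Prepare module therefore behaves like 256 small FIFOs carved out of one dual-port RAM. The Python model below follows the write and read flows of Figs. 9 and 10; the region depth of 64 entries and the use of Python's built-in hash in place of the 8-bit hash calculation are assumptions made for the example.

    REGION_DEPTH = 64        # assumed depth of each per-hash-value RAM region

    class PrepareModel:
        """Software model of the Prepare module: one dual-port RAM split into 256
        regions, with per-region write/read pointers and a summary counter."""

        def __init__(self):
            self.ram = [None] * (256 * REGION_DEPTH)    # the dual-port RAM
            self.waddr = [0] * 256                      # HashX_waddr registers
            self.raddr = [0] * 256                      # HashX_raddr registers

        def write(self, five_tuple):
            h = hash(five_tuple) & 0xFF                 # 8-bit first hash (stand-in)
            # RAM write address = {hash value, write pointer of that region}
            self.ram[h * REGION_DEPTH + self.waddr[h]] = five_tuple
            self.waddr[h] = (self.waddr[h] + 1) % REGION_DEPTH

        def read_round(self):
            """One read round: the summary counter cnt visits groups 0..255 once."""
            out = []
            for cnt in range(256):
                addr = cnt * REGION_DEPTH + self.raddr[cnt]
                if self.ram[addr] is not None:          # data present in this group
                    out.append(self.ram[addr])
                    self.ram[addr] = None
                    self.raddr[cnt] = (self.raddr[cnt] + 1) % REGION_DEPTH
                # otherwise only the summary counter advances
            return out

One call to write() per incoming packet and one read_round() per batch reproduces the group-ordered output described above.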
Table 1. Comparison of the processing capability of equipment without and with the device of the present invention
The effect of the present invention was assessed by using a network tester to send packets to equipment with and without the fast flow table lookup method, as a controlled comparison. The evaluation metric is the maximum processing capability, i.e. the maximum input bandwidth that can be handled without packet loss. The packets sent by the tester to the equipment with and without the device of the present invention were exactly the same, and the maximum input bandwidth of the equipment is 100 Gbps; the results are shown in Table 1. The test results show that, under various traffic conditions, equipment using the fast flow table lookup device proposed by the present invention shows a clear performance improvement and can increase the flow table lookup speed.
The above embodiments merely illustrate the technical solution of the present invention and are not limiting. A person of ordinary skill in the art may modify the technical solution of the present invention or replace it with equivalents without departing from the spirit and scope of the present invention; the scope of protection of the present invention shall be defined by the claims.

Claims (10)

1. A device for improving the lookup efficiency of large-scale network flow tables, comprising a memory controller, a grouping module, an index processing module, a lookup module and an update module;
the memory controller being used to control the lookup and return of hash indexes, the lookup and return of hash node information, and the updating of hash nodes in the flow table;
the grouping module being used to perform a first hash calculation on the five-tuple of each input packet and to divide the packets into 256 groups according to the resulting 8-bit hash value, the hash values of the packets within each group being identical;
the index processing module being used to extract one packet five-tuple from each of the 256 groups, perform a second hash calculation on it to obtain a fixed-width 29-bit DRAM read address, buffer each five-tuple, issue read commands to memory in sequence through the memory controller, and look up and receive the hash indexes;
the lookup module being used to issue a read command to memory through the memory controller using each valid index address, and to look up and receive the contents of the hash node in the flow table;
the update module being used to update the hash nodes in the flow table.
2. The device according to claim 1, wherein looking up a hash index means using the hash value as an address to look up the index table of hash node addresses stored in memory.
3. The device according to claim 1, wherein a valid index is one whose valid bit in the hash index table is 1, indicating that a hash node exists in memory.
4. The device according to claim 1, wherein the index processing module comprises a lookup index module, a first-in first-out (FIFO) module and a receive index module;
the lookup index module extracting one packet five-tuple from each of the 256 groups, performing the second hash calculation to obtain a fixed-width 29-bit DRAM read address, issuing read commands to memory in sequence through the memory controller, and looking up the hash index;
the FIFO module being used to buffer each five-tuple;
the receive index module receiving the hash index results returned from memory.
5. The device according to claim 1, wherein the lookup module comprises a lookup node module, a FIFO module and a receive node module;
the lookup node module issuing a read command to memory through the memory controller using the index address, to look up the contents of the hash node in the flow table;
the FIFO module being used to buffer the five-tuple information of each flow-table hash node;
the receive node module receiving the contents of the hash node returned from memory.
6. A method for improving the lookup efficiency of large-scale network flow tables, comprising the steps of:
1) performing a first hash calculation on the five-tuple of each input packet, dividing the packets into 256 groups according to the resulting 8-bit hash value, and storing them;
2) extracting one packet five-tuple from each of the 256 groups and performing a second hash calculation to obtain a fixed-width 29-bit DRAM read address;
3) issuing read commands to memory in sequence through the memory controller, looking up the hash indexes, and buffering each five-tuple;
4) receiving the hash indexes from memory, issuing a read command to memory through the memory controller using each valid index address as a read address, and looking up the hash node in memory;
5) if the five-tuple in the hash node matches the buffered five-tuple, the lookup is correct;
6) updating the hash node in the flow table and outputting the result.
7. The method according to claim 6, wherein looking up a hash index in step 3) means using the hash value as an address to look up the index table of hash node addresses stored in memory.
8. The method according to claim 6, wherein a valid index in step 4) is one whose valid bit in the hash index table is 1, indicating that a hash node exists in memory; if the valid bit is 0, the hash index is invalid and a flow-creation operation is performed.
9. The method according to claim 6, wherein in step 5), if the five-tuples do not match, it is checked whether a hash collision has occurred; if so, the next hash node is searched, otherwise a flow-creation operation is performed.
10. The method according to claim 8 or 9, wherein the flow-creation operation creates a new hash node.
CN201611088812.1A 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency Active CN106789733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611088812.1A CN106789733B (en) 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611088812.1A CN106789733B (en) 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency

Publications (2)

Publication Number Publication Date
CN106789733A true CN106789733A (en) 2017-05-31
CN106789733B CN106789733B (en) 2019-12-20

Family

ID=58915231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611088812.1A Active CN106789733B (en) 2016-12-01 2016-12-01 Device and method for improving large-scale network flow table searching efficiency

Country Status (1)

Country Link
CN (1) CN106789733B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248939A (en) * 2017-05-26 2017-10-13 中国人民解放军理工大学 Network flow high-speed associative method based on hash memories
CN107707479A (en) * 2017-10-31 2018-02-16 北京锐安科技有限公司 The lookup method and device of five-tuple rule
CN113259006A (en) * 2021-07-14 2021-08-13 北京国科天迅科技有限公司 Optical fiber network communication system, method and device
CN113448996A (en) * 2021-06-11 2021-09-28 成都三零嘉微电子有限公司 High-speed searching method for IPSec security policy database
CN113595822A (en) * 2021-07-26 2021-11-02 北京恒光信息技术股份有限公司 Data packet management method, system and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540723A (en) * 2009-04-20 2009-09-23 杭州华三通信技术有限公司 Flow stream searching method and device
CN102938000A (en) * 2012-12-06 2013-02-20 武汉烽火网络有限责任公司 Unlocked flow table routing lookup algorithm adopting high-speed parallel execution manner
CN103401777A (en) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Parallel search method and system of Openflow
CN105224692A (en) * 2015-11-03 2016-01-06 武汉烽火网络有限责任公司 Support the system and method for the SDN multilevel flow table parallel search of polycaryon processor
CN105357128A (en) * 2015-10-30 2016-02-24 迈普通信技术股份有限公司 Stream table creating and querying method
KR20160025323A (en) * 2014-08-27 2016-03-08 한국전자통신연구원 Network device and Method for processing a packet in the same device
CN105591914A (en) * 2014-10-21 2016-05-18 中兴通讯股份有限公司 Openflow flow table look-up method and device
CN106059957A (en) * 2016-05-18 2016-10-26 中国科学院信息工程研究所 Flow table rapid searching method and system under high-concurrency network environment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540723A (en) * 2009-04-20 2009-09-23 杭州华三通信技术有限公司 Flow stream searching method and device
CN102938000A (en) * 2012-12-06 2013-02-20 武汉烽火网络有限责任公司 Unlocked flow table routing lookup algorithm adopting high-speed parallel execution manner
CN103401777A (en) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Parallel search method and system of Openflow
KR20160025323A (en) * 2014-08-27 2016-03-08 한국전자통신연구원 Network device and Method for processing a packet in the same device
CN105591914A (en) * 2014-10-21 2016-05-18 中兴通讯股份有限公司 Openflow flow table look-up method and device
CN105357128A (en) * 2015-10-30 2016-02-24 迈普通信技术股份有限公司 Stream table creating and querying method
CN105224692A (en) * 2015-11-03 2016-01-06 武汉烽火网络有限责任公司 Support the system and method for the SDN multilevel flow table parallel search of polycaryon processor
CN106059957A (en) * 2016-05-18 2016-10-26 中国科学院信息工程研究所 Flow table rapid searching method and system under high-concurrency network environment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248939A (en) * 2017-05-26 2017-10-13 中国人民解放军理工大学 Network flow high-speed associative method based on hash memories
CN107248939B (en) * 2017-05-26 2020-07-31 中国人民解放军理工大学 Network flow high-speed correlation method based on hash memory
CN107707479A (en) * 2017-10-31 2018-02-16 北京锐安科技有限公司 The lookup method and device of five-tuple rule
CN113448996A (en) * 2021-06-11 2021-09-28 成都三零嘉微电子有限公司 High-speed searching method for IPSec security policy database
CN113448996B (en) * 2021-06-11 2022-09-09 成都三零嘉微电子有限公司 High-speed searching method for IPSec security policy database
CN113259006A (en) * 2021-07-14 2021-08-13 北京国科天迅科技有限公司 Optical fiber network communication system, method and device
CN113595822A (en) * 2021-07-26 2021-11-02 北京恒光信息技术股份有限公司 Data packet management method, system and device
CN113595822B (en) * 2021-07-26 2024-03-22 北京恒光信息技术股份有限公司 Data packet management method, system and device

Also Published As

Publication number Publication date
CN106789733B (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN106789733A (en) A kind of device and method for improving large scale network flow stream searching efficiency
Li et al. Packet forwarding in named data networking requirements and survey of solutions
Song et al. Fast hash table lookup using extended bloom filter: an aid to network processing
WO2020181740A1 (en) High-performance openflow virtual flow table search method
Pontarelli et al. Traffic-aware design of a high-speed FPGA network intrusion detection system
US7356663B2 (en) Layered memory architecture for deterministic finite automaton based string matching useful in network intrusion detection and prevention systems and apparatuses
Cassidy et al. Design of a one million neuron single FPGA neuromorphic system for real-time multimodal scene analysis
CN104579974B (en) The Hash Bloom Filter and data forwarding method of Name Lookup towards in NDN
Tokusashi et al. Lake: the power of in-network computing
Wellem et al. A flexible sketch-based network traffic monitoring infrastructure
CN110912826B (en) Method and device for expanding IPFIX table items by using ACL
Zhang et al. Identifying elephant flows in internet backbone traffic with bloom filters and LRU
CN107729053A (en) A kind of method for realizing cache tables
Wellem et al. A hardware-accelerated infrastructure for flexible sketch-based network traffic monitoring
Tang et al. A real-time updatable FPGA-based architecture for fast regular expression matching
Geethakumari et al. A specialized memory hierarchy for stream aggregation
CN109086815B (en) Floating point number discretization method in decision tree model based on FPGA
Lin et al. Pipelined parallel ac-based approach for multi-string matching
CN109491602A (en) A kind of Hash calculation method and system for the storage of Key-Value data
CN107707479A (en) The lookup method and device of five-tuple rule
Zhao et al. High-performance implementation of dynamically configurable load balancing engine on FPGA
CN111131197B (en) Filtering strategy management system and method thereof
JP2015053673A (en) Packet relay device and packet relay method
Trzepiński et al. FPGA Implementation of Memory Management for Multigigabit Traffic Monitoring
Van Thin et al. Design Least Recently Used Cache to Speedup Ethernet Packet Classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant