CN106059957B - Fast flow table lookup method and system in a high-concurrency network environment - Google Patents
Fast flow table lookup method and system in a high-concurrency network environment
- Publication number
- CN106059957B CN106059957B CN201610330417.3A CN201610330417A CN106059957B CN 106059957 B CN106059957 B CN 106059957B CN 201610330417 A CN201610330417 A CN 201610330417A CN 106059957 B CN106059957 B CN 106059957B
- Authority
- CN
- China
- Prior art keywords
- grouping
- packet
- module
- index
- pkt
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/6275—Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present invention relates to a fast flow table lookup method and system in a high-concurrency network environment. The method comprises: 1) counting the traffic entering the network interface, and setting the buffer window of the buffer area according to the current traffic conditions obtained from the statistics; 2) according to the size of the configured buffer window, grouping arriving data packets by their five-tuple information; 3) scheduling each cached group according to a preset scheduling strategy, and delivering the groups one by one to a connection management module; 4) the connection management module extracting the five-tuple information of each group, performing the flow table lookup to find the corresponding flow entry, and updating the information of the flow entry with the data packets in the group. The present invention is mainly applicable to high-speed network traffic processing systems on backbone links; it optimizes the access overhead of the connection management module in high-speed network environments and improves the access efficiency of the flow table.
Description
Technical field
The invention belongs to the technical field of network security, and in particular relates to a fast flow table lookup method and system for high-concurrency network environments.
Background technique
In high-speed network environments, efficient connection management has become a key module of existing network traffic processing systems (such as intrusion detection and traffic accounting systems). A traffic processing system is usually divided into three main modules: traffic capture, connection management, and business processing. Connection management provides flow tracking for business processing, comprising three operations: lookup, update, and deletion. To record every connection accurately, the connection management module must maintain a connection table (or session table), in which each connection entry tracks one connection in the network and records information such as the connection identifier and the connection state. The connection identifier is globally unique and is generally composed of the five-tuple information in the TCP/IP header.
Existing traffic processing systems use a per-packet scheduling strategy: data packets are first buffered in the network interface card (NIC) buffer, and then delivered to the connection management module in arrival order to update the connection state and perform the associated operations. In high-speed network environments, per-packet processing not only incurs a large amount of function callback overhead but also makes flow table access a performance bottleneck. As the number of concurrent connections grows, the connection table keeps expanding. Limited by hardware resources and by the hash table structure itself, the number of hash table slots must be preset and is extremely difficult to adjust dynamically, and the growth of the conflict chains degrades the efficiency of per-packet flow table lookup. In existing networks carrying 10 Gbps of traffic, the packet rate reaches 10 Mpps or even higher, and most packets require a flow table lookup, so the lookup frequency of the flow table is comparable to the packet arrival rate and flow table lookup efficiency has become one of the important performance factors of a traffic processing system. It is therefore necessary to design a scalable and efficient flow table lookup method to cope with the high-speed, highly concurrent environment of backbone networks.
Current flow table lookup implementations fall into three classes: hash tables, Bloom filters, and content-addressable memory. For a flow table based on a hash table, a lookup consists of two steps: computing the hash value and comparing along the conflict chain. The worst-case performance of handling hash collisions by chaining is very poor: if all N keys are inserted into the same slot, a linked list of length N is produced and the worst-case search length is O(N).
For this reason, much work has focused on balancing the conflict chain lengths across slots so that the average search length stays close to the best case O(1+α), where α is the load factor. This requires a well-chosen hash algorithm: complex hash functions (such as MD5 or SHA-1) can distribute the conflict chain lengths in the hash table evenly, but such hash functions consume a large amount of CPU.
Compared with the single hashing described above, multiple hashing performs better. Multiple hashing computes several hash values and inserts the key into the shortest of several sub-tables, but then every lookup must search multiple conflict chains, which brings a considerable lookup overhead in packet-dense backbone networks.
In terms of optimizing lookup operations by exploiting network locality, implementing a high-speed cache of the flow table with FPGA and SRAM can accelerate access, but it is constrained by the circuit complexity of the FPGA and the capacity of the SRAM: the scale of the flow table is limited by the storage capacity, and in high-concurrency network environments affected by traffic fluctuations and bursts, a large number of active connections are forcibly evicted, causing the system to miss detections.
Summary of the invention
In order to optimize the access overhead of the connection management module in high-speed network environments, and based on the traffic characteristics of backbone links (high concurrency, slow updates, and a certain degree of locality), the present invention provides a fast flow table lookup method and system, mainly applicable to high-speed network traffic processing systems on backbone links.
The main contents of the invention include: (1) an efficient network flow grouping algorithm that groups the data packets in the NIC buffer by flow ID; (2) a threshold scheduling strategy that schedules the grouped data packets; (3) flow table lookup.
The core of the fast flow table lookup method of the invention is to deliver the data packets of the network to the connection management module in groups, so as to reduce the comparison count and callback overhead of flow table lookup. The more packets each connection accumulates per group, the greater the benefit brought by the fast lookup method; an efficient grouping algorithm is therefore the foundation of the method. The design of the grouping algorithm mainly covers the following aspects:
1) The basis for grouping is the five-tuple information in the TCP/IP header. A connection in the connection management module is uniquely determined by the five-tuple of the network communication: source IP, destination IP, source port, destination port, and transport layer protocol type.
2) Efficiency and flexibility of the grouping algorithm. The grouping operation introduces a certain time overhead; a good data structure can greatly reduce this overhead, so that the fast lookup method brings a larger benefit.
3) The data packets in each group share the same five-tuple and come from the same connection; they need to be indexed efficiently so that the scheduling strategy can schedule and maintain each group efficiently.
4) The grouping algorithm operates on the data packets in the NIC buffer, and the size of the buffer window requires a trade-off. If the window is too large, it not only consumes memory but also increases the delay between capturing a packet and processing it; if the window is too small, too few packets are cached per connection and the benefit is limited.
The grouped data packets must be delivered group by group to the connection management module by a scheduling strategy. A good scheduling strategy not only gives the data packets in each group a fair scheduling opportunity, but also increases the benefit of the fast lookup method. The design of the scheduling strategy mainly covers the following points:
1) Among the cached groups, groups with more packets should be scheduled first, while groups with few packets should have their scheduling postponed to wait for more packets to accumulate, which saves more flow table access overhead.
2) Groups whose scheduling is postponed must not starve, that is, go without a scheduling opportunity for a long time, because this would cause the system to miss detections.
The fast flow table lookup scheme provided by the invention delivers the data packets of the network to the connection management module in groups, so as to reduce the comparison count and callback overhead of flow table lookup, and uses a reasonable scheduling strategy to give the data packets in each group a fair scheduling opportunity. The method is mainly applicable to high-speed network traffic processing systems on backbone links; it optimizes the access overhead of the connection management module in high-speed network environments and improves the access efficiency of the flow table.
Brief description of the drawings
Fig. 1 is a schematic diagram of the system architecture of the present invention.
Fig. 2 is a schematic diagram of the data stream grouping structure.
Fig. 3 is a schematic diagram of group migration between the Q1 and Q2 queues.
Fig. 4 is a comparison of flow table access times in scenario A.
Fig. 5 is a comparison of flow table search lengths in scenario A.
Fig. 6 is a comparison of flow table access times in scenario B.
Fig. 7 is a comparison of flow table search lengths in scenario B.
Specific embodiments
The present invention is further described below through specific embodiments and the accompanying drawings.
The overall framework of the invention is shown in Fig. 1. It consists of six parts: a network interface, a buffer window management module, a data stream grouping module, a packet scheduler, a starvation avoidance module, and a connection management module. The operating steps are as follows:
1) Traffic statistics are collected as traffic enters the network interface, and the statistics are sent to the buffer window management module; the buffer window management module selects one of the preset window sizes according to the current traffic conditions.
2) According to the configured window size, the data stream grouping module groups arriving data packets; when a scheduling occasion arrives, it triggers the packet scheduler.
3) After receiving the trigger, the packet scheduler schedules the cached groups according to the scheduling strategy and delivers the groups one by one to the connection management module.
4) The starvation avoidance module collects scheduling information from the packet scheduler and, when appropriate, triggers the packet scheduler to schedule groups that have not yet been scheduled.
5) The connection management module extracts the five-tuple information of each group delivered by the packet scheduler, performs one actual flow table lookup to find the corresponding flow entry, and then updates the state and other information of the flow entry with each data packet in the group.
The operating steps are described in detail below.
Buffer window management module: by collecting the current packet arrival rate and inter-packet gap information, and taking the delay tolerance of the system into account, a suitable window size K is selected. The window size is preset to one of three values, 64, 256, and 512, with 256 as the default; the unit is the number of data packets the window can hold. The delay tolerance refers to the packet processing delay that the system or application can bear. If the system delay tolerance is within 100 us, K=64 is selected; if the delay tolerance is 100 to 500 us, K=256 is selected; otherwise K=512 is selected.
Data stream grouping module: as shown in Fig. 2, in order to group data streams efficiently, the algorithm uses a hash table (PT in Fig. 2) as the main structure to group the data packets, introduces index queues (Q1 and Q2 in Fig. 2) to index the grouped data packets, and establishes a two-way index relationship between the hash table and the index queues. The specific steps are as follows:
1) When a data packet x arrives, the total packet count cached_num of the current buffer increases by 1; the five-tuple information of x is extracted to represent its flow ID x.fid, and the position j of x in PT is computed from x.fid. Go to 2).
2) If PT[j] is empty, x does not belong to any group currently cached, and step 3) must be executed to create a new group and establish the two-way index between PT and Q1. If PT[j] is not empty, execute 4).
3) Store x and its five-tuple information at position PT[j]; the cached packet count PT[j].pkt_count increases by 1. Append j at the tail position t of Q1, establishing the unidirectional index from Q1 to PT[j]; the element count of Q1 increases by 1 and the total packet count Q1.pkt_count of Q1 increases by 1. At the same time, record Q1 and t at position PT[j]: PT[j].Q = Q1 and PT[j].idx = t, establishing the unidirectional index from PT to Q1. The two-way index between PT[j] and Q1 is now established. Go to 6).
4) PT[j] is not empty, which means position j of PT already maintains a group and its index information. Compare PT[j].fid with x.fid: if they are equal, x belongs to the group maintained by PT[j]; execute 5). If they are not equal, a hash collision has occurred during grouping; set the conflict flag submitflag to 1 and go to 6).
5) Store x in PT[j]; the cached packet count PT[j].pkt_count increases by 1, and the total packet count Q.pkt_count of the queue Q (Q1 or Q2) corresponding to j increases by 1. Go to 6).
6) If submitflag is 1, set submitflag to 0 and trigger the packet scheduler; after scheduling, position PT[j] is empty, and 4) is executed. Alternatively, if the total packet count of the current buffer cached_num = K, the buffer is full and the packet scheduler should also be triggered. Afterwards, return to 1).
Packet scheduler:
1) In steps 4) and 5) of the data stream grouping module, if a hash collision has occurred during grouping and PT[j].pkt_count exceeds the historical average, step 2) is executed to migrate PT[j] from Q1 to Q2.
2) The migration is shown in Fig. 3. Panel (a) shows PT[2] in Q1 being migrated to Q2; the dotted lines indicate the index to be added in Q2 and the index of PT[2] to be removed from Q1. Panel (b) shows the index state after the migration: the dotted unidirectional indexes have been updated to solid two-way indexes, and the tail element PT[1] of Q1 has filled the slot vacated by PT[2], so its index is updated from Q1:3 to Q1:2. The migration proceeds as follows: an element j is appended at the tail position t of Q2, the element count of Q2 increases by 1, the unidirectional index from Q2 to PT[j] is established, and Q2.pkt_count increases by PT[j].pkt_count. Then, following the unidirectional index from PT[j] to Q1, its position PT[j].idx in Q1 is found; the element i stored at the tail of Q1 is used to find PT[i]; PT[i].idx is set to the same value as PT[j].idx, and the element at position PT[j].idx in Q1 is set to i; the element count of Q1 decreases by 1 and Q1.pkt_count decreases by PT[j].pkt_count, which completes the deletion of PT[j] from Q1. Finally, PT[j].idx is set to t and PT[j].Q is set to Q2, establishing the index from PT[j] to Q2.
3) If scheduling is triggered by the data stream grouping module, the groups indexed by Q2 are submitted group by group to the connection management module; if scheduling is triggered by the starvation avoidance module, the groups indexed by Q1 are submitted group by group to the connection management module.
Starvation avoidance module:
1) By collecting the trigger information from step 6) of the data stream grouping module, it triggers the packet scheduler at the appropriate time to submit the groups indexed in Q1.
2) A grouping collision counter C1 and a Q2 scheduling counter C2 are maintained. C1 records the number of hash collisions of the grouping module: each collision increases C1 by 1, and C1 decreases by 1 when the buffer becomes full (and is set to 0 if it drops below 0). C2 records the cumulative number of packets scheduled from Q2: each time Q2 is scheduled, C2 is increased by Q2.pkt_count, and C2 is reset to 0 when Q1 is scheduled.
3) When Q1.pkt_count = K, or C1 exceeds 3, or C2 exceeds 10 times Q1.pkt_count, the scheduler is triggered to schedule Q1, and the counters are cleared to 0.
Connection management module:
1) For the index queue Q (Q1 or Q2) delivered by the scheduler module, the corresponding PT[i] is found for each item i in Q, and the first packet in PT[i] is used to perform the actual flow table lookup.
2) If the corresponding flow entry is found, it is updated in turn with each packet in PT[i]. If it is not found, a new flow entry is created and the flow state is then updated in turn.
The present invention was evaluated on two datasets; the basic information of the datasets is given in Table 1.
Table 1. Basic information of the datasets
The effect of the invention was assessed by comparing, at different time scales, the system with and without the fast flow table lookup method. The evaluation uses two metrics: the average flow table search length and the average flow table access time. The results are shown in Figs. 4-7, where Fig. 4 compares flow table access times in scenario A, Fig. 5 compares flow table search lengths in scenario A, Fig. 6 compares flow table access times in scenario B, and Fig. 7 compares flow table search lengths in scenario B. The experimental results show that under various traffic environments, the fast flow table lookup method proposed by the present invention delivers a good performance improvement and improves the access efficiency of the flow table.
The specific steps of the invention use hash table and queue implementations, but the invention is not limited to these two data structures: other linear structures (such as stacks) may replace the queues, and other key-value mapping data structures (such as red-black trees) may replace the hash table.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. A person of ordinary skill in the art may modify the technical solution of the present invention or replace it with equivalents without departing from the spirit and scope of the present invention; the protection scope of the present invention shall be defined by the claims.
Claims (10)
1. A fast flow table lookup method in a high-concurrency network environment, characterized by comprising the following steps:
1) counting the traffic entering the network interface, and setting the buffer window of the buffer area according to the current traffic conditions obtained from the statistics;
2) according to the size of the configured buffer window, grouping arriving data packets by their five-tuple information;
3) scheduling each cached group according to a preset scheduling strategy, and delivering the groups one by one to a connection management module;
4) the connection management module extracting the five-tuple information of each group, performing a flow table lookup to find the corresponding flow entry, and updating the information of the flow entry with the data packets in the group.
2. The method of claim 1, characterized in that step 1) collects the current packet arrival rate and inter-packet gap information and selects a suitable buffer window size in combination with the delay tolerance of the system.
3. The method of claim 1, characterized in that step 2) uses a hash table as the main structure to group the data packets, introduces index queues to index the grouped data packets, and establishes a two-way index relationship between the hash table and the index queues.
4. The method of claim 1, characterized in that the scheduling strategy of step 3) comprises:
a) among the cached groups, groups with more packets are scheduled first, while the scheduling of groups with few packets is postponed to wait for more packets to accumulate, so as to reduce the flow table access overhead;
b) groups whose scheduling is postponed must not starve, that is, go without a scheduling opportunity for a long time, so as to avoid missed detections by the system.
5. A fast flow table lookup system in a high-concurrency network environment using the method of claim 1, characterized in that it comprises a buffer window management module, a data stream grouping module, a packet scheduler, a starvation avoidance module, and a connection management module;
the buffer window management module caches traffic statistics and sets the buffer window of the buffer area according to the current traffic conditions;
the data stream grouping module groups arriving data packets according to the configured buffer window size, and triggers the packet scheduler when a scheduling occasion arrives;
after receiving the trigger, the packet scheduler schedules each cached group according to a preset scheduling strategy and delivers the groups one by one to the connection management module;
the starvation avoidance module collects scheduling information from the packet scheduler and, when appropriate, triggers the packet scheduler to schedule groups that have not yet been scheduled;
the connection management module extracts the five-tuple information of each group delivered by the packet scheduler, performs a flow table lookup to find the corresponding flow entry, and updates the information of the flow entry with the data packets in the group.
6. The system of claim 5, characterized in that the buffer window management module collects the current packet arrival rate and inter-packet gap information and selects a suitable window size in combination with the delay tolerance of the system.
7. The system of claim 5 or 6, characterized in that the data stream grouping module uses a hash-table-based structure to group the data packets, introduces index queues to index the grouped data packets, and establishes a two-way index relationship between the hash table and the index queues.
8. The system of claim 7, characterized in that the data stream grouping module performs grouping as follows:
1) when a data packet x arrives, the total packet count cached_num of the current buffer increases by 1; the five-tuple information of x is extracted to represent its flow ID x.fid, and the position j of x in the hash table PT is computed from x.fid; go to 2);
2) if PT[j] is empty, x does not belong to any group currently cached; step 3) is executed to create a new group and establish the two-way index between PT and the index queue Q1; if PT[j] is not empty, step 4) is executed;
3) x and its five-tuple information are stored at position PT[j], and the cached packet count PT[j].pkt_count increases by 1; j is appended at the tail position t of Q1, establishing the unidirectional index from Q1 to PT[j]; the total packet count Q1.pkt_count of Q1 increases by 1; at the same time, Q1 and t are recorded at position PT[j]: PT[j].Q = Q1 and PT[j].idx = t, establishing the unidirectional index from PT to Q1; the two-way index between PT[j] and Q1 is now established; go to 6);
4) PT[j] is not empty, which means position j of PT already maintains a group and its index information; PT[j].fid is compared with x.fid: if they are equal, x belongs to the group maintained by PT[j] and step 5) is executed; if they are not equal, a hash collision has occurred during grouping, the conflict flag submitflag is set to 1, and the process goes to 6); if a hash collision has occurred during grouping and PT[j].pkt_count exceeds the historical average, the packet scheduler migrates PT[j] from the index queue Q1 to the index queue Q2;
5) x is stored in PT[j], the cached packet count PT[j].pkt_count increases by 1, and the total packet count Q.pkt_count of the queue Q corresponding to j increases by 1, where Q is Q1 or Q2; go to 6);
6) if submitflag is 1, submitflag is set to 0 and the packet scheduler is triggered; after scheduling, position PT[j] is empty, and step 4) is executed; alternatively, if the total packet count of the current buffer cached_num = K, the buffer is full and the packet scheduler is also triggered; afterwards, the process returns to 1).
9. The system of claim 8, characterized in that the packet scheduler performs scheduling as follows:
1] for steps 4) and 5) of the data stream grouping module, if a hash collision has occurred during grouping, or if PT[j].pkt_count exceeds the historical average, step 2] is executed to migrate PT[j] from the index queue Q1 to the index queue Q2;
2] an element j is appended at the tail position t of Q2, the element count of Q2 increases by 1, the unidirectional index from Q2 to PT[j] is established, and Q2.pkt_count increases by PT[j].pkt_count; following the unidirectional index from PT[j] to Q1, its position PT[j].idx in Q1 is found; the element i stored at the tail of Q1 is used to find PT[i]; PT[i].idx is set to the same value as PT[j].idx, and at the same time the element at position PT[j].idx in Q1 is set to i; the element count of Q1 decreases by 1 and Q1.pkt_count decreases by PT[j].pkt_count, which completes the deletion of PT[j] from Q1; PT[j].idx is set to t and PT[j].Q is set to Q2, establishing the index from PT[j] to Q2;
3] if scheduling is triggered by the data stream grouping module, the groups indexed by Q2 are submitted group by group to the connection management module; if scheduling is triggered by the starvation avoidance module, the groups indexed in Q1 are submitted group by group to the connection management module.
10. The system of claim 9, characterized in that the processing of the starvation avoidance module is as follows:
1] by collecting the trigger information of step 6) of the data stream grouping module, the packet scheduler is triggered at the appropriate time to submit the groups indexed in Q1;
2] a grouping collision counter C1 and a Q2 scheduling counter C2 are maintained; C1 records the number of hash collisions of the grouping module, each collision increases C1 by 1, C1 decreases by 1 when the buffer becomes full, and C1 is set to 0 if it drops below 0; C2 records the cumulative number of packets scheduled from Q2, each time Q2 is scheduled C2 is increased by Q2.pkt_count, and C2 is reset to 0 when Q1 is scheduled;
3] when Q1.pkt_count = K, or C1 exceeds 3, or C2 exceeds 10 times Q1.pkt_count, the scheduler is triggered to schedule Q1, and the counters are cleared to 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610330417.3A CN106059957B (en) | 2016-05-18 | 2016-05-18 | Fast flow table lookup method and system in a high-concurrency network environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610330417.3A CN106059957B (en) | 2016-05-18 | 2016-05-18 | Fast flow table lookup method and system in a high-concurrency network environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106059957A CN106059957A (en) | 2016-10-26 |
CN106059957B true CN106059957B (en) | 2019-09-10 |
Family
ID=57177171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610330417.3A Active CN106059957B (en) | 2016-05-18 | 2016-05-18 | Quickly flow stream searching method and system under a kind of high concurrent network environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106059957B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106789733B (en) * | 2016-12-01 | 2019-12-20 | 北京锐安科技有限公司 | Device and method for improving large-scale network flow table searching efficiency |
CN107707479B (en) * | 2017-10-31 | 2021-08-31 | 北京锐安科技有限公司 | Five-tuple rule searching method and device |
CN109921996B (en) * | 2018-12-29 | 2021-11-09 | 长沙理工大学 | High-performance OpenFlow virtual flow table searching method |
CN111163104B (en) * | 2020-01-02 | 2021-03-16 | 深圳市高德信通信股份有限公司 | Network security protection system for enterprise |
CN111538694B (en) * | 2020-07-09 | 2020-11-10 | 常州楠菲微电子有限公司 | Data caching method for network interface to support multiple links and retransmission |
CN112131223B (en) * | 2020-09-24 | 2024-02-02 | 曙光网络科技有限公司 | Traffic classification statistical method, device, computer equipment and storage medium |
CN113098911B (en) * | 2021-05-18 | 2022-10-04 | 神州灵云(北京)科技有限公司 | Real-time analysis method of multi-segment link network and bypass packet capturing system |
CN115665051B (en) * | 2022-12-29 | 2023-03-28 | 北京浩瀚深度信息技术股份有限公司 | Method for realizing high-speed flow table based on FPGA + RLDRAM3 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103248573A (en) * | 2013-04-08 | 2013-08-14 | 北京天地互连信息技术有限公司 | Centralization management switch for OpenFlow and data processing method of centralization management switch |
-
2016
- 2016-05-18 CN CN201610330417.3A patent/CN106059957B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103248573A (en) * | 2013-04-08 | 2013-08-14 | 北京天地互连信息技术有限公司 | Centralization management switch for OpenFlow and data processing method of centralization management switch |
Non-Patent Citations (1)
Title |
---|
Fast Hash Table Lookup Using Extended Bloom Filter: An Aid to Network Processing; Haoyu Song et al.; ACM SIGCOMM Computer; 2005-10-31; full text *
Also Published As
Publication number | Publication date |
---|---|
CN106059957A (en) | 2016-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106059957B (en) | Fast flow table lookup method and system in a high-concurrency network environment | |
US11855901B1 (en) | Visibility sampling | |
US10673770B1 (en) | Intelligent packet queues with delay-based actions | |
KR102337092B1 (en) | Traffic measurement method, device, and system | |
US9380007B2 (en) | Method, apparatus and system for packet reassembly and reordering | |
US9430511B2 (en) | Merging independent writes, separating dependent and independent writes, and error roll back | |
US11665104B1 (en) | Delay-based tagging in a network switch | |
CN110061927A (en) | Congestion aware and labeling method towards micro- burst flow in a kind of more queuing data center environments | |
CN108833299B (en) | Large-scale network data processing method based on reconfigurable switching chip architecture | |
US20220045972A1 (en) | Flow-based management of shared buffer resources | |
US20110149991A1 (en) | Buffer processing method, a store and forward method and apparatus of hybrid service traffic | |
CN101136854B (en) | Method and apparatus for implementing data packet linear speed processing | |
US9485326B1 (en) | Scalable multi-client scheduling | |
CN107453948A (en) | The storage method and system of a kind of network measurement data | |
CN101753584A (en) | Method for improving rapid message processing speed of intelligent transformer substation under VxWorks system | |
US9374303B1 (en) | Method and apparatus for processing multicast packets | |
CN113141320A (en) | System, method and application for rate-limited service planning and scheduling | |
CN111385222B (en) | Real-time, time-aware, dynamic, context-aware, and reconfigurable Ethernet packet classification | |
CN101150490B (en) | A queue management method and system for unicast and multicast service data packet | |
CN104360902A (en) | Sliding window-based multi-priority metadata task scheduling method | |
Li et al. | Packet rank-aware active queue management for programmable flow scheduling | |
US6272143B1 (en) | Quasi-pushout method associated with upper-layer packet discarding control for packet communication systems with shared buffer memory | |
CN115567460A (en) | Data packet processing method and device | |
CN108173784B (en) | Aging method and device for data packet cache of switch | |
CN101945054A (en) | Dispatching algorithm suitable for feedback two-stage exchange structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |