CN103023806B - Cache resource control method and device for a shared-buffer Ethernet switch - Google Patents


Info

Publication number
CN103023806B
CN103023806B (granted publication of application CN201210551390.2A)
Authority
CN
China
Prior art keywords
flow
cache
queue
credit
packet
Prior art date
Legal status
Active
Application number
CN201210551390.2A
Other languages
Chinese (zh)
Other versions
CN103023806A (en)
Inventor
罗婷
汪学舜
Current Assignee
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Wuhan FiberHome Networks Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan FiberHome Networks Co Ltd
Priority to CN201210551390.2A
Publication of CN103023806A
Application granted
Publication of CN103023806B
Legal status: Active
Anticipated expiration


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a cache resource control method and device for a shared-buffer Ethernet switch, relating to the communications field. The method comprises the following steps: when a packet is received, perform a hash calculation to obtain the packet's flow index, retrieve the flow array by the flow index, and obtain the flow information; retrieve the queue array by the queue index to obtain the current queue length and the number of flows in the queue; update the used-cache value in the flow information according to the packet length; compare the used cache and current credit in the flow information against the system average cache, the attack-flow credit threshold and the attack-flow cache threshold, and apply the corresponding flow-behaviour decision; after forwarding the packet, update the used cache and current credit in the flow information, the active-flow count of the queue, and the queue length. The invention can respond promptly to the current cache usage, detect attacks quickly, prevent attack traffic from consuming cache resources, and effectively control network congestion.

Description

Cache resource control method and device for a shared-buffer Ethernet switch
Technical field
The present invention relates to the communications field, and in particular to a cache resource control method and device for a shared-buffer Ethernet switch.
Background art
With the release of the 40G and 100G Ethernet standards, Ethernet switches are ever more widely deployed, and switch cache management has become correspondingly important. As network transmission speeds rise, a switch's cache can be consumed rapidly by malicious attack flows. Because the cache resources of current Ethernet switches are limited, cache protection strategies for switches have become an important research topic. In today's network equipment, cache management is one of the most pressing and challenging problems. Without suitable cache management, once even a small part of the network reaches saturation, network throughput drops sharply and packet delay rises sharply owing to the large number of out-of-order messages.
Reducing or preventing cache exhaustion is naturally regarded, sometimes as the only method, as the solution to the cache-sharing problem, but some such methods entirely ignore their impact on individual data flows. Network attack flows and hotspot flows likewise degrade network performance. Indiscriminately reducing or eliminating the blocking caused by attack or hotspot flows does not improve network performance; however, by preventing or reducing the negative effect of attack flows on other data flows, the bandwidth of normal communication services can be guaranteed.
As shown in Figure 1, in the switching model of a shared-buffer Ethernet switch all ports share the switch's cache. A data message enters the shared cache from an input port through a port queue, then passes through the scheduler into the output-port cache; the scheduler dispatches the data in the output-port cache according to a first-in-first-out policy. The shared-cache management unit is responsible for allocating free space from the shared cache to store incoming messages, with the address of the storage space serving as the index of each message. Thereafter, all processing of the data message by the switch, including lookup, modification and queuing, operates on the message index. Finally, the scheduler uses the index to locate the message's actual storage address in the shared cache, extracts the message and sends it through the output port. The shared-cache management unit then reclaims the storage space occupied by the message and returns it to the shared cache.
To avoid congestion, when the packets in the output-port cache reach a certain threshold, packet dropping begins and the input is throttled; the feedback control model is shown in Figure 2. Suppose the arrival rate of the data flow is w(t) and the rate at which the switching equipment can forward is q(t), where q(t) is adjusted by feedback from the buffer occupancy x(t). The dynamic behaviour of the model in the figure can then be described by the following equation:
x(t) = Sat_K{ q(t − τ₁) + w(t) + γ̄ }    (1)
Wherein:
Sat_K(x) = K, if x > K;  x, if 0 ≤ x ≤ K;  0, if x < 0    (2)
By introducing a delay into the system input (the buffer occupancy) and removing the nonlinear restriction, the adjustment process of the buffer can, after simplification, be represented by the following linear equation:
x(n+1) = x(n) + λ(n−τ) + d(n) − μ    (3)
where λ(n−τ) denotes the (delayed) packet input to the buffer, d(n) the packets dropped in the nth cycle, and μ the packets sent by the output port in one cycle.
Traffic classification is an important issue for the head-of-line (HOL) blocking problem, and the only way to eliminate HOL blocking is to separate congested flows from non-congested flows in a shared queue. Traffic in a network is not uniformly distributed: transmission is bursty, and packets of the same message, belonging to the same node, account for the greater part of network traffic. Network traffic exhibits two characteristics, temporal locality and spatial locality. Temporal locality describes the relation between packets over time: a recently arrived packet tends to have the same destination address as the packets that arrived shortly before it. Spatial locality means that the destination addresses of most packets concentrate on a small number of ports. Because temporal and spatial locality depend on the traffic class, and because forwarded packets exhibit both, it is unnecessary to allocate a queue to every input port; in an IP transport network only a fraction of the queues are in constant use. Spatial locality implies that a large number of packets share a limited set of destination ports and therefore use the same queues over a period of time. Since temporal and spatial locality make conventional cache management inefficient, more efficient mechanisms need to be developed.
To accommodate the TCP (Transmission Control Protocol) dynamic congestion control algorithm, a router needs a cache equal to the average RTT (Round-Trip Time) of the data flows multiplied by the router's interface speed. In an ideal network environment it is generally assumed that RTT ≤ 1 ms, so at a link rate of 1 Gbps, 1 ms can buffer 125,000 bytes, roughly 1488 Ethernet frames of 64 bytes each. In the Ethernet switches deployed in existing networks, each GE (Gigabit Ethernet) port is allocated on average 200 KB–2 MB of cache; for example, each GE port of a Cisco 6500 switch is configured with either 439 KB or 1.2 MB of cache depending on the hardware module.
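The bandwidth-delay rule of thumb above can be verified arithmetically. One plausible reading of the 1488-frame figure is that each 64-byte frame carries about 20 bytes of on-wire overhead (preamble plus inter-frame gap); that per-frame overhead is an assumption, not stated in the text.

```python
# Back-of-the-envelope check of the sizing rule: buffer = average RTT x link speed.
# With RTT = 1 ms on a 1 Gbps (10**9 bit/s) link this gives 125,000 bytes,
# which is roughly 1488 minimum-size Ethernet frames if each 64-byte frame
# is charged ~20 bytes of preamble + inter-frame gap (assumed overhead).

def required_buffer_bytes(rtt_s: float, link_bps: int) -> int:
    return int(rtt_s * link_bps / 8)    # bits -> bytes

buf = required_buffer_bytes(1e-3, 10**9)    # -> 125000 bytes
frames = buf // (64 + 20)                   # -> 1488 minimum-size frames
```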
In a high-bandwidth network, cache is consumed both heavily and quickly: a 256 KB cache is exhausted in about 20 ms on a 100 Mbps link, and in only about 2 ms on a 1 Gbps link. When the cache shared by all ports of a core switch is exhausted, packets are dropped unconditionally; if the switch uses a queue-scheduling algorithm for congestion control, then once the cache of a given queue is exhausted, every packet destined for that queue is dropped. Facing cache exhaustion, some switches reserve part of the cache for critical data messages, such as spanning-tree protocol packets and routing-protocol control messages like BGP (Border Gateway Protocol, the routing protocol that connects the autonomous systems of the Internet); but ordinary data messages awaiting forwarding receive no such protection, so the switch is vulnerable to attack and normal data messages cannot be scheduled effectively.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the above background art by providing a cache resource control method and device for a shared-buffer Ethernet switch that can respond promptly to the current cache usage, detect attacks quickly, prevent attack traffic from consuming cache resources, effectively control network congestion, reserve sufficient bandwidth for other key services, guarantee effective scheduling of legitimate traffic, and ensure that the shared-buffer Ethernet switch, its cache and its queues are protected from attack.
The cache resource control method for a shared-buffer Ethernet switch provided by the invention comprises the following steps:
S1. When the shared-cache management unit at a physical port of the shared-buffer Ethernet switch receives a packet, it performs a hash calculation on specific contents of the packet to obtain the packet's flow index. When a flow is initialised, its current credit is set to the maximum credit value, which is the maximum number of packets a flow is allowed to forward while its used cache exceeds the system average cache;
S2. The shared-cache management unit retrieves the flow array by the flow index and obtains the flow information, which comprises the flow's used cache, current credit and queue index. According to the packet's information and the scheduling rules, a queue is allocated to the flow and the packet's queue index is obtained;
S3. The shared-cache management unit retrieves the queue array by the queue index and obtains the current queue length and the number of flows in the queue; dividing the queue length by the number of flows yields the average cache occupied by each flow in the current queue;
S4. The shared-cache management unit updates the used-cache value in the flow information according to the length of the received packet: the used cache plus the packet length becomes the new used cache;
S5. The shared-cache management unit sets an attack-flow credit threshold and an attack-flow cache threshold. The attack-flow credit threshold is the credit level below which a data flow is judged to be an attack flow; the attack-flow cache threshold is the maximum cache an attack flow may occupy. According to the comparison of the flow's used cache and current credit against the system average cache, the attack-flow credit threshold and the attack-flow cache threshold, the following seven flow-behaviour decisions are made:
(1) If the flow's used cache does not exceed the system average cache, the packet is judged non-attacking and the shared-cache management unit forwards it normally;
(2) If the flow's used cache exceeds the system average cache but its current credit equals the system maximum credit value, packets are dropped at random with a given probability;
(3) If the flow's current credit is below the attack-flow credit threshold, the flow is judged to be an attack flow: its used cache is limited to within the attack-flow cache threshold, the subsequent packets of the flow are dropped, and the flow's credit value is decremented to 0;
(4) If the flow's used cache exceeds the attack-flow cache threshold and its current credit is at or below the attack-flow credit threshold, the data flow is judged to be an attack flow: packets are dropped at random with a given probability and the flow's credit value is decremented randomly;
(5) If the sending source of the flow does not respond to the drop events and continues to send at high rate, so that the flow's used cache still exceeds the system average cache, packets are dropped at random with a given probability and the flow's credit value is decremented randomly;
(6) If the flow's used cache is at or below the system average cache and its current credit is at or below the attack-flow credit threshold, the data flow is an attack flow but its cache use is small: packets are forwarded normally while the flow's current credit is incremented;
(7) If the flow's used cache is at or below the system average cache and its current credit is above the attack-flow credit threshold, the flow was once an attack flow but has become a normal flow: its credit attribute is reset to the system maximum credit value and its packets are forwarded as for a normal flow;
S6. After the shared-cache management unit forwards the packet, it updates the used cache and current credit in the flow information, the active-flow count of the queue, and the queue length;
S7. Egress-direction scheduling is performed on the data flows in the port queues and the queue data information is updated.
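The seven decisions of step S5 can be condensed into a single per-packet function. This is a sketch only: the field names, the evaluation order of the checks, and the drop probability p are assumptions on my part, since the patent states the cases declaratively without fixing an ordering.

```python
import random

# Condensed sketch of the seven FBMP flow-behaviour decisions of step S5.
# flow is a dict with assumed keys 'used_cache' and 'credit'; the thresholds
# mirror the quantities named in the text.

def fbmp_decide(flow, avg_cache, credit_max,
                attack_credit_thresh, attack_cache_thresh, p=0.5):
    """Return 'forward' or 'drop'; updates flow['credit'] in place."""
    used, credit = flow['used_cache'], flow['credit']
    if used <= avg_cache:
        if credit <= attack_credit_thresh:        # case (6): attack flow, light cache use
            flow['credit'] = credit + 1           #   forward, credit increments
        elif credit < credit_max:                 # case (7): recovered flow
            flow['credit'] = credit_max           #   restore maximum credit
        return 'forward'                          # case (1): normal flow
    if credit < attack_credit_thresh:             # case (3): confirmed attack flow
        flow['credit'] = 0                        #   drop its subsequent packets
        return 'drop'
    if used > attack_cache_thresh and credit <= attack_credit_thresh:
        flow['credit'] = credit - random.randint(1, 3)     # case (4): random decrement
        return 'drop' if random.random() < p else 'forward'
    # cases (2)/(5): over the average share; random early drop, credit decays
    if credit < flow.get('credit_max', credit_max):
        flow['credit'] = credit - random.randint(1, 3)     # case (5)
    return 'drop' if random.random() < p else 'forward'

flow = {'used_cache': 10, 'credit': 100}
fbmp_decide(flow, avg_cache=50, credit_max=100,
            attack_credit_thresh=20, attack_cache_thresh=200)   # -> 'forward'
```

Note the design intent visible even in this sketch: a flow is punished only while it is both over its fair share and low on credit, and it earns its way back to normal status (case 7) once its cache use falls below the average.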
In the above technical scheme, the flow index in step S1 is used to retrieve the flow array; the flow array records each flow's cache occupancy, credit value, whether the flow is active, and the queue number assigned to the flow.
In the above technical scheme, step S2 further comprises: if the flow's queue index is an invalid value, allocating a queue index to the flow. The queue index is used to retrieve the queue array of the switch's physical port, which records the current length and number of active flows of each queue under that port.
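The two lookup tables described above can be sketched as plain records. The concrete field names and table sizes are assumptions; the patent specifies only what each array must record.

```python
from dataclasses import dataclass

# Minimal sketch of the per-flow and per-queue arrays described in the text.

@dataclass
class FlowEntry:                 # one slot of the flow array, addressed by flow index
    used_cache: int = 0          # cache currently occupied by this flow
    credit: int = 0              # current credit value
    active: bool = False         # whether the flow is currently active
    queue_index: int = -1        # queue assigned to the flow; -1 = invalid (unassigned)

@dataclass
class QueueEntry:                # one slot of the queue array, addressed by queue index
    length: int = 0              # current queue length
    active_flows: int = 0        # number of active flows in the queue

flow_table = [FlowEntry() for _ in range(1 << 10)]    # sized by the hash space (assumed)
queue_table = [QueueEntry() for _ in range(8)]         # e.g. 8 queues per port (assumed)
```

A new flow arriving with `queue_index == -1` is exactly the "invalid value" condition of step S2 that triggers queue allocation.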
In the above technical scheme, step S6 comprises the following steps: after the shared-cache management unit forwards the packet, it updates the used-cache value in the flow information according to the packet length, subtracting the length of the sent packet from the used cache; it then judges whether the flow's cache occupancy is zero and, if so, decrements the flow count by one according to the queue index; and it updates the queue length according to the queue index and updates the current credit in the flow information and the active-flow count of the queue.
In the above technical scheme, the method further comprises, before step S6: if the packet is the first to enter the queue, incrementing the flow counter by one according to the queue index before entering the packet-sending flow.
In the above technical scheme, step S6 further comprises: after a packet is dropped, judging whether the flow's cache occupancy is zero and, if so, decrementing the flow count by one according to the queue index.
In the above technical scheme, step S7 comprises the following steps: first perform send-data processing according to the flow index and the packet, updating the used-cache value in the flow information according to the length of the sent packet by subtracting that length from the used cache; then judge whether the flow's used-cache value is zero and, if so, decrement the flow counter by one according to the queue index; then send the packet.
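The egress-side bookkeeping of steps S6/S7 reduces to a few updates. A sketch under assumed field names: subtract the transmitted length from the flow's cache usage and the queue length, and release the flow from its queue once its occupancy reaches zero.

```python
from types import SimpleNamespace

# Sketch of the dequeue-side updates of step S7 (field names assumed).

def on_transmit(flow, queues, pkt_len):
    flow.used_cache -= pkt_len           # used cache minus sent-packet length
    q = queues[flow.queue_index]
    q.length -= pkt_len                  # queue shrinks by the same amount
    if flow.used_cache == 0:             # last buffered byte of this flow has left
        q.active_flows -= 1              # flow counter minus one, per step S7
        flow.active = False

flow = SimpleNamespace(used_cache=100, queue_index=0, active=True)
queues = [SimpleNamespace(length=300, active_flows=2)]
on_transmit(flow, queues, 100)           # flow drained: active_flows drops to 1
```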
The present invention also provides a cache resource control device for a shared-buffer Ethernet switch. The control device is arranged in the shared-cache management unit and comprises a receiving module, a flow-index calculation module, a queue-index acquisition module, an average-cache calculation module, an update module, a flow initialisation module, a judgement module, a packet-discard module, an attack-flow determination module and a flow-credit processing module, wherein:
the receiving module is configured to: receive a packet, generate a flow-index calculation trigger signal and a queue-index acquisition trigger signal, send the packet together with the flow-index calculation trigger signal to the flow-index calculation module, and send the packet together with the queue-index acquisition trigger signal to the queue-index acquisition module;
the flow-index calculation module is configured to: upon receiving the packet and the flow-index calculation trigger signal from the receiving module, perform a hash calculation on specific contents of the packet to obtain the flow index;
the queue-index acquisition module is configured to: upon receiving the packet and the queue-index acquisition trigger signal from the receiving module, allocate a queue to the flow according to the packet's information and the scheduling rules, obtain the queue index, generate an average-cache calculation trigger signal, and send it together with the queue index to the average-cache calculation module;
the average-cache calculation module is configured to: upon receiving the queue index and the average-cache calculation trigger signal from the queue-index acquisition module, calculate the average cache occupied by each flow in the current queue according to the queue index, generate an update trigger signal, and send the update trigger signal to the update module;
the update module is configured to: upon receiving the update trigger signal from the average-cache calculation module, update the used-cache value in the flow information, the active-flow count in the queue information, and the queue length; and perform egress-direction scheduling on the data flows in the port queues and update the queue data information;
the flow initialisation module is configured to: initialise the flow, generate a set-maximum-credit trigger signal, send the flow and the set-maximum-credit trigger signal to the flow-credit processing module, and send the flow to the judgement module;
the judgement module is configured to: after receiving the flow, if it determines that the flow's used cache exceeds the system average cache, generate a first-time random-drop trigger signal and send it to the packet-discard module; if it determines that the flow's sending source does not respond to the drop events and continues to send at high rate while the flow's used cache still exceeds the system average cache, generate a random-drop trigger signal and send the flow and the random-drop trigger signal to the packet-discard module; and if it determines that the flow's credit value is below the attack-flow credit threshold, generate an attack-flow judgement trigger signal and send the flow and the attack-flow judgement trigger signal to the attack-flow determination module;
the packet-discard module is configured to: upon receiving the first-time random-drop trigger signal from the judgement module, begin dropping packets at random; upon receiving the flow and the random-drop trigger signal from the judgement module, drop packets at random with the given probability, generate a random-credit-decrement trigger signal, and send the flow and the random-credit-decrement trigger signal to the flow-credit processing module; and upon receiving the attack flow and the unconditional-drop trigger signal from the attack-flow determination module, drop all subsequent packets of the attack flow;
the attack-flow determination module is configured to: upon receiving the flow and the attack-flow judgement trigger signal from the judgement module, judge the flow to be an attack flow, limit its used cache to within the attack-flow cache threshold, generate an unconditional-drop trigger signal and a decrement-credit-to-zero trigger signal, send the attack flow and the unconditional-drop trigger signal to the packet-discard module, and send the attack flow and the decrement-credit-to-zero trigger signal to the flow-credit processing module; and, if it determines that the attack flow has reduced its sending rate and the system cache resources are sufficient with no congestion, generate a restore-maximum-credit trigger signal and send the attack flow and the restore-maximum-credit trigger signal to the flow-credit processing module;
the flow-credit processing module is configured to: upon receiving the flow and the set-maximum-credit trigger signal from the flow initialisation module, set the flow's credit value to the maximum credit value; upon receiving the flow and the random-credit-decrement trigger signal from the packet-discard module, decrement the flow's credit value randomly; upon receiving the attack flow and the decrement-credit-to-zero trigger signal from the attack-flow determination module, decrement the attack flow's credit value to 0; and upon receiving the attack flow and the restore-maximum-credit trigger signal from the attack-flow determination module, restore the attack flow's credit value to the maximum credit value, whereupon the attack flow becomes a normal flow, which is then sent to the judgement module.
In the above technical scheme, the update module updates the used-cache value in the flow information according to the packet length, subtracting the length of the sent packet from the used cache; it then judges whether the flow's cache occupancy is zero and, if so, decrements the flow count by one according to the queue index; and it updates the queue length according to the queue index and updates the current credit in the flow information and the active-flow count of the queue.
In the above technical scheme, after the packet-discard module drops a packet, it judges whether the flow's cache occupancy is zero and, if so, decrements the flow count by one according to the queue index.
Compared with the prior art, the advantages of the present invention are as follows:
The present invention can respond promptly to the current cache usage, detect attacks quickly, limit the system resources used by attacks, prevent attack traffic from consuming cache resources, effectively control network congestion, reserve sufficient bandwidth for other key services, guarantee effective scheduling of legitimate traffic, and ensure that the shared-buffer Ethernet switch, its cache and its queues are protected from attack.
Brief description of the drawings
Fig. 1 is a schematic diagram of the switching model of a shared-buffer Ethernet switch in the prior art.
Fig. 2 is a schematic diagram of the shared-cache feedback control model in the prior art.
Fig. 3 is a flow chart of the FBMP algorithm in the embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in Figure 3, the embodiment of the present invention provides a cache resource control method for a shared-buffer Ethernet switch, also called the FBMP (Flow Based Memory Protection mechanism, a flow-based cache protection strategy) algorithm, comprising the following steps:
S1. When the shared-cache management unit (at a physical port of the shared-buffer Ethernet switch) receives a packet, it performs a hash calculation on specific contents of the packet to obtain the packet's flow index. The flow index is used to retrieve the flow array, which records each flow's cache occupancy, credit value, whether the flow is active, and the queue number assigned to the flow. When a flow is initialised, its current credit is set to the maximum credit value, which is the maximum number of packets a flow is allowed to forward while its used cache exceeds the system average cache;
S2. The shared-cache management unit retrieves the flow array by the flow index and obtains the flow information, which comprises the flow's used cache, current credit and queue index. According to the packet's information and the scheduling rules, a queue is allocated to the flow and the packet's queue index is obtained. If the flow's queue index is an invalid value, a new data flow has arrived and a queue index must be allocated to it. The queue index is used to retrieve the queue array of the switch's physical port, which records the current length and number of active flows of each queue under that port;
S3. The shared-cache management unit retrieves the queue array by the queue index and obtains the current queue length and the number of flows in the queue; dividing the queue length by the number of flows yields the average cache occupied by each flow in the current queue;
S4. The shared-cache management unit updates the used-cache value in the flow information according to the length of the received packet: the used cache plus the packet length becomes the new used cache;
S5. The shared-cache management unit sets an attack-flow credit threshold and an attack-flow cache threshold. The attack-flow credit threshold is the credit level below which a data flow is judged to be an attack flow; the attack-flow cache threshold is the maximum cache an attack flow may occupy. According to the comparison of the flow's used cache and current credit against the system average cache, the attack-flow credit threshold and the attack-flow cache threshold, the following seven flow-behaviour decisions are made:
(1) If the flow's used cache does not exceed the system average cache, the packet is judged non-attacking and the shared-cache management unit forwards it normally;
(2) If the flow's used cache exceeds the system average cache but its current credit equals the system maximum credit value, the flow is a suspected attack flow and packets will be dropped at random with a given probability;
(3) If the flow's current credit is below the attack-flow credit threshold, the flow is judged to be an attack flow: its used cache is limited to within the attack-flow cache threshold, the subsequent packets of the flow are dropped, and the flow's credit value is decremented to 0, so that the attack flow affects the switch's cache resources to the least possible degree;
(4) If the flow's used cache exceeds the attack-flow cache threshold and its current credit is at or below the attack-flow credit threshold, the data flow is judged to be an attack flow: packets are dropped at random with a given probability and the flow's credit value is also decremented randomly;
(5) If the sending source of the flow does not respond to the drop events and continues to send at high rate, so that the flow's used cache still exceeds the system average cache, packets are dropped at random with a given probability and the flow's credit value is also decremented randomly;
(6) If the flow's used cache is at or below the system average cache and its current credit is at or below the attack-flow credit threshold, the data flow is an attack flow but its cache use is small: packets can still be forwarded normally while the flow's current credit is incremented;
(7) If the flow's used cache is at or below the system average cache and its current credit is above the attack-flow credit threshold, the flow was once an attack flow but has now reduced its sending rate; the system cache resources are sufficient and there is no congestion, so the flow has become a normal flow: its credit attribute is reset to the system maximum credit value and its packets are forwarded as for a normal flow;
S6. After the shared-cache management unit forwards the packet, it updates the used-cache value in the flow information according to the packet length, subtracting the length of the sent packet from the used cache; it then judges whether the flow's cache occupancy is zero and, if so, decrements the flow count by one according to the queue index; it updates the queue length according to the queue index and updates the current credit in the flow information and the active-flow count of the queue. If the packet is the first to enter the queue, the flow counter is also incremented by one according to the queue index before entering the packet-sending flow. After a packet is dropped, whether the flow's cache occupancy is zero is judged and, if so, the flow count is decremented by one according to the queue index;
S7. Egress-direction scheduling is performed on the data flows in the port queues and the queue data information is updated: first, send-data processing is performed according to the flow index and the packet, and the used-cache value in the flow information is updated according to the length of the sent packet by subtracting that length from the used cache; then, whether the flow's used-cache value is zero is judged and, if so, the flow counter is decremented by one according to the queue index; then the packet is sent.
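Step S3 above computes each flow's fair share simply as the queue length divided by the number of active flows in the queue. A minimal sketch follows; the guard for a queue with no active flows is my assumption, as the patent does not discuss that case.

```python
# Per-flow fair-share computation of step S3: queue length / active flows.

def avg_cache_per_flow(queue_length: int, active_flows: int) -> int:
    if active_flows == 0:
        return queue_length          # no competing flows: whole queue available (assumed)
    return queue_length // active_flows

share = avg_cache_per_flow(12000, 4)    # -> 3000 bytes per flow
```

This fair share is the "system average cache" that every one of the seven decisions in step S5 compares the flow's used cache against.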
An embodiment of the present invention also provides a cache resource control device for a shared-buffer Ethernet switch. The cache resource control device is arranged in the shared-buffer management unit and comprises a receiving module, a flow index calculation module, a queue index acquisition module, an average cache calculation module, an update module, a flow initialization module, a judgment module, a packet discarding module, an attack flow determination module, and a flow credit processing module, wherein:
The receiving module is configured to: receive a packet, generate a flow-index-calculation trigger signal and a queue-index-acquisition trigger signal, send the packet together with the flow-index-calculation trigger signal to the flow index calculation module, and send the packet together with the queue-index-acquisition trigger signal to the queue index acquisition module;
The flow index calculation module is configured to: upon receiving the packet and the flow-index-calculation trigger signal from the receiving module, perform a hash calculation on specific contents of the packet to obtain the flow index;
The queue index acquisition module is configured to: upon receiving the packet and the queue-index-acquisition trigger signal from the receiving module, allocate a queue for the flow according to the packet's information and the scheduling rules, obtain the queue index, generate an average-cache-calculation trigger signal, and send it together with the queue index to the average cache calculation module;
The average cache calculation module is configured to: upon receiving the queue index and the average-cache-calculation trigger signal from the queue index acquisition module, calculate the average cache occupied by each flow in the current queue according to the queue index, generate an update trigger signal, and send the update trigger signal to the update module;
The update module is configured to: upon receiving the update trigger signal from the average cache calculation module, update the used-cache variable in the flow information and the active-flow count and queue length in the queue information. Specifically, it updates the used-cache variable in the flow information according to the packet's length, subtracting the sent packet's length from the used cache; it then checks whether the flow's cache occupancy is zero and, if so, decrements the flow count by one according to the queue index; it updates the queue length according to the queue index and updates the current credit in the flow information and the active-flow count of the queue. It also performs egress scheduling on the data flows in the port queues and updates the queue data information;
The flow initialization module is configured to: initialize a flow, generate a set-maximum-credit trigger signal, send the flow together with the set-maximum-credit trigger signal to the flow credit processing module, and send the flow to the judgment module;
The judgment module is configured to: after receiving a flow, if it determines that the flow's used cache exceeds the system average cache, generate a first-time random-drop trigger signal and send the first-time random-drop trigger signal to the packet discarding module; if it determines that the flow's sender has not responded to the drop events and continues sending packets at a high rate while the flow's used cache still exceeds the system average cache, generate a random-drop trigger signal and send the flow together with the random-drop trigger signal to the packet discarding module; if it determines that the flow's credit is below the attack-flow credit threshold, generate an attack-flow-determination trigger signal and send the flow together with the attack-flow-determination trigger signal to the attack flow determination module;
The packet discarding module is configured to: upon receiving the first-time random-drop trigger signal from the judgment module, start randomly dropping packets; upon receiving a flow and the random-drop trigger signal from the judgment module, randomly drop packets with a given probability, generate a random-credit-decrement trigger signal, and send the flow together with the random-credit-decrement trigger signal to the flow credit processing module; upon receiving an attack flow and the unconditional-drop trigger signal from the attack flow determination module, unconditionally drop all subsequent packets of the flow to which the packet belongs; after dropping a packet, check whether the flow's cache occupancy is zero and, if so, decrement the flow count by one according to the queue index;
The attack flow determination module is configured to: upon receiving a flow and the attack-flow-determination trigger signal from the judgment module, determine the flow to be an attack flow, limit the cache used by the attack flow to within the constant attack-flow cache threshold, generate an unconditional-drop trigger signal and a decrement-credit-to-zero trigger signal, send the attack flow together with the unconditional-drop trigger signal to the packet discarding module, and send the attack flow together with the decrement-credit-to-zero trigger signal to the flow credit processing module; if it determines that the attack flow has reduced its sending rate and the system cache resources are sufficient, with no congestion, generate a restore-maximum-credit trigger signal and send the attack flow together with the restore-maximum-credit trigger signal to the flow credit processing module;
The flow credit processing module is configured to: upon receiving a flow and the set-maximum-credit trigger signal from the flow initialization module, set the flow's credit to the maximum credit value; upon receiving a flow and the random-credit-decrement trigger signal from the packet discarding module, randomly decrement the flow's credit; upon receiving an attack flow and the decrement-credit-to-zero trigger signal from the attack flow determination module, decrement the attack flow's credit to 0, ensuring that the switch's cache resources are affected by the attack flow to the least possible degree; upon receiving an attack flow and the restore-maximum-credit trigger signal from the attack flow determination module, restore the attack flow's credit to the maximum credit value, whereupon the attack flow becomes a normal flow, and send the normal flow to the judgment module.
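The credit handling performed by the flow credit processing module in response to the four trigger signals above might be sketched as follows. This is illustrative only; the values of `MAX_CREDIT`, the attack-flow credit threshold, and the random decrement range are assumptions not fixed by the description.

```python
import random

MAX_CREDIT = 16              # assumed maximum credit value
ATTACK_CREDIT_THRESHOLD = 4  # assumed attack-flow credit threshold

class FlowCreditProcessor:
    """Reacts to the trigger signals emitted by the other modules."""

    def __init__(self):
        self.credit = {}  # flow index -> current credit value

    def on_set_max_credit(self, flow_id):
        # flow initialization module: credit starts at the maximum
        self.credit[flow_id] = MAX_CREDIT

    def on_random_decrement(self, flow_id):
        # packet discarding module: each random drop costs a random amount of credit
        self.credit[flow_id] = max(0, self.credit[flow_id] - random.randint(1, 3))

    def on_decrement_to_zero(self, flow_id):
        # attack flow determination module: a confirmed attack flow loses all credit
        self.credit[flow_id] = 0

    def on_restore_max_credit(self, flow_id):
        # the attack flow slowed down and the cache is no longer congested
        self.credit[flow_id] = MAX_CREDIT

    def is_attack_flow(self, flow_id):
        return self.credit[flow_id] < ATTACK_CREDIT_THRESHOLD
```

A flow thus cycles between normal and attack status purely through its credit value, which is what lets the device forgive a flow that backs off.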
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to cover them.
Matters not described in detail in this specification belong to the prior art known to those skilled in the field.
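The hash-based flow indexing performed in step S1 (and by the flow index calculation module) can be sketched as below. The choice of hash function (CRC-32) and of the "specific contents" hashed (the 5-tuple) are this sketch's assumptions; the patent fixes neither.

```python
import zlib

NUM_FLOW_ENTRIES = 4096  # assumed size of the data flow array

def flow_index(src_ip: bytes, dst_ip: bytes, proto: int,
               src_port: int, dst_port: int) -> int:
    """Hash specific contents of the packet (here: the 5-tuple) into an
    index of the data flow array used to look up per-flow state."""
    key = (src_ip + dst_ip + bytes([proto])
           + src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big"))
    return zlib.crc32(key) % NUM_FLOW_ENTRIES
```

All packets of the same flow map to the same entry, so the retrieved flow information (used cache, credit, queue index) accumulates per flow, as steps S2-S4 require.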

Claims (10)

1. A cache resource control method for a shared-buffer Ethernet switch, characterized in that it comprises the following steps:
S1: when the shared-buffer management unit of a physical port of the shared-buffer Ethernet switch receives a packet, a hash calculation is performed on specific contents of the packet to obtain the flow index of the packet; when each flow is initialized, its current credit is set to the maximum credit value, the maximum credit value being the maximum number of packets a data flow is allowed to forward while its used cache exceeds the system average cache;
S2: the shared-buffer management unit retrieves the data flow array according to the flow index to obtain the flow information, the flow information comprising the flow's used cache, current credit, and queue index; according to the packet's information and the scheduling rules, a queue is allocated for the flow, and the queue index of the packet is obtained;
S3: the shared-buffer management unit retrieves the queue array according to the queue index to obtain the current queue length and the number of flows in the queue, and divides the queue length by the number of flows in the queue to obtain the average cache occupied by each flow in the current queue;
S4: the shared-buffer management unit updates the used-cache variable in the flow information according to the length of the received packet: the packet's length is added to the used cache, which becomes the new used cache;
S5: the shared-buffer management unit sets an attack-flow credit threshold and an attack-flow cache threshold, the attack-flow credit threshold being the credit threshold for judging whether a data flow is an attack flow, and the attack-flow cache threshold being the maximum cache an attack flow may occupy; according to the comparison of the used cache and current credit in the flow information against the system average cache, the attack-flow credit threshold, and the attack-flow cache threshold, the following 7 flow-behavior judgments and treatments are made:
(1) if the flow's used cache does not exceed the system average cache, the packet is judged to be a non-attack packet and the shared-buffer management unit forwards it normally;
(2) if the flow's used cache exceeds the system average cache but its current credit equals the system maximum credit value, packets are randomly dropped with a given probability;
(3) if the flow's current credit < the attack-flow credit threshold, the flow is judged to be an attack flow, the cache used by the attack flow is limited to within the attack-flow cache threshold, subsequent packets of the flow are dropped, and the flow's credit is decremented to 0;
(4) if the flow's used cache > the attack-flow cache threshold and the flow's current credit ≤ the attack-flow credit threshold, the data flow is judged to be an attack flow, packets are randomly dropped with a given probability, and the flow's credit is randomly decremented;
(5) if the flow's sender does not respond to the drop events and continues sending packets at a high rate, and the flow's used cache still exceeds the system average cache, packets are randomly dropped with a given probability and the flow's credit is randomly decremented;
(6) if the flow's used cache ≤ the system average cache and its current credit ≤ the attack-flow credit threshold, the data flow is an attack flow but uses little cache; packets are forwarded normally while the flow's current credit is incremented;
(7) if the flow's used cache ≤ the system average cache and its current credit > the attack-flow credit threshold, the flow was once an attack flow but has become a normal flow; the flow's attribute value is reset to the system maximum credit value, and its packets are forwarded as a normal flow;
S6: after the shared-buffer management unit forwards the packet, the used cache and current credit in the flow information, the active-flow count of the queue, and the queue length are updated;
S7: egress scheduling is performed on the data flows in the port queues, and the queue data information is updated.
2. The cache resource control method for a shared-buffer Ethernet switch as claimed in claim 1, characterized in that: the flow index in step S1 is used to retrieve the data flow array, the data flow array recording each flow's cache occupancy and credit value, whether the flow is active, and the queue number used by the flow.
3. The cache resource control method for a shared-buffer Ethernet switch as claimed in claim 1, characterized in that step S2 further comprises the following step: if the flow's queue index is an invalid value, allocating a queue index for the flow; the queue index is used to retrieve the queue array of a physical port of the shared-buffer Ethernet switch, the queue array recording the current length and the number of active flows of each queue under the port.
4. The cache resource control method for a shared-buffer Ethernet switch as claimed in claim 1, characterized in that step S6 comprises the following steps: after the shared-buffer management unit forwards the packet, the used-cache variable in the flow information is updated according to the packet's length: the forwarded packet's length is subtracted from the used cache; it is then checked whether the flow's cache occupancy is zero and, if so, the flow count is decremented by one according to the queue index; the queue length is updated according to the queue index, and the current credit in the flow information and the active-flow count of the queue are updated.
5. The cache resource control method for a shared-buffer Ethernet switch as claimed in claim 4, characterized in that it further comprises, before step S6, the following step: if the packet is the first to enter the queue, incrementing the flow counter by one according to the queue index, and then entering the packet-sending flow.
6. The cache resource control method for a shared-buffer Ethernet switch as claimed in claim 5, characterized in that step S6 further comprises the following step: after a packet is dropped, checking whether the flow's cache occupancy is zero and, if so, decrementing the flow count by one according to the queue index.
7. The cache resource control method for a shared-buffer Ethernet switch according to any one of claims 1 to 6, characterized in that step S7 comprises the following steps: send processing is first performed according to the flow index and the packet, and the used-cache variable in the flow information is updated according to the sent packet's length: the sent packet's length is subtracted from the used cache; it is then checked whether the flow's used-cache variable is zero and, if so, the flow counter is decremented by one according to the queue index, after which the packet is sent.
8. A cache resource control device for a shared-buffer Ethernet switch, characterized in that: the cache resource control device is arranged in a shared-buffer management unit and comprises a receiving module, a flow index calculation module, a queue index acquisition module, an average cache calculation module, an update module, a flow initialization module, a judgment module, a packet discarding module, an attack flow determination module, and a flow credit processing module, wherein:
the receiving module is configured to: receive a packet, generate a flow-index-calculation trigger signal and a queue-index-acquisition trigger signal, send the packet together with the flow-index-calculation trigger signal to the flow index calculation module, and send the packet together with the queue-index-acquisition trigger signal to the queue index acquisition module;
the flow index calculation module is configured to: upon receiving the packet and the flow-index-calculation trigger signal from the receiving module, perform a hash calculation on specific contents of the packet to obtain the flow index;
the queue index acquisition module is configured to: upon receiving the packet and the queue-index-acquisition trigger signal from the receiving module, allocate a queue for the flow according to the packet's information and the scheduling rules, obtain the queue index, generate an average-cache-calculation trigger signal, and send it together with the queue index to the average cache calculation module;
the average cache calculation module is configured to: upon receiving the queue index and the average-cache-calculation trigger signal from the queue index acquisition module, calculate the average cache occupied by each flow in the current queue according to the queue index, generate an update trigger signal, and send the update trigger signal to the update module;
the update module is configured to: upon receiving the update trigger signal from the average cache calculation module, update the used-cache variable in the flow information and the active-flow count and queue length in the queue information; and perform egress scheduling on the data flows in the port queues and update the queue data information;
the flow initialization module is configured to: initialize a flow, generate a set-maximum-credit trigger signal, send the flow together with the set-maximum-credit trigger signal to the flow credit processing module, and send the flow to the judgment module;
the judgment module is configured to: after receiving a flow, if it determines that the flow's used cache exceeds the system average cache, generate a first-time random-drop trigger signal and send the first-time random-drop trigger signal to the packet discarding module; if it determines that the flow's sender has not responded to the drop events and continues sending packets at a high rate while the flow's used cache still exceeds the system average cache, generate a random-drop trigger signal and send the flow together with the random-drop trigger signal to the packet discarding module; if it determines that the flow's credit is below the attack-flow credit threshold, generate an attack-flow-determination trigger signal and send the flow together with the attack-flow-determination trigger signal to the attack flow determination module;
the packet discarding module is configured to: upon receiving the first-time random-drop trigger signal from the judgment module, start randomly dropping packets; upon receiving a flow and the random-drop trigger signal from the judgment module, randomly drop packets with a given probability, generate a random-credit-decrement trigger signal, and send the flow together with the random-credit-decrement trigger signal to the flow credit processing module; upon receiving an attack flow and the unconditional-drop trigger signal from the attack flow determination module, drop all subsequent packets of the flow to which the packet belongs;
the attack flow determination module is configured to: upon receiving a flow and the attack-flow-determination trigger signal from the judgment module, determine the flow to be an attack flow, limit the cache used by the attack flow to within the attack-flow cache threshold, generate an unconditional-drop trigger signal and a decrement-credit-to-zero trigger signal, send the attack flow together with the unconditional-drop trigger signal to the packet discarding module, and send the attack flow together with the decrement-credit-to-zero trigger signal to the flow credit processing module; if it determines that the attack flow has reduced its sending rate and the system cache resources are sufficient, with no congestion, generate a restore-maximum-credit trigger signal and send the attack flow together with the restore-maximum-credit trigger signal to the flow credit processing module;
the flow credit processing module is configured to: upon receiving a flow and the set-maximum-credit trigger signal from the flow initialization module, set the flow's credit to the maximum credit value; upon receiving a flow and the random-credit-decrement trigger signal from the packet discarding module, randomly decrement the flow's credit; upon receiving an attack flow and the decrement-credit-to-zero trigger signal from the attack flow determination module, decrement the attack flow's credit to 0; upon receiving an attack flow and the restore-maximum-credit trigger signal from the attack flow determination module, restore the attack flow's credit to the maximum credit value, whereupon the attack flow becomes a normal flow, and send the normal flow to the judgment module.
9. The cache resource control device for a shared-buffer Ethernet switch as claimed in claim 8, characterized in that: the update module updates the used-cache variable in the flow information according to the packet's length: the sent packet's length is subtracted from the used cache; it is then checked whether the flow's cache occupancy is zero and, if so, the flow count is decremented by one according to the queue index; the queue length is updated according to the queue index, and the current credit in the flow information and the active-flow count of the queue are updated.
10. The cache resource control device for a shared-buffer Ethernet switch as claimed in claim 8 or 9, characterized in that: after the packet discarding module drops a packet, it checks whether the flow's cache occupancy is zero and, if so, decrements the flow count by one according to the queue index.
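For readability, the seven flow-behavior cases of step S5 in claim 1 can be condensed into a single decision function. This is an illustrative sketch: the action names and the priority ordering of the partly overlapping conditions are this sketch's own choices, not fixed by the claim.

```python
def flow_action(used_cache, credit, avg_cache, max_credit,
                attack_credit_thr, attack_cache_thr):
    """Map a flow's current state to one of the step-S5 behaviors."""
    if used_cache <= avg_cache:
        if credit <= attack_credit_thr:
            return "forward_increment_credit"      # case (6): attack flow, little cache
        if credit < max_credit:
            return "forward_restore_max_credit"    # case (7): ex-attack flow normalized
        return "forward"                           # case (1): normal flow
    # from here on the flow's used cache exceeds the system average cache
    if credit < attack_credit_thr:
        return "drop_all_zero_credit"              # case (3): clamp cache, credit -> 0
    if used_cache > attack_cache_thr and credit <= attack_credit_thr:
        return "random_drop_decrement"             # case (4): random drop + decrement
    if credit == max_credit:
        return "random_drop"                       # case (2): random early drop
    return "random_drop_decrement"                 # case (5): unresponsive sender
```

The function is pure, so the same state table can be exercised directly, e.g. a flow below the average cache with full credit is simply forwarded, while one over the cache threshold with depleted credit is cut off.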
CN201210551390.2A 2012-12-18 2012-12-18 The cache resources control method of shared buffer memory formula Ethernet switch and device Active CN103023806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210551390.2A CN103023806B (en) 2012-12-18 2012-12-18 The cache resources control method of shared buffer memory formula Ethernet switch and device

Publications (2)

Publication Number Publication Date
CN103023806A CN103023806A (en) 2013-04-03
CN103023806B true CN103023806B (en) 2015-09-16

Family

ID=47971949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210551390.2A Active CN103023806B (en) 2012-12-18 2012-12-18 The cache resources control method of shared buffer memory formula Ethernet switch and device

Country Status (1)

Country Link
CN (1) CN103023806B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103200131B (en) * 2013-04-03 2015-08-19 清华大学深圳研究生院 A kind of data source and sink
CN103618675B (en) * 2013-11-11 2017-01-18 西安交通大学 Content-network-oriented content-influence-based caching method
CN104394100B (en) * 2014-11-07 2017-12-08 深圳市国微电子有限公司 Credit assignment method and interchanger
CN104539553B (en) * 2014-12-18 2017-12-01 盛科网络(苏州)有限公司 The method and device of flow control is realized in Ethernet chip
CN106254274B (en) * 2016-09-27 2019-04-23 国家电网公司 Method for reducing head-of-line blocking in substation switch GOOSE message transmission
CN109391559B (en) * 2017-08-10 2022-10-18 华为技术有限公司 Network device
CN110221911B (en) * 2018-03-02 2021-09-28 大唐移动通信设备有限公司 Ethernet data protection method and device
CN110708253B (en) * 2018-07-09 2023-05-12 华为技术有限公司 Message control method, flow table updating method and node equipment
CN109388609B (en) * 2018-09-30 2020-02-21 中科驭数(北京)科技有限公司 Data processing method and device based on acceleration core
CN112416820B (en) * 2020-11-04 2022-05-27 国网山东省电力公司信息通信公司 Data packet classification storage method and system
CN114244738A (en) * 2021-12-16 2022-03-25 杭州奥博瑞光通信有限公司 Switch cache scheduling method and system
CN115344405A (en) * 2022-08-10 2022-11-15 北京有竹居网络技术有限公司 Data processing method, network interface card, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1434391A2 (en) * 2002-12-23 2004-06-30 Synad Technologies Limited Method and device for prefetching frames
CN1972239A (en) * 2005-11-24 2007-05-30 武汉烽火网络有限责任公司 Ethernet cache exchanging and scheduling method and apparatus
EP1876779A2 (en) * 2001-05-31 2008-01-09 Telefonaktiebolaget LM Ericsson (publ) Congestion and delay handling in a packet data network
CN102025631A (en) * 2010-12-15 2011-04-20 中兴通讯股份有限公司 Method and exchanger for dynamically adjusting outlet port cache

Also Published As

Publication number Publication date
CN103023806A (en) 2013-04-03

Similar Documents

Publication Publication Date Title
CN103023806B (en) The cache resources control method of shared buffer memory formula Ethernet switch and device
Suter et al. Design considerations for supporting TCP with per-flow queueing
US7359321B1 (en) Systems and methods for selectively performing explicit congestion notification
Feng et al. BLUE: A new class of active queue management algorithms
CN111201757A (en) Network access node virtual structure dynamically configured on underlying network
US8576863B2 (en) Coordinated queuing between upstream and downstream queues in a network device
Ahammed et al. Anakyzing the performance of active queue management algorithms
CN101834790B (en) Multicore processor based flow control method and multicore processor
JPH10233802A (en) Method for improving performance of tcp connection
KR102177574B1 (en) Queuing system to predict packet lifetime in a computing device
US20050068798A1 (en) Committed access rate (CAR) system architecture
US10728156B2 (en) Scalable, low latency, deep buffered switch architecture
CN111224888A (en) Method for sending message and message forwarding equipment
Rashid et al. Dynamic Prediction based Multi Queue (DPMQ) drop policy for probabilistic routing protocols of delay tolerant network
Hamadneh et al. Dynamic weight parameter for the random early detection (RED) in TCP networks
US7391785B2 (en) Method for active queue management with asymmetric congestion control
An et al. MACRE: A novel distributed congestion control algorithm in DTN
EP1414213B1 (en) Packet classifier and processor in a telecommunication router
Omidvar et al. A Congestion-Aware Routing Algorithms Based on Traffic Priority in Wireless Sensor Networks.
Olmedilla et al. Optimizing packet dropping by efficient congesting-flow isolation in lossy data-center networks
Hu et al. BCN: A Fast Notified Backpressure Congestion Management
Rottenstreich et al. Redefining Switch Reordering
HEMALATHA et al. NOVEL CONGESTION CONTROL MECHANISM TO IMPROVE PERFORMANCE OF MOBILE ADHOC NETWORK WITH QUEUE MODEL
Danasekar et al. Quality of Service based Active Queue Management for Reliable Packet Transmission in Wireless Network
Bidgoli et al. Differentiated Services Fuzzy Assured Forward Queuing for Congestion Control in Intermediate Routers

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170412

Address after: 430000 East Lake high tech Development Zone, Hubei Province, No. 6, No., high and new technology development zone, No. four

Patentee after: Fenghuo Communication Science & Technology Co., Ltd.

Address before: East Lake high tech city of Wuhan province Hubei Dongxin road 430074 No. 5 East optical communication industry building

Patentee before: Wuhan Fenghuo Network Co., Ltd.