CN102404219B - Method and device for allocating caches as well as network equipment - Google Patents

Method and device for allocating caches as well as network equipment

Info

Publication number
CN102404219B
Authority
CN
China
Prior art keywords
port
buffer
congested
num
share factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110380457.6A
Other languages
Chinese (zh)
Other versions
CN102404219A (en)
Inventor
夹尚涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Star Net Ruijie Networks Co Ltd
Ruijie Networks Co Ltd
Original Assignee
Beijing Star Net Ruijie Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Star Net Ruijie Networks Co Ltd filed Critical Beijing Star Net Ruijie Networks Co Ltd
Priority to CN201110380457.6A priority Critical patent/CN102404219B/en
Publication of CN102404219A publication Critical patent/CN102404219A/en
Application granted granted Critical
Publication of CN102404219B publication Critical patent/CN102404219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method and a device for allocating buffers (caches), as well as a network device. The method comprises the following steps: calculating a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them, and storing the buffer share factor; when a port receives a packet, determining, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated; judging whether the size of the packet is greater than the upper limit; if not, allocating to the port a buffer no larger than the upper limit; if so, discarding the packet. With the disclosed method, buffer is allocated only to ports that are currently sending or receiving packets, so the buffer is fully utilized and waste of buffer resources is reduced. Moreover, because the upper limit of buffer allocated to a port is larger than the amount obtained by evenly dividing all currently available buffer among the ports currently sending or receiving packets, a port receiving packets can request more buffer, which improves the ability to handle burst traffic.

Description

Method and device for allocating buffers, and network device
Technical field
The present invention relates to the field of communications technologies, and in particular to a method and a device for allocating buffers and to a network device.
Background art
At present, the buffer allocation method used in devices such as switches divides the entire buffer evenly among all ports of the switch at device start-up. For example, if the total buffer in the switch is Total_Buffer and the number of ports is Port_Num, the buffer assigned to each port is Port_Buffer = Total_Buffer / Port_Num. When buffer is allocated in this way, the amount each port receives is simply the total buffer averaged over all ports. In practice, if a port of the switch has no device connected to it, it sends and receives no packets, and the buffer allocated to it stays idle, wasting buffer resources. Conversely, when a port carries heavy input traffic or a large burst of traffic, packets may be lost because the buffer allocated to that port is too small, even though the buffers allocated to other ports may be sitting idle. Buffer utilization is therefore low, and the ability to handle burst traffic is insufficient.
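As a purely illustrative example with hypothetical figures (they do not appear in the patent): if Total_Buffer = 12 MB and Port_Num = 48, every port is pinned to Port_Buffer = 256 KB. A port facing a 1 MB burst must start dropping packets once its 256 KB is exhausted, even if the other 47 ports are idle and more than 11 MB of buffer sits unused.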
Summary of the invention
Embodiments of the present invention provide a method and a device for allocating buffers, and a network device, so as to solve the existing problem that low buffer utilization leads to insufficient capability to handle burst traffic.
A method for allocating buffers provided by an embodiment of the present invention comprises:
calculating a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them, and storing the buffer share factor;
when a port receives a packet, determining, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated, the upper limit being not less than the amount obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets; and
judging whether the size of the packet is greater than the upper limit; if not, allocating to the port a buffer no larger than the upper limit; if so, discarding the packet.
A device for allocating buffers provided by an embodiment of the present invention comprises:
a buffer share factor calculating unit, configured to calculate a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them, and to store the buffer share factor;
an upper limit calculating unit, configured to, when a port receives a packet, determine, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated, the upper limit being not less than the amount obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets; and
a judging unit, configured to judge whether the size of the packet is greater than the upper limit; if not, to allocate to the port a buffer no larger than the upper limit; and if so, to discard the packet.
A network device provided by an embodiment of the present invention comprises the above device for allocating buffers provided by the embodiments of the present invention.
The beneficial effects of the embodiments of the present invention include the following.
In the method, device, and network device for allocating buffers provided by the embodiments of the present invention, a buffer share factor is calculated and stored according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them; when a port receives a packet, the upper limit of buffer that each port with packet traffic may be allocated is determined according to the buffer share factor and the total amount of currently available buffer; whether the size of the packet is greater than the upper limit is judged; if not, a buffer no larger than the upper limit is allocated to the port, and if so, the packet is discarded. Compared with the prior art, in which the buffer is divided evenly among all ports of the device, the present invention allocates buffer only to ports that are currently sending or receiving packets, so the buffer is fully used and waste of buffer resources is reduced. Moreover, because the upper limit of buffer allocated to a port is larger than the prior-art share obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets, a port receiving packets can request more buffer, which improves the ability to handle burst traffic.
Brief description of the drawings
Fig. 1 is a flowchart of the buffer allocation method provided by an embodiment of the present invention;
Fig. 2 is a first flowchart of updating the number of congested ports provided by an embodiment of the present invention;
Fig. 3 is a second flowchart of updating the number of congested ports provided by an embodiment of the present invention;
Fig. 4 is a flowchart of an example of the buffer allocation method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the buffer allocation device provided by an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the method, device, and network device for allocating buffers provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
A method for allocating buffers provided by an embodiment of the present invention, as shown in Fig. 1, may comprise the following steps:
S101: calculate a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them, and store the buffer share factor.
S102: when a port receives a packet, determine, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated; the upper limit is not less than the amount obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets.
S103: judge whether the size of the packet is greater than the upper limit; if not, perform step S104; if so, perform step S105.
S104: allocate to the port a buffer no larger than the upper limit; in practice, the buffer may be allocated to the port according to the size of the packet.
S105: discard the packet.
The specific implementation of each of the above steps is described in detail below.
Specifically, in step S101 of the method provided by the embodiment of the present invention, the buffer share factor may be calculated from the recorded number of ports currently sending or receiving packets and the number of congested ports among them by the following formula (1):
alpha = 1 / (busy_port_num + (valid_port_num - busy_port_num) / n),   n > 1        (1)
Alternatively, the buffer share factor may be calculated by another formula involving the number of ports with packet traffic and the number of congested ports among them, for example the following formula (2):
alpha = (busy_port_num × n + valid_port_num - busy_port_num) / valid_port_num - 1,   n ≥ 2        (2)
Here alpha denotes the buffer share factor, valid_port_num denotes the number of ports currently sending or receiving packets, busy_port_num denotes the number of congested ports among them, and n is generally an integer; preferably, n = 2.
Correspondingly, in step S102, the upper limit of buffer that each port with packet traffic may be allocated is determined from the buffer share factor and the currently available buffer by the following formula (3):
max_buffer = alpha × cur_buffer        (3)
Here max_buffer denotes the upper limit, alpha denotes the buffer share factor, and cur_buffer denotes the total amount of currently available buffer.
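The following C sketch shows how formulas (1) and (3) might be evaluated. It is a minimal sketch under assumptions of this description: the function names, the use of floating-point arithmetic, and the sample numbers are illustrative and are not taken from the patent.

```c
#include <stdio.h>

/* Buffer share factor per formula (1):
 * alpha = 1 / (busy_port_num + (valid_port_num - busy_port_num) / n), n > 1.
 * Each non-congested port counts as 1/n of a congested port.
 * Assumes valid_port_num >= busy_port_num and at least one active port. */
static double share_factor(unsigned valid_port_num, unsigned busy_port_num, unsigned n)
{
    double effective_ports = (double)busy_port_num +
                             (double)(valid_port_num - busy_port_num) / (double)n;
    return 1.0 / effective_ports;
}

/* Upper limit per formula (3): max_buffer = alpha * cur_buffer. */
static unsigned long max_share(double alpha, unsigned long cur_buffer)
{
    return (unsigned long)(alpha * (double)cur_buffer);
}

int main(void)
{
    /* Hypothetical numbers: 10 active ports, 2 congested, n = 2,
     * 8 MB of buffer currently available. */
    double alpha = share_factor(10, 2, 2);
    printf("alpha = %.4f, max_buffer = %lu bytes\n",
           alpha, max_share(alpha, 8UL * 1024 * 1024));
    return 0;
}
```

With these sample values the sketch prints alpha ≈ 0.1667, noticeably larger than the even share 1/10 of the prior art.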
The above formulas (1), (2), and (3) are merely examples of a specific algorithm given in the embodiments of the present invention; those skilled in the art can derive various other variants from them, and the embodiments of the present invention do not limit which algorithm is adopted.
First, compared with the prior art, in which the buffer is divided evenly among all ports of the device, the buffer allocation method provided by the embodiment of the present invention allocates buffer only to ports that are sending or receiving packets and allocates none to ports without packet traffic, so the buffer is fully used and waste of buffer resources is reduced.
Second, it can be seen from formula (1) that the buffer allocation method provided by the embodiment of the present invention distinguishes ports that transmit and receive packets normally from congested ports: n ports carrying no congestion mark are treated as equivalent to one congested port when buffer is allocated. Thus, when congested ports exist, the share allocated to each port is larger than the even share of the prior art, that is, alpha > 1/valid_port_num.
It can be seen from formula (2) that, with the number of ports currently sending or receiving packets held constant, the buffer share factor is proportional to the number of congested ports; in other words, the more congested ports there are among the ports currently with packet traffic, the larger the buffer share factor. Moreover, since n ≥ 2 guarantees that the coefficient n - 1 is at least 1, and the number of congested ports is usually more than one, the share allocated to each port is again larger than the even share of the prior art, that is, alpha > 1/valid_port_num.
In summary, it follows from formulas (1), (2), and (3) that the upper limit max_buffer of buffer each port may request is correspondingly increased compared with the prior art; and the more congested ports there are among the ports currently with packet traffic, the larger the buffer share factor and the larger the upper limit each port may request. For congested ports, therefore, more buffer is available for handling congested traffic than under the prior-art allocation, which raises the overall buffer utilization and also improves the ability to handle bursts of packet traffic.
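A short worked example under assumed figures (not taken from the patent): with valid_port_num = 10, busy_port_num = 2 and n = 2, formula (1) gives alpha = 1 / (2 + 8/2) = 1/6, which exceeds the even share 1/10; if cur_buffer = 6 MB, formula (3) then allows a single port to request up to 1 MB instead of the 0.6 MB an even split would permit.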
Moreover, when the above method is applied in practice, although the upper limit of buffer that each port with packet traffic may be allocated is larger than the even share, the actual traffic of each port differs: ports on which no congestion occurs carry less traffic and need less buffer, so the buffer actually requested usually falls far short of the upper limit, and in practice the buffer allocated to all ports together does not exceed the total amount of currently available buffer.
Preferably, in step S101 of the buffer allocation method provided by the embodiment of the present invention, the buffer share factor may also be updated in real time, making buffer allocation dynamic. Specifically, when the packet sending/receiving state of a port changes, the number of ports currently sending or receiving packets can be updated, the buffer share factor recalculated, and the stored buffer share factor replaced with the recalculated one; in this way the buffer share factor is updated in real time as the packet traffic state of the ports changes.
Alternatively, when it is determined that a port discarding a packet is not yet marked as a congested port, the port is marked as a congested port; and all ports are traversed periodically, and when a port marked as congested is determined to have aged, its congestion mark is removed. In either case the number of congested ports is updated, the buffer share factor is recalculated from the updated number of congested ports, and the stored buffer share factor is replaced with the recalculated one. The buffer share factor can thus also be updated in this way.
The updating of the number of congested ports is described in detail below.
Specifically, the process of updating the number of congested ports when a port is determined to be marked as a congested port may be carried out after the port that received the packet discards it in step S105. As shown in Fig. 2, the process comprises the following steps (a code sketch follows the list):
S201: judge whether the port has already been marked as a congested port; if so, perform step S202; if not, perform step S203.
S202: update the congestion timestamp of the port to the current time.
S203: mark the port as a congested port.
S204: record the congestion timestamp of the port as the current time.
S205: update the number of congested ports.
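A minimal C sketch of the Fig. 2 logic (steps S201 to S205), assuming a per-port structure with a congestion flag and a timestamp; the structure and function names are illustrative and not taken from the patent.

```c
#include <stdbool.h>
#include <time.h>

/* Minimal per-port state assumed for illustration. */
struct port_state {
    bool   congested;           /* congestion mark           */
    time_t congestion_stamp;    /* last time a drop occurred */
};

/* Steps S201-S205: called after a packet has been dropped on port `p`.
 * Returns the possibly updated number of congested ports. */
static unsigned mark_congested_on_drop(struct port_state *p, unsigned busy_port_num)
{
    time_t now = time(NULL);

    if (p->congested) {
        /* S202: already marked, just refresh the timestamp. */
        p->congestion_stamp = now;
    } else {
        /* S203-S205: mark the port, record the timestamp, update the count. */
        p->congested = true;
        p->congestion_stamp = now;
        busy_port_num++;
    }
    return busy_port_num;
}
```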
Specifically, when a port marked as a congested port ages, the detailed process of updating the number of congested ports, as shown in Fig. 3, may comprise the following steps (a code sketch follows the list):
S301: traverse all ports periodically.
In a specific implementation, a timer may be used, for example to traverse all ports every hour.
S302: for each port, judge whether the port is marked as a congested port; if so, perform step S303; if not, return to step S301.
S303: judge whether the time elapsed since the congestion timestamp of the port is less than a set value, for example one hour; if so, perform step S304; if not, perform step S305.
S304: keep the congestion mark of the port.
S305: remove the congestion mark of the port.
S306: update the number of congested ports.
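Similarly, a minimal C sketch of the Fig. 3 aging sweep (steps S301 to S306), under the same assumed per-port structure; the one-hour aging threshold is only the example value mentioned above.

```c
#include <stdbool.h>
#include <time.h>

struct port_state {
    bool   congested;
    time_t congestion_stamp;
};

/* Set value of step S303; one hour is the example from the description. */
#define AGE_LIMIT_SECONDS (60 * 60)

/* Steps S301-S306: periodic sweep over all ports.
 * Returns the updated count of congested ports. */
static unsigned age_congested_ports(struct port_state ports[], unsigned port_count)
{
    unsigned busy_port_num = 0;
    time_t now = time(NULL);

    for (unsigned i = 0; i < port_count; i++) {
        if (!ports[i].congested)
            continue;                                   /* S302 */
        if (now - ports[i].congestion_stamp >= AGE_LIMIT_SECONDS)
            ports[i].congested = false;                 /* S305: mark has aged out */
        /* else S304: keep the mark */
        if (ports[i].congested)
            busy_port_num++;                            /* S306 */
    }
    return busy_port_num;
}
```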
In the method provided by the embodiment of the present invention, the total amount of currently available buffer may be updated after buffer is allocated to a port and again when the occupied buffer is released after a packet has been forwarded successfully; in this way both the buffer share factor and the total amount of currently available buffer can change dynamically.
Thus, when a port receives a packet, the currently available buffer and the current buffer share factor can be used to determine the upper limit of buffer that the port may be allocated. This upper limit changes dynamically with the currently available buffer, the number of ports currently sending or receiving packets, and the number of congested ports at different moments. Compared with the prior art, dynamic buffer allocation is realized, which helps raise the overall buffer utilization of the device and further strengthens the device's ability to handle bursts of packet traffic.
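For completeness, a tiny sketch of this bookkeeping, assuming a single counter of currently available buffer; the variable and function names are illustrative and not taken from the patent.

```c
/* Illustrative bookkeeping of the currently available buffer. */
static unsigned long cur_buffer;   /* bytes of buffer currently available */

static void on_buffer_allocated(unsigned long size)
{
    cur_buffer -= size;   /* shrink the pool when a port is granted buffer */
}

static void on_buffer_released(unsigned long size)
{
    cur_buffer += size;   /* grow the pool when a forwarded packet's buffer is freed */
}
```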
The buffer allocation method provided by the embodiment of the present invention is illustrated below with a concrete example.
The specific flow by which a port of the device requests buffer, as shown in Fig. 4, comprises the following steps (a code sketch of this flow follows the list):
S401: a port receives a packet and determines the size of the packet.
S402: the port requests buffer from the device, the size of the requested buffer being the size of the packet.
S403: judge whether the size of the buffer requested by the port reaches the upper limit of allocatable buffer (that is, is not less than it); if not, perform step S404; if so, perform step S406.
S404: allocate the buffer to the port.
S405: update the total amount of currently available buffer; then perform step S412.
S406: the request fails and the port discards the packet.
S407: judge whether the port has been marked as a congested port; if so, perform step S408; if not, perform step S409.
S408: update the congestion timestamp of the port to the current time; then perform step S412.
S409: mark the port as a congested port and record its congestion timestamp as the current time.
S410: update the number of congested ports.
S411: update the buffer share factor.
S412: end the flow.
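Putting the pieces together, the following C sketch walks through the per-packet flow of Fig. 4 (steps S401 to S412). It assumes formula (1) is the chosen share-factor formula and uses illustrative structure and function names; it is a sketch of the described flow under those assumptions, not the patent's own implementation.

```c
#include <stdbool.h>
#include <time.h>

/* Assumed global state of the device (illustrative only). */
struct device_state {
    unsigned long cur_buffer;      /* currently available buffer, bytes */
    unsigned      valid_port_num;  /* ports currently sending/receiving */
    unsigned      busy_port_num;   /* congested ports among them        */
    double        alpha;           /* stored buffer share factor        */
    unsigned      n;               /* n of formula (1), e.g. n = 2      */
};

struct port_state {
    bool   congested;
    time_t congestion_stamp;
};

/* Recompute and store alpha per formula (1); assumes n > 1 and at least
 * one port currently has packet traffic. */
static void update_share_factor(struct device_state *dev)
{
    double effective = (double)dev->busy_port_num +
                       (double)(dev->valid_port_num - dev->busy_port_num) / dev->n;
    dev->alpha = 1.0 / effective;
}

/* Steps S401-S412: returns true if the packet was admitted. */
static bool on_packet(struct device_state *dev, struct port_state *port,
                      unsigned long packet_size)
{
    unsigned long max_buffer =
        (unsigned long)(dev->alpha * (double)dev->cur_buffer);   /* formula (3) */

    if (packet_size < max_buffer) {            /* S403 */
        dev->cur_buffer -= packet_size;        /* S404, S405 */
        return true;
    }

    /* S406-S411: drop, mark the port as congested, refresh alpha. */
    if (!port->congested) {
        port->congested = true;                /* S409 */
        dev->busy_port_num++;                  /* S410 */
        update_share_factor(dev);              /* S411 */
    }
    port->congestion_stamp = time(NULL);       /* S408 or S409 */
    return false;                              /* caller discards the packet */
}
```

In this sketch the share factor is refreshed only when a port is newly marked as congested (steps S409 to S411), matching the flow of Fig. 4; an implementation could equally refresh it when the set of active ports changes, as the description notes.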
The buffer allocation method provided by the embodiment of the present invention is generally applied to allocating buffer in devices such as switches, and is particularly suitable for allocating buffer in cloud computing environments where network traffic is complex and burst traffic is heavy; the embodiment of the present invention does not limit this.
Based on the same inventive concept, the embodiments of the present invention also provide a device for allocating buffers and a network device. Since the principle by which the device and the network device solve the problem is similar to that of the foregoing method for allocating buffers, their implementation can refer to the implementation of the method, and repeated parts are not described again.
A device for allocating buffers provided by an embodiment of the present invention, as shown in Fig. 5, comprises:
a buffer share factor calculating unit 501, configured to calculate a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them, and to store the buffer share factor;
an upper limit calculating unit 502, configured to, when a port receives a packet, determine, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated, the upper limit being not less than the amount obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets; and
a judging unit 503, configured to judge whether the size of the packet is greater than the upper limit; if not, to allocate to the port a buffer no larger than the upper limit; and if so, to discard the packet.
Further, the buffer share factor calculating unit 501 in the above device is specifically configured to calculate the buffer share factor alpha by the following formula:
alpha = 1 / (busy_port_num + (valid_port_num - busy_port_num) / n),   n > 1
where valid_port_num denotes the number of ports currently sending or receiving packets and busy_port_num denotes the number of congested ports.
Alternatively, the buffer share factor calculating unit 501 in the above device is specifically configured to calculate the buffer share factor alpha by the following formula:
alpha = (busy_port_num × n + valid_port_num - busy_port_num) / valid_port_num - 1,   n ≥ 2
where valid_port_num denotes the number of ports currently sending or receiving packets and busy_port_num denotes the number of congested ports.
Further, the upper limit calculating unit 502 in the above device is specifically configured to calculate the upper limit max_buffer by the following formula: max_buffer = alpha × cur_buffer, where alpha denotes the buffer share factor and cur_buffer denotes the total amount of currently available buffer.
Further, the device provided by the embodiment of the present invention, as shown in Fig. 5, may also comprise a congested port marking unit 504, configured to: after the packet is discarded, when it is determined that the port is not marked as a congested port, mark the port as a congested port and update the number of congested ports; and/or periodically traverse all ports, and when a port marked as congested is determined to have aged, remove its congestion mark and update the number of congested ports.
Specifically, the congested port marking unit 504 may be configured to, after the packet is discarded, judge whether the port is marked as a congested port; if so, to update the congestion timestamp of the port to the current time; and if not, to mark the port as a congested port, record its congestion timestamp as the current time, and update the number of congested ports.
Specifically, the congested port marking unit 504 may also be configured to traverse all ports periodically; for each port marked as a congested port, to judge whether the time elapsed since its congestion timestamp is less than a set value, and if so, to keep the congestion mark of the port, and if not, to remove the congestion mark of the port; and to update the number of congested ports.
Further, the buffer share factor calculating unit 501 in the above device is also configured to, after the number of congested ports is updated, recalculate the buffer share factor from the updated number of congested ports and replace the stored buffer share factor with the recalculated one; and/or, when the packet sending/receiving state of a port changes, to update the number of ports currently sending or receiving packets, recalculate the buffer share factor, and replace the stored buffer share factor with the recalculated one.
An embodiment of the present invention also provides a network device comprising the above device for allocating buffers provided by the embodiments of the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the embodiments of the present invention may be implemented by hardware, or by software together with a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (for example a CD-ROM, a USB flash drive, or a portable hard disk) and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
Those skilled in the art will understand that the accompanying drawings are schematic diagrams of preferred embodiments, and that the modules or flows in the drawings are not necessarily required for implementing the present invention.
Those skilled in the art will understand that the modules of the device in an embodiment may be distributed in the device of the embodiment as described, or may be correspondingly changed and arranged in one or more devices different from this embodiment. The modules of the above embodiments may be combined into one module, or further split into a plurality of sub-modules.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merit of the embodiments.
In the method, device, and network device for allocating buffers provided by the embodiments of the present invention, a buffer share factor is calculated and stored according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them; when a port receives a packet, the upper limit of buffer that each port with packet traffic may be allocated is determined according to the buffer share factor and the total amount of currently available buffer; whether the size of the packet is greater than the upper limit is judged; if not, a buffer no larger than the upper limit is allocated to the port, and if so, the packet is discarded. Compared with the prior art, in which the buffer is divided evenly among all ports of the device, the present invention allocates buffer only to ports that are currently sending or receiving packets, which saves buffer resources; and because the upper limit of buffer allocated to a port is larger than the prior-art share obtained by evenly dividing the currently available buffer among the ports with packet traffic, a port receiving packets can request more buffer, which improves the ability to handle burst traffic.
Further, in the buffer allocation method provided by the embodiments of the present invention, unlike the prior art in which the buffer allocated to each port cannot change in real time, the buffer share factor and the total amount of currently available buffer change dynamically. Thus, when a port receives a packet, the currently available buffer and the current buffer share factor can be used to determine the upper limit of buffer the port may currently be allocated, and this upper limit changes dynamically with the currently available buffer, the number of ports currently sending or receiving packets, and the number of congested ports at different moments. Dynamic buffer allocation is thereby realized, which helps raise the overall buffer utilization of the device and further strengthens the device's ability to handle bursts of packet traffic.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.

Claims (9)

1. A method for allocating buffers, characterized by comprising:
calculating a buffer share factor alpha from the recorded number valid_port_num of ports currently sending or receiving packets and the number busy_port_num of congested ports among them by the following formula, and storing the buffer share factor: alpha = 1 / (busy_port_num + (valid_port_num - busy_port_num) / n), n > 1, where n is the number of ports transmitting and receiving packets normally that is treated as equivalent to one congested port;
or alpha = (busy_port_num × n + valid_port_num - busy_port_num) / valid_port_num - 1, n ≥ 2, where n is an integer;
when a port receives a packet, determining, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated, the upper limit being not less than the amount obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets; and
judging whether the size of the packet is greater than the upper limit; if not, allocating to the port a buffer no larger than the upper limit; if so, discarding the packet.
2. The method according to claim 1, characterized in that determining, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated specifically comprises:
calculating the upper limit max_buffer by the following formula:
max_buffer = alpha × cur_buffer;
where alpha denotes the buffer share factor and cur_buffer denotes the total amount of currently available buffer.
3. The method according to claim 1, characterized by further comprising:
after discarding the packet, when it is determined that the port is not marked as a congested port, marking the port as a congested port and updating the number of congested ports; and/or
periodically traversing all ports, and when a port marked as congested is determined to have aged, removing the congestion mark and updating the number of congested ports.
4. The method according to claim 3, characterized by further comprising:
after updating the number of congested ports, recalculating the buffer share factor from the updated number of congested ports, and replacing the stored buffer share factor with the recalculated one; and/or
when the packet sending/receiving state of a port changes, updating the number of ports currently sending or receiving packets, recalculating the buffer share factor, and replacing the stored buffer share factor with the recalculated one.
5. A device for allocating buffers, characterized by comprising:
a buffer share factor calculating unit, configured to calculate a buffer share factor alpha from the recorded number valid_port_num of ports currently sending or receiving packets and the number busy_port_num of congested ports among them by the following formula, and to store the buffer share factor:
alpha = 1 / (busy_port_num + (valid_port_num - busy_port_num) / n), n > 1, where n is the number of ports transmitting and receiving packets normally that is treated as equivalent to one congested port;
or alpha = (busy_port_num × n + valid_port_num - busy_port_num) / valid_port_num - 1, n ≥ 2, where n is an integer;
an upper limit calculating unit, configured to, when a port receives a packet, determine, according to the buffer share factor and the total amount of currently available buffer, the upper limit of buffer that each port with packet traffic may be allocated, the upper limit being not less than the amount obtained by evenly dividing the currently available buffer among the ports currently sending or receiving packets; and
a judging unit, configured to judge whether the size of the packet is greater than the upper limit; if not, to allocate to the port a buffer no larger than the upper limit; and if so, to discard the packet.
6. The device according to claim 5, characterized in that the upper limit calculating unit is specifically configured to calculate the upper limit max_buffer by the following formula:
max_buffer = alpha × cur_buffer; where alpha denotes the buffer share factor and cur_buffer denotes the total amount of currently available buffer.
7. The device according to claim 5, characterized by further comprising a congested port marking unit, configured to: after the packet is discarded, when it is determined that the port is not marked as a congested port, mark the port as a congested port and update the number of congested ports; and/or periodically traverse all ports, and when a port marked as congested is determined to have aged, remove the congestion mark and update the number of congested ports.
8. The device according to claim 7, characterized in that the buffer share factor calculating unit is also configured to, after the number of congested ports is updated, recalculate the buffer share factor from the updated number of congested ports and replace the stored buffer share factor with the recalculated one; and/or, when the packet sending/receiving state of a port changes, to update the number of ports currently sending or receiving packets, recalculate the buffer share factor, and replace the stored buffer share factor with the recalculated one.
9. A network device, characterized by comprising the device for allocating buffers according to any one of claims 5 to 8.
CN201110380457.6A 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment Active CN102404219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110380457.6A CN102404219B (en) 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110380457.6A CN102404219B (en) 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment

Publications (2)

Publication Number Publication Date
CN102404219A CN102404219A (en) 2012-04-04
CN102404219B 2014-07-30

Family

ID=45886022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110380457.6A Active CN102404219B (en) 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment

Country Status (1)

Country Link
CN (1) CN102404219B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080063004A1 (en) * 2006-09-13 2008-03-13 International Business Machines Corporation Buffer allocation method for multi-class traffic with dynamic spare buffering
CN101364948A (en) * 2008-09-08 2009-02-11 中兴通讯股份有限公司 Method for dynamically allocating cache
CN101873269A (en) * 2010-06-24 2010-10-27 杭州华三通信技术有限公司 Data retransmission device and method for distributing buffer to ports
CN102025631A (en) * 2010-12-15 2011-04-20 中兴通讯股份有限公司 Method and exchanger for dynamically adjusting outlet port cache
CN102185725A (en) * 2011-05-31 2011-09-14 北京星网锐捷网络技术有限公司 Cache management method and device as well as network switching equipment

Also Published As

Publication number Publication date
CN102404219A (en) 2012-04-04

Similar Documents

Publication Publication Date Title
KR101867286B1 (en) Distributed processing apparatus and method for big data using hardware acceleration based on work load
CN103580842A (en) Method and system for conducting parallel transmission through multiple types of wireless links
CN104521198A (en) System and method for virtual ethernet interface binding
MX2015016012A (en) Methods and systems for data context and management via dynamic spectrum controller and dynamic spectrum policy controller.
CN104780118A (en) Fluid control method and device based on tokens
CN103763740B (en) Method and device for balancing loads of single boards
CN103440202A (en) RDMA-based (Remote Direct Memory Access-based) communication method, RDMA-based communication system and communication device
CN105227489A (en) A kind of bandwidth management method and electronic equipment
US7477607B2 (en) Method for allocating blocks of internet protocol (IP) addresses in networks
CN101568182B (en) Wireless resource allocation method and device
CN102404219B (en) Method and device for allocating caches as well as network equipment
CN102202419A (en) Data allocation method and device thereof with multiple radio access technologies serving one user equipment
CN101566933B (en) Method and device for configurating cache and electronic equipment and data read-write equipment
US20170223546A1 (en) Radio Resource Allocation Method and Radio Network Controller
CN107241251A (en) The software implementation method of multichannel CAN message real-time reception
CN104881326A (en) Journal file processing method and device
CN103309698A (en) Virtual machine memory managing system and method
WO2017193675A1 (en) Uplink resource scheduling method and device
CN105095146A (en) Memory controller based bandwidth allocation method and apparatus
CN104660525A (en) Bandwidth allocation method, controller and communication system
CN107800516B (en) Method and device for high-speed downlink packet access (HSDPA) storage management
CN104486442B (en) Data transmission method, the device of distributed memory system
CN103885888B (en) Memory management method, system and device for embedded real-time system based on TLSF
US9063841B1 (en) External memory management in a network device
CN107734511A (en) Network capacity extension method and access network equipment

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant
C14 Grant of patent or utility model