CN102404219A - Method and device for allocating caches as well as network equipment - Google Patents

Method and device for allocating caches as well as network equipment

Info

Publication number
CN102404219A
Authority
CN
China
Prior art keywords: port, buffer memory, congested, num, share factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103804576A
Other languages
Chinese (zh)
Other versions
CN102404219B (en)
Inventor
夹尚涛 (Jia Shangtao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Star Net Ruijie Networks Co Ltd
Ruijie Networks Co Ltd
Original Assignee
Beijing Star Net Ruijie Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Star Net Ruijie Networks Co Ltd filed Critical Beijing Star Net Ruijie Networks Co Ltd
Priority to CN201110380457.6A priority Critical patent/CN102404219B/en
Publication of CN102404219A publication Critical patent/CN102404219A/en
Application granted granted Critical
Publication of CN102404219B publication Critical patent/CN102404219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method and device for allocating buffers, and a network device. The method comprises the following steps: calculating and saving a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them; when a port receives a packet, determining, according to the buffer share factor and the total amount of currently available buffer, the maximum share of buffer that each active port may be allocated; judging whether the size of the packet exceeds the maximum share; if not, allocating to the port a buffer no larger than the maximum share; if so, discarding the packet. Because buffer is allocated only to ports that are currently sending or receiving packets, the buffer is used fully and waste of buffer resources is reduced. Moreover, because the maximum share of buffer allocated to a port is larger than the currently available buffer divided evenly among the active ports, a port receiving packets can request more buffer, which improves the ability to handle burst traffic.

Description

Method, device and network equipment for allocating buffers
Technical field
The present invention relates to the field of communication technology, and in particular to a method, a device and a network device for allocating buffers.
Background technology
At present, the buffer allocation method in devices such as switches divides the entire buffer evenly among the ports of the switch at device startup. For example, if the total buffer in the switch is Total_Buffer and the number of ports is Port_Num, the buffer allocated to each port is Port_Buffer = Total_Buffer / Port_Num. With this allocation method, every port receives an equal share of the total buffer. In practice, if a port of the switch has no connected device and therefore sends and receives no packets, the buffer allocated to it remains idle, wasting buffer resources. Furthermore, when the incoming traffic on a port is large, i.e. there is a large traffic burst, packets may be lost because the buffer allocated to that port is too small, even though the buffers allocated to other ports may be idle. Low buffer utilization thus leads to insufficient capacity for handling burst traffic.
Summary of the invention
The embodiments of the invention provide a method, a device and a network device for allocating buffers, to solve the problem that low buffer utilization leads to insufficient capacity for handling burst traffic.
A method for allocating buffers provided by an embodiment of the invention comprises:
calculating and saving a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them;
when a port receives a packet, determining, according to the buffer share factor and the total amount of currently available buffer, the maximum share of buffer that each active port may be allocated, where the maximum share is not less than the currently available buffer divided evenly among the active ports;
judging whether the size of the packet exceeds the maximum share; if not, allocating to the port a buffer no larger than the maximum share; if so, discarding the packet.
A device for allocating buffers provided by an embodiment of the invention comprises:
a buffer share factor calculating unit, configured to calculate and save the buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them;
a maximum share computing unit, configured to determine, when a port receives a packet, the maximum share of buffer that each active port may be allocated, according to the buffer share factor and the total amount of currently available buffer, where the maximum share is not less than the currently available buffer divided evenly among the active ports;
a judging unit, configured to judge whether the size of the packet exceeds the maximum share; if not, to allocate to the port a buffer no larger than the maximum share; if so, to discard the packet.
A network device provided by an embodiment of the invention comprises the above device for allocating buffers provided by the embodiments of the invention.
The beneficial effects of the embodiments of the invention include:
The method, device and network device for allocating buffers provided by the embodiments of the invention calculate and save a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them; when a port receives a packet, they determine, according to the buffer share factor and the total amount of currently available buffer, the maximum share of buffer that each active port may be allocated; they judge whether the size of the packet exceeds the maximum share; if not, they allocate to the port a buffer no larger than the maximum share; if so, they discard the packet. Compared with the prior art, in which the buffer is divided evenly among all ports of the device, buffer is allocated only to ports that are currently sending or receiving packets, so the buffer is used fully and waste of buffer resources is reduced. Moreover, because the maximum share of buffer allocated to a port is larger than the currently available buffer divided evenly among the active ports, a port receiving packets can request more buffer, which improves the ability to handle burst traffic.
Description of drawings
Fig. 1 is a flow chart of the buffer allocation method provided by an embodiment of the invention;
Fig. 2 is a first flow chart of updating the number of congested ports provided by an embodiment of the invention;
Fig. 3 is a second flow chart of updating the number of congested ports provided by an embodiment of the invention;
Fig. 4 is a flow chart of an example of the buffer allocation method provided by an embodiment of the invention;
Fig. 5 is a schematic structural diagram of the buffer allocation device provided by an embodiment of the invention.
Embodiment
The embodiments of the method, device and network device for allocating buffers provided by the embodiments of the invention are described in detail below with reference to the accompanying drawings.
A method for allocating buffers provided by an embodiment of the invention, as shown in Fig. 1, may comprise the following steps:
S101: calculate and save the buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them;
S102: when a port receives a packet, determine, according to the buffer share factor and the total amount of currently available buffer, the maximum share of buffer that each active port may be allocated; this maximum share is not less than the currently available buffer divided evenly among the active ports;
S103: judge whether the size of the packet exceeds the maximum share; if not, execute step S104; if so, execute step S105;
S104: allocate to the port a buffer no larger than the maximum share; in practice, the buffer may be allocated according to the size of the packet;
S105: discard the packet.
The specific implementation of each of the above steps is explained in detail below.
Specifically, in step S101 of the method provided by the embodiment of the invention, the buffer share factor may be calculated from the recorded number of ports currently sending or receiving packets and the number of congested ports among them by the following formula (1):
alpha = 1 / (busy_port_num + (valid_port_num − busy_port_num) / n), n > 1    (1)
Alternatively, the factor may be calculated by another formula involving the number of ports sending or receiving packets and the number of congested ports among them, for example the following formula (2):
alpha = (busy_port_num × n + valid_port_num − busy_port_num) / valid_port_num², n ≥ 2    (2)
where alpha denotes the buffer share factor, valid_port_num denotes the number of ports currently sending or receiving packets, busy_port_num denotes the number of congested ports, and n is generally an integer; preferably, n = 2.
Accordingly, in step S102, the maximum share of buffer that each active port may be allocated is determined from the buffer share factor and the total amount of currently available buffer, and may be calculated by formula (3):
max_buffer = alpha × cur_buffer    (3)
where max_buffer denotes the maximum share, alpha denotes the buffer share factor, and cur_buffer denotes the total amount of currently available buffer.
The above formulas (1), (2) and (3) are only examples of specific algorithms provided by the embodiments of the invention; those skilled in the art can derive various other variants from them, and the embodiments of the invention do not limit which algorithm is adopted.
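As an illustration, the two share-factor formulas and the maximum-share calculation can be sketched as follows (a minimal Python sketch; the variable names follow the patent, while the function names and everything else are assumptions for illustration):

```python
def share_factor_v1(valid_port_num, busy_port_num, n=2):
    # Formula (1): n non-congested ports count as one congested port in
    # the denominator, so alpha > 1/valid_port_num whenever some ports
    # are congested.
    return 1.0 / (busy_port_num + (valid_port_num - busy_port_num) / n)

def share_factor_v2(valid_port_num, busy_port_num, n=2):
    # Formula (2): equivalent to 1/valid + (n-1)*busy/valid**2, so the
    # factor grows with the number of congested ports.
    return (busy_port_num * n + valid_port_num - busy_port_num) / valid_port_num ** 2

def max_share(alpha, cur_buffer):
    # Formula (3): maximum buffer share a single active port may request.
    return alpha * cur_buffer
```

For example, with 8 active ports of which 2 are congested and n = 2, formula (1) gives alpha = 1/(2 + 3) = 0.2 and formula (2) gives alpha = 10/64 ≈ 0.156, both larger than the plain average 1/8.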
First, compared with the prior art, in which the buffer is divided evenly among all ports of the device, the buffer allocation method provided by the embodiment of the invention allocates buffer only to ports that are sending or receiving packets, and allocates none to idle ports; the buffer is thus used fully and waste of buffer resources is reduced.
Second, as can be seen from formula (1), the method distinguishes between normally operating ports and congested ports: n ports without a congestion label are counted as one congested port when the buffer is divided. In this way, when congested ports exist, the share each port may be allocated is larger than the plain average of the prior art, i.e. alpha > 1/valid_port_num.
As can be seen from formula (2), the buffer share factor is equal to 1/valid_port_num + (n − 1) × busy_port_num / valid_port_num². With the number of active ports held constant, the buffer share factor grows in proportion to the number of congested ports: the more congested ports there are among the active ports, the larger the factor. Moreover, n ≥ 2 guarantees that the coefficient n − 1 is at least 1, and there is usually at least one congested port; therefore the share each port may be allocated is again larger than the plain average of the prior art, i.e. alpha > 1/valid_port_num.
In summary, it follows from formulas (1), (2) and (3) that the maximum share max_buffer of buffer each port may request is correspondingly increased relative to the prior art; and the more congested ports there are among the active ports, the larger the buffer share factor and hence the larger the maximum share each port may request. For ports on which congestion occurs, more buffer is therefore available for handling congested data than under the prior-art allocation, which improves the overall buffer utilization and the ability to handle packet bursts.
In practice, although the maximum share each active port may be allocated is larger than the plain average, the actual traffic of the ports differs: ports without congestion carry less traffic and need less buffer, and the buffer they actually request usually falls far short of the maximum share. As a result, the buffer actually allocated to all ports together does not exceed the total amount of currently available buffer.
Preferably, the buffer share factor in step S101 of the buffer allocation method provided by the embodiment of the invention may be updated in real time, making the allocation dynamic. Specifically, when the sending/receiving state of a port changes, the number of active ports may be updated, the buffer share factor recalculated, and the saved factor replaced with the recalculated one; in this way the buffer share factor is updated in real time as the ports' sending/receiving states change.
Alternatively, the factor may be updated through the number of congested ports: when a port that discards a packet is determined not to be marked as a congested port, the port is marked as congested; and all ports are traversed periodically, and when a port marked as congested is determined to have aged, the congestion mark is removed. In either case the number of congested ports is updated, the buffer share factor is recalculated from the updated number, and the saved factor is replaced with the recalculated one; this likewise keeps the buffer share factor up to date.
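The two update paths just described can be sketched as follows (a hypothetical Python fragment; the counter names follow the patent, while the class structure and method names are assumptions):

```python
class ShareFactorState:
    def __init__(self, n=2):
        self.n = n
        self.valid_port_num = 0   # ports currently sending/receiving packets
        self.busy_port_num = 0    # ports currently marked as congested
        self.alpha = 0.0          # saved buffer share factor

    def _recompute(self):
        # Formula (1); only meaningful while at least one port is active.
        if self.valid_port_num > 0:
            self.alpha = 1.0 / (self.busy_port_num +
                                (self.valid_port_num - self.busy_port_num) / self.n)

    def port_state_changed(self, delta_active):
        # Called when ports start or stop sending/receiving packets.
        self.valid_port_num += delta_active
        self._recompute()

    def congested_ports_changed(self, delta_busy):
        # Called after marking a port congested or aging a mark out.
        self.busy_port_num += delta_busy
        self._recompute()
```

For example, with 8 active ports and none congested the saved factor is 1/(0 + 8/2) = 0.25; after two ports become congested it is recomputed to 1/(2 + 6/2) = 0.2.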
The above ways of updating the number of congested ports are explained in detail below.
Specifically, marking a port as congested and updating the number of congested ports may be performed after the receiving port discards a packet in step S105; as shown in Fig. 2, the process comprises the following steps:
S201: judge whether the port is already marked as a congested port; if so, execute step S202; if not, execute step S203;
S202: update the congestion timestamp of the port to the current time;
S203: mark the port as a congested port;
S204: record the congestion timestamp of the port as the current time;
S205: update the number of congested ports.
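Steps S201–S205 can be sketched as follows (a minimal Python sketch; the Port structure, the state dictionary and the function name are assumptions for illustration):

```python
import time

class Port:
    def __init__(self):
        self.congested = False
        self.congestion_ts = 0.0

def on_packet_dropped(port, state):
    """Fig. 2: mark the dropping port as congested (S201-S205)."""
    if port.congested:
        # S202: already marked -- just refresh the congestion timestamp.
        port.congestion_ts = time.time()
    else:
        # S203/S204: mark the port and record the timestamp.
        port.congested = True
        port.congestion_ts = time.time()
        # S205: one more congested port.
        state['busy_port_num'] += 1
```

Note that a second drop on an already-marked port only refreshes the timestamp and does not increment the count again.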
Specifically, the detailed process of updating the number of congested ports when a port marked as congested ages, as shown in Fig. 3, may comprise the following steps:
S301: traverse all ports periodically;
in a specific implementation a timer may be used, for example traversing all ports once per hour;
S302: for each port, judge whether the port is marked as a congested port; if so, execute step S303; if not, return to step S301;
S303: judge whether the time elapsed since the port's congestion timestamp is less than a set value, for example one hour; if so, execute step S304; if not, execute step S305;
S304: keep the port's congestion mark;
S305: remove the port's congestion mark;
S306: update the number of congested ports.
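The aging traversal of Fig. 3 can be sketched as follows (a minimal Python sketch; the Port record is re-declared here so the fragment is self-contained, and the aging threshold and data layout are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Port:
    congested: bool = False
    congestion_ts: float = 0.0

def age_congested_ports(ports, state, now, max_age=3600.0):
    """Fig. 3 (S301-S306): clear congestion marks older than max_age
    seconds and update the congested-port count accordingly."""
    for port in ports:                          # S301/S302: visit each port
        if not port.congested:
            continue
        if now - port.congestion_ts < max_age:  # S303: still fresh?
            continue                            # S304: keep the mark
        port.congested = False                  # S305: remove the mark
        state['busy_port_num'] -= 1             # S306: update the count
```

In a real device this function would be driven by a periodic timer, as the description suggests.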
In the method provided by the embodiment of the invention, the total amount of currently available buffer may be updated after buffer is allocated to a port, or after a packet has been forwarded successfully and the buffer it occupied is released; in this way both the buffer share factor and the total amount of currently available buffer change dynamically.
Thus, when a port receives a packet, the current available buffer and the current buffer share factor can be used to determine the maximum share of buffer the port may be allocated; this maximum share varies dynamically with the currently available buffer, the number of active ports and the number of congested ports at different moments. Compared with the prior art, this realizes dynamic buffer allocation, helps improve the overall buffer utilization in the device, and further strengthens the device's ability to handle packet bursts.
The buffer allocation method provided by the embodiment of the invention is illustrated below with a concrete example.
The flow by which a port in the device requests buffer, as shown in Fig. 4, comprises the following steps:
S401: a port receives a packet and determines the size of the packet;
S402: the port requests buffer from the device, the requested size being the size of the packet;
S403: judge whether the size of the requested buffer reaches the maximum share that may be allocated; if not, execute step S404; if so, execute step S406;
S404: allocate the buffer to the port;
S405: update the total amount of currently available buffer; then execute step S412;
S406: the request fails and the port discards the packet;
S407: judge whether the port is already marked as a congested port; if so, execute step S408; if not, execute step S409;
S408: update the port's congestion timestamp to the current time; then execute step S412;
S409: mark the port as a congested port and record its congestion timestamp as the current time;
S410: update the number of congested ports;
S411: update the buffer share factor;
S412: end the flow.
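The Fig. 4 flow can be sketched end to end as follows (a hypothetical Python sketch; the Device structure, the use of formula (1) for the share factor, and all names are assumptions for illustration):

```python
import time

class Device:
    def __init__(self, total_buffer, valid_port_num, n=2):
        self.cur_buffer = total_buffer        # currently available buffer
        self.valid_port_num = valid_port_num  # active ports
        self.busy_port_num = 0                # congested ports
        self.n = n
        self.congested = {}                   # port id -> congestion timestamp

    def _alpha(self):
        # Buffer share factor, formula (1).
        return 1.0 / (self.busy_port_num +
                      (self.valid_port_num - self.busy_port_num) / self.n)

    def request_buffer(self, port_id, packet_size):
        """Fig. 4 (S401-S412): returns True if buffer was allocated."""
        max_buffer = self._alpha() * self.cur_buffer    # formula (3)
        if packet_size < max_buffer:                    # S403
            self.cur_buffer -= packet_size              # S404/S405
            return True
        # S406-S411: request fails, packet is dropped, port marked congested.
        if port_id not in self.congested:
            self.busy_port_num += 1                     # S410 (alpha changes, S411)
        self.congested[port_id] = time.time()           # S408/S409
        return False
```

For example, with 1000 units of buffer and 4 active ports (n = 2), the initial maximum share is 0.5 × 1000 = 500: a 100-unit packet is accepted, a 600-unit packet is dropped and its port is marked congested, after which the share factor falls to 0.4 and the next request is judged against 0.4 × 900 = 360.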
The buffer allocation method provided by the embodiment of the invention is generally applied to allocating buffer in devices such as switches, and is especially suitable for allocating buffer in cloud computing environments where traffic is complex and bursts are large; the embodiment of the invention does not limit this.
Based on the same inventive concept, the embodiments of the invention also provide a device for allocating buffers and a network device. Since the principle by which the device and the equipment solve the problem is similar to that of the foregoing buffer allocation method, their implementation can refer to the implementation of the method, and repeated parts are not described again.
A device for allocating buffers provided by an embodiment of the invention, as shown in Fig. 5, comprises:
a buffer share factor calculating unit 501, configured to calculate and save the buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them;
a maximum share computing unit 502, configured to determine, when a port receives a packet, the maximum share of buffer that each active port may be allocated, according to the buffer share factor and the total amount of currently available buffer, where the maximum share is not less than the currently available buffer divided evenly among the active ports;
a judging unit 503, configured to judge whether the size of the packet exceeds the maximum share; if not, to allocate to the port a buffer no larger than the maximum share; if so, to discard the packet.
Further, the buffer share factor calculating unit 501 in the above device is specifically configured to calculate the buffer share factor alpha by the following formula:
alpha = 1 / (busy_port_num + (valid_port_num − busy_port_num) / n), n > 1
where valid_port_num denotes the number of ports currently sending or receiving packets and busy_port_num denotes the number of congested ports.
Alternatively, the buffer share factor calculating unit 501 in the above device is specifically configured to calculate the buffer share factor alpha by the following formula:
alpha = (busy_port_num × n + valid_port_num − busy_port_num) / valid_port_num², n ≥ 2
where valid_port_num denotes the number of ports currently sending or receiving packets and busy_port_num denotes the number of congested ports.
Further, the maximum share computing unit 502 in the above device is specifically configured to calculate the maximum share max_buffer by the following formula: max_buffer = alpha × cur_buffer, where alpha denotes the buffer share factor and cur_buffer denotes the total amount of currently available buffer.
Further, the device provided by the embodiment of the invention, as shown in Fig. 5, may also comprise a congested port marking unit 504, configured to mark a port as a congested port and update the number of congested ports when, after a packet is discarded, the port is determined not to be marked as congested; and/or to traverse all ports periodically, remove the congestion mark of a port marked as congested when the port is determined to have aged, and update the number of congested ports.
Specifically, the congested port marking unit 504 may be configured to judge, after a packet is discarded, whether the port is marked as a congested port; if so, to update the port's congestion timestamp to the current time; if not, to mark the port as congested, record its congestion timestamp as the current time, and update the number of congested ports.
Specifically, the congested port marking unit 504 may also be configured to traverse all ports periodically; and, for each port marked as congested, to judge whether the time elapsed since the port's congestion timestamp is less than a set value; if so, to keep the port's congestion mark; if not, to remove it and update the number of congested ports.
Further, the buffer share factor calculating unit 501 in the above device is also configured to recalculate the buffer share factor from the updated number of congested ports after that number is updated, and to replace the saved factor with the recalculated one; and/or, when the sending/receiving state of a port changes, to update the number of active ports, recalculate the buffer share factor, and replace the saved factor with the recalculated one.
An embodiment of the invention also provides a network device comprising the above device for allocating buffers provided by the embodiments of the invention.
Through the description of the above embodiments, those skilled in the art can clearly understand that the embodiments of the invention may be implemented in hardware, or in software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of the invention may be embodied in a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash disk or a portable hard drive) and which includes instructions causing a computer device (such as a personal computer, a server or a network device) to execute the methods described in the embodiments of the invention.
Those skilled in the art will understand that the accompanying drawings are schematic diagrams of preferred embodiments, and that the modules or flows in the drawings are not necessarily required for implementing the invention.
Those skilled in the art will understand that the modules in the device of an embodiment may be distributed in the device as described, or may be correspondingly changed and located in one or more devices different from this embodiment; the modules of the above embodiments may be merged into one module or further split into multiple sub-modules.
The sequence numbers of the above embodiments are for description only and do not represent the relative merits of the embodiments.
The method, device and network device for allocating buffers provided by the embodiments of the invention calculate and save a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them; when a port receives a packet, they determine, according to the buffer share factor and the total amount of currently available buffer, the maximum share of buffer that each active port may be allocated; they judge whether the size of the packet exceeds the maximum share; if not, they allocate to the port a buffer no larger than the maximum share; if so, they discard the packet. Compared with the prior art, in which the buffer is divided evenly among all ports of the device, buffer is allocated only to ports currently sending or receiving packets, which saves buffer resources; and because the maximum share of buffer allocated to a port is larger than the currently available buffer divided evenly among the active ports, a port receiving packets can request more buffer, improving the ability to handle burst traffic.
Further, in the buffer allocation method provided by the embodiments of the invention, unlike the prior art, in which the buffer allocated to each port cannot change in real time, the buffer share factor and the total amount of currently available buffer change dynamically. When a port receives a packet, the current available buffer and the current buffer share factor are used to determine the maximum share of buffer the port may be allocated, and this maximum share varies with the currently available buffer, the number of active ports and the number of congested ports at different moments. This realizes dynamic buffer allocation, helps improve the overall buffer utilization in the device, and further strengthens the device's ability to handle packet bursts.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to include them.

Claims (13)

1. A method for allocating buffers, characterized by comprising:
calculating and saving a buffer share factor according to the recorded number of ports currently sending or receiving packets and the number of congested ports among them;
when a port receives a packet, determining, according to said buffer share factor and the total amount of currently available buffer, the maximum share of buffer that each said active port may be allocated, wherein said maximum share is not less than said currently available buffer divided evenly among said active ports;
judging whether the size of said packet exceeds said maximum share; if not, allocating to the port a buffer no larger than said maximum share; if so, discarding said packet.
2. the method for claim 1 is characterized in that, according to the quantity of the current port that has a packet sending and receiving that is write down and the quantity of congested port wherein, calculates the buffer memory share factor, specifically comprises:
Calculate said buffer memory share factor alpha through following formula:
alpha = 1 busy _ port _ num + valid _ port _ num - busy _ port _ num n , n > 1
Wherein, valid_port_num representes the said current quantity that has the port of packet sending and receiving; Busy_port_num representes the quantity of said congested port.
3. the method for claim 1 is characterized in that, according to the quantity of the current port that has a packet sending and receiving that is write down and the quantity of congested port wherein, calculates the buffer memory share factor, specifically comprises:
Calculate said buffer memory share factor alpha through following formula:
alpha = busy _ port _ num × n + valid _ port _ num - busy _ port _ num valid _ port _ num - 1 , n ≥ 2
Wherein, valid_port_num representes the said current quantity that has the port of packet sending and receiving; Busy_port_num representes the quantity of said congested port.
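The formula of claim 3 is garbled in the source text; reading the trailing "- 1" as an exponent on the bracketed fraction gives alpha = valid_port_num / (busy_port_num × n + valid_port_num - busy_port_num). The sketch below uses that reconstructed reading, which should be checked against the original filing; note that with no congested ports it yields alpha = 1, i.e. any active port may use the entire available cache:

```python
def cache_share_factor_v2(valid_port_num, busy_port_num, n):
    # Claim 3 (reconstructed reading): congested ports are weighted by
    # n (n >= 2), the other active ports by 1, and alpha is the number
    # of active ports divided by that weighted port count.
    assert n >= 2 and 0 < valid_port_num
    assert 0 <= busy_port_num <= valid_port_num
    weighted = busy_port_num * n + (valid_port_num - busy_port_num)
    return valid_port_num / weighted

print(cache_share_factor_v2(8, 0, 2))   # 1.0 - no congestion
print(cache_share_factor_v2(8, 2, 2))   # 0.8 - i.e. 8 / (4 + 6)
```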
4. The method of claim 2 or 3, characterized in that determining, according to the cache share factor and the total amount of currently available cache, the maximum share of cache that each port transmitting and receiving packets may be allocated specifically comprises:
calculating the maximum share max_buffer by the following formula:
max_buffer = alpha × cur_buffer;
wherein alpha denotes the cache share factor and cur_buffer denotes the total amount of currently available cache.
5. The method of any one of claims 1-3, characterized in that it further comprises:
after discarding the packet, when it is determined that the port is not marked as a congested port, marking the port as a congested port and updating the number of congested ports; and/or
periodically traversing all ports, and, when a port marked as a congested port is determined to have aged, clearing its congestion mark and updating the number of congested ports.
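The marking and aging steps of claim 5 can be sketched as a small bookkeeping structure. The class and field names, and the aging criterion (no packet drops since the previous sweep), are assumptions; the patent does not specify the aging condition:

```python
class CongestionTracker:
    def __init__(self):
        self.congested = set()            # ports currently marked congested
        self.dropped_since_sweep = set()  # ports that dropped a packet recently

    def on_packet_dropped(self, port):
        # Claim 5, first branch: after a packet is discarded, mark the
        # port as congested if it is not already marked.
        self.dropped_since_sweep.add(port)
        self.congested.add(port)

    def sweep(self):
        # Claim 5, second branch: periodically traverse the marked
        # ports; a port with no drops since the last sweep is treated
        # as aged and its congestion mark is cleared.
        aged = self.congested - self.dropped_since_sweep
        self.congested -= aged
        self.dropped_since_sweep.clear()
        return len(self.congested)        # updated busy_port_num

t = CongestionTracker()
t.on_packet_dropped(1)
t.on_packet_dropped(3)
print(t.sweep())   # 2 - both ports dropped packets this period
print(t.sweep())   # 0 - no drops since, both marks aged out
```

The value returned by `sweep()` is the updated busy_port_num that claim 6 feeds back into the cache share factor recalculation.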
6. The method of claim 5, characterized in that it further comprises:
after updating the number of congested ports, recalculating the cache share factor according to the updated number of congested ports, and replacing the saved cache share factor with the recalculated one; and/or
when the packet transmitting/receiving state of a port changes, updating the number of ports currently transmitting and receiving packets, recalculating the cache share factor, and replacing the saved cache share factor with the recalculated one.
7. An apparatus for allocating cache, characterized in that it comprises:
a cache share factor calculating unit, configured to calculate a cache share factor according to the recorded number of ports currently transmitting and receiving packets and the number of congested ports among them, and to save the cache share factor;
a maximum share calculating unit, configured to determine, when a port receives a packet, the maximum share of cache that each of said ports transmitting and receiving packets may be allocated, according to the cache share factor and the total amount of currently available cache, wherein the maximum share is not less than the amount of cache obtained by evenly dividing the total amount of currently available cache among the ports currently transmitting and receiving packets;
a judging unit, configured to judge whether the size of the packet is greater than the maximum share; if not, to allocate to the port an amount of cache not greater than the maximum share; if so, to discard the packet.
8. The apparatus of claim 7, characterized in that the cache share factor calculating unit is specifically configured to calculate the cache share factor alpha by the following formula:
alpha = 1 / (busy_port_num + (valid_port_num - busy_port_num) / n), where n > 1;
wherein valid_port_num denotes the number of ports currently transmitting and receiving packets, and busy_port_num denotes the number of congested ports.
9. The apparatus of claim 7, characterized in that the cache share factor calculating unit is specifically configured to calculate the cache share factor alpha by the following formula:
alpha = [(busy_port_num × n + valid_port_num - busy_port_num) / valid_port_num]^(-1), where n ≥ 2;
wherein valid_port_num denotes the number of ports currently transmitting and receiving packets, and busy_port_num denotes the number of congested ports.
10. The apparatus of claim 8 or 9, characterized in that the maximum share calculating unit is specifically configured to calculate the maximum share max_buffer by the following formula:
max_buffer = alpha × cur_buffer; wherein alpha denotes the cache share factor and cur_buffer denotes the total amount of currently available cache.
11. The apparatus of any one of claims 7-9, characterized in that it further comprises: a congested port marking unit, configured to mark the port as a congested port after the packet is discarded, when it is determined that the port is not already marked as a congested port, and to update the number of congested ports; and/or to periodically traverse all ports and, when a port marked as a congested port is determined to have aged, to clear its congestion mark and update the number of congested ports.
12. The apparatus of claim 11, characterized in that the cache share factor calculating unit is further configured to recalculate the cache share factor according to the updated number of congested ports after that number is updated, and to replace the saved cache share factor with the recalculated one; and/or, when the packet transmitting/receiving state of a port changes, to update the number of ports currently transmitting and receiving packets, recalculate the cache share factor, and replace the saved cache share factor with the recalculated one.
13. A network device, characterized in that it comprises the apparatus for allocating cache of any one of claims 7-12.
CN201110380457.6A 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment Active CN102404219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110380457.6A CN102404219B (en) 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment


Publications (2)

Publication Number Publication Date
CN102404219A true CN102404219A (en) 2012-04-04
CN102404219B CN102404219B (en) 2014-07-30

Family

ID=45886022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110380457.6A Active CN102404219B (en) 2011-11-25 2011-11-25 Method and device for allocating caches as well as network equipment

Country Status (1)

Country Link
CN (1) CN102404219B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259247A (en) * 2020-02-11 2021-08-13 华为技术有限公司 Cache device in network equipment and data management method in cache device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080063004A1 (en) * 2006-09-13 2008-03-13 International Business Machines Corporation Buffer allocation method for multi-class traffic with dynamic spare buffering
CN101364948A (en) * 2008-09-08 2009-02-11 中兴通讯股份有限公司 Method for dynamically allocating cache
CN101873269A (en) * 2010-06-24 2010-10-27 杭州华三通信技术有限公司 Data retransmission device and method for distributing buffer to ports
CN102025631A (en) * 2010-12-15 2011-04-20 中兴通讯股份有限公司 Method and exchanger for dynamically adjusting outlet port cache
CN102185725A (en) * 2011-05-31 2011-09-14 北京星网锐捷网络技术有限公司 Cache management method and device as well as network switching equipment



Also Published As

Publication number Publication date
CN102404219B (en) 2014-07-30


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant