CN108055213A - Management method and system for cache resources of a network switch - Google Patents

Management method and system for cache resources of a network switch

Info

Publication number
CN108055213A
Authority
CN
China
Prior art keywords
memory space
resources
cache
surplus
network switch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201711296155.4A
Other languages
Chinese (zh)
Inventor
麻孝强
赵茂聪
蒋震
周杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Centec Networks Suzhou Co Ltd
Original Assignee
Centec Networks Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centec Networks Suzhou Co Ltd filed Critical Centec Networks Suzhou Co Ltd
Priority to CN201711296155.4A priority Critical patent/CN108055213A/en
Publication of CN108055213A publication Critical patent/CN108055213A/en
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/10 Packet switching elements characterised by the switching fabric construction
    • H04L49/103 Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/76 Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a management method and system for the cache resources of a network switch. The method includes: obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets; presetting a dynamic factor, and deriving, from the preset dynamic factor and the remaining resources of the memory space, a cache resource threshold for the memory space corresponding to the currently monitored object; and pre-judging whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space; if so, discarding the packet forwarded through the currently monitored object; if not, forwarding the packet normally. By monitoring the remaining resources of the current memory space and introducing a dynamic factor, the management method and system for the cache resources of a network switch of the present invention make the cache resource allocation of the network switch smoother and more reasonable, and improve the transmission rate of the network switch.

Description

Management method and system for cache resources of a network switch
Technical field
The present invention relates to the field of network communication, and more particularly to a management method and system for the cache resources of a network switch.
Background technology
The cache resources of a network switch are limited. To ensure that every output port uses the cache resources effectively, fully and fairly, a dynamic resource management method is essential.
An existing dynamic management approach is a congestion-level based resource allocation method, such as the one shown in China Patent Publication No. CN105610725A, which assigns a fixed drop threshold to each output port or queue, the threshold depending on the current congestion level. Independent resource management can be performed in each shared memory: the current congestion level is determined by counting the number of cache units occupied by packets, different congestion levels are assigned different drop thresholds, and packets that exceed the corresponding drop threshold are discarded.
Such a dynamic management approach can allocate cache resources dynamically and fairly effectively, and has certain advantages in special scenarios (for example, testing the maximum resource occupancy of a single port). However, because it manages resources by congestion level and allocates different resources to different levels, the cache resources change whenever the congestion level changes. Since the number of congestion levels is limited, the resulting resource allocation may jitter with sudden jumps and is not smooth.
Summary of the invention
An object of the present invention is to provide a management method and system for the cache resources of a network switch.
To achieve one of the above objects, a management method for the cache resources of a network switch according to an embodiment of the present invention includes: obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets;
the currently monitored object is a queue or a port; the upper-level cache space corresponding to a queue is a port or a memory pool, and the upper-level cache space corresponding to a port is a memory pool;
presetting a dynamic factor, and deriving, from the preset dynamic factor and the remaining resources of the memory space, a cache resource threshold for the memory space corresponding to the currently monitored object;
pre-judging whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space,
if so, discarding the packet forwarded through the currently monitored object;
if not, forwarding the packet normally.
As a further improvement of an embodiment of the present invention, obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets specifically includes:
setting a resource counter in the memory space corresponding to the currently monitored object, for monitoring the occupied resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the resource counter increases accordingly, and when a monitored object of the memory space completes forwarding, the value of the resource counter decreases accordingly;
wherein remain_cnt = total_cnt - used_cnt;
remain_cnt represents the remaining resources of the current memory space, total_cnt represents the total resources of the current memory space, and used_cnt represents the value of the resource counter; the remaining resources, the total resources and the value of the resource counter are all expressed in cells, each cell representing 288 bytes.
As a further improvement of an embodiment of the present invention, obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets specifically includes:
setting a remaining-resource counter in the memory space corresponding to the currently monitored object, for monitoring the remaining resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the remaining-resource counter decreases accordingly, and when a monitored object of the memory space completes forwarding, the value of the remaining-resource counter increases accordingly;
wherein remain_cnt represents the remaining resources of the current memory space and equals the value of the remaining-resource counter, expressed in cells, each cell representing 288 bytes.
As a further improvement of an embodiment of the present invention, presetting a dynamic factor and deriving, from the preset dynamic factor and the remaining resources of the memory space, the cache resource threshold of the memory space corresponding to the currently monitored object specifically includes:
the preset dynamic factor is denoted a,
and the cache resource threshold = a * remain_cnt, where the value of a is any value between 0 and 1 and the cache resource threshold is a positive integer.
As a further improvement of an embodiment of the present invention, the monitored object is a queue and the memory space is a port;
the value of a is one of 1/129, 1/65, 1/33, 1/17, 1/9, 1/5, 1/3, 1/2, 2/3, 4/5 and 8/9.
As a further improvement of an embodiment of the present invention, the method further includes:
pre-configuring the ports of the network switch so that, through per-port selection, each port has two mutually switchable monitoring modes;
in one mode, resources are dynamically allocated using a specified drop threshold;
in the other mode, resources are dynamically allocated according to the remaining resources of the upper-level memory space corresponding to the monitored object.
To achieve another of the above objects, a management system for the cache resources of a network switch according to an embodiment of the present invention includes: an acquisition module, for obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets;
the currently monitored object is a queue or a port; the upper-level cache space corresponding to a queue is a port or a memory pool, and the upper-level cache space corresponding to a port is a memory pool;
a processing module, which presets a dynamic factor and derives, from the preset dynamic factor and the remaining resources of the memory space, the cache resource threshold of the memory space corresponding to the currently monitored object;
an output module, which pre-judges whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space,
if so, discards the packet forwarded through the currently monitored object;
if not, forwards the packet normally.
As a further improvement of an embodiment of the present invention, the acquisition module is specifically configured to:
set a resource counter in the memory space corresponding to the currently monitored object, for monitoring the occupied resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the resource counter increases accordingly, and when a monitored object of the memory space completes forwarding, the value of the resource counter decreases accordingly;
wherein remain_cnt = total_cnt - used_cnt;
remain_cnt represents the remaining resources of the current memory space, total_cnt represents the total resources of the current memory space, and used_cnt represents the value of the resource counter; the remaining resources, the total resources and the value of the resource counter are all expressed in cells, each cell representing 288 bytes.
As a further improvement of an embodiment of the present invention, the acquisition module is specifically configured to:
set a remaining-resource counter in the memory space corresponding to the currently monitored object, for monitoring the remaining resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the remaining-resource counter decreases accordingly, and when a monitored object of the memory space completes forwarding, the value of the remaining-resource counter increases accordingly;
wherein remain_cnt represents the remaining resources of the current memory space and equals the value of the remaining-resource counter, expressed in cells, each cell representing 288 bytes.
As a further improvement of an embodiment of the present invention, the preset dynamic factor is denoted a, and the cache resource threshold = a * remain_cnt, where the value of a is any value between 0 and 1 and the cache resource threshold is a positive integer.
As a further improvement of an embodiment of the present invention, the monitored object is a queue and the memory space is a port;
the value of a is one of 1/129, 1/65, 1/33, 1/17, 1/9, 1/5, 1/3, 1/2, 2/3, 4/5 and 8/9.
The processing module is further configured to: pre-configure the ports of the network switch so that, through per-port selection, each port has two mutually switchable monitoring modes;
in one mode, resources are dynamically allocated using a specified drop threshold;
in the other mode, resources are dynamically allocated according to the remaining resources of the upper-level memory space corresponding to the monitored object.
Compared with the prior art, the beneficial effects of the present invention are as follows: by monitoring the remaining resources of the current memory space and introducing a dynamic factor, the management method and system for the cache resources of a network switch of the present invention make the cache resource allocation of the network switch smoother and more reasonable, and improve the transmission rate of the network switch.
Description of the drawings
Fig. 1 is a flow chart of a management method for the cache resources of a network switch in an embodiment of the present invention;
Fig. 2 is a module diagram of a management system for the cache resources of a network switch in an embodiment of the present invention.
Specific embodiment
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. These embodiments do not limit the present invention, however, and any structural, methodological or functional changes made by those of ordinary skill in the art according to these embodiments fall within the protection scope of the present invention.
As shown in Fig. 1, in an embodiment of the present invention, a management method for the cache resources of a network switch includes:
S1: obtain the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets; the currently monitored object is a queue or a port, the upper-level cache space corresponding to a queue is a port or a memory pool, and the upper-level cache space corresponding to a port is a memory pool.
The present invention is mainly used to manage the cache resources of a network switch. Inside a network switch, the resources are organized, according to their configuration, into shared memory pools, ports and queues: multiple queues may share the resources of one port, and multiple ports may share the resources of one shared memory pool. While being forwarded through the switch, a packet passes through different queues/ports in sequence. When multiple packets are forwarded through the same queue/port and the capacity of that queue/port cannot accommodate all of them at the same time, the queue/port becomes congested; packets that subsequently arrive at that queue/port before the congestion is resolved are dropped. The result is uneven resource allocation in the switch, a low transmission speed and a high drop probability.
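As an illustration only, the pool/port/queue hierarchy described above can be modeled with the following C sketch. The type and field names (buffer_pool, sw_port, sw_queue, total_cnt, used_cnt) and the array sizes are assumptions of this sketch, not structures defined by the patent.

```c
#include <stdint.h>

#define MAX_PORTS_PER_POOL  64
#define MAX_QUEUES_PER_PORT  8
#define CELL_BYTES         288   /* one resource unit (cell), as defined later in the text */

/* A queue draws cells from its parent port; a port draws cells from its pool. */
typedef struct {
    uint32_t used_cnt;                      /* cells currently held by this queue */
} sw_queue;

typedef struct {
    uint32_t total_cnt;                     /* cells configured for this port */
    uint32_t used_cnt;                      /* cells occupied by all queues of this port */
    sw_queue queues[MAX_QUEUES_PER_PORT];
} sw_port;

typedef struct {
    uint32_t total_cnt;                     /* cells in the shared memory pool */
    uint32_t used_cnt;                      /* cells occupied by all ports of this pool */
    sw_port  ports[MAX_PORTS_PER_POOL];
} buffer_pool;
```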
When the network switch leaves the factory, it can be selectively pre-configured; the currently monitored object is assigned exactly one upper-level memory space, and until the next user-defined adjustment the memory space assigned to the currently monitored object cannot be modified.
In an embodiment of the present invention, step S1 specifically includes: setting a resource counter in the memory space corresponding to the currently monitored object, for monitoring the occupied resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the resource counter increases accordingly, and when a monitored object of the memory space completes forwarding, the value of the resource counter decreases accordingly;
wherein remain_cnt = total_cnt - used_cnt;
remain_cnt represents the remaining resources of the current memory space, total_cnt represents the total resources of the current memory space, and used_cnt represents the value of the resource counter; the remaining resources, the total resources and the value of the resource counter are all expressed in cells, each cell representing 288 bytes.
In a specific embodiment of the present invention, for ease of description, the currently monitored object is taken to be a queue and its corresponding upper-level memory space to be a port.
Accordingly, a resource counter is set for each port. When a packet enters the network switch, the value used_cnt of the resource counter is incremented by 1; when a packet leaves the network switch, used_cnt is decremented by 1. If congestion occurs, incoming packets keep accumulating while outgoing packets decrease, so used_cnt keeps growing; the remaining resources of the current port can then be obtained from the formula remain_cnt = total_cnt - used_cnt.
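A minimal C sketch of this bookkeeping is given below. The text above increments used_cnt by 1 per packet while measuring the counter in 288-byte cells; the sketch resolves this by charging each packet the number of cells it occupies, which is an interpretation of this sketch rather than something the patent states. The type and function names are likewise illustrative.

```c
#include <stdint.h>

#define CELL_BYTES 288u

/* Per-port resource counter state. */
typedef struct {
    uint32_t total_cnt;   /* total cells configured for the port */
    uint32_t used_cnt;    /* cells currently occupied (the resource counter) */
} port_counters;

/* Number of 288-byte cells a packet of pkt_bytes occupies, rounded up. */
static uint32_t cells_for_packet(uint32_t pkt_bytes)
{
    return (pkt_bytes + CELL_BYTES - 1u) / CELL_BYTES;
}

/* A packet enters the switch through this port: used_cnt grows. */
static void port_on_packet_in(port_counters *p, uint32_t pkt_bytes)
{
    p->used_cnt += cells_for_packet(pkt_bytes);
}

/* A packet leaves the switch through this port: used_cnt shrinks again. */
static void port_on_packet_out(port_counters *p, uint32_t pkt_bytes)
{
    p->used_cnt -= cells_for_packet(pkt_bytes);
}

/* remain_cnt = total_cnt - used_cnt */
static uint32_t port_remain_cnt(const port_counters *p)
{
    return p->total_cnt - p->used_cnt;
}
```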
In another embodiment of the present invention, a remaining-resource counter may instead be set directly in the memory space corresponding to the currently monitored object, for monitoring the remaining resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the remaining-resource counter decreases accordingly, and when a monitored object of the memory space completes forwarding, the value of the remaining-resource counter increases accordingly;
wherein remain_cnt represents the remaining resources of the current memory space and equals the value of the remaining-resource counter, expressed in cells, each cell representing 288 bytes; this is not described in further detail here.
Further, the management method for the cache resources of a network switch also includes: S2, presetting a dynamic factor, and deriving, from the preset dynamic factor and the remaining resources of the memory space, the cache resource threshold of the memory space corresponding to the currently monitored object.
In a specific embodiment of the present invention, the introduced dynamic factor is denoted a; it determines the proportion of the remaining bandwidth of the memory space that the currently monitored object may occupy, and the larger its value, the higher that proportion. In this specific embodiment, the monitored object is a queue and the memory space is its corresponding port; the cache resource threshold is then thrd = a * remain_cnt, where a is any value between 0 and 1 and the cache resource threshold is a positive integer. In this specific application, the value of a is one of 1/129, 1/65, 1/33, 1/17, 1/9, 1/5, 1/3, 1/2, 2/3, 4/5 and 8/9, and the system default is usually set to 1/2. Of course, the value can be adjusted according to the needs of the user, which is not detailed here.
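The threshold computation thrd = a * remain_cnt can be sketched in C as follows. Representing the dynamic factor as a numerator/denominator pair is a choice of this sketch, not something the patent specifies.

```c
#include <stdint.h>

/* Dynamic factor a expressed as a fraction num/den (for example 1/2, 1/9 or 8/9). */
typedef struct {
    uint32_t num;
    uint32_t den;
} dyn_factor;

static const dyn_factor A_DEFAULT = { 1u, 2u };   /* the usual system default a = 1/2 */

/* thrd = a * remain_cnt, computed in 64-bit to avoid overflow and truncated to whole cells. */
static uint32_t cache_threshold(dyn_factor a, uint32_t remain_cnt)
{
    return (uint32_t)(((uint64_t)remain_cnt * a.num) / a.den);
}
```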
Further, the management method for the cache resources of a network switch also includes: S3, pre-judging whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space; if so, the packet forwarded through the currently monitored object is discarded; if not, the packet is forwarded normally.
In this way, linear dynamic resource allocation makes resource occupation fairer and smoother.
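Putting the pieces together, a hedged sketch of the admission decision in step S3 might look like the C function below; the names and the way the packet's cell requirement is compared against the threshold are assumptions of this sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define CELL_BYTES 288u

/* Decide whether a packet forwarded through the monitored queue may enter it.
 *   queue_used_cnt  - cells already held by the queue
 *   pkt_bytes       - length of the arriving packet in bytes
 *   port_remain_cnt - remaining cells of the queue's port (remain_cnt)
 *   a_num / a_den   - dynamic factor a as a fraction
 * Returns true if the packet may be forwarded, false if it should be dropped. */
static bool admit_packet(uint32_t queue_used_cnt, uint32_t pkt_bytes,
                         uint32_t port_remain_cnt,
                         uint32_t a_num, uint32_t a_den)
{
    uint32_t needed = queue_used_cnt + (pkt_bytes + CELL_BYTES - 1u) / CELL_BYTES;
    uint32_t thrd   = (uint32_t)(((uint64_t)port_remain_cnt * a_num) / a_den);
    return needed <= thrd;    /* drop only when the threshold would be exceeded */
}
```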
In a preferred embodiment of the present invention, the management method for the cache resources of a network switch of the present invention can be combined with the congestion-level based resource allocation method, and switching between the two methods can be realized for the same network switch through per-port control. Specifically, the method further includes: pre-configuring the ports of the network switch so that, through per-port selection, each port has two mutually switchable monitoring modes; in one mode, resources are dynamically allocated using a specified drop threshold; in the other mode, resources are dynamically allocated according to the remaining resources of the upper-level memory space corresponding to the monitored object.
The congestion-level based resource management method can directly specify the maximum resources a port may occupy, which has certain advantages in special scenarios, for example when testing the maximum resource occupancy of a single port: if the user wants a single port of the switch to occupy all the resources of the switch, the drop threshold of that port can simply be set to the switch's maximum resources. The management method for the cache resources of a network switch of the present application, by contrast, allocates resources dynamically according to the current remaining bandwidth, so a single port cannot occupy all the resources even when it is the only active port. When, during testing, a single port needs to occupy all the resources, or the user wishes to specify the drop threshold directly, a mode field can be added to each port: mode 0 means the port allocates resources dynamically according to the current remaining bandwidth and ignores the user-configured drop threshold; mode 1 means the port manages its resources entirely according to the user-configured drop threshold and ignores the current remaining bandwidth. By adding this per-port control for switching between the resource management methods, the advantages of both methods can be combined to meet the needs of different scenarios. In this way, the advantage of the congestion-level based allocation method in special scenarios is retained, while the jitter and roughness of that method are avoided, so cache resources can be allocated more efficiently and more reasonably.
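A per-port mode switch of the kind described here could be sketched as below; the enum values and the static drop-threshold field are illustrative, with mode 0 standing for remaining-bandwidth based allocation and mode 1 for the user-configured drop threshold, as in the text above.

```c
#include <stdint.h>

typedef enum {
    PORT_MODE_DYNAMIC = 0,   /* mode 0: threshold follows the current remaining bandwidth */
    PORT_MODE_STATIC  = 1    /* mode 1: threshold is the user-configured drop threshold   */
} port_mode;

typedef struct {
    port_mode mode;
    uint32_t  static_drop_thrd;   /* used only in PORT_MODE_STATIC */
    uint32_t  a_num, a_den;       /* dynamic factor a, used only in PORT_MODE_DYNAMIC */
} port_cfg;

/* Select the drop threshold (in cells) for a port according to its configured mode. */
static uint32_t port_drop_threshold(const port_cfg *cfg, uint32_t remain_cnt)
{
    if (cfg->mode == PORT_MODE_STATIC) {
        return cfg->static_drop_thrd;                      /* ignore remaining bandwidth */
    }
    return (uint32_t)(((uint64_t)remain_cnt * cfg->a_num) / cfg->a_den);
}
```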
As shown in Fig. 2, in an embodiment of the present invention, a management system for the cache resources of a network switch includes an acquisition module 100, a processing module 200 and an output module 300.
The acquisition module 100 is used to obtain the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets; the currently monitored object is a queue or a port, the upper-level cache space corresponding to a queue is a port or a memory pool, and the upper-level cache space corresponding to a port is a memory pool.
The present invention is mainly used to manage the cache resources of a network switch. Inside a network switch, the resources are organized, according to their configuration, into shared memory pools, ports and queues: multiple queues may share the resources of one port, and multiple ports may share the resources of one shared memory pool. While being forwarded through the switch, a packet passes through different queues/ports in sequence. When multiple packets are forwarded through the same queue/port and the capacity of that queue/port cannot accommodate all of them at the same time, the queue/port becomes congested; packets that subsequently arrive at that queue/port before the congestion is resolved are dropped. The result is uneven resource allocation in the switch, a low transmission speed and a high drop probability.
When the network switch leaves the factory, it can be selectively pre-configured; the currently monitored object is assigned exactly one upper-level memory space, and until the next user-defined adjustment the memory space assigned to the currently monitored object cannot be modified.
In an embodiment of the present invention, the acquisition module 100 is specifically used to set a resource counter in the memory space corresponding to the currently monitored object, for monitoring the occupied resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the resource counter increases accordingly, and when a monitored object of the memory space completes forwarding, the value of the resource counter decreases accordingly;
wherein remain_cnt = total_cnt - used_cnt;
remain_cnt represents the remaining resources of the current memory space, total_cnt represents the total resources of the current memory space, and used_cnt represents the value of the resource counter; the remaining resources, the total resources and the value of the resource counter are all expressed in cells, each cell representing 288 bytes.
In a specific embodiment of the present invention, for ease of description, the currently monitored object is taken to be a queue and its corresponding upper-level memory space to be a port.
Accordingly, a resource counter is set for each port. When a packet enters the network switch, the value used_cnt of the resource counter is incremented by 1; when a packet leaves the network switch, used_cnt is decremented by 1. If congestion occurs, incoming packets keep accumulating while outgoing packets decrease, so used_cnt keeps growing; the remaining resources of the current port can then be obtained from the formula remain_cnt = total_cnt - used_cnt.
In another embodiment of the present invention, the acquisition module 100 may instead set a remaining-resource counter directly in the memory space corresponding to the currently monitored object, for monitoring the remaining resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the remaining-resource counter decreases accordingly, and when a monitored object of the memory space completes forwarding, the value of the remaining-resource counter increases accordingly;
wherein remain_cnt represents the remaining resources of the current memory space and equals the value of the remaining-resource counter, expressed in cells, each cell representing 288 bytes; this is not described in further detail here.
Further, the processing module 200 is used to preset a dynamic factor and to derive, from the preset dynamic factor and the remaining resources of the memory space, the cache resource threshold of the memory space corresponding to the currently monitored object.
In a specific embodiment of the present invention, the introduced dynamic factor is denoted a; it determines the proportion of the remaining bandwidth of the memory space that the currently monitored object may occupy, and the larger its value, the higher that proportion. In this specific embodiment, the monitored object is a queue and the memory space is its corresponding port; the cache resource threshold is then thrd = a * remain_cnt, where a is any value between 0 and 1 and the cache resource threshold is a positive integer. In this specific application, the value of a is one of 1/129, 1/65, 1/33, 1/17, 1/9, 1/5, 1/3, 1/2, 2/3, 4/5 and 8/9, and the system default is usually set to 1/2. Of course, the value can be adjusted according to the needs of the user, which is not detailed here.
Further, the output module 300 is used to pre-judge whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space; if so, the packet forwarded through the currently monitored object is discarded; if not, the packet is forwarded normally.
In this way, linear dynamic resource allocation makes resource occupation fairer and smoother.
In a preferred embodiment of the present invention, the management method for the cache resources of a network switch of the present invention can be combined with the congestion-level based resource allocation method, and switching between the two methods can be realized for the same network switch through per-port control. Specifically, the processing module 200 is further used to pre-configure the ports of the network switch so that, through per-port selection, each port has two mutually switchable monitoring modes: in one mode, resources are dynamically allocated using a specified drop threshold; in the other mode, resources are dynamically allocated according to the remaining resources of the upper-level memory space corresponding to the monitored object. The congestion-level based resource management method can directly specify the maximum resources a port may occupy, which has certain advantages in special scenarios, for example when testing the maximum resource occupancy of a single port: if the user wants a single port of the switch to occupy all the resources of the switch, the drop threshold of that port can simply be set to the switch's maximum resources. The management method for the cache resources of a network switch of the present application, by contrast, allocates resources dynamically according to the current remaining bandwidth, so a single port cannot occupy all the resources even when it is the only active port. When, during testing, a single port needs to occupy all the resources, or the user wishes to specify the drop threshold directly, a mode field can be added to each port: mode 0 means the port allocates resources dynamically according to the current remaining bandwidth and ignores the user-configured drop threshold; mode 1 means the port manages its resources entirely according to the user-configured drop threshold and ignores the current remaining bandwidth. By adding this per-port control for switching between the resource management methods, the advantages of both methods can be combined to meet the needs of different scenarios. In this way, the advantage of the congestion-level based allocation method in special scenarios is retained, while the jitter and roughness of that method are avoided, so cache resources can be allocated more efficiently and more reasonably.
In conclusion the management method and system of the cache resources of the network switch of the present invention, are currently deposited by monitoring Store up space surplus resources and introduce dynamic factor, can make the network switch cache resource allocation it is more smooth, more close Reason promotes the transmission rate of the network switch.
For convenience of description, the above apparatus is described as being divided into various modules by function. Of course, when implementing the present invention, the functions of the modules may be realized in one or more pieces of software and/or hardware.
The apparatus embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
It should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted merely for clarity; the specification should be taken as a whole by those skilled in the art, and the technical solutions in the embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.
The detailed descriptions listed above are only specific illustrations of feasible embodiments of the present invention and are not intended to limit the protection scope of the present invention; all equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall be included within the protection scope of the present invention.

Claims (12)

1. A management method for the cache resources of a network switch, characterized in that the method includes:
obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets;
the currently monitored object is a queue or a port; the upper-level cache space corresponding to a queue is a port or a memory pool, and the upper-level cache space corresponding to a port is a memory pool;
presetting a dynamic factor, and deriving, from the preset dynamic factor and the remaining resources of the memory space, a cache resource threshold for the memory space corresponding to the currently monitored object;
pre-judging whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space,
if so, discarding the packet forwarded through the currently monitored object;
if not, forwarding the packet normally.
2. The management method for the cache resources of a network switch according to claim 1, characterized in that obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets specifically includes:
setting a resource counter in the memory space corresponding to the currently monitored object, for monitoring the occupied resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the resource counter increases accordingly, and when a monitored object of the memory space completes forwarding, the value of the resource counter decreases accordingly;
wherein remain_cnt = total_cnt - used_cnt;
remain_cnt represents the remaining resources of the current memory space, total_cnt represents the total resources of the current memory space, and used_cnt represents the value of the resource counter; the remaining resources, the total resources and the value of the resource counter are all expressed in cells, each cell representing 288 bytes.
3. The management method for the cache resources of a network switch according to claim 1, characterized in that obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets specifically includes:
setting a remaining-resource counter in the memory space corresponding to the currently monitored object, for monitoring the remaining resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the remaining-resource counter decreases accordingly, and when a monitored object of the memory space completes forwarding, the value of the remaining-resource counter increases accordingly;
wherein remain_cnt represents the remaining resources of the current memory space and equals the value of the remaining-resource counter, expressed in cells, each cell representing 288 bytes.
4. The management method for the cache resources of a network switch according to claim 2 or 3, characterized in that
presetting a dynamic factor and deriving, from the preset dynamic factor and the remaining resources of the memory space, the cache resource threshold of the memory space corresponding to the currently monitored object specifically includes:
the preset dynamic factor is denoted a,
and the cache resource threshold = a * remain_cnt, where the value of a is any value between 0 and 1 and the cache resource threshold is a positive integer.
5. The management method for the cache resources of a network switch according to claim 4, characterized in that the monitored object is a queue and the memory space is a port;
the value of a is one of 1/129, 1/65, 1/33, 1/17, 1/9, 1/5, 1/3, 1/2, 2/3, 4/5 and 8/9.
6. The management method for the cache resources of a network switch according to claim 1, characterized in that the method further includes:
pre-configuring the ports of the network switch so that, through per-port selection, each port has two mutually switchable monitoring modes;
in one mode, resources are dynamically allocated using a specified drop threshold;
in the other mode, resources are dynamically allocated according to the remaining resources of the upper-level memory space corresponding to the monitored object.
7. A management system for the cache resources of a network switch, characterized in that the system includes:
an acquisition module, for obtaining the remaining resources of the upper-level memory space corresponding to the monitored object that is currently receiving forwarded packets;
the currently monitored object is a queue or a port; the upper-level cache space corresponding to a queue is a port or a memory pool, and the upper-level cache space corresponding to a port is a memory pool;
a processing module, which presets a dynamic factor and derives, from the preset dynamic factor and the remaining resources of the memory space, the cache resource threshold of the memory space corresponding to the currently monitored object;
an output module, which pre-judges whether, after a packet forwarded through the currently monitored object enters the currently monitored object, the cache resources that the currently monitored object needs to occupy exceed the cache resource threshold of its corresponding memory space,
if so, discards the packet forwarded through the currently monitored object;
if not, forwards the packet normally.
8. The management system for the cache resources of a network switch according to claim 7, characterized in that the acquisition module is specifically configured to:
set a resource counter in the memory space corresponding to the currently monitored object, for monitoring the occupied resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the resource counter increases accordingly, and when a monitored object of the memory space completes forwarding, the value of the resource counter decreases accordingly;
wherein remain_cnt = total_cnt - used_cnt;
remain_cnt represents the remaining resources of the current memory space, total_cnt represents the total resources of the current memory space, and used_cnt represents the value of the resource counter; the remaining resources, the total resources and the value of the resource counter are all expressed in cells, each cell representing 288 bytes.
9. The management system for the cache resources of a network switch according to claim 7, characterized in that the acquisition module is specifically configured to:
set a remaining-resource counter in the memory space corresponding to the currently monitored object, for monitoring the remaining resources of the current memory space; when remaining resources of the memory space are occupied by the currently monitored object, the value of the remaining-resource counter decreases accordingly, and when a monitored object of the memory space completes forwarding, the value of the remaining-resource counter increases accordingly;
wherein remain_cnt represents the remaining resources of the current memory space and equals the value of the remaining-resource counter, expressed in cells, each cell representing 288 bytes.
10. The management system for the cache resources of a network switch according to claim 8 or 9, characterized in that
the preset dynamic factor is denoted a, and the cache resource threshold = a * remain_cnt, where the value of a is any value between 0 and 1 and the cache resource threshold is a positive integer.
11. The management system for the cache resources of a network switch according to claim 10, characterized in that
the monitored object is a queue and the memory space is a port;
the value of a is one of 1/129, 1/65, 1/33, 1/17, 1/9, 1/5, 1/3, 1/2, 2/3, 4/5 and 8/9.
12. The management system for the cache resources of a network switch according to claim 7, characterized in that
the processing module is further configured to: pre-configure the ports of the network switch so that, through per-port selection, each port has two mutually switchable monitoring modes;
in one mode, resources are dynamically allocated using a specified drop threshold;
in the other mode, resources are dynamically allocated according to the remaining resources of the upper-level memory space corresponding to the monitored object.
CN201711296155.4A 2017-12-08 2017-12-08 Management method and system for cache resources of a network switch Withdrawn CN108055213A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711296155.4A CN108055213A (en) 2017-12-08 2017-12-08 Management method and system for cache resources of a network switch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711296155.4A CN108055213A (en) 2017-12-08 2017-12-08 Management method and system for cache resources of a network switch

Publications (1)

Publication Number Publication Date
CN108055213A true CN108055213A (en) 2018-05-18

Family

ID=62123622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711296155.4A Withdrawn CN108055213A (en) 2017-12-08 2017-12-08 The management method and system of the cache resources of the network switch

Country Status (1)

Country Link
CN (1) CN108055213A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1980185A (en) * 2005-12-05 2007-06-13 中兴通讯股份有限公司 Device and method suitable to flow business scheduling
CN101364948A (en) * 2008-09-08 2009-02-11 中兴通讯股份有限公司 Method for dynamically allocating cache
US20170350878A1 (en) * 2011-09-13 2017-12-07 Theranos, Inc. Systems and methods for multi-analysis
CN106789729A (en) * 2016-12-13 2017-05-31 华为技术有限公司 Buffer memory management method and device in a kind of network equipment
CN107404443A (en) * 2017-08-03 2017-11-28 北京东土军悦科技有限公司 Queue cache resources control method and device, server and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765383A (en) * 2019-10-21 2020-02-07 支付宝(杭州)信息技术有限公司 Resource caching method and device

Similar Documents

Publication Publication Date Title
US6486983B1 (en) Agile optical-core distributed packet switch
CN104202264B (en) Distribution method for beared resource, the apparatus and system of cloud data center network
CN107547418B (en) A kind of jamming control method and device
CN101990250B (en) Bandwidth management method, eNodeB, service gateway and communication system
CN101778114A (en) Method for multi-channel parallel transmission of streaming media services on basis of load balance
CN108055701B (en) Resource scheduling method and base station
CN103259743A (en) Method and device for controlling output flow based on token bucket
CN104852859B (en) A kind of aggregation interface method for processing business and equipment
CN106130925A (en) Link scheduling method, equipment and the system of a kind of SDN
CN102111327A (en) Method and system for cell dispatching
CN104025645B (en) A kind of method and device for managing shared network
CN108924880A (en) It is a kind of can automatic flow cutting transfer flow distributing system and its distribution method
CN102571586B (en) Method and device for setting customer virtual local area network (CVLAN) in transparent interconnect of lots of links (TRILL) network
CN110177056B (en) Automatic adaptive bandwidth control method
CN108055213A (en) The management method and system of the cache resources of the network switch
CN113727394B (en) Method and device for realizing shared bandwidth
CN103260196B (en) A kind of control method of transmission bandwidth, Apparatus and system
CN1245817C (en) Control method of network transmission speed and Ethernet interchanger using said method
CN101621409A (en) Service control method, service control device and broadband access servers
CN108848131A (en) A kind of industrial Internet of Things virtual Private Network implementation method of list point-to-multipoint
CN106162747B (en) A kind of method and device of load balancing
CN106559355A (en) IP Telecommunication Network edge gateway equipment resource management method based on fair algorithm
CN110769023A (en) Point-to-point content distribution network system based on intelligent home gateway
CN106385688B (en) A kind of base-band resource distribution method and system and controller
JPH02260956A (en) Real time net route assignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20180518)