CN115706712A - Cache management method and equipment


Publication number
CN115706712A
Authority
CN
China
Prior art keywords
cache
threshold
management device
enqueue
traffic management
Prior art date
Legal status
Pending
Application number
CN202110932899.0A
Other languages
Chinese (zh)
Inventor
杨文斌
董红红
王震
李广
袁赛
白宇
王小忠
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110932899.0A
Publication of CN115706712A

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a cache management method and equipment. The method includes the following steps: obtaining operating state parameters, where the operating state parameters include a reference enqueue cache value, a first cache threshold, and a first enqueue cache threshold; the reference enqueue cache value represents the maximum enqueue cache occupancy of the traffic management device in a preset time period, the cache threshold corresponding to the cache space opened by the traffic management device in the preset time period is the first cache threshold, and the enqueue cache threshold corresponding to the opened enqueue cache space is the first enqueue cache threshold; determining, based on the operating state parameters, to open the cache space corresponding to one of N cache thresholds, where the cache spaces corresponding to any two cache thresholds are different, and N is an integer greater than 1; and opening the determined cache space. The method helps reduce network power consumption while ensuring that service requirements are met.

Description

Cache management method and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a cache management method and apparatus.
Background
A cache serves as a buffer area for data exchange and can improve the running speed of the hardware in a network device. Currently, an off-chip cache chip (for example, a high bandwidth memory (HBM)) connected to a network device can expand the cache space to better meet service requirements in a network. Accordingly, the cache of a network device may be divided into an on-chip cache (LMEM) and an off-chip cache (EM).
In a conventional cache management method, when traffic is detected in the enqueue cache of a network device, or traffic flows into the network device, all caches (for example, off-chip caches) included in the network device are opened. However, network devices in a live network usually receive relatively little traffic over long stretches of time, so managing the cache (for example, the off-chip cache) of a network device in this way does little to reduce network power consumption.
Therefore, a cache management method is needed to reduce network power consumption while ensuring that service requirements are met.
Disclosure of Invention
The application provides a cache management method and equipment; the method helps reduce network power consumption while ensuring that service requirements are met.
In a first aspect, a cache management method is provided, where the method includes: the method comprises the steps that a traffic management device obtains working state parameters, wherein the working state parameters comprise a reference enqueue cache value, a first cache threshold value and a first enqueue cache threshold value, the reference enqueue cache value represents the maximum value occupied by the enqueue cache of the traffic management device in a preset time period, the cache threshold value corresponding to the opened cache space of the traffic management device in the preset time period is the first cache threshold value, and the enqueue cache threshold value corresponding to the opened enqueue cache space is the first enqueue cache threshold value;
the traffic management device determines to open a cache space corresponding to one of N cache thresholds based on the operating state parameter, where the cache space corresponding to any one cache threshold is a cache space included in the traffic management device, the cache spaces corresponding to any two cache thresholds are different, the N cache thresholds include the first cache threshold, and N is an integer greater than 1; the traffic management device opens the determined buffer space.
The reference enqueue cache value does not fluctuate frequently within the preset time period, and therefore reflects well both the service traffic flowing into the traffic management device within the preset time period and the occupancy of the enqueue cache of the traffic management device within that period.
The buffer threshold corresponding to the buffer space opened by the traffic management device in the preset time period is the first buffer threshold, and the enqueue buffer threshold corresponding to the enqueue buffer space opened by the traffic management device is the first enqueue buffer threshold. It is understood that the buffer space corresponding to the first buffer threshold is opened before the preset time period, and the enqueue buffer space corresponding to the first enqueue buffer threshold is opened before the preset time period. In the present application, the starting time of the cache space corresponding to the first cache threshold and the starting time of the enqueue cache space corresponding to the first enqueue cache threshold are not specifically limited.
In the above technical solution, when the traffic management device determines the size of the cache space that should be opened at the current time, it considers the enqueue cache occupancy over a period of time before the current time (i.e., the reference enqueue cache value over the preset time period), the size of the cache space opened before the current time (i.e., the cache space corresponding to the first cache threshold), and the size of the opened enqueue cache space (i.e., the enqueue cache space corresponding to the first enqueue cache threshold). Based on this, the traffic management device determines to open the cache space corresponding to one of the N cache thresholds, so that the cache space it decides to open can meet the service requirement. Further, when the cache space corresponding to the cache threshold opened by the traffic management device is smaller than all the cache space included in the traffic management device, the method helps reduce network power consumption while ensuring that the service requirement is met.
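Purely as an illustrative, non-limiting sketch of the above flow (obtain the operating state parameters, select one of the N cache thresholds, and open the corresponding cache space), the following fragment may be considered; all identifiers (OperatingState, choose_cache_threshold, open_cache_space) and the concrete threshold values are assumptions introduced for this example and are not defined by this application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OperatingState:
    """Operating state parameters over the preset time period (hypothetical names)."""
    reference_enqueue_value: int   # maximum enqueue-cache occupancy in the period, in bytes
    first_cache_threshold: int     # cache threshold of the cache space opened before the period
    first_enqueue_threshold: int   # enqueue cache threshold of the opened enqueue cache space

def choose_cache_threshold(state: OperatingState, cache_thresholds: List[int]) -> int:
    """Placeholder decision: keep the currently opened cache space. Later sketches
    refine this choice based on the traffic change state and the preset condition."""
    return state.first_cache_threshold

def open_cache_space(threshold: int) -> None:
    """Placeholder for the hardware-specific step of enabling the cache space that
    the chosen threshold corresponds to (for example, part of an off-chip HBM)."""
    print(f"opening cache space corresponding to threshold {threshold} bytes")

GB, KB = 1 << 30, 1 << 10
cache_thresholds = [4 * GB, 6 * GB, 8 * GB]        # N = 3, illustrative values only
state = OperatingState(reference_enqueue_value=100 * KB,
                       first_cache_threshold=4 * GB,
                       first_enqueue_threshold=128 * KB)
open_cache_space(choose_cache_threshold(state, cache_thresholds))
```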
With reference to the first aspect, in certain implementation manners of the first aspect, the determining, by the traffic management device, to open a cache space corresponding to one of the N cache thresholds based on the working state parameter includes: the traffic management device determines a change state of the traffic flowing into the traffic management device within the preset time period according to the first enqueue cache threshold value and the reference enqueue cache value; and the traffic management device determines to start a cache space corresponding to one of the N cache thresholds based on the change state of the service traffic and the first cache threshold.
The change state of the service traffic flowing into the traffic management device in the preset time period includes: the service traffic flowing into the traffic management device is in an increasing state, or the service traffic flowing into the traffic management device is in a decreasing state.
In this technical solution, the traffic management device can flexibly determine, according to the change state of the service traffic flowing into it, the size of the cache space to be opened under different traffic change states, which helps reduce network power consumption while ensuring that the service requirement is met.
With reference to the first aspect, in certain implementation manners of the first aspect, the determining, by the traffic management device, to open the cache space corresponding to one of the N cache thresholds based on the change state of the service traffic and the first cache threshold, where the cache space corresponding to the first cache threshold is smaller than all cache spaces included in the traffic management device, includes: under the condition that the traffic flow flowing into the traffic management device is in an increasing state within the preset time period, in response to that a preset condition is met, the traffic management device determines to open a cache space corresponding to a second cache threshold, where the second cache threshold is one of the N cache thresholds, the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device.
The service traffic being in an increasing state can be understood as the service traffic following an increasing trend.
In some implementations, the cache space corresponding to the second cache threshold is smaller than all the cache space included in the traffic management device. In this implementation, the second cache threshold is not the largest of the N cache thresholds. Illustratively, the total cache of the traffic management device is 8 gigabytes (GB, or G for short), the number of cache thresholds of the traffic management device is set to N = 3, and these 3 cache thresholds can be denoted as: cache threshold 1 (4G), cache threshold 2 (6G), and cache threshold 3 (8G). Based on this, when the first cache threshold is cache threshold 1 (4G), the second cache threshold may be cache threshold 2 (6G).
In some implementations, the cache space corresponding to the second cache threshold is equal to all the cache space included in the traffic management device. In this implementation, the second cache threshold is the largest of the N cache thresholds. Illustratively, the total cache of the traffic management device is 8G, the number of cache thresholds is set to N = 3, and the 3 cache thresholds may be denoted as: cache threshold 1 (4G), cache threshold 2 (6G), and cache threshold 3 (8G). Based on this, the second cache threshold may be cache threshold 3 (8G), and the first cache threshold may be cache threshold 1 (4G) or cache threshold 2 (6G).
In the foregoing technical solution, when the traffic flowing into the traffic management device is in an increasing state, the traffic management device determines that a larger cache space (that is, the cache space corresponding to the second cache threshold) needs to be opened in order to better meet the service requirement. Specifically, when the traffic flowing into the traffic management device is on an increasing trend, the service level agreement (SLA) of that traffic starts to degrade, and risks such as transmission below line rate and packet loss arise, so a larger cache space (i.e., the cache space corresponding to the second cache threshold) should be opened to ensure that the service requirement is met.
With reference to the first aspect, in certain implementations of the first aspect, meeting the preset condition includes: in the preset time period, the cache bandwidth corresponding to the first cache threshold is greater than the rate of the service traffic flowing into the traffic management device; or, in the preset time period, the cache occupancy value of the traffic management device is smaller than (preset coefficient × first cache threshold), where the preset coefficient is a number greater than zero and smaller than 1. For example, the preset coefficient may be, but is not limited to, 0.9, 0.85, 0.8, or 0.7. Illustratively, when the preset coefficient is equal to 0.8 and the first cache threshold is equal to 6G, the cache occupancy value of the traffic management device needs to stay below (0.8 × 6) G = 4.8G in the preset time period.
That the cache bandwidth corresponding to the first cache threshold is greater than the rate of the traffic flowing into the traffic management device in the preset time period may be understood as the traffic flowing into the traffic management device being forwarded at line rate in the preset time period. That the cache occupancy value of the traffic management device is smaller than (preset coefficient × first cache threshold) in the preset time period may be understood as there being no packet loss for the service traffic flowing into the traffic management device in the preset time period.
It can be understood that, when the preset condition is satisfied, the SLA of the service traffic flowing into the traffic management device is met within the preset time period.
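As a hedged illustration only, the preset condition described above could be checked as follows; the function name, its parameters, and the way the cache bandwidth, inflow rate, and cache occupancy are sampled are assumptions made for this example.

```python
def preset_condition_met(cache_bandwidth: float,
                         inflow_rate: float,
                         cache_occupancy: float,
                         first_cache_threshold: float,
                         preset_coefficient: float = 0.8) -> bool:
    """Either sub-condition suffices: the cache bandwidth corresponding to the first
    cache threshold exceeds the inflow rate (line-rate forwarding), or the cache
    occupancy stays below preset_coefficient * first_cache_threshold (no packet loss)."""
    assert 0.0 < preset_coefficient < 1.0
    return (cache_bandwidth > inflow_rate) or \
           (cache_occupancy < preset_coefficient * first_cache_threshold)

# Example from the text: coefficient 0.8 and a 6G first cache threshold require the
# cache occupancy to stay below 4.8G during the preset time period.
GB = 1 << 30
print(preset_condition_met(cache_bandwidth=0.0, inflow_rate=1.0,
                           cache_occupancy=4.5 * GB,
                           first_cache_threshold=6 * GB))   # True
```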
With reference to the first aspect, in certain implementations of the first aspect, the reference enqueue cache value is greater than the first enqueue cache threshold.
That the reference enqueue cache value is greater than the first enqueue cache threshold can be understood as the traffic flowing into the traffic management device being in an increasing state from the time the cache space corresponding to the first cache threshold was opened through the preset time period.
With reference to the first aspect, in certain implementations of the first aspect, the second caching threshold and the first caching threshold are two caching thresholds of the N caching thresholds, where the caching thresholds are adjacent in size.
Illustratively, the total cache of the traffic management device is 8G, the number of cache thresholds is set to N = 3, and the 3 cache thresholds may be denoted as: cache threshold 1 (4G), cache threshold 2 (6G), and cache threshold 3 (8G). Based on this, when the first cache threshold is cache threshold 1 (4G), the second cache threshold may be cache threshold 2 (6G). Alternatively, when the first cache threshold is cache threshold 2 (6G), the second cache threshold may be cache threshold 3 (8G); in this case, the first cache threshold cannot be cache threshold 1 (4G).
In the above technical solution, the second cache threshold and the first cache threshold are two cache thresholds that are adjacent in size among the N cache thresholds. When the cache space corresponding to the second cache threshold is smaller than all the cache space included in the traffic management device, the method helps reduce network power consumption while ensuring that the service requirement is met.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: under the condition that the traffic flow flowing into the traffic management device is in an increasing state within the preset time period, in response to that the preset condition is not met, the traffic management device determines to start a cache space corresponding to a third cache threshold, where the cache space corresponding to the third cache threshold is greater than or equal to the cache space corresponding to the second cache threshold, and the cache space corresponding to the third cache threshold is less than or equal to all cache spaces included in the traffic management device.
It may be appreciated that, in some implementations, the cache space corresponding to the third cache threshold is less than all of the cache spaces included by the traffic management device. In other implementations, the cache space corresponding to the third cache threshold is equal to all cache spaces included in the traffic management device.
In the above technical solution, the traffic currently flowing into the traffic management device is in an increasing state and the preset condition is not satisfied; in order to better meet the service requirement, the traffic management device determines that a larger cache space (i.e., the cache space corresponding to the third cache threshold) needs to be opened, so as to ensure that the preset condition can be met after the cache space corresponding to the third cache threshold is opened. When the opened cache space corresponding to the third cache threshold is smaller than all the cache space included in the traffic management device, the method helps reduce network power consumption while ensuring that the service requirement is met.
With reference to the first aspect, in certain implementation manners of the first aspect, the determining, by the traffic management device, to open the cache space corresponding to one of the N cache thresholds based on the change state of the service traffic and the first cache threshold, where the cache space corresponding to the first cache threshold is greater than zero and is less than or equal to all cache spaces included in the traffic management device, includes:
under the condition that the traffic flow flowing into the traffic management device is in a reduced state within the preset time period, in response to that a preset condition is met, the traffic management device determines to open a cache space corresponding to a third cache threshold, where the cache space corresponding to the third cache threshold is smaller than the cache space corresponding to the first cache threshold, and the cache space corresponding to the third cache threshold is larger than zero.
The service traffic being in a decreasing state can also be understood as the service traffic following a decreasing trend.
With reference to the first aspect, in certain implementations of the first aspect, the reference enqueue cache value is less than the first enqueue cache threshold value, and the reference enqueue cache value is less than a second enqueue cache threshold value, where the second enqueue cache threshold value is less than the first enqueue cache threshold value.
It can be understood that the enqueue cache threshold corresponding to the currently opened enqueue cache space of the traffic management device is the first enqueue cache threshold.
In the above technical solution, the traffic currently flowing into the traffic management device is in a decreasing state, the preset condition is satisfied, and the reference enqueue cache value is smaller than the second enqueue cache threshold, which is smaller than the first enqueue cache threshold. Based on this, in order to reduce network power consumption while ensuring that the service requirement is met, the traffic management device determines that a smaller cache space (i.e., the cache space corresponding to the third cache threshold) needs to be opened. This helps reduce network power consumption while ensuring that the service requirement is met.
With reference to the first aspect, in certain implementations of the first aspect, the third caching threshold and the first caching threshold are two caching thresholds of the N caching thresholds, where the caching thresholds are adjacent in size.
Illustratively, the total cache of the traffic management device is 8G, the number of cache thresholds is set to N = 3, and the 3 cache thresholds may be denoted as: cache threshold 1 (4G), cache threshold 2 (6G), and cache threshold 3 (8G). Based on this, when the first cache threshold is cache threshold 2 (6G), the third cache threshold may be cache threshold 1 (4G). Alternatively, when the first cache threshold is cache threshold 3 (8G), the third cache threshold may be cache threshold 2 (6G); in this case, the first cache threshold cannot be cache threshold 1 (4G).
In the above technical solution, the third cache threshold and the first cache threshold are two cache thresholds that are adjacent in size among the N cache thresholds. When the cache space corresponding to the third cache threshold is smaller than all the cache space included in the traffic management device, the method helps reduce network power consumption while ensuring that the service requirement is met.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: under the condition that the traffic flow flowing into the traffic management device is in a reduced state within the preset time period, in response to that the preset condition is not satisfied, the traffic management device determines to open a cache space corresponding to a second cache threshold, where the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device.
In the above technical solution, the traffic currently flowing into the traffic management device is in a decreasing state and the preset condition is not satisfied; in order to meet the service requirement, the traffic management device determines that a larger cache space (i.e., the cache space corresponding to the second cache threshold) needs to be opened. When the cache space corresponding to the second cache threshold is smaller than all the cache space included in the traffic management device, the method helps reduce network power consumption while ensuring that the service requirement is met.
In a second aspect, a traffic management device is provided, including an obtaining unit, a determining unit, and a processing unit. The obtaining unit is configured to obtain operating state parameters, where the operating state parameters include a reference enqueue cache value, a first cache threshold, and a first enqueue cache threshold; the reference enqueue cache value represents the maximum enqueue cache occupancy of the traffic management device in a preset time period, the cache threshold corresponding to the cache space opened by the traffic management device in the preset time period is the first cache threshold, and the enqueue cache threshold corresponding to the opened enqueue cache space is the first enqueue cache threshold. The determining unit is configured to determine, based on the operating state parameters, to open the cache space corresponding to one of N cache thresholds, where the cache space corresponding to any one cache threshold is cache space included in the traffic management device, the cache spaces corresponding to any two cache thresholds are different, the N cache thresholds include the first cache threshold, and N is an integer greater than 1. The processing unit is configured to open the determined cache space.
With reference to the second aspect, in some implementations of the second aspect, the determining unit is further configured to: determining a change state of the service traffic flowing into the traffic management device within the preset time period according to the first enqueue cache threshold and the reference enqueue cache value; and determining to open a cache space corresponding to one of the N cache thresholds based on the change state of the service flow and the first cache threshold.
With reference to the second aspect, in some implementations of the second aspect, the cache space corresponding to the first cache threshold is smaller than all cache spaces included in the traffic management device, and the determining unit is further configured to: under the condition that the traffic flow flowing into the traffic management device is in an increasing state within the preset time period, in response to that a preset condition is met, the traffic management device determines to open a cache space corresponding to a second cache threshold, where the second cache threshold is one of the N cache thresholds, the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device.
With reference to the second aspect, in certain implementations of the second aspect, the reference enqueue cache value is greater than the first enqueue cache threshold.
With reference to the second aspect, in some implementations of the second aspect, the second caching threshold and the first caching threshold are two caching thresholds of the N caching thresholds that are adjacent to the caching threshold size.
With reference to the second aspect, in some implementations of the second aspect, the determining unit is further configured to:
under the condition that the traffic flow flowing into the traffic management device is in an increasing state within the preset time period, in response to that the preset condition is not met, the traffic management device determines to start a cache space corresponding to a third cache threshold, where the cache space corresponding to the third cache threshold is greater than or equal to the cache space corresponding to the second cache threshold, and the cache space corresponding to the third cache threshold is less than or equal to all cache spaces included in the traffic management device.
With reference to the second aspect, in some implementation manners of the second aspect, the cache space corresponding to the first cache threshold is greater than zero and less than or equal to all cache spaces included in the traffic management device, and the determining unit is further configured to: under the condition that the traffic flow flowing into the traffic management device is in a reduced state within the preset time period, in response to that a preset condition is met, the traffic management device determines to open a cache space corresponding to a third cache threshold, where the cache space corresponding to the third cache threshold is smaller than the cache space corresponding to the first cache threshold, and the cache space corresponding to the third cache threshold is larger than zero.
With reference to the second aspect, in some implementations of the second aspect, the reference enqueue cache value is less than the first enqueue cache threshold value, and the reference enqueue cache value is less than a second enqueue cache threshold value, where the second enqueue cache threshold value is less than the first enqueue cache threshold value.
With reference to the second aspect, in certain implementations of the second aspect, the third caching threshold and the first caching threshold are two caching thresholds adjacent to a caching threshold size in the N caching thresholds.
With reference to the second aspect, in some implementation manners of the second aspect, in a case that the service traffic flowing into the traffic management device is in a reduced state within the preset time period, in response to that the preset condition is not met, the traffic management device determines to open a cache space corresponding to a second cache threshold, where the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device.
With reference to the second aspect, in certain implementations of the second aspect, the meeting the preset condition includes: in the preset time period, the cache bandwidth corresponding to the first cache threshold is larger than the flow rate of the service flow flowing into the flow management equipment; or, in the preset time period, the cache occupancy value of the traffic management device is smaller than (preset coefficient × first cache threshold), where the preset coefficient is a number greater than zero and smaller than 1.
In a third aspect, the present application provides a traffic management device having the functionality to implement the method in any one of the possible implementation manners of the first aspect and the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the above functions.
In a fourth aspect, the present application provides a network device comprising at least one processor and a communication interface. The at least one processor is configured to execute a computer program or instructions to enable the network device to implement the method of any one of the possible implementations of the first aspect and the first aspect.
Optionally, the network device further comprises at least one memory coupled with the at least one processor, the computer program or instructions being stored in the at least one memory. Wherein the memory may be integrated with the processor or provided separately from the processor.
In one implementation, the network device is a chip or system-on-a-chip. When the network device is a chip or a system of chips, the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or related circuit on the chip or the system of chips, and the like. A processor may also be embodied as a processing circuit or a logic circuit.
In another implementation, the network device is a chip or system of chips configured in the network device.
Alternatively, the transceiver may be a transceiver circuit. Alternatively, the input/output interface may be an input/output circuit.
In a fifth aspect, a computer-readable storage medium is provided for storing a computer program comprising instructions for performing the method of the first aspect above and any possible implementation manner of the first aspect above.
In a sixth aspect, a chip system is provided, comprising at least one processor and an interface; the at least one processor is configured to invoke and execute a computer program to cause the chip system to execute the instructions of the method in the first aspect and any possible implementation manner of the first aspect.
The chip system may be a System On Chip (SOC), a baseband chip, and the like, where the baseband chip may include a processor, a channel encoder, a digital signal processor, a modem, an interface module, and the like.
Drawings
Fig. 1 is a schematic structural diagram of a network device 100 suitable for use in embodiments of the present application.
Fig. 2 is a schematic flow chart of a cache management method 200 according to an embodiment of the present application.
Fig. 3 is a schematic timing diagram provided in an embodiment of the present application.
Fig. 4 is an exemplary diagram of a reference enqueue buffer value provided in an embodiment of the present application.
Fig. 5 is a schematic flow chart of a cache management method 500 according to an embodiment of the present application.
Fig. 6 is a schematic flow chart of a cache management method 600 according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a traffic management device 700 according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The term "at least one" in the embodiments of the present application means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
And, unless specifically stated otherwise, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing between a plurality of objects, and do not limit the order, sequence, priority, or importance of the plurality of objects.
Fig. 1 is a schematic structural diagram of a network device 100 suitable for use in embodiments of the present application. As shown in fig. 1, the network device 100 includes an ingress interface 110, a traffic management device 120 and an egress interface 130, and the functions of the modules included in the network device 100 are described in detail as follows:
the ingress interface 110 is configured to receive packet traffic (also referred to as service traffic) in the network and forward the packet traffic to the traffic management device 120.
The traffic management device 120 is configured to monitor and store the packet traffic flowing into the network device 100. The traffic management device 120 includes an enqueue cache unit 121 and a cache unit 122. The cache unit 122 and the enqueue cache unit 121 may be connected in series. The packet traffic flowing into the traffic management device 120 first enters the enqueue cache unit 121; the enqueue cache unit 121 performs processing such as packet dropping and rate limiting on the received packet traffic according to the current network state, and sends the processed packet traffic to the cache unit 122. After receiving the packet traffic sent from the enqueue cache unit 121, the cache unit 122 stores the packet traffic. Optionally, the traffic management device 120 may be configured to monitor a packet received by the ingress interface 110 and obtain information carried in the packet. For example, the traffic management device 120 may obtain at least one of the following information: an explicit congestion notification (ECN), a type of service (ToS), a class of service (CoS), a source/destination Internet Protocol (IP) address of the packet, and the like.
Optionally, the traffic management device 120 may also be configured to monitor the egress interface 130 and obtain sending information of the egress interface 130. For example, the traffic management device 120 may obtain at least one of the following information: the length of the transmit queue of each egress port of the network device 100 on the egress interface 130, the average delay of the transmit queue of each egress port, and the like.
The specific form of the traffic management device 120 is not limited. In fig. 1 above, the traffic management device 120 is described as a module in the network device 100. Alternatively, the traffic management device 120 may be a standalone device. That is, the traffic management device 120 includes, but is not limited to: a chip, a module in a network device, and the like. For example, when the traffic management device 120 is a module in a network device, the traffic management device 120 may specifically be a traffic management (TM) module.
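For illustration of the serial connection between the enqueue cache unit 121 and the cache unit 122 described above (not an implementation of the TM module), a simplified structural sketch follows; all class and method names are invented for this example.

```python
from collections import deque

class EnqueueCacheUnit:
    """Receives packets first; stands in for the drop/rate-limit handling of unit 121."""
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.occupied = 0

    def admit(self, packet_len: int) -> bool:
        if self.occupied + packet_len > self.capacity:
            return False                    # simplified drop decision
        self.occupied += packet_len
        return True

class CacheUnit:
    """Stores packet traffic handed over by the enqueue cache unit (unit 122)."""
    def __init__(self):
        self.stored = deque()

    def store(self, packet_len: int) -> None:
        self.stored.append(packet_len)

class TrafficManager:
    """Enqueue cache unit and cache unit connected in series, as described for FIG. 1."""
    def __init__(self, enqueue_capacity_bytes: int):
        self.enqueue_unit = EnqueueCacheUnit(enqueue_capacity_bytes)
        self.cache_unit = CacheUnit()

    def on_ingress(self, packet_len: int) -> None:
        if self.enqueue_unit.admit(packet_len):
            self.cache_unit.store(packet_len)

tm = TrafficManager(enqueue_capacity_bytes=512 * 1024)
tm.on_ingress(1500)
```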
It should be understood that fig. 1 is only an illustration and does not set any limit to the structures of the network device 100 and the traffic management device 120 applicable to the embodiments of the present application.
Next, a cache management method provided in an embodiment of the present application is described in detail with reference to fig. 2 to 6.
Fig. 2 is a schematic flowchart of a cache management method 200 according to an embodiment of the present application.
As shown in fig. 2, the method 200 includes steps 210 to 230. The method 200 may be applied, but not limited to, the network device 100 shown in fig. 1. When the method 200 is applied to the network device 100 shown in fig. 1, it may be the traffic management device 120 in the network device 100 that specifically performs the method 200. Next, steps 210 to 230 are described in detail.
Step 210, the traffic management device obtains a working state parameter, where the working state parameter includes a reference enqueuing buffer value, a first buffer threshold value and a first enqueuing buffer threshold value, the reference enqueuing buffer value indicates a maximum value occupied by an enqueuing buffer of the traffic management device in a preset time period, a buffer threshold value corresponding to a buffer space opened by the traffic management device in the preset time period is the first buffer threshold value, and an enqueuing buffer threshold value corresponding to the opened enqueuing buffer space is the first enqueuing buffer threshold value.
Optionally, before step 210, the traffic management device may further perform the following operations: the traffic management device performs threshold division on the cache size to obtain N cache thresholds, where the cache spaces corresponding to any two of the N cache thresholds are different, the N cache thresholds include the first cache threshold, and N is an integer greater than 1; the traffic management device performs threshold division on the enqueue cache size to obtain M enqueue cache thresholds, where the enqueue cache spaces corresponding to any two of the M enqueue cache thresholds are different, the M enqueue cache thresholds include the first enqueue cache threshold, and M is an integer greater than 1.
The above threshold division may be performed by the traffic management device at initial configuration time. In some implementations, N and M may be equal. For example, when N = M = 3, the threshold division performed by the traffic management device yields 3 cache thresholds and 3 enqueue cache thresholds. In other implementations, N and M may not be equal. For example, when N = 4 and M = 3, the threshold division performed by the traffic management device yields 4 cache thresholds and 3 enqueue cache thresholds.
In some implementations, the cache space corresponding to the k-th cache threshold of the N cache thresholds increases as k (k = 1, 2, 3, …, N) increases. That is, the cache space corresponding to the (k+1)-th cache threshold (k+1 ≤ N) is larger than the cache space corresponding to the k-th cache threshold. For example, for convenience of description, the N cache thresholds are denoted as Buffer_Dyn_Open_Size k (k = 1, 2, 3, …, N), where Buffer_Dyn_Open_Size k represents the k-th cache threshold, and the total cache space corresponding to the N cache thresholds is 16G. In this case, when N is equal to 2, Buffer_Dyn_Open_Size 2 may correspond to a cache space of 16G, and Buffer_Dyn_Open_Size 1 may correspond to a cache space of 8G. In other implementations, the cache space corresponding to the k-th cache threshold of the N cache thresholds decreases as k (k = 1, 2, 3, …, N) increases. In this case, the cache space corresponding to the k-th cache threshold is larger than the cache space corresponding to the (k+1)-th cache threshold (k+1 ≤ N). For example, with the N cache thresholds denoted as Buffer_Dyn_Open_Size k (k = 1, 2, 3, …, N) as above and the total cache space corresponding to the N cache thresholds being 16G, when N is equal to 2, Buffer_Dyn_Open_Size 1 may correspond to 16G and Buffer_Dyn_Open_Size 2 may correspond to 8G.
Based on the manner of obtaining N cache thresholds by dividing the cache threshold, the enqueue cache may be further subjected to threshold division to obtain M enqueue cache thresholds, where M is an integer greater than 1. Wherein, N and M may be equal, and N and M may not be equal.
For convenience of description, the following description takes as an example the case where the cache space corresponding to the k-th cache threshold of the N cache thresholds increases as k (k = 1, 2, 3, …, N) increases, and the enqueue cache space corresponding to the p-th enqueue cache threshold of the M enqueue cache thresholds increases as p (p = 1, 2, 3, …, M) increases, where k is an integer with 1 ≤ k ≤ N, and p is an integer with 1 ≤ p ≤ M.
In this embodiment, specific values of the first cache threshold and the first enqueue cache threshold are not limited. In one example, the first cache threshold may be the k-th cache threshold of the N cache thresholds, and the first enqueue cache threshold may be the p-th enqueue cache threshold of the M enqueue cache thresholds, where p is equal to k. In another example, the first cache threshold may be the k-th cache threshold of the N cache thresholds, and the first enqueue cache threshold may be the p-th enqueue cache threshold of the M enqueue cache thresholds, where k is an integer with 1 ≤ k ≤ N, p is an integer with 1 ≤ p ≤ M, and p is not equal to k.
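A minimal sketch of such a threshold division, assuming the increasing ordering adopted above; the concrete values are illustrative only and reuse the 4G/6G/8G and 64KB-256KB examples appearing elsewhere in this description.

```python
GB = 1 << 30
KB = 1 << 10

# N = 3 cache thresholds, increasing with k (Buffer_Dyn_Open_Size 1..N in the text).
cache_thresholds = [4 * GB, 6 * GB, 8 * GB]                          # N = 3
# M = 4 enqueue cache thresholds, increasing with p.
enqueue_cache_thresholds = [64 * KB, 128 * KB, 192 * KB, 256 * KB]   # M = 4

# Any two thresholds map to different cache spaces, and sizes are strictly increasing.
assert all(a < b for a, b in zip(cache_thresholds, cache_thresholds[1:]))
assert all(a < b for a, b in zip(enqueue_cache_thresholds, enqueue_cache_thresholds[1:]))
```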
In step 210, the cache threshold corresponding to the cache space opened by the traffic management device in the preset time period is the first cache threshold, and the enqueue cache threshold corresponding to the opened enqueue cache space is the first enqueue cache threshold. It can be understood that the cache space corresponding to the first cache threshold is already opened at a time before the preset time period, and the enqueue cache space corresponding to the first enqueue cache threshold is already opened at a time before the preset time period. In the present application, the time for opening the cache space corresponding to the first cache threshold and the time for opening the enqueue cache space corresponding to the first enqueue cache threshold are not specifically limited. In some implementations, the time to open the cache space corresponding to the first cache threshold is the same as the time to open the enqueue cache space corresponding to the first enqueue cache threshold. In other implementations, the time at which the cache space corresponding to the first cache threshold is opened is different from the time at which the enqueue cache space corresponding to the first enqueue cache threshold is opened. For convenience of description, the following description will take "the time when the buffer space corresponding to the first buffer threshold is opened is the same as the time when the enqueue buffer space corresponding to the first enqueue buffer threshold is opened, and this time is denoted as the first time" as an example. Based on this, the preset time period may be understood as a time period after the first time. The time length of the preset time period is not particularly limited in the present application. For example, the time length of the preset time period includes, but is not limited to: 0.5s,1s,2s,3s,4s,5s, 10s, or the like. Illustratively, fig. 3 shows a relationship between the first time and a preset time period. In fig. 3, the preset time period is a period of time after the first time, the starting time of the preset time period is the second time, the ending time of the preset time period is the third time, and the difference between the third time and the second time is equal to the specific time length of the preset time period.
In step 210, the reference enqueue cache value indicates the maximum enqueue cache occupancy of the traffic management device in the preset time period. It can be understood that the traffic flowing into the traffic management device changes smoothly within the preset time period, so the reference enqueue cache value reflects well how the service traffic flowing into the traffic management device changes within the preset time period, and also reflects well the occupancy of the enqueue cache of the traffic management device within that period. For example, fig. 4 shows how the ratio of the occupied enqueue cache in the traffic management device to the total enqueue cache changes over time. As shown in fig. 4, the abscissa represents time, in milliseconds (ms), and the ordinate represents the ratio of the occupied enqueue cache in the traffic management device to the total enqueue cache. In fig. 4, 0 ms to 2000 ms corresponds to the enqueue cache occupancy in the initial state of the traffic flowing into the traffic management device, and 2000 ms to 4000 ms corresponds to the enqueue cache occupancy while the traffic is flowing. In this embodiment of the application, the reference enqueue cache value may be determined based on the ratio of the occupied enqueue cache in the traffic management device to the total enqueue cache in the 2000 ms to 4000 ms period shown in fig. 4. For example, the preset time period may correspond to the 2000 ms to 3000 ms period; when the total enqueue cache of a traffic management device is 512 kilobytes (KB), it can be seen from fig. 4 that the maximum ratio of the occupied enqueue cache to the total enqueue cache in that period is 0.5, and the reference enqueue cache value is therefore equal to (512 × 0.5) KB = 256 KB. Optionally, the reference enqueue cache value may also be determined based on the ratio of the occupied enqueue cache in the traffic management device to the total enqueue cache in the 0 ms to 2000 ms period shown in fig. 4.
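To illustrate how the reference enqueue cache value could be derived from periodically sampled occupancy ratios, the following hedged sketch may be considered; the sampling mechanism and the sample values are assumptions made for this example.

```python
def reference_enqueue_value(occupancy_ratios, total_enqueue_cache_bytes):
    """Maximum enqueue cache occupancy over the preset time period, derived from
    sampled occupancy ratios (occupied enqueue cache / total enqueue cache)."""
    return max(occupancy_ratios) * total_enqueue_cache_bytes

# Example from the text: a 512 KB total enqueue cache and a peak ratio of 0.5 in the
# 2000 ms - 3000 ms window give a reference enqueue cache value of 256 KB.
samples = [0.32, 0.41, 0.50, 0.47]                   # hypothetical samples in the period
print(reference_enqueue_value(samples, 512 * 1024))  # 262144.0 bytes, i.e. 256 KB
```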
In this embodiment of the application, in some implementations, the cache of the traffic management device may refer to the on-chip cache (LMEM) of the traffic management device. For example, the on-chip cache may be an on-chip embedded static random-access memory (eSRAM). In other implementations, the cache of the traffic management device may refer to the off-chip cache (EM) of the traffic management device. For example, the off-chip cache may be a high bandwidth memory (HBM). It should be understood that when the cache of the traffic management device refers to the on-chip cache, the enqueue cache of the traffic management device also refers to the on-chip cache; when the cache of the traffic management device refers to the off-chip cache, the enqueue cache of the traffic management device also refers to the off-chip cache.
Step 220, the traffic management device determines to start a cache space corresponding to one of N cache thresholds based on the operating state parameter, where the cache space corresponding to any one cache threshold is a cache space included by the traffic management device, the cache spaces corresponding to any two cache thresholds are different, the N cache thresholds include a first cache threshold, and N is an integer greater than 1.
In step 220, the determining, by the traffic management device, to open the cache space corresponding to one of the N cache thresholds based on the operating state parameter includes:
the traffic management device determines the change state of the service traffic flowing into the traffic management device within the preset time period according to the first enqueue cache threshold and the reference enqueue cache value; and the traffic management device determines, based on the change state of the service traffic and the first cache threshold, to open the cache space corresponding to one of the N cache thresholds.
In the existing network, the changing state of the traffic flow flowing into the flow management device includes: an increasing state and a decreasing state. The traffic flow change state flowing into the traffic management device is an increasing state, and it can also be understood that the traffic flow change state flowing into the traffic management device is an increasing trend. The traffic flow change state flowing into the traffic management device is a decreasing state, which can also be understood as a decreasing trend.
Based on this, in the embodiment of the present application, the traffic management device determines, based on the change state of the service traffic and the first cache threshold, to open the cache space corresponding to one cache threshold of the N cache thresholds in two ways, which are a first way and a second way, respectively. The first and second modes are described in detail below.
The first manner is as follows:
the method for determining the cache space corresponding to one of the N cache threshold values by the traffic management device based on the change state of the service traffic and the first cache threshold value includes: under the condition that the traffic flow flowing into the traffic management device is in an increasing state within a preset time period, in response to that a preset condition is met, the traffic management device determines to start a cache space corresponding to a second cache threshold, where the second cache threshold is one of the N cache thresholds, the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device.
The service traffic entering the traffic management device being in an increasing state in the preset time period means that the reference enqueue cache value is greater than the first enqueue cache threshold. In other words, when the reference enqueue cache value is greater than the first enqueue cache threshold, the traffic flowing into the traffic management device is in an increasing state within the preset time period.
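The change-state determination can be expressed as a simple comparison, sketched below; the function name is illustrative, and the handling of exact equality is an assumption not discussed in this description.

```python
def traffic_change_state(reference_enqueue_value: int, first_enqueue_threshold: int) -> str:
    """Increasing if the reference enqueue cache value exceeds the enqueue cache
    threshold of the currently opened enqueue cache space; decreasing if it is below."""
    if reference_enqueue_value > first_enqueue_threshold:
        return "increasing"
    if reference_enqueue_value < first_enqueue_threshold:
        return "decreasing"
    return "unchanged"   # equality not discussed in the text; treated here as unchanged

KB = 1 << 10
print(traffic_change_state(200 * KB, 128 * KB))   # increasing
print(traffic_change_state(50 * KB, 128 * KB))    # decreasing
```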
In the first manner, the second cache threshold and the first cache threshold are two cache thresholds that are adjacent in size among the N cache thresholds. Illustratively, the total cache of the traffic management device is 8G, the number of cache thresholds is set to N = 3, and the 3 cache thresholds may be denoted as: cache threshold 1 (4G), cache threshold 2 (6G), and cache threshold 3 (8G). Based on this, when the first cache threshold is cache threshold 1 (4G), the second cache threshold may be cache threshold 2 (6G). Alternatively, when the first cache threshold is cache threshold 2 (6G), the second cache threshold may be cache threshold 3 (8G); in this case, the first cache threshold cannot be cache threshold 1 (4G).
Optionally, when the service traffic flowing into the traffic management device is in an increasing state within the preset time period, in response to the preset condition not being satisfied, the traffic management device determines to open the cache space corresponding to a third cache threshold, where the cache space corresponding to the third cache threshold is greater than or equal to the cache space corresponding to the second cache threshold, and the cache space corresponding to the third cache threshold is less than or equal to all the cache space included in the traffic management device. The second cache threshold and the first cache threshold are two cache thresholds that are adjacent in size among the N cache thresholds. Based on this implementation, in one example, the cache space corresponding to the third cache threshold is greater than the cache space corresponding to the second cache threshold; in this case, the third cache threshold and the first cache threshold are not two cache thresholds that are adjacent in size among the N cache thresholds. In another example, the cache space corresponding to the third cache threshold is equal to the cache space corresponding to the second cache threshold; in this case, the third cache threshold is the second cache threshold, and the third cache threshold and the first cache threshold are two cache thresholds that are adjacent in size among the N cache thresholds.
For example, the cache space corresponding to the third cache threshold is larger than the cache space corresponding to the second cache threshold: the total cache of the traffic management device is 8G, the number of cache thresholds is set to N = 4, and these 4 cache thresholds can be denoted as: cache threshold 1 (2G), cache threshold 2 (4G), cache threshold 3 (6G), and cache threshold 4 (8G). Based on this, when the first cache threshold is cache threshold 1 (2G) and the second cache threshold is cache threshold 2 (4G), the third cache threshold may be cache threshold 3 (6G) or cache threshold 4 (8G). That is, in this implementation, the third cache threshold and the first cache threshold are not two cache thresholds that are adjacent in size among the N cache thresholds.
The above first manner describes a process of determining to open a cache space based on the cache management method provided in this embodiment of the application when the traffic flowing into the traffic management device is in an increasing state within the preset time period. A specific embodiment of the method described in the first manner is described in detail below with reference to fig. 5, and is not repeated here.
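Before turning to the second manner, the first manner can be summarized as the following hedged sketch, which assumes the increasing threshold ordering used earlier; treating "preset condition not met" as a jump straight to the largest threshold is only one allowed choice of the third cache threshold, not the only one.

```python
from typing import List

def choose_threshold_on_increase(cache_thresholds: List[int],
                                 first_cache_threshold: int,
                                 preset_condition_met: bool) -> int:
    """Manner one: traffic is increasing over the preset time period.
    - Preset condition met: step up to the adjacent larger cache threshold
      (the second cache threshold).
    - Preset condition not met: step further up; jumping straight to the largest
      threshold is used here as one allowed choice of the third cache threshold."""
    idx = cache_thresholds.index(first_cache_threshold)
    if preset_condition_met:
        return cache_thresholds[min(idx + 1, len(cache_thresholds) - 1)]
    return cache_thresholds[-1]

GB = 1 << 30
thresholds = [4 * GB, 6 * GB, 8 * GB]
print(choose_threshold_on_increase(thresholds, 4 * GB, True) // GB)    # 6
print(choose_threshold_on_increase(thresholds, 4 * GB, False) // GB)   # 8
```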
The second manner is as follows:
the method for determining, by the traffic management device, to open the cache space corresponding to one of the N cache thresholds based on the change state of the traffic flow and the first cache threshold includes:
under the condition that the traffic flow flowing into the traffic management device is in a reduced state within a preset time period, in response to the preset condition being met, the traffic management device determines to start a cache space corresponding to a third cache threshold, wherein the cache space corresponding to the third cache threshold is smaller than the cache space corresponding to the first cache threshold, and the cache space corresponding to the third cache threshold is larger than zero.
The service flow entering the flow management device in the preset time period is in a reduced state, namely the reference enqueuing buffer value is smaller than the first enqueuing buffer threshold value. In other words, when the reference enqueuing buffer value is smaller than the first enqueuing buffer threshold value, it indicates that the traffic flow flowing into the traffic management device is in a reduced state within a preset time period.
In the second mode, the reference enqueue cache value is smaller than the first enqueue cache threshold, and the reference enqueue cache value is smaller than the second enqueue cache threshold, where the second enqueue cache threshold is smaller than the first enqueue cache threshold. That is to say, in the scheme described in the above mode two, the reference enqueue cache value needs to be smaller than not only the first enqueue cache threshold but also the second enqueue cache threshold, so that the traffic management device may determine to open the cache space corresponding to the third cache threshold.
The following example illustrates the value relationship among the reference enqueue cache value, the first enqueue cache threshold, and the second enqueue cache threshold. Suppose a traffic management device has M = 4 enqueue cache thresholds: enqueue cache threshold 1 (64KB), enqueue cache threshold 2 (128KB), enqueue cache threshold 3 (192KB), and enqueue cache threshold 4 (256KB). On this basis, when the reference enqueue cache value equals 50KB, the first enqueue cache threshold may be 128KB and the second enqueue cache threshold may be 64KB. Similarly, when the reference enqueue cache value equals 100KB, the first enqueue cache threshold may be 192KB and the second enqueue cache threshold may be 128KB or 64KB.
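This relationship can be sketched in Python as follows (illustrative only; the function name traffic_decreasing and the printed example values are assumptions, and the enqueue threshold values are those of the example above):

    def traffic_decreasing(reference_kb, first_enq_kb, second_enq_kb):
        # Manner two: the traffic counts as decreasing, allowing a smaller cache
        # threshold to be opened, only when the reference enqueue cache value is
        # below both the first enqueue threshold and the smaller second threshold.
        assert second_enq_kb < first_enq_kb
        return reference_kb < first_enq_kb and reference_kb < second_enq_kb

    print(traffic_decreasing(50, 128, 64))    # True: 50KB is below 128KB and 64KB
    print(traffic_decreasing(100, 192, 128))  # True: 100KB is below 192KB and 128KB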
In manner two, the third cache threshold and the first cache threshold are two cache thresholds that are adjacent in size among the N cache thresholds. Illustratively, the total cache of the traffic management device is 8G, and N = 3 cache thresholds are set, respectively denoted as: cache threshold 1 (4G), cache threshold 2 (6G), and cache threshold 3 (8G). On this basis, when the first cache threshold is cache threshold 2 (6G), the third cache threshold may be cache threshold 1 (4G). Alternatively, when the first cache threshold is cache threshold 3 (8G), the third cache threshold may be cache threshold 2 (6G).
Optionally, under the condition that the traffic flow flowing into the traffic management device is in a reduced state within a preset time period, in response to that the preset condition is not satisfied, the traffic management device determines to open a cache space corresponding to the second cache threshold, where the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device. The relationship between the second caching threshold and the first caching threshold in the N caching thresholds is not specifically limited. In one example, the second caching threshold and the first caching threshold are not two caching thresholds of the N caching thresholds that are adjacent in size to the caching threshold. For example, the total buffer of the traffic management device is 8G, the buffer threshold N =3 of the traffic management device is set, and the 3 buffer thresholds can be respectively recorded as: buffer threshold 1 (4G), buffer threshold 2 (6G), buffer threshold 3 (8G). Based on this, when the first buffering threshold is the buffering threshold 1 (4G), the second buffering threshold may be the buffering threshold 3 (8G). In another example, the second caching threshold and the first caching threshold are two caching thresholds of the N caching thresholds that are adjacent in size to the caching threshold. For example, the total buffer of the traffic management device is 8G, the buffer threshold N =3 of the traffic management device is set, and these 3 buffer thresholds can be respectively recorded as: buffer threshold 1 (4G), buffer threshold 2 (6G), buffer threshold 3 (8G). Based on this, when the first caching threshold is the caching threshold 1 (4G), the second caching threshold may be the caching threshold 2 (6G). Alternatively, when the first caching threshold is the caching threshold 2 (6G), the second caching threshold may be the caching threshold 3 (8G).
Manner two above describes the process of determining which cache space to open, based on the cache management method provided in this embodiment of the application, when the traffic flowing into the traffic management device is in a reduced state within the preset time period. A specific embodiment of manner two is described in detail below with reference to fig. 6 and is not repeated here.
In manner one and manner two above, that the preset condition is met includes: in the preset time period, the cache bandwidth corresponding to the first cache threshold is greater than the rate of the service traffic flowing into the traffic management device; or, in the preset time period, the cache occupancy value of the traffic management device is less than (the preset coefficient × the first cache threshold), where the preset coefficient is a number greater than zero and less than 1.
The preset coefficient is a number greater than zero and less than 1; exemplary values may be, but are not limited to, 0.9, 0.85, 0.8, or 0.7. For example, when the preset coefficient equals 0.8 and the first cache threshold equals 6G, the cache occupancy value of the traffic management device is required to be less than (0.8 × 6) G, i.e., 4.8G, in the preset time period.
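One possible reading of the preset condition is sketched below in Python (illustrative only; the function name preset_condition_met, the parameter names, and the numbers in the example call are assumptions, not definitions from this application):

    def preset_condition_met(cache_bandwidth_gbps, inflow_rate_gbps,
                             cache_occupancy_gb, first_threshold_gb, coeff=0.8):
        # Condition 1: the cache bandwidth corresponding to the first cache threshold
        # exceeds the rate of the traffic flowing into the traffic management device.
        cond1 = cache_bandwidth_gbps > inflow_rate_gbps
        # Condition 2: the cache occupancy stays below coeff * first cache threshold,
        # where 0 < coeff < 1 (e.g. 0.7, 0.8, 0.85 or 0.9).
        cond2 = cache_occupancy_gb < coeff * first_threshold_gb
        return cond1 or cond2

    # With coeff = 0.8 and a first cache threshold of 6G, condition 2 requires the
    # occupancy to stay below 0.8 * 6 = 4.8G during the preset time period.
    print(preset_condition_met(100.0, 120.0, 4.0, 6.0))  # True, via condition 2

Whether the two conditions are combined with "or" or with "and" is discussed further in the embodiments of fig. 5 and fig. 6 below.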
It should be understood that, in the first and second manners, it is determined to open the cache space corresponding to one of the N cache thresholds, where the cache space corresponding to the one cache threshold is not greater than the total cache space included in the traffic management device.
In step 230, the traffic management device opens the determined buffer space.
In step 230, after the traffic management device opens the determined cache space, the SLA of the service traffic flowing into the traffic management device can be satisfied, where the SLA includes, but is not limited to: packet loss rate and transmission delay. As an example, table 1 lists the delay requirements and packet loss requirements corresponding to different types of services, and the queue priorities corresponding to different types of services. Taking the service type corresponding to queue priority 8 (protocol and control messages) as an example, satisfying the SLA requirement of this service means that its transmission delay is less than 100 μs and its packet loss requirement is 0.
TABLE 1 (provided as an image in the original publication; it lists, for each queue priority, the corresponding service type, delay requirement, and packet loss requirement)
It should be understood that the method 200 is merely an example and does not limit the cache management method provided in this embodiment of the application in any way. Any technical solution that uses the same rule for adjusting the opened cache space of the traffic management device as described in the method 200 (for example, but not limited to, adjusting the opened cache space when the traffic flowing into the traffic management device is in an increasing or decreasing state) falls within the scope claimed by the embodiments of this application.
In the above technical solution, when the traffic management device determines the size of the cache space that should be opened at the current time, an occupancy value of the enqueue cache (i.e., a reference enqueue cache value) within a period of time (i.e., a preset time period) before the current time, the size of the cache space that is opened before the current time (i.e., the cache space corresponding to the first cache threshold), and the size of the enqueue cache space (i.e., the enqueue cache space corresponding to the first enqueue cache threshold) are considered. Based on this, the traffic management device determines to open the cache space corresponding to one of the N cache thresholds, so that the cache space that is opened and determined can meet the service requirement. Further, when the cache space corresponding to the cache threshold value started by the traffic management device is smaller than all the cache spaces included by the traffic management device, the method is favorable for reducing the network power consumption under the condition of ensuring that the service requirement is met.
Referring to fig. 5, the following describes a specific embodiment of the cache management method provided in this application, taking a TM module in a chip as an example of the traffic management device. It should be understood that the example of fig. 5 is only intended to help a person skilled in the art understand the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to a person skilled in the art that various equivalent modifications or variations are possible in light of the example of fig. 5 given below, and such modifications or variations also fall within the scope of the embodiments of the present application.
Fig. 5 is a schematic flow chart of a cache management method 500 according to an embodiment of the present application.
As shown in fig. 5, the method 500 includes steps 510 to 580. The method 500 may be, but is not limited to, applied to the network device 100 shown in fig. 1. When the method 500 is applied to the network device 100 shown in fig. 1, the traffic management device 120 in the network device 100 may be used to execute the method 500, and the traffic management device 120 may specifically be a TM module. That is, the main body performing the method of the embodiment of the present application may specifically be a TM module. Next, steps 510 to 580 are described in detail.
It can be understood that, in the embodiment of the present application, the TM module corresponds to the traffic management device in the foregoing method 200, the reference enqueue buffer value corresponds to the reference enqueue buffer value in the foregoing method 200, the buffer threshold 1 corresponds to the first buffer threshold in the foregoing method 200, the enqueue buffer threshold 1 corresponds to the first enqueue buffer threshold in the foregoing method 200, the time period from the second time point to the third time point corresponds to the preset time period in the foregoing method 200, the buffer threshold 2 corresponds to the second buffer threshold in the foregoing method 200, and the buffer threshold 3 corresponds to the third buffer threshold in the foregoing method 200.
Optionally, before step 510, the TM module may also be initially configured. In the initialization configuration stage, N cache thresholds may be determined according to the total cache size of the TM module, and M enqueue cache thresholds may be determined according to the total enqueue cache size of the TM module, where the cache spaces corresponding to any two cache thresholds are different in size, the enqueue cache spaces corresponding to any two enqueue cache thresholds are different in size, and N and M are integers greater than 1. In this embodiment, the total cache of the TM module is 16G, and 4 thresholds (i.e., N = 4) may be set for the 16G cache in the initialization stage, respectively denoted as: cache threshold 1 (4G), cache threshold 2 (8G), cache threshold 3 (12G), and cache threshold 4 (16G). The total enqueue cache of the TM module is 512KB, and 4 thresholds (i.e., M = 4) may be set for the 512KB enqueue cache in the initialization stage, respectively denoted as: enqueue cache threshold 1 (128KB), enqueue cache threshold 2 (256KB), enqueue cache threshold 3 (384KB), and enqueue cache threshold 4 (512KB). Taking cache threshold 1 (4G) as an example, the size of the cache space corresponding to cache threshold 1 is 4G. Taking enqueue cache threshold 1 as an example, the size of the enqueue cache space corresponding to enqueue cache threshold 1 is 128KB.
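The initialization configuration above can be sketched in Python as follows (illustrative only; the function name init_thresholds is an assumption, and the even spacing of the thresholds is just one convenient choice — the embodiment only requires that the spaces corresponding to any two thresholds differ):

    def init_thresholds(total_cache_gb=16, total_enqueue_kb=512, n=4, m=4):
        # N evenly spaced cache thresholds and M evenly spaced enqueue cache thresholds.
        cache_thresholds = [total_cache_gb * (i + 1) / n for i in range(n)]
        enqueue_thresholds = [total_enqueue_kb * (i + 1) / m for i in range(m)]
        return cache_thresholds, enqueue_thresholds

    print(init_thresholds())
    # ([4.0, 8.0, 12.0, 16.0], [128.0, 256.0, 384.0, 512.0])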
Step 510, at the first time, the cache threshold corresponding to the cache space opened by the TM module is cache threshold 1 (4G), and the enqueue cache threshold corresponding to the opened enqueue cache space is enqueue cache threshold 1 (128 KB).
The cache threshold 1 may be understood as a cache threshold corresponding to a cache space opened by the TM module at the first time. The enqueue cache threshold 1 may be understood as an enqueue cache threshold corresponding to an enqueue cache space opened by the TM module at the first time. In step 510, the TM module starts up 4G of buffer space and starts up 128KB of enqueue buffer space at the first time.
It should be understood that, after the first time, the TM module still keeps open the cache space corresponding to cache threshold 1 and the enqueue cache space corresponding to enqueue cache threshold 1, until the TM module determines that a threshold needs to be adjusted, at which point it closes the currently opened space corresponding to the original threshold and opens the space corresponding to the adjusted threshold.
In step 520, the TM module obtains the reference enqueue cache value in the period from the second time to the third time, where the second time is later than the first time.
The reference enqueue buffer value represents the maximum value occupied by the enqueue buffer of the TM module in the period from the second moment to the third moment. It can be understood that the reference enqueue buffer value can well reflect the change of the traffic flow flowing into the TM module in the period from the second time to the third time, and can well reflect the occupation condition of the enqueue buffer of the traffic management device in the period from the second time to the third time. In this embodiment, the size of the obtained reference enqueue buffer value is 200KB from the second time to the third time, and the specific length of the time from the second time to the third time may be 1 second.
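The reference enqueue cache value can be collected, for example, by periodically sampling the enqueue cache occupancy during the window and keeping the maximum; the sketch below is only one possible way to obtain it (the function name, the sampling values, and the per-millisecond sampling assumption are illustrative, not part of this application):

    def reference_enqueue_value(samples_kb):
        # Maximum enqueue cache occupancy observed between the second and third time;
        # samples_kb would be occupancy readings taken during the 1-second window,
        # e.g. one reading per millisecond on a real device.
        return max(samples_kb)

    print(reference_enqueue_value([40, 120, 200, 180, 90]))  # 200 (KB)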
It should be understood that, in the period from the second time to the third time, the cache threshold corresponding to the cache space opened by the TM module is still the cache threshold 1, and the enqueue cache threshold corresponding to the opened enqueue cache space is still the enqueue cache threshold 1.
In step 530, the TM module determines that the traffic flowing into the TM module is in an increased state according to the reference enqueue buffer value and enqueue buffer threshold 1.
The TM module may determine that the traffic flowing into the TM module after the first time is in an increasing state by determining that the reference enqueue cache value (200KB) is greater than enqueue cache threshold 1 (128KB). At this point, the TM module also determines that the reference enqueue cache value (200KB) is less than enqueue cache threshold 2 (256KB).
In step 540, the TM module determines whether a predetermined condition is satisfied.
The preset condition includes: condition 1: in the period from the second time to the third time, the cache bandwidth corresponding to cache threshold 1 is greater than the rate of the service traffic flowing into the traffic management device; or, condition 2: in the period from the second time to the third time, the cache occupancy value of the traffic management device is less than (the preset coefficient × cache threshold 1), where the preset coefficient is a number greater than zero and less than 1.
In this embodiment of the application, the preset coefficient may be equal to 0.8. On this basis, condition 2 requires that, in the period from the second time to the third time, the cache occupancy value of the traffic management device be less than (0.8 × 4) G, i.e., 3.2G.
In step 540, the TM module determines whether a preset condition is satisfied, including:
when the TM module determines that the preset condition is met, step 550 and step 560 are performed; or,
when the TM module determines that the preset condition is not met, step 570 and step 580 are performed.
Here, the satisfaction of the preset conditions may be understood as satisfaction of some of the preset conditions described in the above step 540 (for example, satisfaction of the condition 1 or satisfaction of the condition 2), or may be understood as satisfaction of all of the preset conditions described in the above step 540 (that is, satisfaction of both the condition 1 and the condition 2). The preset condition is not satisfied, and it is understood that all of the preset conditions (i.e., condition 1 and condition 2) described in the above step 540 are not satisfied.
In step 550, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 2 (8G).
Cache threshold 2 (8G) and cache threshold 1 (4G) may be understood as two cache thresholds that are adjacent in size among the above 4 cache thresholds.
In step 560, the TM module opens the cache space corresponding to cache threshold 2 (8G).
That is, after the third time, the TM module opens the cache space corresponding to cache threshold 2 (8G). In one example, the TM module opening the cache space corresponding to cache threshold 2 (8G) may be understood as the TM module closing the cache space corresponding to cache threshold 1 (4G) while opening the cache space corresponding to cache threshold 2 (8G). In another example, it may be understood as the TM module opening an additional 8G − 4G = 4G of cache space on the basis of the already opened cache space corresponding to cache threshold 1 (4G); on this basis, the TM module may be considered to have opened the cache space corresponding to cache threshold 2 (8G).
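The two readings above can be illustrated with a small Python sketch (illustrative only; the function name open_cache_space is an assumption, and the sketch assumes the device can grow or shrink the opened space incrementally):

    def open_cache_space(opened_gb, target_gb):
        # Returns (space to newly enable, space to release) when moving from the
        # currently opened cache space to the space of the target cache threshold.
        if target_gb >= opened_gb:
            return target_gb - opened_gb, 0   # grow: e.g. 8G - 4G = 4G newly enabled
        return 0, opened_gb - target_gb       # shrink: release part of the space

    print(open_cache_space(4, 8))    # (4, 0): grow from 4G to 8G
    print(open_cache_space(12, 8))   # (0, 4): shrink from 12G to 8G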
In step 570, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 3 (12G).
In step 570, when the preset condition is not met, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 3 (12G), where cache threshold 3 (12G) and cache threshold 1 (4G) are not two cache thresholds that are adjacent in size among the 4 cache thresholds.
It can be understood that, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened after the third time is the cache threshold 3 (12G), and can satisfy the SLA of the traffic flow flowing into the TM module, where the SLA includes but is not limited to: packet loss rate and transmission delay.
Optionally, in another implementation manner, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened after the third time is the cache threshold 4 (16G), so that the SLA of the service traffic flowing into the TM module can be met.
In step 580, the TM module opens the cache space corresponding to cache threshold 3 (12G).
That is, after the third time, the TM module opens the cache space corresponding to cache threshold 3 (12G). In one example, the TM module opening the cache space corresponding to cache threshold 3 (12G) may be understood as the TM module closing the cache space corresponding to cache threshold 1 (4G) while opening the cache space corresponding to cache threshold 3 (12G). In another example, it may be understood as the TM module opening an additional 12G − 4G = 8G of cache space on the basis of the already opened cache space corresponding to cache threshold 1 (4G); on this basis, the TM module may be considered to have opened the cache space corresponding to cache threshold 3 (12G).
It can be understood that after the third time, the TM module opens the buffer space corresponding to the threshold 3 (12G), and the size of the buffer space corresponding to the threshold 3 (12G) can satisfy the SLA of the traffic flow flowing into the TM module.
In the above steps 510 to 580, the schematic diagram of the first time, the second time and the third time on the time axis can be seen in fig. 3 above.
It should be understood that the method 500 shown in fig. 5 is only an example and does not limit the cache management method provided in this embodiment of the application in any way. For example, in some implementations, more cache thresholds may be set for the cache of the TM module, e.g., N equal to 5, 7, or 8, and more enqueue cache thresholds may be set for the enqueue cache of the TM module, e.g., M equal to 8. Likewise, in some implementations, the length of the period from the second time to the third time may be greater than or less than 1 second, for example, 0.5 second, 2 seconds, 3 seconds, or 5 seconds.
Next, referring to fig. 6, another specific embodiment of the cache management method provided in this application is described, taking a TM module in a chip as an example of the traffic management device. It should be understood that the example of fig. 6 is merely to assist a person skilled in the art in understanding the embodiments of the present application, and is not intended to limit the embodiments of the application to the specific values or specific scenarios illustrated. It will be apparent to a person skilled in the art that various equivalent modifications or variations are possible in light of the example of fig. 6 given below, and such modifications and variations also fall within the scope of the embodiments of the present application.
Fig. 6 is a schematic flowchart of a cache management method 600 according to an embodiment of the present application.
As shown in fig. 6, the method 600 includes steps 610 to 680. The method 600 may be, but is not limited to, applied to the network device 100 shown in fig. 1. When the method 600 is applied to the network device 100 shown in fig. 1, the method 600 may be performed by the traffic management device 120 in the network device 100, and the traffic management device 120 may specifically be a TM module. That is, the main body performing the method of the embodiment of the present application may specifically be a TM module. Next, steps 610 to 680 are described in detail.
It is understood that, in the embodiment of the present application, the TM module corresponds to the traffic management device in the method 200, the reference enqueue buffer value corresponds to the reference enqueue buffer value in the method 200, the buffer threshold 3 corresponds to the first buffer threshold in the method 200, the enqueue buffer threshold 3 corresponds to the first enqueue buffer threshold in the method 200, the time period from the second time to the third time corresponds to the preset time period in the method 200, the buffer threshold 2 corresponds to the third buffer threshold in the method 200, and the buffer threshold 4 corresponds to the second buffer threshold in the method 200.
Optionally, before step 610, the TM module may also be initially configured. In the initialization configuration stage, N cache thresholds may be determined according to the total cache size of the TM module, and M enqueue cache thresholds may be determined according to the total enqueue cache size of the TM module, where the cache spaces corresponding to any two cache thresholds are different in size, the enqueue cache spaces corresponding to any two enqueue cache thresholds are different in size, and N and M are integers greater than 1. In this embodiment, the total cache of the TM module is 16G, and 4 thresholds (i.e., N = 4) may be set for the 16G cache in the initialization stage, respectively denoted as: cache threshold 1 (4G), cache threshold 2 (8G), cache threshold 3 (12G), and cache threshold 4 (16G). The total enqueue cache of the TM module is 512KB, and 4 thresholds (i.e., M = 4) may be set for the 512KB enqueue cache in the initialization stage, respectively denoted as: enqueue cache threshold 1 (128KB), enqueue cache threshold 2 (256KB), enqueue cache threshold 3 (384KB), and enqueue cache threshold 4 (512KB). Taking cache threshold 1 (4G) as an example, the size of the cache space corresponding to cache threshold 1 is 4G. Taking enqueue cache threshold 1 as an example, the size of the enqueue cache space corresponding to enqueue cache threshold 1 is 128KB.
In step 610, at the first time, the cache threshold corresponding to the cache space opened by the TM module is the cache threshold 3 (12G), and the enqueue cache threshold corresponding to the opened enqueue cache space is the enqueue cache threshold 3 (384 KB).
Cache threshold 3 may be understood as the cache threshold corresponding to the cache space opened by the TM module at the first time. Enqueue cache threshold 3 may be understood as the enqueue cache threshold corresponding to the enqueue cache space opened by the TM module at the first time. That is, in step 610, the TM module has opened 12G of cache space and 384KB of enqueue cache space at the first time.
It should be understood that, after the first time, the TM module still keeps open the cache space corresponding to cache threshold 3 and the enqueue cache space corresponding to enqueue cache threshold 3, until the TM module determines that a threshold needs to be adjusted, at which point it closes the currently opened space corresponding to the original threshold and opens the space corresponding to the adjusted threshold.
In step 620, the TM module obtains the reference enqueue cache value in the period from the second time to the third time, where the second time is later than the first time.
The reference enqueue buffer value represents the maximum value occupied by the enqueue buffer of the TM module in the period from the second moment to the third moment. It can be understood that the reference enqueue buffer value can well reflect the change of the traffic flow flowing into the TM module in the period from the second time to the third time, and can also well reflect the occupation situation of the enqueue buffer of the traffic management device in the period from the second time to the third time. In this embodiment of the application, the size of the obtained reference enqueue buffer value may be 200KB from the second time to the third time, and the specific length of the time period from the second time to the third time may be 1 second.
It should be understood that, in the period from the second time to the third time, the cache threshold corresponding to the cache space opened by the TM module is still cache threshold 3, and the enqueue cache threshold corresponding to the opened enqueue cache space is still enqueue cache threshold 3.
In step 630, the TM module determines, according to the reference enqueue cache value and enqueue cache threshold 3, that the traffic flowing into the TM module is in a reduced state.
The TM module may determine that the traffic flowing into the TM module after the first time is in a reduced state by determining that the reference enqueue cache value (200KB) is less than enqueue cache threshold 3 (384KB). At this point, the TM module also determines that the reference enqueue cache value (200KB) is less than enqueue cache threshold 2 (256KB), where enqueue cache threshold 2 (256KB) is less than enqueue cache threshold 3 (384KB).
In step 640, the TM module determines whether a predetermined condition is satisfied.
The preset condition includes: condition 1: in the period from the second time to the third time, the cache bandwidth corresponding to cache threshold 3 is greater than the rate of the service traffic flowing into the traffic management device; or, condition 2: in the period from the second time to the third time, the cache occupancy value of the traffic management device is less than (the preset coefficient × cache threshold 3), where the preset coefficient is a number greater than zero and less than 1.
In this embodiment of the application, the preset coefficient may be equal to 0.8. On this basis, condition 2 requires that, in the period from the second time to the third time, the cache occupancy value of the traffic management device be less than (0.8 × 12) G, i.e., 9.6G.
In step 640, the TM module determines whether a preset condition is met, including:
when the TM module determines that the preset condition is met, step 650 and step 660 are performed; or,
when the TM module determines that the preset condition is not met, step 670 and step 680 are performed.
Here, that the preset condition is met may be understood as some of the preset conditions described in step 640 being met (for example, condition 1 being met or condition 2 being met), or as all of the preset conditions described in step 640 being met (that is, both condition 1 and condition 2 being met). That the preset condition is not met may be understood as none of the preset conditions described in step 640 (that is, neither condition 1 nor condition 2) being met.
In step 650, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 2 (8G).
In this embodiment of the present application, the TM module determines that the reference enqueue buffer value (200 KB) is smaller than the enqueue buffer threshold value 3 (384 KB) and smaller than the enqueue buffer threshold value 2 (256 KB), and the TM module determines that the buffer threshold value corresponding to the buffer space that needs to be opened is the buffer threshold value 2 (8G). It is to be understood that, when the TM module determines that the reference enqueue buffer value (e.g., the reference enqueue buffer value is equal to 260 KB) is less than the enqueue buffer threshold value 3 (384 KB), but not less than the enqueue buffer threshold value 2 (256 KB), the TM module determines that the buffer threshold value corresponding to the buffer space that needs to be opened is still the buffer threshold value 3 (12G).
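The step-down decision described above can be sketched in Python as follows (illustrative only; the function name step_down_target and the variable names are assumptions, while the threshold values are those of this embodiment):

    def step_down_target(ref_kb, enq_thr_kb, cache_thr_gb, first_idx):
        # first_idx is the index of the currently opened threshold pair, here index 2,
        # i.e. cache threshold 3 (12G) and enqueue cache threshold 3 (384KB).
        below_first = ref_kb < enq_thr_kb[first_idx]
        below_second = first_idx > 0 and ref_kb < enq_thr_kb[first_idx - 1]
        if below_first and below_second:
            return cache_thr_gb[first_idx - 1]   # step down, e.g. to 8G
        return cache_thr_gb[first_idx]           # keep the currently opened space

    ENQ = [128, 256, 384, 512]   # enqueue cache thresholds in KB
    CACHE = [4, 8, 12, 16]       # cache thresholds in GB
    print(step_down_target(200, ENQ, CACHE, 2))  # 8: 200KB < 384KB and < 256KB
    print(step_down_target(260, ENQ, CACHE, 2))  # 12: 260KB < 384KB but not < 256KB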
In step 660, the TM module opens the cache space corresponding to cache threshold 2 (8G).
That is, after the third time, the TM module opens the cache space corresponding to cache threshold 2 (8G). In one example, the TM module opening the cache space corresponding to cache threshold 2 (8G) may be understood as the TM module closing the cache space corresponding to cache threshold 3 (12G) while opening the cache space corresponding to cache threshold 2 (8G). In another example, it may be understood as the TM module closing a part of the cache space corresponding to cache threshold 3 (12G) (that is, a part whose size is 12G − 8G = 4G); on this basis, the TM module may be considered to have opened the cache space corresponding to cache threshold 2 (8G).
In step 670, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 4 (16G).
When the preset condition is not met, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 4 (16G).
It can be understood that, the TM module determines that the cache threshold corresponding to the cache space that needs to be opened after the third time is the cache threshold 4 (16G), and can satisfy the SLA of the traffic flow flowing into the TM module, where the SLA includes but is not limited to: packet loss rate and transmission delay.
Optionally, in other implementations, even if opening the cache space corresponding to cache threshold 4 (16G) after the third time cannot satisfy the SLA of the service traffic flowing into the TM module, the TM module still determines that the cache threshold corresponding to the cache space that needs to be opened is cache threshold 4 (16G). It can be understood that cache threshold 4 (16G) corresponds to the total cache space of the TM module, that is, the maximum cache space the TM module can open is 16G.
In step 680, the TM module opens the cache space corresponding to cache threshold 4 (16G).
That is, after the third time, the TM module opens the cache space corresponding to cache threshold 4 (16G). In one example, the TM module opening the cache space corresponding to cache threshold 4 (16G) may be understood as the TM module closing the cache space corresponding to cache threshold 3 (12G) while opening the cache space corresponding to cache threshold 4 (16G). In another example, it may be understood as the TM module opening an additional 16G − 12G = 4G of cache space on the basis of the already opened cache space corresponding to cache threshold 3 (12G); on this basis, the TM module may be considered to have opened the cache space corresponding to cache threshold 4 (16G).
It can be understood that after the third time, the TM module opens the buffer space corresponding to the threshold 4 (16G), and the size of the buffer space corresponding to the threshold 4 (16G) can satisfy the SLA of the traffic flow flowing into the TM module.
In the above steps 610 to 680, the schematic diagram of the first time, the second time and the third time on the time axis can be seen in fig. 3 above.
It should be understood that the method 600 shown in fig. 6 is only an example and does not constitute any limitation on the cache management method provided in this embodiment of the application. For example, in some implementations, more cache thresholds may be set for the cache of the TM module, e.g., N equal to 5 or 8, and more enqueue cache thresholds may be set for the enqueue cache of the TM module, e.g., M equal to 8. Likewise, in some implementations, the length of the period from the second time to the third time may be greater than or less than 1 second, for example, 0.5 second, 2 seconds, 3 seconds, or 5 seconds.
The method 500 described above in fig. 5 and the method 600 described in fig. 6 may also be used in combination, for example, the TM module determines that traffic flowing into the TM module is in a growing state for a period of time, and the TM module performs cache management according to the method 500 described above in fig. 5. Thereafter, the TM module determines that traffic flowing into the TM module is in a reduced state for a period of time, and the TM module may perform cache management according to the method 600 described above in fig. 6.
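The combined use of the two methods can be illustrated with a simple simulation (illustrative only: the function name next_index, the sampled reference values, and the simplification that the cache threshold and enqueue cache threshold move together with the same index are assumptions, not part of this application):

    CACHE = [4, 8, 12, 16]       # cache thresholds in GB
    ENQ = [128, 256, 384, 512]   # enqueue cache thresholds in KB

    def next_index(ref_kb, idx, condition_met=True):
        # One sampling period: step up as in fig. 5 when the reference enqueue value
        # exceeds the current enqueue threshold, step down as in fig. 6 when it falls
        # below the next lower enqueue threshold, otherwise keep the opened space.
        top = len(CACHE) - 1
        if ref_kb > ENQ[idx]:                               # increasing (fig. 5)
            return min(idx + (1 if condition_met else 2), top)
        if idx > 0 and ref_kb < ENQ[idx - 1]:               # decreasing (fig. 6)
            return max(idx - 1, 0) if condition_met else min(idx + 1, top)
        return idx                                          # keep the opened space

    idx = 0                                   # start with cache threshold 1 (4G)
    for ref in [200, 300, 420, 200, 100]:     # sampled reference enqueue values (KB)
        idx = next_index(ref, idx)
        print(ref, "->", CACHE[idx], "G opened")   # 8, 12, 16, 12, 8

In this trace the opened cache space grows while the reference enqueue value rises and shrinks again as it falls, which is the combined behaviour described in this paragraph.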
The network device suitable for the embodiment of the present application and the cache management method provided by the present application are described in detail above with reference to fig. 1 to 6. Next, the traffic management device provided in the present application is described with reference to fig. 7. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 7 is a schematic structural diagram of a traffic management device 700 according to an embodiment of the present application. As shown in fig. 7, the traffic management device 700 includes an acquisition unit 710, a determination unit 720, and a processing unit 730. The traffic management device 700 may be the traffic management device 120 in the network device 100 shown in fig. 1 above.
In some implementations, the obtaining unit 710 is configured to perform step 210 in the method 200, the determining unit 720 is configured to perform step 220 in the method 200, and the processing unit 730 is configured to perform step 230 in the method 200. For the step 210, the step 220 and the step 230, reference may be specifically made to the method 200 in the foregoing, and details are not described here for brevity.
Optionally, in another implementation manner, the obtaining unit 710 is configured to perform a step related to obtaining the reference enqueue buffer value in step 520 of the method 500, the determining unit 720 is configured to perform step 530, step 540, step 550, or step 570 of the method 500, and the processing unit 730 is configured to perform step 510, step 560, or step 580 of the method 500. The steps included in the method 500 may specifically refer to the method 500 above, and for brevity, detailed description is omitted here.
Optionally, in another implementation manner, the obtaining unit 710 is configured to perform a step related to obtaining the reference enqueue buffer value in step 620 of the method 600, the determining unit 720 is configured to perform step 630, step 640, step 650, or step 670 of the method 600, and the processing unit 730 is configured to perform step 610, step 660, or step 680 of the method 600. The steps included in the method 600 may specifically refer to the method 600 described above, and for brevity, detailed description is omitted here.
The present application provides a computer program product, which, when running on a network device, causes the network device to execute the method in the above method embodiments.
The embodiment of the application provides a computer-readable storage medium for storing a computer program, wherein the computer program comprises a program for executing the method in the method embodiment.
The embodiment of the application provides a chip system, which comprises at least one processor and an interface; the at least one processor is configured to call and run a computer program, so that the chip system executes the method in the above method embodiment.
The apparatuses in the various product forms respectively have any function of the network device in the method embodiments, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solutions of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method for cache management, the method comprising:
the method comprises the steps that a traffic management device obtains working state parameters, wherein the working state parameters comprise a reference enqueuing cache value, a first cache threshold value and a first enqueuing cache threshold value, the reference enqueuing cache value represents the maximum value occupied by an enqueuing cache of the traffic management device in a preset time period, a cache threshold value corresponding to a cache space opened by the traffic management device in the preset time period is the first cache threshold value, and an enqueuing cache threshold value corresponding to the opened enqueuing cache space is the first enqueuing cache threshold value;
the traffic management device determines to open a cache space corresponding to one of N cache thresholds based on the operating state parameter, where the cache space corresponding to any one cache threshold is a cache space included in the traffic management device, the cache spaces corresponding to any two cache thresholds are different, the N cache thresholds include the first cache threshold, and N is an integer greater than 1;
and the flow management equipment starts the determined cache space.
2. The method according to claim 1, wherein the determining, by the traffic management device, to open the cache space corresponding to one of the N cache thresholds based on the operating state parameter includes:
the traffic management equipment determines the change state of the traffic flowing into the traffic management equipment within the preset time period according to the first enqueue cache threshold value and the reference enqueue cache value;
and the flow management equipment determines to open a cache space corresponding to one of the N cache thresholds based on the change state of the service flow.
3. The method according to claim 2, wherein the buffer space corresponding to the first buffer threshold is smaller than all buffer spaces included in the traffic management device,
the determining, by the traffic management device, to open a cache space corresponding to one of the N cache thresholds based on the change state of the service traffic and the first cache threshold includes:
under the condition that the traffic flow flowing into the traffic management device within the preset time period is in an increasing state, in response to that a preset condition is met, the traffic management device determines to start a cache space corresponding to a second cache threshold, where the second cache threshold is one of the N cache thresholds, the cache space corresponding to the second cache threshold is greater than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is less than or equal to all cache spaces included in the traffic management device.
4. The method of claim 3, wherein the reference enqueue buffer value is greater than the first enqueue buffer threshold.
5. The method according to claim 3 or 4,
the second caching threshold and the first caching threshold are two caching thresholds which are adjacent to the caching threshold in size in the N caching thresholds.
6. The method according to any one of claims 3 to 5, further comprising:
and under the condition that the traffic flow flowing into the traffic management device in the preset time period is in an increasing state, in response to that the preset condition is not met, the traffic management device determines to start a cache space corresponding to a third cache threshold, wherein the cache space corresponding to the third cache threshold is greater than or equal to the cache space corresponding to the second cache threshold, and the cache space corresponding to the third cache threshold is less than or equal to all cache spaces included by the traffic management device.
7. The method according to claim 2, wherein the first buffer threshold corresponds to a buffer space greater than zero and equal to or less than all buffer spaces included in the traffic management device,
the determining, by the traffic management device, to open a cache space corresponding to one of the N cache thresholds based on the change state of the service traffic and the first cache threshold includes:
and under the condition that the traffic flow flowing into the traffic management device in the preset time period is in a reduced state, in response to that a preset condition is met, the traffic management device determines to start a cache space corresponding to a third cache threshold, wherein the cache space corresponding to the third cache threshold is smaller than the cache space corresponding to the first cache threshold, and the cache space corresponding to the third cache threshold is larger than zero.
8. The method of claim 7,
the reference enqueue cache value is smaller than the first enqueue cache threshold value, and the reference enqueue cache value is smaller than a second enqueue cache threshold value, wherein the second enqueue cache threshold value is smaller than the first enqueue cache threshold value.
9. The method according to claim 7 or 8,
the third caching threshold and the first caching threshold are two caching thresholds which are adjacent to the caching threshold in the N caching thresholds.
10. The method of any of claims 7 to 9, further comprising:
and under the condition that the traffic flow flowing into the traffic management device in the preset time period is in a reduced state, in response to that the preset condition is not met, the traffic management device determines to start a cache space corresponding to a second cache threshold, wherein the cache space corresponding to the second cache threshold is larger than the cache space corresponding to the first cache threshold, and the cache space corresponding to the second cache threshold is smaller than or equal to all cache spaces included by the traffic management device.
11. The method according to any one of claims 3 to 10, wherein the meeting of the preset condition comprises:
in the preset time period, a cache bandwidth corresponding to the first cache threshold is greater than a rate of the service traffic flowing into the traffic management device; or,
in the preset time period, a cache occupancy value of the traffic management device is less than (a preset coefficient × the first cache threshold), wherein the preset coefficient is a number greater than zero and less than 1.
12. A traffic management device, characterized in that it is configured to perform the method according to any of claims 1 to 11.
13. A network device comprising at least one processor and a communication interface, the at least one processor being configured to execute a computer program or instructions to cause the network device to perform the method of any of claims 1 to 11.
14. The network device of claim 13, further comprising at least one memory coupled to the at least one processor, wherein the computer program or instructions are stored in the at least one memory.
15. A computer-readable storage medium, in which a computer program is stored which, when run on one or more processors, causes the computer to perform the method of any one of claims 1 to 11.
16. A system on chip comprising at least one processor and an interface, the at least one processor being configured to invoke and execute a computer program to cause the system on chip to perform the method of any of claims 1 to 11.
CN202110932899.0A 2021-08-13 2021-08-13 Cache management method and equipment Pending CN115706712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110932899.0A CN115706712A (en) 2021-08-13 2021-08-13 Cache management method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110932899.0A CN115706712A (en) 2021-08-13 2021-08-13 Cache management method and equipment

Publications (1)

Publication Number Publication Date
CN115706712A true CN115706712A (en) 2023-02-17

Family

ID=85180235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110932899.0A Pending CN115706712A (en) 2021-08-13 2021-08-13 Cache management method and equipment

Country Status (1)

Country Link
CN (1) CN115706712A (en)

Similar Documents

Publication Publication Date Title
US8761012B2 (en) Packet relay apparatus and method of relaying packet
US9185047B2 (en) Hierarchical profiled scheduling and shaping
US7660252B1 (en) System and method for regulating data traffic in a network device
US9106577B2 (en) Systems and methods for dropping data using a drop profile
EP3410641A1 (en) Network-traffic control method and network device thereof
EP1553740A1 (en) Method and system for providing committed information rate (CIR) based fair access policy
CN101547159B (en) Method and device for preventing network congestion
EP2670085B1 (en) System for performing Data Cut-Through
CN112953848B (en) Traffic supervision method, system and equipment based on strict priority
US11799803B2 (en) Packet processing method and apparatus, communications device, and switching circuit
CN113315720B (en) Data flow control method, system and equipment
US8867353B2 (en) System and method for achieving lossless packet delivery in packet rate oversubscribed systems
CN115794407A (en) Computing resource allocation method and device, electronic equipment and nonvolatile storage medium
CN105978821B (en) The method and device that network congestion avoids
US7660246B2 (en) Method and apparatus for scaling input bandwidth for bandwidth allocation technology
CN112511448A (en) Method for processing network congestion, method for updating model and related device
CN1245817C (en) Control method of network transmission speed and Ethernet interchanger using said method
CN115706712A (en) Cache management method and equipment
EP3425862B1 (en) Automatically cycling among packet traffic flows subjecting them to varying drop probabilities in a packet network
CN114337916A (en) Network transmission rate adjusting method, device, equipment and storage medium
CN113765796B (en) Flow forwarding control method and device
CN108632162B (en) Queue scheduling method and forwarding equipment
CN110347518A (en) Message treatment method and device
US7500012B2 (en) Method for controlling dataflow to a central system from distributed systems
US11349770B2 (en) Communication control apparatus, and communication control method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication