WO2017000872A1 - Cache allocation method and device - Google Patents

Cache allocation method and device

Info

Publication number
WO2017000872A1
Authority
WO
WIPO (PCT)
Prior art keywords
port
cache
queue
switch
static
Prior art date
Application number
PCT/CN2016/087476
Other languages
English (en)
French (fr)
Inventor
刘伟平
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2017000872A1 publication Critical patent/WO2017000872A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling

Definitions

  • the present application relates to, but is not limited to, the field of switch technology, and in particular, to a cache allocation method and apparatus.
  • A switch contains a certain amount of cache. Packets entering the switch are stored in this cache and queued at the egress port; once the buffered packets queued at an egress port reach a certain amount, packets newly added to that queue are dropped.
  • QoS (Quality of Service) is mainly used to solve problems such as network delay and congestion and to guarantee the quality of network services.
  • WRR (Weighted Round Robin) is an important QoS queue scheduling technique: when the queues of an egress port are congested, higher-priority queues are configured with more bandwidth resources and lower-priority queues with fewer, so that the buffered packets of every queue may be transferred out in a weighted round-robin manner.
  • However, when WRR is used to schedule queues, the ports and queues obtain the cache fairly, so a higher-priority queue may not be able to obtain a sufficient cache while lower-priority queues occupy the idle cache. This reduces the efficiency of switch queue scheduling and, in severe cases, causes WRR scheduling to lose accuracy.
  • The present application provides a cache allocation method and device, aiming to solve the technical problem of low efficiency and accuracy when WRR scheduling is used for queue scheduling.
  • The present application provides a cache allocation method, which includes the following steps:
  • when first configuration information for scheduling queues according to the weighted round robin (WRR) algorithm is detected, acquiring the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets;
  • calculating, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port;
  • allocating a static cache to each queue in the first port according to each first cache.
  • Optionally, after the step of allocating a static cache to each queue in the first port, the cache allocation method further includes: acquiring the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch; calculating a second cache according to that number of ports and the currently available dynamic cache; and allocating the second cache to each queue in the first port.
  • Optionally, after the step of setting the first cache as the static cache of each queue in the first port, the cache allocation method further includes: modifying the dynamic cache threshold of each queue in the first port according to the ratio.
  • Optionally, before the step of acquiring the ratio, the packet size and the minimum unit, the cache allocation method further includes: storing the current port configuration information of each port in the switch, where the port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port.
  • Optionally, after the step of allocating a static cache to each queue in the first port, the cache allocation method further includes: when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and that information does not carry the ratio between the queues in the first port, restoring, based on the stored port configuration information, the size of the static cache allocated to each queue in the first port to its initial value.
  • Optionally, after the step of calculating the first caches and before the step of allocating the static caches, the cache allocation method further includes: modifying the state of a second port to a cache-modification-prohibited state, where the second port is a port of the switch other than the first port; and, after the step of allocating the static caches, modifying the state of the second port back to a cache-modifiable state.
  • The present application further provides a computer readable storage medium storing computer executable instructions which, when executed, implement the above method.
  • the present application further provides a cache allocation apparatus, where the cache allocation apparatus includes:
  • the first obtaining module is configured to, when first configuration information for scheduling queues according to the weighted round robin (WRR) algorithm is detected, acquire the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets;
  • the first calculating module is configured to calculate a first cache respectively allocated to each queue in the first port according to the ratio, the size of the to-be-sent packet, and a minimum unit;
  • a setting module, configured to allocate a static cache to each queue in the first port according to each of the first caches.
  • the cache allocation device further includes:
  • a second obtaining module, configured to acquire, after the setting module allocates a static cache to each queue in the first port, the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch;
  • a second calculating module, configured to calculate a second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache;
  • an adding module, configured to allocate the second cache to each queue in the first port respectively.
  • the cache allocation device further includes:
  • the modifying module is configured to modify the dynamic cache threshold of each queue in the first port according to the ratio after the setting module allocates a static cache to each queue in the first port.
  • the cache allocation device further includes:
  • a storage module, configured to store the current port configuration information of each port in the switch before the first obtaining module acquires the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, where the port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port;
  • a restoring module, configured to, after the setting module allocates the static caches to the queues in the first port, restore the size of the static cache allocated to each queue in the first port to its initial value based on the port configuration information when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and that information does not carry the ratio between the queues in the first port.
  • the cache allocation device further includes:
  • a first modifying module, configured to modify the state of a second port to a cache-modification-prohibited state after the first calculating module calculates the first caches to be allocated to the queues in the first port and before the setting module allocates the static caches to the queues in the first port according to the first caches, where the second port is a port of the switch other than the first port;
  • a second modifying module, configured to modify the state of the second port to a cache-modifiable state after the setting module allocates the static caches to the queues in the first port according to the first caches.
  • In the present application, when first configuration information for scheduling queues according to WRR is detected, the first cache allocated to each queue in the first port is calculated based on the acquired ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, and a static cache is then allocated to each queue in the first port according to each first cache.
  • The static cache of each queue in the first port is thus set according to the scale factor of each queue corresponding to the WRR command, so that both the higher-priority queues and the lower-priority queues occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR. This makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the efficiency of the switch.
  • FIG. 1 is a flowchart of the cache allocation method of the present application in its first embodiment;
  • FIG. 2 is a flowchart of the cache allocation method of the present application in its second embodiment;
  • FIG. 3 is a flowchart of the cache allocation method of the present application in its third embodiment;
  • FIG. 4 is a flowchart of the cache allocation method of the present application in its fourth embodiment;
  • FIG. 5 is a flowchart of the cache allocation method of the present application in its fifth embodiment;
  • FIG. 6 is a functional block diagram of the cache allocation device of the present application in its first embodiment;
  • FIG. 7 is a functional block diagram of the cache allocation device of the present application in its second embodiment;
  • FIG. 8 is a functional block diagram of the cache allocation device of the present application in its third embodiment;
  • FIG. 9 is a functional block diagram of the cache allocation device of the present application in its fourth embodiment;
  • FIG. 10 is a functional block diagram of the cache allocation device of the present application in its fifth embodiment.
  • the application provides a cache allocation method.
  • FIG. 1 is a flowchart of a cache allocation method of the present application in a first embodiment thereof.
  • the cache allocation method includes:
  • Step S10: when first configuration information for scheduling queues according to WRR is detected, acquire the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets.
  • The ratio corresponding to the first configuration information refers to the ratio of bandwidth or traffic between the queues of the first port carried in the first configuration information for WRR queue scheduling, and the size of the to-be-sent packet with the largest data traffic in the first port refers to the packet size of the to-be-sent packet with the largest data traffic among all the queues of the first port.
  • The minimum cache unit occupied by the switch when forwarding packets refers to the minimum unit in which the switch currently caches store-and-forward packets, that is, the smallest cache the switch needs to occupy when forwarding a packet. The first configuration information for WRR queue scheduling may be triggered when the port configuration information of the switch changes or when a new queue is added to a port of the switch.
  • Step S20 Calculate, according to the ratio, the size of the to-be-sent packet, and the minimum unit, a first cache respectively allocated to each queue in the first port;
  • The first cache Q of each queue in the first port is calculated from the scale factor W, the packet size P and the minimum unit C in which the cache stores and forwards packets as: Q = P / C * W.
  • For example, the first port contains three queues 7, 6 and 5 whose configured WRR ratio is 40:4:2, the current packet size of the first port is 1024 bytes, and the current minimum unit C (a CELL) in which the cache stores and forwards packets is 208 bytes. The cache parameter of queue 7 is then Q7 = (1024 / 208) * 40, the cache parameters of the other two queues are calculated with the same formula, and the unit of Q7 is C.
  • Step S30: allocate a static cache to each queue in the first port according to each first cache.
  • The first cache of each queue in the first port calculated in step S20 is set as the static cache of the corresponding queue in the first port, and the size of the static cache of each queue in the first port is Q*C.
  • In this embodiment, when first configuration information for scheduling queues according to WRR is detected, the first cache allocated to each queue in the first port is calculated based on the acquired ratio, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache occupancy unit, and a static cache is then allocated to each queue in the first port according to each first cache.
  • The static cache of each queue in the first port is thus set according to the scale factor of each queue corresponding to the WRR command, so that both the higher-priority queues and the lower-priority queues occupy the cache resources corresponding to the scale factors of the queues in the first port scheduled by WRR. This makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the performance of the switch.
  • FIG. 2 is a flowchart of a cache allocation method of the present application in a second embodiment thereof.
  • a second embodiment of the cache allocation method of the present application is proposed based on the first embodiment.
  • In this embodiment, after step S30, the cache allocation method further includes:
  • Step S40: acquire the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch.
  • the port with no traffic means that there is no queue in the port, or the queue in the port is not scheduled.
  • the currently available dynamic cache refers to the dynamic cache that is not occupied by the switch.
  • Step S50 Calculate a second cache according to the current number of ports without traffic and the currently available dynamic cache.
  • The dynamic cache A allocated to each such port is calculated from the number B of ports currently carrying no traffic and the available dynamic cache size D as: A = D / B.
  • Step S60: allocate the second cache to each queue in the first port respectively.
  • The dynamic cache corresponding to the second cache obtained in step S50 is allocated to each queue in the first port that currently carries no traffic. For example, if the original static cache of the first port is E, the static cache of the first port after the addition is A+E.
  • In this embodiment, the second cache is obtained by calculation from the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch, and the second cache is added to the static cache of the first port. The size of the static cache of the first port is thereby increased according to the current number of no-traffic ports and the currently available dynamic cache, which allows the first port to schedule its queues and improves the accuracy of WRR scheduling and the performance of the switch.
  • FIG. 3 is a flowchart of a cache allocation method according to a third embodiment of the present application.
  • a third embodiment of the cache allocation method of the present application is proposed based on the first embodiment.
  • In this embodiment, after step S30, the cache allocation method further includes:
  • Step S70 Modify a threshold of a dynamic cache of each queue in the first port according to the ratio.
  • At initialization the switch allocates, to each port and to each queue in each port, a certain amount of dynamic cache and a threshold used to obtain the remaining dynamic cache corresponding to the port or queue. The product of a port's dynamic cache and that port's threshold represents the maximum idle dynamic cache of the port, and the product of a queue's dynamic cache and that queue's threshold represents the maximum idle dynamic cache of the queue.
  • the dynamic cache threshold of each queue in the first port is modified based on the scaling factor, so that the dynamic cache of each queue of the first port can meet the scheduling requirement, and the WRR scheduling accuracy is improved.
  • FIG. 4 is a flowchart of a buffer allocation method of the present application in a fourth embodiment thereof.
  • a fourth embodiment of the cache allocation method of the present application is provided based on the first embodiment.
  • In this embodiment, before step S10, the cache allocation method further includes:
  • Step S80 storing current port configuration information of each port in the switch.
  • the port configuration information includes dynamic cache information and static cache information of each port in the switch, and dynamic cache information and static cache information of each queue in the port;
  • After step S30, the cache allocation method further includes:
  • Step S90: when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port, restore the size of the static cache allocated to each queue in the first port to its initial value based on the port configuration information.
  • When the first port carries no traffic and performs no queue scheduling, or when second configuration information for WRR queue scheduling is detected and that information does not carry the ratio of the queues in the first port, the sizes of the static cache and the dynamic cache of the first port, and of each queue in the first port, are set according to the stored port configuration information.
  • The second configuration information is WRR queue scheduling configuration information set by the user when WRR queue scheduling is no longer needed for scheduling the queues in the first port.
  • FIG. 5 is a flowchart of a cache allocation method according to a fifth embodiment of the present application.
  • a fifth embodiment of the cache allocation method of the present application is provided based on the foregoing embodiment.
  • In this embodiment, between step S20 and step S30, the cache allocation method further includes:
  • Step S100: modify the state of the second port to a cache-modification-prohibited state, where the second port is a port of the switch other than the first port.
  • Before the static caches of the queues in the first port are modified, the queues of the ports other than the first port are prohibited from modifying their corresponding caches.
  • After the step of allocating a static cache to each queue in the first port according to each first cache, the cache allocation method further includes:
  • Step S110: modify the state of the second port to a cache-modifiable state.
  • After the static caches of the queues in the first port are set, the state of the second port is modified to the cache-modifiable state, that is, the cache of the second port can be modified as required.
  • In this embodiment, before the static caches of the queues in the first port are modified, the state of the second port is changed to the cache-modification-prohibited state and the static caches of the queues in the first port are then modified, which prevents cache modifications on other ports from affecting the first port. After the static caches of the queues in the first port have been modified, the state of the second port is changed back to the cache-modifiable state so that the cache of the second port can again be modified as required, which improves the accuracy of WRR scheduling.
  • Embodiments of the present invention further provide a computer readable storage medium storing computer executable instructions which, when executed, implement the method described above.
  • the application also provides a cache allocation device.
  • FIG. 6 there is shown a block diagram of a cache allocation device of the present application in its first embodiment.
  • the cache allocation device includes:
  • The first obtaining module 10 is configured to, when first configuration information for scheduling queues according to WRR is detected, acquire the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets.
  • The ratio corresponding to the first configuration information refers to the ratio of bandwidth or traffic between the queues of the first port carried in the first configuration information for WRR queue scheduling, and the size of the to-be-sent packet with the largest data traffic in the first port refers to the packet size of the to-be-sent packet with the largest data traffic among all the queues of the first port.
  • The minimum cache unit occupied by the switch when forwarding packets refers to the minimum unit in which the switch currently caches store-and-forward packets, that is, the smallest cache the switch needs to occupy when forwarding a packet.
  • the first calculating module 20 is configured to calculate, according to the ratio, the size of the to-be-sent packet, and the minimum unit, a first cache respectively allocated to each queue in the first port;
  • The first calculating module 20 calculates the first cache Q of each queue in the first port from the scale factor W, the packet size P and the minimum unit C in which the cache stores and forwards packets as: Q = P / C * W.
  • For example, the first port contains three queues 7, 6 and 5 whose configured WRR ratio is 40:4:2, the current packet size of the first port is 1024 bytes, and the current minimum unit C (a CELL) in which the cache stores and forwards packets is 208 bytes. The cache parameter of queue 7 is then Q7 = (1024 / 208) * 40, the cache parameters of the other two queues are calculated with the same formula, and the unit of Q7 is C.
  • The setting module 30 is configured to allocate a static cache to each queue in the first port according to each of the first caches.
  • The setting module 30 sets the first cache of each queue in the first port obtained by the first calculating module 20 as the static cache of the corresponding queue in the first port, and the size of the static cache of each queue in the first port is Q*C.
  • In this embodiment, when first configuration information for scheduling queues according to WRR is detected, the first calculating module 20 calculates the first cache allocated to each queue in the first port based on the ratio, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum unit acquired by the first obtaining module 10, and the setting module 30 then allocates a static cache to each queue in the first port according to each first cache.
  • The static cache of each queue in the first port is thus set according to the scale factor of each queue corresponding to the WRR command, so that both the higher-priority queues and the lower-priority queues occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR. This makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the performance of the switch.
  • Figure 7 is a functional block diagram of a cache allocation device of the present application in its second embodiment.
  • a second embodiment of the cache allocation apparatus of the present application is proposed based on the first embodiment.
  • the cache allocation apparatus further includes:
  • The second obtaining module 40 is configured to acquire, after the setting module allocates a static cache to each queue in the first port, the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch.
  • the port with no traffic means that there is no queue in the port, or the queue in the port is not scheduled.
  • the currently available dynamic cache refers to the dynamic cache that is not occupied by the switch.
  • The second calculating module 50 is configured to calculate the second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache.
  • The second cache A is calculated from the number B of ports currently carrying no traffic and the available dynamic cache size D as: A = D / B.
  • The adding module 60 is configured to allocate the second cache to each queue in the first port respectively.
  • The adding module 60 adds the dynamic cache corresponding to the second cache obtained by the second calculating module 50 to the static cache of the first port. For example, if the original static cache of the first port is E, the static cache of the first port after the addition is A+E.
  • In this embodiment, the second calculating module 50 obtains the second cache by calculation from the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch, and the adding module 60 then adds the second cache to the static cache of the first port. The size of the static cache of the first port is thereby increased according to the current number of no-traffic ports and the currently available dynamic cache, which allows the queues of the first port to be scheduled and improves the accuracy of WRR scheduling and the performance of the switch.
  • FIG. 8 is a functional block diagram of a cache allocating device of the present application in a third embodiment thereof.
  • a third embodiment of the cache allocation apparatus of the present application is proposed based on the first embodiment.
  • the cache allocation apparatus further includes:
  • The modifying module 70 is configured to modify the dynamic cache threshold of each queue in the first port according to the ratio after the setting module allocates a static cache to each queue in the first port.
  • At initialization the switch allocates, to each port and to each queue in each port, a certain amount of dynamic cache and a threshold used to obtain the remaining dynamic cache corresponding to the port or queue. The product of a port's dynamic cache and that port's threshold represents the maximum idle dynamic cache of the port, and the product of a queue's dynamic cache and that queue's threshold represents the maximum idle dynamic cache of the queue.
  • In this embodiment, the modifying module 70 modifies the dynamic cache threshold of each queue in the first port based on the scale factor, so that the dynamic cache of each queue of the first port can meet the scheduling requirement, which improves the accuracy of WRR scheduling.
  • FIG. 9 is a functional block diagram of a cache allocating device of the present application in a fourth embodiment thereof.
  • a fourth embodiment of the cache allocation apparatus of the present application is proposed based on the first embodiment.
  • the cache allocation apparatus further includes:
  • The storage module 80 is configured to store the current port configuration information of each port in the switch before the first obtaining module acquires the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, where the port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port.
  • The restoring module 90 is configured to, after the setting module allocates the static caches to the queues in the first port, restore the size of the static cache allocated to each queue in the first port to its initial value based on the port configuration information when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port.
  • When the first port carries no traffic and performs no queue scheduling, or when second configuration information for WRR queue scheduling is detected and that information does not carry the ratio of the queues in the first port, the restoring module 90 sets the sizes of the static cache and the dynamic cache of the first port, and of each queue in the first port, according to the stored port configuration information.
  • The second configuration information is WRR queue scheduling configuration information set by the user when WRR queue scheduling is no longer needed for scheduling the queues in the first port.
  • In this embodiment, when the first port carries no traffic or a queue deletion instruction for the first port is detected, the restoring module 90 restores the static cache of each queue in the first port based on the port configuration information, so that the original static cache configuration can be recovered when the queues in the first port no longer need to be scheduled according to WRR.
  • FIG. 10 is a functional block diagram of a cache allocating device of the present application in a fifth embodiment thereof.
  • a fifth embodiment of the cache allocation apparatus of the present application is proposed based on the first embodiment.
  • the cache allocation apparatus further includes:
  • The first modifying module 100 is configured to modify the state of the second port to a cache-modification-prohibited state after the first calculating module calculates the first caches to be allocated to the queues in the first port and before the setting module allocates the static caches to the queues in the first port according to the first caches.
  • The second port is a port of the switch other than the first port.
  • The first modifying module 100 modifies the state of the second port to the cache-modification-prohibited state before the static caches of the queues in the first port are modified, which prevents the queues of the ports other than the first port from modifying their corresponding caches.
  • the second modification module 110 is configured to modify the state of the second port to a cache modifiable state after the setting module allocates a static cache for each queue in the first port according to each of the first caches.
  • the state of the second port is modified to a cache modifiable state, that is, the cache of the second port can be modified according to requirements.
  • In this embodiment, the first modifying module 100 changes the state of the second port to the cache-modification-prohibited state before the static caches of the queues in the first port are modified, which prevents cache modifications on other ports from affecting the first port. After the static caches of the queues in the first port have been modified, the second modifying module 110 changes the state of the second port back to the cache-modifiable state so that the cache of the second port can again be modified as required, which improves the accuracy of WRR scheduling.
  • each module/unit in the above embodiment may be implemented in the form of hardware, for example, by implementing an integrated circuit to implement its corresponding function, or may be implemented in the form of a software function module, for example, executing a program stored in the memory by a processor. / instruction to achieve its corresponding function.
  • Embodiments of the invention are not limited to any specific form of combination of hardware and software.
  • In the present application, when first configuration information for scheduling queues according to WRR is detected, the first cache allocated to each queue in the first port is calculated based on the acquired ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum unit in which the switch to which the first port belongs forwards packets, and a static cache is then allocated to each queue in the first port according to each first cache.
  • The static cache of each queue in the first port is thus set according to the scale factor of each queue corresponding to the WRR instruction, so that both the higher-priority queues and the lower-priority queues occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR. This makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the performance of the switch.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Disclosed are a cache allocation method and device. The method includes: when first configuration information for scheduling queues according to the weighted round robin (WRR) algorithm is detected, acquiring the ratio between the queues in a first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets; calculating, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port; and allocating a static cache to each queue in the first port according to each first cache. Both higher-priority queues and lower-priority queues can thereby occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR, which makes the allocation of the switch cache resources more reasonable and improves the accuracy of WRR queue scheduling and the efficiency of the switch.

Description

Cache allocation method and device
Technical Field
The present application relates to, but is not limited to, the field of switch technology, and in particular to a cache allocation method and device.
Background
A switch contains a certain amount of cache. After a packet enters through the switch ingress, it is stored in the switch cache and queued at the egress port; if the buffered packets queued at an egress port reach a certain amount, packets newly added to that queue will be discarded.
QoS (Quality of Service) is mainly used to solve problems such as network delay and congestion and to guarantee the quality of network services. WRR (Weighted Round Robin) is an important QoS queue scheduling technique: when the queues of an egress port are congested, more bandwidth resources are configured for higher-priority queues and fewer bandwidth resources for lower-priority queues, so that the buffered packets of every queue may be transferred out in a weighted round-robin manner.
However, when the WRR scheduling technique is used to schedule queues, the ports and queues obtain the cache fairly, so that a higher-priority queue may not be able to obtain a sufficient cache while lower-priority queues occupy the idle cache. This reduces the efficiency with which the switch schedules queues and, in severe cases, causes WRR scheduling to lose accuracy.
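To make the weighted polling concrete, the following minimal Python sketch shows how a WRR scheduler drains queues according to their weights. The queue identifiers, weights and packet contents are hypothetical, and the sketch deliberately ignores packet sizes and congestion handling; it only illustrates the scheduling order described above.

```python
from collections import deque

def wrr_schedule(queues, weights, rounds=3):
    """Drain packets from queues in weighted round-robin order.

    queues:  dict mapping queue id -> deque of buffered packets
    weights: dict mapping queue id -> WRR weight (packets sent per round)
    """
    sent = []
    for _ in range(rounds):
        for qid, weight in weights.items():
            # A higher weight lets a queue send more packets per round.
            for _ in range(weight):
                if queues[qid]:
                    sent.append((qid, queues[qid].popleft()))
    return sent

# Hypothetical example: three egress queues with weights 4:2:1.
queues = {7: deque(f"pkt7-{i}" for i in range(10)),
          6: deque(f"pkt6-{i}" for i in range(10)),
          5: deque(f"pkt5-{i}" for i in range(10))}
print(wrr_schedule(queues, {7: 4, 6: 2, 5: 1}))
```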
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.
The present application provides a cache allocation method and device, aiming to solve the technical problem of low efficiency and accuracy when the WRR scheduling technique is used for queue scheduling.
To achieve the above objective, the present application provides a cache allocation method, which includes the following steps:
when first configuration information for scheduling queues according to the weighted round robin (WRR) algorithm is detected, acquiring the ratio between the queues in a first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets;
calculating, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port; and
allocating a static cache to each queue in the first port according to each first cache.
Optionally, after the step of allocating a static cache to each queue in the first port, the cache allocation method further includes:
acquiring the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch;
calculating a second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache; and
allocating the second cache to each queue in the first port.
Optionally, after the step of setting the first cache as the static cache of each queue in the first port, the cache allocation method further includes:
modifying the dynamic cache threshold of each queue in the first port according to the ratio.
Optionally, before the step of acquiring the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, the cache allocation method further includes:
storing the current port configuration information of each port in the switch, where the port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port; and
after the step of allocating a static cache to each queue in the first port, the cache allocation method further includes:
when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port, restoring, based on the port configuration information, the size of the static cache allocated to each queue in the first port to its initial value.
Optionally, after the step of calculating, according to the ratio, the size of the to-be-sent packet and the minimum unit, the first cache to be allocated to each queue in the first port, and before the step of allocating a static cache to each queue in the first port according to each first cache, the cache allocation method further includes:
modifying the state of a second port to a cache-modification-prohibited state, where the second port is a port of the switch other than the first port; and
after the step of allocating a static cache to each queue in the first port according to each first cache, the cache allocation method further includes:
modifying the state of the second port to a cache-modifiable state.
The present application further provides a computer readable storage medium storing computer executable instructions which, when executed, implement the above method.
In addition, to achieve the above objective, the present application further provides a cache allocation device, which includes:
a first obtaining module, configured to, when first configuration information for scheduling queues according to the weighted round robin (WRR) algorithm is detected, acquire the ratio between the queues in a first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets;
a first calculating module, configured to calculate, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port; and
a setting module, configured to allocate a static cache to each queue in the first port according to each first cache.
Optionally, the cache allocation device further includes:
a second obtaining module, configured to acquire, after the setting module allocates a static cache to each queue in the first port, the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch;
a second calculating module, configured to calculate a second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache; and
an adding module, configured to allocate the second cache to each queue in the first port respectively.
Optionally, the cache allocation device further includes:
a modifying module, configured to modify, after the setting module allocates a static cache to each queue in the first port, the dynamic cache threshold of each queue in the first port according to the ratio.
Optionally, the cache allocation device further includes:
a storage module, configured to store the current port configuration information of each port in the switch before the first obtaining module acquires the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, where the port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port; and
a restoring module, configured to, after the setting module allocates the static caches to the queues in the first port, restore the size of the static cache allocated to each queue in the first port to its initial value based on the port configuration information when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port.
Optionally, the cache allocation device further includes:
a first modifying module, configured to modify the state of a second port to a cache-modification-prohibited state after the first calculating module calculates the first caches to be allocated to the queues in the first port and before the setting module allocates the static caches to the queues in the first port according to the first caches, where the second port is a port of the switch other than the first port; and
a second modifying module, configured to modify the state of the second port to a cache-modifiable state after the setting module allocates the static caches to the queues in the first port according to the first caches.
In the present application, when first configuration information for scheduling queues according to WRR is detected, the first cache allocated to each queue in the first port is calculated based on the acquired ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, and a static cache is then allocated to each queue in the first port according to each first cache. The static cache of each queue in the first port is thereby set according to the scale factor of each queue corresponding to the WRR instruction, so that both the higher-priority queues and the lower-priority queues can occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR, which makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the efficiency of the switch.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a flowchart of the cache allocation method of the present application in its first embodiment;
FIG. 2 is a flowchart of the cache allocation method of the present application in its second embodiment;
FIG. 3 is a flowchart of the cache allocation method of the present application in its third embodiment;
FIG. 4 is a flowchart of the cache allocation method of the present application in its fourth embodiment;
FIG. 5 is a flowchart of the cache allocation method of the present application in its fifth embodiment;
FIG. 6 is a functional block diagram of the cache allocation device of the present application in its first embodiment;
FIG. 7 is a functional block diagram of the cache allocation device of the present application in its second embodiment;
FIG. 8 is a functional block diagram of the cache allocation device of the present application in its third embodiment;
FIG. 9 is a functional block diagram of the cache allocation device of the present application in its fourth embodiment;
FIG. 10 is a functional block diagram of the cache allocation device of the present application in its fifth embodiment.
The realization of the objectives, the functional features and the advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Preferred Embodiments of the Invention
It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.
The present application provides a cache allocation method.
Referring to FIG. 1, FIG. 1 is a flowchart of the cache allocation method of the present application in its first embodiment.
In this embodiment, the cache allocation method includes:
Step S10: when first configuration information for scheduling queues according to WRR is detected, acquire the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets.
The ratio corresponding to the first configuration information refers to the ratio of bandwidth or traffic between the queues of the first port carried in the first configuration information for WRR queue scheduling, and the size of the to-be-sent packet with the largest data traffic in the first port refers to the packet size of the to-be-sent packet with the largest data traffic among all the queues of the first port. The minimum cache unit occupied by the switch when forwarding packets refers to the minimum unit in which the switch currently caches store-and-forward packets, that is, the smallest cache the switch needs to occupy when forwarding a packet. The first configuration information for WRR queue scheduling may be triggered when the port configuration information of the switch changes or when a new queue is added to a port of the switch.
Step S20: calculate, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port.
The first cache Q of each queue in the first port is calculated from the scale factor W, the packet size P and the minimum unit C in which the cache stores and forwards packets as:
Q = P / C * W.
For example, the first port contains three queues 7, 6 and 5 whose configured WRR ratio is 40:4:2, the current packet size of the first port is 1024 bytes, and the current minimum unit C (a CELL) in which the cache stores and forwards packets is 208 bytes. According to the formula, the cache parameter of queue 7 is Q7 = (1024 / 208) * 40; the cache parameters of the other two queues are likewise calculated with this formula, and the unit of Q7 is C.
Step S30: allocate a static cache to each queue in the first port according to each first cache.
The first cache of each queue in the first port calculated in step S20 is set as the static cache of the corresponding queue in the first port, and the size of the static cache of each queue in the first port is Q*C.
In this embodiment, when first configuration information for scheduling queues according to WRR is detected, the first cache allocated to each queue in the first port is calculated based on the acquired ratio, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache occupancy unit, and a static cache is then allocated to each queue in the first port according to each first cache. The static cache of each queue in the first port is thereby set according to the scale factor of each queue corresponding to the WRR instruction, so that both the higher-priority queues and the lower-priority queues can occupy the cache resources corresponding to the scale factors of the queues in the first port scheduled by WRR, which makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the performance of the switch.
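The per-queue calculation of this embodiment can be sketched as follows. This is only an illustration of the published formula Q = P / C * W using the example figures above; rounding up to whole cells is an assumption made here, since the text does not specify how fractional cells are handled.

```python
import math

def first_cache_cells(packet_size_bytes, cell_bytes, weight):
    """First cache Q of one queue, in cells: Q = P / C * W.

    The text leaves rounding unspecified; rounding up to whole cells is an
    assumption made here so that a complete packet always fits.
    """
    return math.ceil(packet_size_bytes / cell_bytes * weight)

# Figures from the example: queues 7, 6, 5 with WRR ratio 40:4:2,
# packet size P = 1024 bytes, cell size C = 208 bytes.
P, C = 1024, 208
weights = {7: 40, 6: 4, 5: 2}

first_caches = {q: first_cache_cells(P, C, w) for q, w in weights.items()}
print(first_caches)                                         # cells per queue, queue 7 -> 197
print({q: cells * C for q, cells in first_caches.items()})  # static cache size Q*C in bytes
```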
Referring to FIG. 2, FIG. 2 is a flowchart of the cache allocation method of the present application in its second embodiment.
A second embodiment of the cache allocation method of the present application is proposed based on the first embodiment. In this embodiment, after step S30, the cache allocation method further includes:
Step S40: acquire the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch.
A port with no traffic means that there is no queue in the port or that the queues in the port are not being scheduled; the currently available dynamic cache refers to the dynamic cache of the switch that is currently not occupied.
Step S50: calculate a second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache.
The dynamic cache A allocated to each port is calculated from the number B of ports currently carrying no traffic and the dynamic cache size D as:
A = D / B.
Step S60: allocate the second cache to each queue in the first port respectively.
The dynamic cache corresponding to the second cache calculated in step S50 is allocated to each queue in the first port that currently carries no traffic. For example, if the original static cache of the first port is E, the static cache of the first port after the addition is A+E.
In this embodiment, the second cache is obtained by calculation from the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch, and the second cache is added to the static cache of the first port. The size of the static cache of the first port is thereby increased according to the current number of no-traffic ports and the currently available dynamic cache, which allows the first port to schedule its queues and improves the accuracy of WRR scheduling and the performance of the switch.
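A minimal sketch of steps S40 to S60, assuming the number of no-traffic ports and the free dynamic cache are already known; the data structures and figures are hypothetical and only illustrate the arithmetic A = D / B and the addition A + E.

```python
def second_cache(idle_port_count, available_dynamic_cache):
    """Second cache A = D / B; integer division is an assumption made here."""
    if idle_port_count == 0:
        return 0
    return available_dynamic_cache // idle_port_count

def add_second_cache(static_cache_by_queue, second):
    """Add the second cache on top of each queue's existing static cache (E -> A + E)."""
    return {q: e + second for q, e in static_cache_by_queue.items()}

# Hypothetical figures: B = 4 ports with no traffic, D = 40960 bytes of free dynamic cache.
A = second_cache(4, 40960)                      # 10240 bytes
print(add_second_cache({7: 40976, 6: 4160, 5: 2080}, A))
```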
Referring to FIG. 3, FIG. 3 is a flowchart of the cache allocation method of the present application in its third embodiment.
A third embodiment of the cache allocation method of the present application is proposed based on the first embodiment. In this embodiment, after step S30, the cache allocation method further includes:
Step S70: modify the dynamic cache threshold of each queue in the first port according to the ratio.
At initialization the switch allocates, to each port and to each queue in each port, a certain amount of dynamic cache and a threshold that can be used to obtain the remaining dynamic cache corresponding to the port or queue. The product of a port's dynamic cache and that port's threshold represents the maximum idle dynamic cache of the port, and the product of a queue's dynamic cache and that queue's threshold represents the maximum idle dynamic cache of the queue.
In this embodiment, the dynamic cache threshold of each queue in the first port is modified based on the scale factor, so that the dynamic cache of each queue of the first port can meet the scheduling requirement, which improves the accuracy of WRR scheduling.
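The relationship between dynamic cache, threshold and maximum idle dynamic cache can be illustrated as below. The mapping of the WRR weights onto thresholds is an assumption made purely for illustration, since the text does not fix a concrete mapping.

```python
def max_idle_dynamic_cache(dynamic_cache, threshold):
    """Maximum idle dynamic cache = dynamic cache * threshold."""
    return dynamic_cache * threshold

def thresholds_from_ratio(weights):
    """Hypothetical mapping: scale each queue's WRR weight to a threshold in (0, 1]."""
    largest = max(weights.values())
    return {q: w / largest for q, w in weights.items()}

weights = {7: 40, 6: 4, 5: 2}
thresholds = thresholds_from_ratio(weights)            # {7: 1.0, 6: 0.1, 5: 0.05}
queue_dynamic_cache = 20800                            # bytes, hypothetical
print({q: max_idle_dynamic_cache(queue_dynamic_cache, t) for q, t in thresholds.items()})
```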
Referring to FIG. 4, FIG. 4 is a flowchart of the cache allocation method of the present application in its fourth embodiment.
A fourth embodiment of the cache allocation method of the present application is proposed based on the first embodiment. In this embodiment, before step S10, the cache allocation method further includes:
Step S80: store the current port configuration information of each port in the switch.
The port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port.
After step S30, the cache allocation method further includes:
Step S90: when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port, restore the size of the static cache allocated to each queue in the first port to its initial value based on the port configuration information.
When the first port carries no traffic and performs no queue scheduling, or when second configuration information for WRR queue scheduling is detected and that information does not carry the ratio of the queues in the first port, the sizes of the static cache and the dynamic cache of the first port, and of each queue in the first port, are set according to the stored configuration information. The second configuration information is WRR queue scheduling configuration information set by the user when WRR queue scheduling is no longer needed for scheduling the queues in the first port.
In this embodiment, when the first port carries no traffic, or when second configuration information for WRR queue scheduling is detected and that information does not carry the scale factors of the queues in the first port, the static cache of each queue in the first port is restored based on the port configuration information, so that the original static cache configuration can be recovered when the queues in the first port no longer need to be scheduled according to WRR.
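As a sketch of the store-and-restore behaviour of steps S80 and S90, assuming a simple in-memory snapshot of the per-port and per-queue cache settings; a real switch would read and write these values through its chip SDK or registers.

```python
import copy

class PortCacheConfig:
    """Hypothetical container for one port's cache settings."""
    def __init__(self, static_cache, dynamic_cache, queue_static, queue_dynamic):
        self.static_cache = static_cache      # port static cache
        self.dynamic_cache = dynamic_cache    # port dynamic cache
        self.queue_static = queue_static      # {queue id: static cache}
        self.queue_dynamic = queue_dynamic    # {queue id: dynamic cache}

_snapshots = {}

def store_port_config(port_id, config):
    """Step S80: remember the current configuration before WRR changes it."""
    _snapshots[port_id] = copy.deepcopy(config)

def restore_port_config(port_id, live_config):
    """Step S90: put every cache size back to its stored initial value."""
    saved = _snapshots[port_id]
    live_config.static_cache = saved.static_cache
    live_config.dynamic_cache = saved.dynamic_cache
    live_config.queue_static = dict(saved.queue_static)
    live_config.queue_dynamic = dict(saved.queue_dynamic)
```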
Referring to FIG. 5, FIG. 5 is a flowchart of the cache allocation method of the present application in its fifth embodiment.
A fifth embodiment of the cache allocation method of the present application is proposed based on the foregoing embodiments. In this embodiment, between step S20 and step S30, the cache allocation method further includes:
Step S100: modify the state of the second port to a cache-modification-prohibited state, where the second port is a port of the switch other than the first port.
Before the static caches of the queues in the first port are modified, the queues of the ports other than the first port are prohibited from modifying their corresponding caches.
After the step of allocating a static cache to each queue in the first port according to each first cache, the cache allocation method further includes:
Step S110: modify the state of the second port to a cache-modifiable state.
After the static caches of the queues in the first port are set, the state of the second port is modified to the cache-modifiable state, that is, the cache of the second port can be modified as required.
In this embodiment, before the static caches of the queues in the first port are modified, the state of the second port is changed to the cache-modification-prohibited state and the static caches of the queues in the first port are then modified, which prevents cache modifications on other ports from affecting the first port; after the static caches of the queues in the first port have been modified, the state of the second port is changed back to the cache-modifiable state so that the cache of the second port can again be modified as required, which improves the accuracy of WRR scheduling.
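The guard sequence of steps S100 and S110 around the static-cache modification can be sketched as follows. The function names are hypothetical, and a context manager is only one way to express "prohibit, modify, then re-allow".

```python
from contextlib import contextmanager

@contextmanager
def other_ports_cache_locked(switch, first_port):
    """Hold every port except first_port in the cache-modification-prohibited
    state, and restore the cache-modifiable state afterwards (steps S100/S110)."""
    second_ports = [p for p in switch.ports if p != first_port]
    for p in second_ports:
        switch.set_cache_state(p, "prohibited")
    try:
        yield
    finally:
        for p in second_ports:
            switch.set_cache_state(p, "modifiable")

# Usage sketch (switch, first_port and allocate_static_caches are assumed objects):
# with other_ports_cache_locked(switch, first_port):
#     allocate_static_caches(switch, first_port, first_caches)
```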
Embodiments of the present invention further provide a computer readable storage medium storing computer executable instructions which, when executed, implement the above method.
The present application further provides a cache allocation device.
Referring to FIG. 6, FIG. 6 is a block diagram of the cache allocation device of the present application in its first embodiment.
In this embodiment, the cache allocation device includes:
a first obtaining module 10, configured to, when first configuration information for scheduling queues according to WRR is detected, acquire the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port, and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets.
The ratio corresponding to the first configuration information refers to the ratio of bandwidth or traffic between the queues of the first port carried in the first configuration information for WRR queue scheduling, and the size of the to-be-sent packet with the largest data traffic in the first port refers to the packet size of the to-be-sent packet with the largest data traffic among all the queues of the first port. The minimum unit in which the switch forwards packets refers to the minimum unit in which the switch currently caches store-and-forward packets, that is, the smallest cache the switch needs to occupy when forwarding a packet. The first configuration information for WRR queue scheduling may be triggered when the port configuration information of the switch changes or when a new queue is added to a port of the switch.
a first calculating module 20, configured to calculate, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port.
The first calculating module 20 calculates the first cache Q of each queue in the first port from the scale factor W, the packet size P and the minimum unit C in which the cache stores and forwards packets as:
Q = P / C * W.
For example, the first port contains three queues 7, 6 and 5 whose configured WRR ratio is 40:4:2, the current packet size of the first port is 1024 bytes, and the current minimum unit C (a CELL) in which the cache stores and forwards packets is 208 bytes. According to the formula, the cache parameter of queue 7 is Q7 = (1024 / 208) * 40; the cache parameters of the other two queues are likewise calculated with this formula, and the unit of Q7 is C.
a setting module 30, configured to allocate a static cache to each queue in the first port according to each first cache.
The setting module 30 sets the first cache of each queue in the first port calculated by the first calculating module 20 as the static cache of the corresponding queue in the first port, and the size of the static cache of each queue in the first port is Q*C.
In this embodiment, when first configuration information for scheduling queues according to WRR is detected, the first calculating module 20 calculates the first cache allocated to each queue in the first port based on the ratio, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum unit acquired by the first obtaining module 10, and the setting module 30 then allocates a static cache to each queue in the first port according to each first cache. The static cache of each queue in the first port is thereby set according to the scale factor of each queue corresponding to the WRR instruction, so that both the higher-priority queues and the lower-priority queues can occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR, which makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the performance of the switch.
Referring to FIG. 7, FIG. 7 is a functional block diagram of the cache allocation device of the present application in its second embodiment.
A second embodiment of the cache allocation device of the present application is proposed based on the first embodiment. In this embodiment, the cache allocation device further includes:
a second obtaining module 40, configured to acquire, after the setting module allocates a static cache to each queue in the first port, the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch.
A port with no traffic means that there is no queue in the port or that the queues in the port are not being scheduled; the currently available dynamic cache refers to the dynamic cache of the switch that is currently not occupied.
a second calculating module 50, configured to calculate the second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache.
The second cache A is calculated from the number B and the dynamic cache size D as:
A = D / B.
an adding module 60, configured to allocate the second cache to the queues in the first port respectively.
The adding module 60 adds the dynamic cache corresponding to the second cache calculated by the second calculating module 50 to the static cache of the first port. For example, if the original static cache of the first port is E, the static cache of the first port after the addition is A+E.
In this embodiment, the second calculating module 50 obtains the second cache by calculation from the number of ports of the switch that currently carry no traffic and the dynamic cache currently available to the switch, and the adding module 60 then adds the second cache to the static cache of the first port. The size of the static cache of the first port is thereby increased according to the number of ports currently carrying no traffic and the dynamic cache currently available to the switch, which allows the queues of the first port to be scheduled and improves the accuracy of WRR scheduling and the performance of the switch.
Referring to FIG. 8, FIG. 8 is a functional block diagram of the cache allocation device of the present application in its third embodiment.
A third embodiment of the cache allocation device of the present application is proposed based on the first embodiment. In this embodiment, the cache allocation device further includes:
a modifying module 70, configured to modify, after the setting module allocates a static cache to each queue in the first port, the dynamic cache threshold of each queue in the first port according to the ratio.
At initialization the switch allocates, to each port and to each queue in each port, a certain amount of dynamic cache and a threshold that can be used to obtain the remaining dynamic cache corresponding to the port or queue. The product of a port's dynamic cache and that port's threshold represents the maximum idle dynamic cache of the port, and the product of a queue's dynamic cache and that queue's threshold represents the maximum idle dynamic cache of the queue.
In this embodiment, the modifying module 70 modifies the dynamic cache threshold of each queue in the first port based on the scale factor, so that the dynamic cache of each queue of the first port can meet the scheduling requirement, which improves the accuracy of WRR scheduling.
Referring to FIG. 9, FIG. 9 is a functional block diagram of the cache allocation device of the present application in its fourth embodiment.
A fourth embodiment of the cache allocation device of the present application is proposed based on the first embodiment. In this embodiment, the cache allocation device further includes:
a storage module 80, configured to store the current port configuration information of each port in the switch before the first obtaining module acquires the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum unit in which the switch to which the first port belongs forwards packets, where the port configuration information includes the dynamic cache information and static cache information of each port in the switch and the dynamic cache information and static cache information of each queue in the port; and
a restoring module 90, configured to, after the setting module allocates the static caches to the queues in the first port, restore the size of the static cache allocated to each queue in the first port to its initial value based on the port configuration information when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port.
When the first port carries no traffic and performs no queue scheduling, or when second configuration information for WRR queue scheduling is detected and that information does not carry the ratio of the queues in the first port, the restoring module 90 sets the sizes of the static cache and the dynamic cache of the first port, and of each queue in the first port, according to the port configuration information. The second configuration information is WRR queue scheduling configuration information set by the user when WRR queue scheduling is no longer needed for scheduling the queues in the first port.
In this embodiment, when the first port carries no traffic or a queue deletion instruction for the first port is detected, the restoring module 90 restores the static cache of each queue in the first port based on the port configuration information, so that the original static cache configuration can be recovered when the queues in the first port no longer need to be scheduled according to WRR.
Referring to FIG. 10, FIG. 10 is a functional block diagram of the cache allocation device of the present application in its fifth embodiment.
A fifth embodiment of the cache allocation device of the present application is proposed based on the first embodiment. In this embodiment, the cache allocation device further includes:
a first modifying module 100, configured to modify the state of the second port to a cache-modification-prohibited state after the first calculating module calculates the first caches to be allocated to the queues in the first port and before the setting module allocates the static caches to the queues in the first port according to the first caches, where the second port is a port of the switch other than the first port.
Before the static caches of the queues in the first port are modified, the first modifying module 100 modifies the state of the second port to the cache-modification-prohibited state, which prevents the queues of the ports other than the first port from modifying their corresponding caches.
a second modifying module 110, configured to modify the state of the second port to a cache-modifiable state after the setting module allocates the static caches to the queues in the first port according to the first caches.
After the static caches of the queues in the first port are set, the state of the second port is modified to the cache-modifiable state, that is, the cache of the second port can be modified as required.
In this embodiment, before the static caches of the queues in the first port are modified, the first modifying module 100 changes the state of the second port to the cache-modification-prohibited state and the static caches of the queues in the first port are then modified, which prevents cache modifications on other ports from affecting the first port; after the static caches of the queues in the first port have been modified, the second modifying module 110 changes the state of the second port back to the cache-modifiable state so that the cache of the second port can again be modified as required, which improves the accuracy of WRR scheduling.
Those of ordinary skill in the art can understand that all or some of the steps of the above method may be implemented by a program instructing relevant hardware (for example, a processor), and the program may be stored in a computer readable storage medium such as a read-only memory, a magnetic disk or an optical disc. Optionally, all or some of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, for example by an integrated circuit that realizes its corresponding function, or in the form of a software functional module, for example by a processor executing a program/instructions stored in a memory to realize its corresponding function. The embodiments of the present invention are not limited to any specific form of combination of hardware and software.
The above are only optional embodiments of the present invention and are not intended to limit the patent scope of the present application. Any equivalent structural or flow transformation made using the contents of the description and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included in the scope of patent protection of the present application.
Industrial Applicability
In the present application, when first configuration information for scheduling queues according to WRR is detected, the first cache allocated to each queue in the first port is calculated based on the acquired ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum unit in which the switch to which the first port belongs forwards packets, and a static cache is then allocated to each queue in the first port according to each first cache. The static cache of each queue in the first port is thereby set according to the scale factor of each queue corresponding to the WRR instruction, so that both the higher-priority queues and the lower-priority queues can occupy the cache resources corresponding to the ratio of the queues in the first port scheduled by WRR, which makes the allocation of the switch cache resources more reasonable, reduces the packet loss probability of the high-priority queues, and improves the accuracy of WRR queue scheduling and the performance of the switch.

Claims (10)

  1. A cache allocation method, comprising:
    when first configuration information for scheduling queues according to a weighted round robin (WRR) algorithm is detected, acquiring a ratio between queues in a first port corresponding to the first configuration information, a size of a to-be-sent packet with the largest data traffic in the first port, and a minimum cache unit occupied by a switch to which the first port belongs when forwarding packets;
    calculating, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port; and
    allocating a static cache to each queue in the first port according to each first cache.
  2. The cache allocation method according to claim 1, wherein after the step of allocating a static cache to each queue in the first port, the cache allocation method further comprises:
    acquiring the number of ports of the switch that currently carry no traffic and a dynamic cache currently available to the switch;
    calculating a second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache; and
    allocating the second cache to each queue in the first port.
  3. The cache allocation method according to claim 1, wherein after the step of allocating a static cache to each queue in the first port, the cache allocation method further comprises:
    modifying a dynamic cache threshold of each queue in the first port according to the ratio.
  4. The cache allocation method according to claim 1, wherein before the step of acquiring the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, the cache allocation method further comprises:
    storing current port configuration information of each port in the switch, wherein the port configuration information comprises dynamic cache information and static cache information of each port in the switch and dynamic cache information and static cache information of each queue in the port; and
    after the step of allocating a static cache to each queue in the first port, the cache allocation method further comprises:
    when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port, restoring, based on the port configuration information, the size of the static cache allocated to each queue in the first port to an initial value.
  5. The cache allocation method according to any one of claims 1 to 4, wherein after the step of calculating, according to the ratio, the size of the to-be-sent packet and the minimum unit, the first cache to be allocated to each queue in the first port, and before the step of allocating a static cache to each queue in the first port according to each first cache, the cache allocation method further comprises:
    modifying a state of a second port to a cache-modification-prohibited state, wherein the second port is a port of the switch other than the first port; and
    after the step of allocating a static cache to each queue in the first port according to each first cache, the cache allocation method further comprises:
    modifying the state of the second port to a cache-modifiable state.
  6. A cache allocation device, comprising:
    a first obtaining module, configured to, when first configuration information for scheduling queues according to a weighted round robin (WRR) algorithm is detected, acquire a ratio between queues in a first port corresponding to the first configuration information, a size of a to-be-sent packet with the largest data traffic in the first port, and a minimum cache unit occupied by a switch to which the first port belongs when forwarding packets;
    a first calculating module, configured to calculate, according to the ratio, the size of the to-be-sent packet and the minimum unit, a first cache to be allocated to each queue in the first port; and
    a setting module, configured to allocate a static cache to each queue in the first port according to each first cache.
  7. The cache allocation device according to claim 6, further comprising:
    a second obtaining module, configured to acquire, after the setting module allocates a static cache to each queue in the first port, the number of ports of the switch that currently carry no traffic and a dynamic cache currently available to the switch;
    a second calculating module, configured to calculate a second cache according to the number of ports currently carrying no traffic and the currently available dynamic cache; and
    an adding module, configured to allocate the second cache to each queue in the first port respectively.
  8. The cache allocation device according to claim 6, further comprising:
    a modifying module, configured to modify, after the setting module allocates a static cache to each queue in the first port, a dynamic cache threshold of each queue in the first port according to the ratio.
  9. The cache allocation device according to claim 6, further comprising:
    a storage module, configured to store current port configuration information of each port in the switch before the first obtaining module acquires the ratio between the queues in the first port corresponding to the first configuration information, the size of the to-be-sent packet with the largest data traffic in the first port and the minimum cache unit occupied by the switch to which the first port belongs when forwarding packets, wherein the port configuration information comprises dynamic cache information and static cache information of each port in the switch and dynamic cache information and static cache information of each queue in the port; and
    a restoring module, configured to, after the setting module allocates the static caches to the queues in the first port, restore the size of the static cache allocated to each queue in the first port to an initial value based on the port configuration information when the first port carries no traffic, or when second configuration information for scheduling queues according to WRR is detected and the second configuration information does not carry the ratio between the queues in the first port.
  10. The cache allocation device according to any one of claims 6 to 9, further comprising:
    a first modifying module, configured to modify a state of a second port to a cache-modification-prohibited state after the first calculating module calculates the first caches to be allocated to the queues in the first port and before the setting module allocates the static caches to the queues in the first port according to the first caches, wherein the second port is a port of the switch other than the first port; and
    a second modifying module, configured to modify the state of the second port to a cache-modifiable state after the setting module allocates the static caches to the queues in the first port according to the first caches.
PCT/CN2016/087476 2015-06-30 2016-06-28 Cache allocation method and device WO2017000872A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510374803.8 2015-06-30
CN201510374803.8A CN106330765B (zh) 2015-06-30 2015-06-30 Cache allocation method and device

Publications (1)

Publication Number Publication Date
WO2017000872A1 true WO2017000872A1 (zh) 2017-01-05

Family

ID=57607701

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087476 WO2017000872A1 (zh) 2015-06-30 2016-06-28 Cache allocation method and device

Country Status (2)

Country Link
CN (1) CN106330765B (zh)
WO (1) WO2017000872A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109428827A (zh) * 2017-08-21 2019-03-05 深圳市中兴微电子技术有限公司 Traffic-adaptive cache allocation device and method, and ONU device
CN113556296A (zh) * 2021-05-27 2021-10-26 阿里巴巴新加坡控股有限公司 Scheduling method and apparatus, electronic device and storage medium
TWI748613B (zh) * 2020-08-27 2021-12-01 瑞昱半導體股份有限公司 Switch
CN113872881A (zh) * 2020-06-30 2021-12-31 华为技术有限公司 Queue information processing method and device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109428829B (zh) * 2017-08-24 2023-04-07 中兴通讯股份有限公司 Multi-queue cache management method and device, and storage medium
CN112996040B (zh) * 2019-12-02 2023-08-18 中国移动通信有限公司研究院 Buffer status reporting and resource configuration method and device, terminal and network-side device
CN114095513B (zh) * 2021-11-26 2024-03-29 苏州盛科科技有限公司 Method for scheduling forwarded traffic and mirrored traffic in limited-bandwidth scenarios, and application thereof
CN116796677B (zh) * 2023-08-24 2023-11-17 珠海星云智联科技有限公司 Verification method, system, device and medium for a weighted round robin module

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1881937A (zh) * 2005-05-02 2006-12-20 美国博通公司 Method and device for dynamically allocating storage space to multiple queues
US7408946B2 (en) * 2004-05-03 2008-08-05 Lucent Technologies Inc. Systems and methods for smooth and efficient round-robin scheduling
CN102916903A (zh) * 2012-10-25 2013-02-06 华为技术有限公司 Cache adjustment method and device
CN103414655A (zh) * 2013-08-27 2013-11-27 中国电子科技集团公司第二十八研究所 XCP bandwidth reservation method in a heterogeneous network environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7408946B2 (en) * 2004-05-03 2008-08-05 Lucent Technologies Inc. Systems and methods for smooth and efficient round-robin scheduling
CN1881937A (zh) * 2005-05-02 2006-12-20 美国博通公司 Method and device for dynamically allocating storage space to multiple queues
CN102916903A (zh) * 2012-10-25 2013-02-06 华为技术有限公司 Cache adjustment method and device
CN103414655A (zh) * 2013-08-27 2013-11-27 中国电子科技集团公司第二十八研究所 XCP bandwidth reservation method in a heterogeneous network environment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109428827A (zh) * 2017-08-21 2019-03-05 深圳市中兴微电子技术有限公司 Traffic-adaptive cache allocation device and method, and ONU device
CN109428827B (zh) * 2017-08-21 2022-05-13 深圳市中兴微电子技术有限公司 Traffic-adaptive cache allocation device and method, and ONU device
CN113872881A (zh) * 2020-06-30 2021-12-31 华为技术有限公司 Queue information processing method and device
TWI748613B (zh) * 2020-08-27 2021-12-01 瑞昱半導體股份有限公司 Switch
CN113556296A (zh) * 2021-05-27 2021-10-26 阿里巴巴新加坡控股有限公司 Scheduling method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN106330765B (zh) 2019-11-05
CN106330765A (zh) 2017-01-11

Similar Documents

Publication Publication Date Title
WO2017000872A1 (zh) Cache allocation method and device
CN109479032B (zh) Congestion avoidance in a network device
WO2017000673A1 (zh) Shared cache allocation method and device, and computer storage medium
US20230145162A1 (en) Queue protection using a shared global memory reserve
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US9813529B2 (en) Effective circuits in packet-switched networks
EP2466824A1 (en) Service scheduling method and device
CN107347039B (zh) Management method and device for a shared cache space
AU2012261696B2 (en) Wireless service access method and apparatus
US20070253439A1 (en) Method, device and system of scheduling data transport over a fabric
WO2017206587A1 (zh) Priority queue scheduling method and device
WO2020134425A1 (zh) Data processing method, apparatus, device and storage medium
US8457142B1 (en) Applying backpressure to a subset of nodes in a deficit weighted round robin scheduler
CN110830388B (zh) Data scheduling method and device, network device and computer storage medium
JP2007013462A (ja) Packet scheduler and packet scheduling method
US10951551B2 (en) Queue management method and apparatus
CN113973085A (zh) Congestion control method and device
EP2728812A1 (en) Method and device for leaky bucket speed-limitation
WO2016082603A1 (zh) Scheduler and dynamic multiplexing method for a scheduler
CN113032295A (zh) Two-level packet caching method, system and application
JP2020072336A (ja) Packet transfer device, method and program
CN112968845B (zh) Bandwidth management method, apparatus, device and machine-readable storage medium
CN109905331B (zh) Queue scheduling method and device, communication device and storage medium
WO2017032075A1 (zh) Quality of service multiplexing method and device, and computer storage medium
CN112671832A (zh) Forwarding task scheduling method and system for guaranteeing hierarchical delay in a virtual switch

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16817236

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16817236

Country of ref document: EP

Kind code of ref document: A1