WO2014173356A1 - Cache space allocation control method, apparatus, and computer storage medium - Google Patents


Info

Publication number
WO2014173356A1
WO2014173356A1 (PCT/CN2014/078751)
Authority
WO
WIPO (PCT)
Prior art keywords
cache space
shared
queue
allocated
queues
Prior art date
Application number
PCT/CN2014/078751
Other languages
English (en)
French (fr)
Inventor
陈杭洲
Original Assignee
中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2014173356A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • the present invention relates to the field of caching, and in particular to a cache space allocation control method, apparatus, and computer storage medium.

Background Art
  • the congestion avoidance mechanism widely used on the Internet is the RED (Random Early Discard) mechanism.
  • the key to the RED mechanism is how to use limited cache resources effectively and perform reasonable discards, so as to achieve congestion avoidance and keep the network running smoothly.
  • the usual cache allocation methods include: mode 1: allocating the cache according to the number of ports, the number of queues, and the priority of each queue; mode 2: combining a moving weighted average algorithm with other functions to implement a multi-queue shared cache.
  • dividing the cache in the above manners is relatively simple, but since the cache of each queue is pre-allocated and the allocation is fixed, it cannot be adjusted automatically according to the real-time network traffic of each queue; cache utilization is low and adaptability is lacking. Nor can a true shared cache be achieved, because after allocation the maximum cache each queue can use is only the portion allocated to itself.
  • the main purpose of the embodiments of the present invention is to provide a cache space allocation control method, apparatus, and computer storage medium, which are designed to adjust and allocate caches automatically according to network traffic, thereby improving the utilization of cache space.
  • the embodiment of the present invention provides a cache space allocation control method, where the method includes: analyzing whether there is a queue among the queues that needs to be allocated cache space to be shared; when there is such a queue, determining the number of queues that need to be allocated the cache space to be shared;
  • and allocating the cache space to be shared to the queues that need it, according to the determined number of queues.
  • the step of allocating the cache space to be shared according to the determined number of queues includes: reserving a cache space of a preset value in the cache space to be shared, using the remaining cache space as the cache space to be allocated, and allocating the cache space to be allocated to the queues that need the cache space to be shared according to the determined number.
  • the method further includes: allocating cache space to be exclusive to each queue;
  • the method further includes: analyzing whether the cache space to be exclusive allocated to a queue satisfies the execution requirements of the queue, and determining, when the allocated cache space to be exclusive does not satisfy the execution requirements of the queue, that the queue is a queue that needs to be allocated the cache space to be shared.
  • the method further includes: analyzing whether the shared cache space allocated to a queue that needed the cache space to be shared is occupied; when it is not occupied, the queue is determined to be a queue that no longer needs the cache space to be shared, the determined number of queues needing the cache space to be shared is reduced by one, and the cache space to be shared is reallocated to each remaining queue according to the decremented number.
  • the embodiment of the present invention further provides a cache space allocation control device, where the device includes: an analysis module, a processing module, and an allocation module;
  • the analysis module is configured to analyze whether there is a queue among the queues that needs to be allocated cache space to be shared, obtain a first analysis result, and send the first analysis result to the processing module;
  • the processing module is configured to determine, when the first analysis result sent by the analysis module indicates that there is a queue that needs to be allocated cache space to be shared, the number of queues that need to be allocated the cache space to be shared;
  • the allocation module is configured to allocate the cache space to be shared to the queues that need it, according to the number of queues determined by the processing module.
  • the processing module is further configured to reserve a cache space of a preset value in the cache space to be shared and use the remaining cache space as the cache space to be allocated; the allocation module is further configured to allocate the cache space to be allocated, obtained by the processing module, to the queues that need the cache space to be shared, according to the determined number of queues.
  • the allocation module is further configured to allocate cache space to be exclusive to each queue.
  • the analysis module is further configured to analyze whether the cache space to be exclusive allocated to a queue satisfies the execution requirements of the queue, obtain a second analysis result, and send the second analysis result to the processing module;
  • the processing module is further configured to determine, when the second analysis result sent by the analysis module indicates that the allocated cache space to be exclusive does not satisfy the execution requirements of the queue, that the queue is a queue that needs to be allocated the cache space to be shared.
  • the analysis module is further configured to analyze whether the shared cache space allocated to a queue that needs the cache space to be shared is occupied, obtain a third analysis result, and send the third analysis result to the processing module;
  • the processing module is further configured to determine, when the third analysis result sent by the analysis module indicates that the shared cache space allocated to such a queue is not occupied, that the queue no longer needs to be allocated the cache space to be shared, and to reduce the determined number of queues needing the cache space to be shared by one;
  • the allocation module is further configured to reallocate the cache space to be shared to the remaining queues that need it, according to the decremented number of queues provided by the processing module.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the cache space allocation control method according to the embodiment of the invention.
  • in the embodiments of the present invention, the cache space to be shared is allocated to the queues that need it according to the determined number of such queues, and the allocation follows changes in that number, so that a shared cache space is allocated to each queue that needs it. This realizes automatic adjustment and allocation of the cache space according to network traffic, saves hardware resources, facilitates hardware implementation, and improves the utilization of the cache space.
  • FIG. 1 is a schematic flowchart of a cache space allocation control method according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic flowchart of a cache space allocation control method according to Embodiment 2 of the present invention;
  • FIG. 3 is a schematic flowchart of a cache space allocation control method according to Embodiment 3 of the present invention;
  • FIG. 4 is a schematic flowchart of a cache space allocation control method according to Embodiment 4 of the present invention;
  • FIG. 5 is a schematic flowchart of a cache space allocation control method according to Embodiment 5 of the present invention;
  • FIG. 6 is a schematic diagram of the composition of a cache space allocation control device according to an embodiment of the present invention.

Detailed Description
  • FIG. 1 is a schematic flowchart of a cache space allocation control method according to Embodiment 1 of the present invention.
  • Step S11: Analyze whether there is a queue among the queues that needs to be allocated the cache space to be shared.
  • analyzing whether there is a queue that needs to be allocated the cache space to be shared means analyzing, in real time or periodically, whether each queue needs to be allocated the cache space to be shared.
  • Step S12: When there is a queue that needs to be allocated the cache space to be shared, determine the number of queues that need to be allocated the cache space to be shared.
  • when there are queues that need to be allocated the cache space to be shared, the number of such queues is determined; for example, when 3 queues are found to need the cache space to be shared, the number of queues that need to be allocated the cache space to be shared is 3.
  • Step S13: Allocate the cache space to be shared to the queues that need it, according to the determined number of queues that need to be allocated the cache space to be shared.
  • before this step, the method further includes: acquiring the size of the cache space to be shared, that is, the total size of the shared cache space of the multiple queues, which may be, for example, 100M, or any other preset shared cache space.
  • the acquired shared cache space of the multiple queues is then divided equally among the queues that need the cache space to be shared, according to the determined number of such queues.
  • for example, if the acquired shared cache space of the multiple queues is 100M and the determined number of queues that need the cache space to be shared is 4, the 100M shared cache space is divided equally among the 4 queues, so that each queue that needs the cache space to be shared obtains 25M;
  • alternatively, the cache space to be shared may be allocated to the queues that need it according to the priority order of the queues and/or a preset parameter corresponding to each priority.
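As a rough illustration only (the patent does not prescribe any implementation, and the function name and megabyte units here are hypothetical), the equal-division step described above can be sketched as:

```python
def allocate_shared(total_shared: int, needy_queues: list) -> dict:
    """Divide the shared cache pool equally among the queues that currently
    need shared cache space, as in the 100M / 4 queues = 25M example."""
    n = len(needy_queues)
    if n == 0:
        return {}
    share = total_shared // n  # equal division; a priority-weighted split is also possible
    return {q: share for q in needy_queues}

print(allocate_shared(100, ["A", "B", "C", "D"]))
# -> {'A': 25, 'B': 25, 'C': 25, 'D': 25}
```

A priority-ordered variant would instead weight each queue's share by the preset parameter corresponding to its priority.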
  • in this way, the cache space to be shared is allocated to the queues that need it according to the determined number of such queues, and the allocation follows changes in that number, so that a shared cache space is allocated to each queue that needs it. This realizes automatic adjustment and allocation of the cache space according to network traffic, saves hardware resources, facilitates hardware implementation, and improves cache space utilization.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the cache space allocation control method according to the embodiment of the invention.
  • FIG. 2 is a schematic flowchart of a cache space allocation control method according to Embodiment 2 of the present invention. As shown in FIG. 2, based on the above first embodiment, step S13 includes:
  • Step S14: Reserve a cache space of a preset value in the cache space to be shared, and use the remaining cache space as the cache space to be allocated.
  • Step S15: Allocate the cache space to be allocated to the queues that need the cache space to be shared, according to the determined number of such queues.
  • the size of the cache space to be shared may be 100M, or any other preset shared cache space; the reserved cache space of the preset value may be any preset size, such as 10M, 15M, or 30M, and the cache space remaining after the reservation is used as the cache space to be allocated, which is then divided among the queues that need the cache space to be shared according to the determined number. For example, if the shared cache space is 100M, the reserved preset value is 15M, and the number of queues that need the cache space to be shared is 5, then the 85M left after subtracting 15M from 100M is used as the cache space to be allocated and divided equally among the 5 queues, so that each queue that needs the cache space to be shared obtains 17M.
  • by reserving a shared cache space of a preset value, even when the cache space already allocated is fully occupied by one or more queues, shared cache space remains available to other queues. This allocates the shared cache space of the multiple queues more flexibly and improves the user experience.
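A minimal sketch of this reserve-then-divide step, under the same assumptions as before (hypothetical names, units in megabytes):

```python
def allocate_with_reserve(total_shared: int, reserve: int, needy_queues: list) -> dict:
    """Reserve a preset amount of the shared pool, then divide the remainder
    equally among the queues that need shared cache space."""
    to_allocate = total_shared - reserve  # e.g. 100M - 15M = 85M
    n = len(needy_queues)
    if n == 0:
        return {}
    share = to_allocate // n
    return {q: share for q in needy_queues}

print(allocate_with_reserve(100, 15, ["A", "B", "C", "D", "E"]))
# -> each of the 5 queues gets 17; the 15M reserve stays available to other queues
```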
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the cache space allocation control method according to the embodiment of the invention.
  • FIG. 3 is a schematic flowchart of a cache space allocation control method according to Embodiment 3 of the present invention. As shown in FIG. 3, based on the foregoing first embodiment, before step S11, the method further includes:
  • Step S16: Allocate the cache space to be exclusive to each queue.
  • the total number of queues in the multiple queues, the cache space to be exclusive, the priority of each queue, and the allocation coefficient corresponding to each priority are acquired.
  • for example, the number of queues is 4, namely queue A, queue B, queue C, and queue D; the total exclusive cache space of the multiple queues is 200M; the priority order of the queues, from high to low, is queue A, queue B, queue C, queue D; and each priority has a corresponding allocation coefficient.
  • the cache space to be exclusive is allocated to each queue according to the acquired priority order of the queues, the total number of queues, and the allocation coefficient corresponding to each priority. Following the priority order, the exclusive cache allocated to a high-priority queue and to a low-priority queue may differ; alternatively, the cache space to be exclusive may be divided equally among the queues.
  • the exclusive cache space allocated to a single queue may be calculated as Q_queue = (Q_exclusive / n) × c, where Q_exclusive is the total exclusive cache space of the multiple queues, n is the number of queues in the multiple queues, and c is the allocation coefficient corresponding to the priority of the queue.
  • any other applicable calculation manner may also be used to obtain the exclusive cache space allocated to each queue, and the priorities may be set dynamically according to the traffic of each queue, that is, the priority of each queue may differ at different times.
  • for example, the allocation constant corresponding to the priority of queue A is 1.5, that of queue B is 1.2, that of queue C is 0.8, and that of queue D is 0.5. With a total exclusive cache space of 200M and 4 queues, the exclusive cache space allocated to queue A is (200M / 4) × 1.5 = 75M, that allocated to queue B is (200M / 4) × 1.2 = 60M, that allocated to queue C is 40M, and that allocated to queue D is 25M; the four shares sum to 200M.
  • after the cache space to be exclusive is allocated to each queue in the acquired priority order, it is analyzed whether the cache space to be exclusive allocated to each queue satisfies the execution requirements of that queue; when the allocated cache space to be exclusive does not satisfy the execution requirements of a queue, the queue is determined to be a queue that needs to be allocated the cache space to be shared.
  • by using the allocation constant corresponding to each preset priority, the cache space to be exclusive is allocated to each queue reasonably, so that the exclusive cache is allocated according to the traffic of each queue. This improves the flexibility of exclusive cache allocation, allows the exclusive cache space to be utilized better and more rationally, and thus increases the utilization of the exclusive cache space.
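The coefficient-based exclusive allocation above can be sketched as follows; the (Q_exclusive / n) × c formula comes from the text, while the code structure and names are an assumption for illustration:

```python
def allocate_exclusive(total_exclusive: float, coefficients: dict) -> dict:
    """Per-queue exclusive cache = (total / n) * coefficient of the queue's
    priority, e.g. 200M across coefficients A:1.5, B:1.2, C:0.8, D:0.5."""
    base = total_exclusive / len(coefficients)  # Q_exclusive / n
    return {q: base * c for q, c in coefficients.items()}

shares = allocate_exclusive(200, {"A": 1.5, "B": 1.2, "C": 0.8, "D": 0.5})
print(shares)  # A gets 75M, B 60M, C 40M, D 25M; the shares sum to 200M
```

Note that the example coefficients average to 1 (they sum to 4.0 over 4 queues), which is why the shares exactly exhaust the 200M pool.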
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the cache space allocation control method according to the embodiment of the invention.
  • FIG. 4 is a schematic flow chart of a cache space allocation control method according to Embodiment 4 of the present invention. As shown in FIG. 4, after the step S13, the method further includes:
  • Step S17: Analyze whether the shared cache space allocated to each queue that needs the cache space to be shared is occupied.
  • after the cache space to be shared is allocated to the queues that need it according to the determined number of such queues, it is analyzed whether the shared cache space allocated to each of those queues is occupied, that is, whether the data packets stored in the shared cache space of the queue have been scheduled to be dequeued.
  • Step S18: When the shared cache space allocated to a queue that needed the cache space to be shared is not occupied, determine that the queue no longer needs to be allocated the cache space to be shared, and reduce the determined number of queues needing the cache space to be shared by one.
  • for example, in the case of equal division, the cache space to be shared is 100M and the number of queues that need it is 4; the 100M cache space to be shared is divided equally among the 4 queues, so that each obtains a shared cache space of 25M. When the shared cache space allocated to one of those queues is not occupied, that queue is determined to be a queue that no longer needs the cache space to be shared, and the number of queues needing the cache space to be shared is reduced by one, that is, it becomes 3.
  • Step S19: According to the decremented number of queues, allocate the cache space to be shared to each queue that still needs the cache space to be shared.
  • for example, the determined number of queues that need the cache space to be shared is 4; when the data packets stored in the shared cache space of a certain queue have been scheduled to be dequeued, that is, the shared cache space allocated to that queue is no longer occupied, the queue is determined to no longer need the cache space to be shared and the number of queues is reduced by one, becoming 3; the 100M cache space to be shared is then divided equally among the 3 remaining queues.
  • in this way, when the shared cache space allocated to a queue is not occupied, the determined number of queues needing the cache space to be shared is reduced by one, and the cache space to be shared is reallocated to the remaining queues according to the decremented number. By dynamically monitoring the number of queues that need the cache space to be shared and reallocating the shared cache space in time, the flexibility of shared cache allocation is improved, the cache is utilized better and more rationally, and the utilization of the cache space increases.
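A sketch of this monitor-and-reallocate step (hypothetical names; occupancy checking is reduced to a boolean flag per queue for illustration):

```python
def reallocate_on_release(total_shared: int, occupancy: dict) -> dict:
    """Drop queues whose shared cache is no longer occupied (count minus one
    per released queue) and re-divide the pool among the rest."""
    remaining = [q for q, occupied in occupancy.items() if occupied]
    if not remaining:
        return {}
    share = total_shared // len(remaining)
    return {q: share for q in remaining}

# Queue D's packets have been scheduled to dequeue, so only 3 queues remain.
print(reallocate_on_release(100, {"A": True, "B": True, "C": True, "D": False}))
# -> {'A': 33, 'B': 33, 'C': 33} (100M over 3 queues, integer division)
```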
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to execute the cache space allocation control method according to the embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a cache space allocation control method according to Embodiment 5 of the present invention. As shown in FIG. 5, after the step S15, the method further includes:
  • Step S20: Analyze whether the shared cache space allocated to each queue that needs the cache space to be shared is occupied.
  • after a cache space of a preset value is reserved in the cache space to be shared, the cache space remaining after the reservation is used as the cache space to be allocated, and the cache space to be allocated is divided among the queues that need the cache space to be shared according to the determined number, it is analyzed whether the shared cache space allocated to each of those queues is occupied, that is, whether the data packets stored in the shared cache space of the queue have been scheduled to be dequeued.
  • Step S21: When the shared cache space allocated to a queue that needed the cache space to be shared is not occupied, determine that the queue no longer needs to be allocated the cache space to be shared, and reduce the determined number of queues needing the cache space to be shared by one.
  • for example, in the case of equal division, the cache space to be shared is 100M, the number of queues that need it is 5, and the reserved cache space of the preset value is 15M; the shared cache space allocated to each queue that needs the cache space to be shared is then (100M - 15M) / 5 = 17M. When the shared cache space allocated to one of those queues is not occupied, that queue is determined to be a queue that no longer needs the cache space to be shared, and the determined number of queues is reduced by one, that is, it becomes 4.
  • Step S22: According to the decremented number of queues, allocate the cache space to be allocated to each queue that still needs the cache space to be shared.
  • for example, the determined number of queues that need the cache space to be shared is 5; when the data packets stored in the shared cache space of a certain queue have been scheduled to be dequeued, that is, the shared cache space allocated to that queue is no longer occupied, the queue is determined to no longer need the cache space to be shared and the number of queues is reduced by one, becoming 4; the 85M cache space to be allocated is then divided equally among the 4 remaining queues.
  • in this way, when the shared cache space allocated to a queue is not occupied, the determined number of queues needing the cache space to be allocated is reduced by one, and the cache space to be allocated is reallocated to the remaining queues according to the decremented number. By dynamically monitoring the number of queues that need the cache space to be allocated and reallocating the cache space in time, the flexibility of shared cache allocation is improved, the cache is utilized better and more rationally, and the utilization of the cache space increases.
  • FIG. 6 is a schematic structural diagram of a cache space allocation control apparatus according to an embodiment of the present invention.
  • the apparatus includes: an analysis module 10, a processing module 20, and an allocation module 30. The analysis module 10 is configured to analyze whether there is a queue among the queues that needs to be allocated cache space to be shared, obtain a first analysis result, and send the first analysis result to the processing module 20.
  • the processing module 20 is configured to determine, when the first analysis result sent by the analysis module 10 indicates that there is a queue that needs to be allocated cache space to be shared, the number of queues that need to be allocated the cache space to be shared;
  • the allocation module 30 is configured to allocate the cache space to be shared to the queues that need it, according to the number of queues determined by the processing module 20.
  • the analysis module 10 analyzes, in real time or periodically, whether each queue needs to be allocated the cache space to be shared.
  • the processing module 20 determines the number of queues that need to be allocated the cache space to be shared; for example, when 3 queues are found to need the cache space to be shared, the number of queues that need to be allocated the cache space to be shared is 3.
  • the allocation module 30 acquires the size of the cache space to be shared, that is, the total size of the shared cache space of the multiple queues, which may be, for example, 100M, or any other preset shared cache space;
  • the allocation module 30 then divides the acquired shared cache space of the multiple queues equally among the queues that need the cache space to be shared, according to the determined number of such queues.
  • for example, the size of the shared cache space of the multiple queues acquired by the allocation module 30 is 100M, and the number of queues that need the cache space to be shared, determined by the processing module 20, is 4; the allocation module 30 divides the 100M shared cache space equally among the 4 queues, so that each queue that needs the cache space to be shared obtains 25M.
  • the shared cache space allocated to each queue may be calculated as Q_queue = Q_total / N, where Q_total is the total shared cache space of the multiple queues, N is the number of queues that need to be allocated the cache space to be shared, and Q_queue is the shared cache space allocated to each queue that needs the cache space to be shared.
  • alternatively, the allocation module 30 may allocate the cache space to be shared to the queues that need it according to the priority order of the queues and/or a preset parameter corresponding to each priority.
  • the allocation module 30 allocates a shared cache space to each queue that needs the cache space to be shared according to the number of such queues determined by the processing module 20, and the allocation follows changes in that number. This realizes automatic adjustment and allocation of the cache space according to network traffic, saves hardware resources, facilitates hardware implementation, and at the same time improves the utilization of the cache space.
  • the processing module 20 is further configured to reserve a cache space of a preset value in the cache space to be shared, and use the remaining cache space as the cache space to be allocated; the allocation module 30 is further configured to allocate the cache space to be allocated, acquired by the processing module 20, to the queues that need the cache space to be shared, according to the determined number of such queues.
  • the allocation module 30 acquires the size of the cache space to be shared, that is, the size of the shared cache space allocated to the multiple queues, which may be, for example, 100M, or any other preset shared cache space;
  • the processing module 20 reserves a shared cache space of a preset value, which may be any preset size such as 10M, 15M, or 30M, and uses the remaining cache space as the cache space to be allocated.
  • the allocation module 30 divides the cache space to be allocated among the queues that need the cache space to be shared, according to the determined number of such queues.
  • for example, the size of the shared cache space acquired by the allocation module 30 is 100M, the preset value reserved by the processing module 20 is 15M, and the number of queues that need the cache space to be shared is 5; the 85M left after subtracting 15M from 100M is used as the cache space to be allocated, and the allocation module 30 divides it equally among the 5 queues that need the cache space to be shared.
  • by having the processing module 20 reserve a shared cache space of a preset value, even when the cache space already allocated is fully occupied by one or more queues that need the cache space to be shared, shared cache space remains available to other queues. This allocates the shared cache space of the multiple queues more flexibly and improves the user experience.
  • the allocating module 30 is further configured to allocate a buffer space to be exclusive to each queue.
  • the processing module 20 acquires the total number of queues in the multi-queue, the cache space to be exclusive, the priority of each queue in the multi-queue, and the allocation coefficient corresponding to each priority.
  • the number of queues is four: queue A, queue B, queue C, and queue D.
  • the total exclusive cache space of the multiple queues is 200M; the priority order of the queues is: queue A, queue B, queue C, queue D; each queue has a corresponding allocation coefficient.
  • the allocation module 30 allocates the exclusive cache space to each queue according to the acquired priority order of the queues, the total number of queues, and the allocation coefficient corresponding to each priority.
  • depending on the priority order of the queues, the allocation module 30 may give high-priority and low-priority queues different amounts of exclusive cache, or it may distribute the exclusive cache space evenly among the queues.
  • for an even split each queue receives Q/n, where Q is the total exclusive cache space of the multiple queues and n is the number of queues in the multiple queues.
  • any other applicable calculation may also be used to obtain each queue's exclusive cache space; the priority is set dynamically according to each queue's traffic, that is, a queue's priority may differ at different times.
  • the allocation constants corresponding to the priorities of queues A, B, C, and D are 1.5, 1.2, 0.8, and 0.5 respectively.
  • the allocation module 30 allocates the exclusive cache space to each queue according to the acquired priority order of the queues.
  • the allocation module 30 allocates the exclusive cache space to each queue according to the allocation constant corresponding to its preset priority, so that the exclusive cache space is allocated according to each queue's traffic. This improves the flexibility of exclusive cache allocation, lets the exclusive cache space be used better and more reasonably, and thereby raises its utilization.
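The priority-weighted rule for exclusive cache, per-queue share = (Q_exclusive / n) * c with one allocation constant c per priority, can be sketched as below. The coefficient dictionary mirrors the A/B/C/D example; the names and megabyte units are illustrative assumptions:

```python
def exclusive_allocation(q_exclusive_total_mb, coefficients):
    """Per-queue exclusive share = (Q_exclusive / n) * c for each queue's constant c."""
    n = len(coefficients)
    if n == 0:
        raise ValueError("no queues to allocate to")
    base = q_exclusive_total_mb / n
    return {queue: base * c for queue, c in coefficients.items()}

# Example from the text: 200M total, constants 1.5 / 1.2 / 0.8 / 0.5.
alloc = exclusive_allocation(200, {"A": 1.5, "B": 1.2, "C": 0.8, "D": 0.5})
```

`alloc` works out to 75M, 60M, 40M, and 25M for queues A through D, the figures given in the example; with all constants equal to 1 the rule degenerates to the even split Q_exclusive / n.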
  • the analysis module 10 is further configured to analyze whether the exclusive cache space allocated to a queue satisfies that queue's execution needs, obtain a second analysis result, and send the second analysis result to the processing module 20;
  • the processing module 20 is further configured to determine that the queue is a queue that needs to be allocated shared cache space when the second analysis result sent by the analysis module 10 indicates that the allocated exclusive cache space does not satisfy the queue's execution needs.
  • the analysis module 10 is further configured to analyze whether the shared cache space allocated to a queue that needs shared cache space is occupied, obtain a third analysis result, and send the third analysis result to the processing module 20;
  • the processing module 20 is further configured to determine, when the third analysis result sent by the analysis module 10 indicates that such a queue's allocated shared cache space is not occupied, that the queue no longer needs shared cache space, and to decrement the determined number of queues that need shared cache space by one;
  • the allocation module 30 is further configured to allocate the shared cache space to the queues that still need it, according to the decremented queue count from the processing module 20.
  • after the shared cache space has been allocated to the queues that need it, the analysis module 10 analyzes whether the shared cache space allocated to each such queue is occupied, that is, whether the data packets stored in the queue's shared cache space have been scheduled out of the queue. Specifically, when a queue's allocated shared cache space is not occupied, the analysis module 10 determines that the queue no longer needs shared cache space, and the processing module 20 decrements the determined number of such queues by one.
  • taking even allocation of the shared cache space as an example: the size of the shared cache space acquired by the allocation module 30 is 100M, and the number of queues determined by the processing module 20 to need shared cache space is 4;
  • the allocation module 30 evenly allocates the 100M among the 4 queues, so each queue that needs shared cache space receives 25M;
  • when the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's shared cache space is no longer occupied; the processing module 20 then determines that the queue no longer needs shared cache space and decrements the queue count by one, so the number of queues needing shared cache space becomes three.
  • the number of queues determined by the processing module 20 to need shared cache space is decremented by one, and according to the decremented count the allocation module 30 reallocates the shared cache space among the queues that still need it. By dynamically monitoring the number of queues that need shared cache space and reallocating the shared cache space promptly, the flexibility of shared cache allocation improves, the cache is used better and more reasonably, and cache space utilization rises.
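The dynamic step above (when a queue's shared space is found unoccupied, drop it from the count and recompute every remaining share) might look like this sketch; the data structures and names are assumptions, not the patented implementation:

```python
def reallocate_shared(q_shared_total_mb, queues_needing_shared, released_queue=None):
    """Recompute the even per-queue share after an optional queue release (N := N - 1)."""
    remaining = [q for q in queues_needing_shared if q != released_queue]
    if not remaining:
        return {}
    share = q_shared_total_mb / len(remaining)
    return {q: share for q in remaining}

# 100M over 4 queues; then one queue's packets are dequeued and its share released.
before = reallocate_shared(100, ["q1", "q2", "q3", "q4"])
after = reallocate_shared(100, ["q1", "q2", "q3", "q4"], released_queue="q2")
```

`before` gives each queue 25M; `after` redistributes the 100M over the 3 remaining queues, about 33.33M each, as in the worked example.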
  • in practice, the analysis module 10, the processing module 20, and the allocation module 30 can each be implemented by a Central Processing Unit (CPU), Digital Signal Processor (DSP), or Field Programmable Gate Array (FPGA) in the cache space allocation control device.
  • those skilled in the art should understand that embodiments of the present invention can be provided as a method, an apparatus, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • the present invention is described with reference to flowcharts and/or block diagrams of methods, apparatuses, and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • these computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the embodiments of the present invention allocate shared cache space to each queue that needs it according to the determined number of such queues, and reallocate the shared cache space as that number changes. This realizes automatic adjustment and allocation of cache space according to network traffic, saves hardware resources, is favorable for hardware implementation, and improves cache space utilization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the present invention disclose a cache space allocation control method, apparatus, and computer storage medium. The method includes: analyzing whether, among the queues, there are queues that need to be allocated shared cache space; when it is determined that such queues exist, determining the number of queues that need to be allocated shared cache space; and, according to the determined number of such queues, allocating the shared cache space to the queues that need it.

Description

Cache space allocation control method, apparatus and computer storage medium

Technical Field
The present invention relates to the field of caching, and in particular to a cache space allocation control method, apparatus and computer storage medium.

Background
With the spread of networks, information exchange and information sharing have become an indispensable part of daily life. As the interactive information (data packets) in the network keeps growing, network congestion inevitably arises. How to avoid congestion is therefore particularly important. The congestion avoidance mechanism now widely used on the Internet is the Random Early Discard (RED) mechanism. The key to the RED mechanism lies in how to use the limited cache resources effectively and perform reasonable discards, so as to achieve congestion avoidance and keep the network running smoothly.
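For orientation, the drop decision in textbook RED, which the background refers to, grows linearly with the average queue length between a minimum and a maximum threshold; a minimal sketch (the threshold and probability values are illustrative, not taken from this application):

```python
def red_drop_probability(avg_queue_len, min_th, max_th, max_p):
    """Textbook RED: probability 0 below min_th, linear ramp up to max_p
    between the thresholds, and 1 at or above max_th."""
    if avg_queue_len < min_th:
        return 0.0
    if avg_queue_len >= max_th:
        return 1.0
    return max_p * (avg_queue_len - min_th) / (max_th - min_th)
```

With min_th=20, max_th=80 and max_p=0.1, an average length of 50 gives a drop probability of 0.05, halfway up the ramp.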
In the multi-queue cache of existing RED devices employing the RED mechanism, the usual cache allocation approaches include: Approach 1, allocating the cache according to the ports, the number of queues, and the queue priorities; Approach 2, using an exponentially weighted moving average algorithm together with other functions to let multiple queues share the cache.
However, the above approaches have unavoidable drawbacks:
With Approach 1 the cache partitioning is simple, but because each queue's cache is allocated in advance and is fixed once allocated, it cannot be adjusted automatically according to each queue's real-time network traffic; cache utilization is low and adaptivity is lacking, and the effect of true cache sharing is never achieved, since after allocation the maximum cache each queue can use is just its own portion;
With Approach 2's shared cache, utilization is low when few queues are active, and the algorithm is relatively complex, which is unfavorable for hardware implementation.

Summary of the Invention
The main purpose of the embodiments of the present invention is to provide a cache space allocation control method, apparatus, and computer storage medium, aiming to adjust and allocate the cache automatically according to network traffic and thereby improve cache space utilization.
An embodiment of the present invention proposes a cache space allocation control method, the method including: analyzing whether, among the queues, there are queues that need to be allocated shared cache space; when queues that need shared cache space exist, determining the number of such queues;

and, according to the determined number of queues that need shared cache space, allocating the shared cache space to the queues that need it.
Preferably, the step of allocating the shared cache space to the queues that need it according to their determined number includes:

reserving a preset amount of cache space out of the shared cache space, and taking the cache space remaining after the reservation as the cache space to be allocated;

and, according to the determined number of queues that need shared cache space, allocating the cache space to be allocated to those queues.
Preferably, before analyzing whether there are queues that need to be allocated shared cache space, the method further includes:

allocating exclusive cache space to each queue.
Preferably, after the exclusive cache space has been allocated to the queues, the method further includes:

analyzing whether the exclusive cache space allocated to each queue satisfies that queue's execution needs;

and, when a queue's allocated exclusive cache space does not satisfy its execution needs, determining that the queue is a queue that needs to be allocated shared cache space.
Preferably, after the shared cache space has been allocated to the queues that need it, the method further includes:

analyzing whether the shared cache space allocated to each such queue is occupied;

and, when a queue's allocated shared cache space is not occupied, determining that the queue no longer needs shared cache space, decrementing the determined number of queues that need shared cache space by one, and, according to the decremented number, allocating the shared cache space among the queues that still need it.
An embodiment of the present invention further provides a cache space allocation control apparatus, the apparatus including an analysis module, a processing module, and an allocation module, wherein:

the analysis module is configured to analyze whether, among the queues, there are queues that need to be allocated shared cache space, obtain a first analysis result, and send the first analysis result to the processing module;

the processing module is configured to determine the number of queues that need to be allocated shared cache space when the first analysis result sent by the analysis module indicates that such queues exist;

the allocation module is configured to allocate the shared cache space to the queues that need it, according to the number of such queues determined by the processing module.
Preferably, the processing module is further configured to reserve a preset amount of cache space out of the shared cache space and take the cache space remaining after the reservation as the cache space to be allocated; the allocation module is further configured to allocate the cache space to be allocated, acquired by the processing module, to the queues that need shared cache space according to their determined number. Preferably, the allocation module is further configured to allocate exclusive cache space to each queue.
Preferably, the analysis module is further configured to analyze whether the exclusive cache space allocated to a queue satisfies the queue's execution needs, obtain a second analysis result, and send the second analysis result to the processing module;

the processing module is further configured to determine that the queue is a queue that needs to be allocated shared cache space when the second analysis result sent by the analysis module indicates that the allocated exclusive cache space does not satisfy the queue's execution needs.
Preferably, the analysis module is further configured to analyze whether the shared cache space allocated to a queue that needs shared cache space is occupied, obtain a third analysis result, and send the third analysis result to the processing module;

the processing module is further configured to determine, when the third analysis result sent by the analysis module indicates that such a queue's allocated shared cache space is not occupied, that the queue no longer needs shared cache space, and to decrement the determined number of queues that need shared cache space by one;

the allocation module is further configured to allocate the shared cache space to the queues that still need it, according to the decremented number of queues from the processing module.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method described in the embodiments of the present invention.
Compared with the prior art, shared cache space is allocated to each queue that needs it according to the determined number of such queues, and is reallocated as that number changes; cache space is thus adjusted and allocated automatically according to network traffic, hardware resources are saved, hardware implementation is favored, and cache space utilization improves.

Brief Description of the Drawings
Fig. 1 is a flow diagram of the cache space allocation control method of embodiment one of the present invention; Fig. 2 is a flow diagram of the cache space allocation control method of embodiment two; Fig. 3 is a flow diagram of the cache space allocation control method of embodiment three; Fig. 4 is a flow diagram of the cache space allocation control method of embodiment four; Fig. 5 is a flow diagram of the cache space allocation control method of embodiment five; Fig. 6 is a schematic diagram of the composition of the cache space allocation control apparatus of an embodiment of the present invention.

Detailed Description
It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
As shown in Fig. 1, Fig. 1 is a flow diagram of the cache space allocation control method of embodiment one of the present invention.

It should be emphasized that the flow shown in Fig. 1 is only a preferred embodiment; those skilled in the art will appreciate that any embodiment built around the idea of the present invention should not depart from the scope covered by the following technical solution:
analyzing whether, among the queues, there are queues that need to be allocated shared cache space; when it is determined that such queues exist, determining the number of queues that need shared cache space; and, according to the determined number, allocating the shared cache space to the queues that need it.
The following are the specific steps by which this embodiment controls the multi-queue cache:

Step S11: analyze whether, among the queues, there are queues that need to be allocated shared cache space.

Specifically, this analysis is performed in real time or periodically for each queue.

Step S12: when queues that need shared cache space exist, determine the number of such queues.

Specifically, when such queues exist, their number is determined; for example, when 3 queues that need shared cache space are found, the number of such queues is 3.

Step S13: according to the determined number of queues that need shared cache space, allocate the shared cache space to those queues.
Specifically, before allocating the shared cache space to the queues that need it, the method further includes: acquiring the size of the shared cache space, that is, the total shared cache space of the multiple queues, which may for example be 100M or any other preset value. The acquired shared cache space is evenly divided among the queues that need it according to their determined number. For example, if the acquired shared cache space is 100M and the determined number of queues that need shared cache space is 4, the 100M is divided evenly among the 4 queues, each receiving 25M; the formula is Q_shared_per_queue = Q_shared / N, where Q_shared is the total shared cache of the multiple queues, N is the number of queues that need shared cache space, and Q_shared_per_queue is the shared cache space allocated to each such queue. In other embodiments of the present invention, the shared cache space may instead be divided among the queues according to the priority order of the queues and/or preset parameters corresponding to each priority. In this embodiment, shared cache space is allocated to each queue that needs it according to the determined number of such queues, and is reallocated as that number changes, realizing automatic adjustment and allocation of cache space according to network traffic, saving hardware resources, favoring hardware implementation, and improving cache space utilization.
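The even split of embodiment one, Q_shared_per_queue = Q_shared / N, can be sketched directly; the function name and megabyte units are illustrative assumptions:

```python
def shared_per_queue(q_shared_total_mb, n_queues):
    """Even split of the total shared cache over the N queues that need it."""
    if n_queues <= 0:
        raise ValueError("need at least one queue requesting shared cache")
    return q_shared_total_mb / n_queues

# Example from the text: 100M shared cache, 4 queues.
share = shared_per_queue(100, 4)
```

`share` is 25.0, matching the 25M per-queue figure in the example.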
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method described in the embodiments of the present invention.
Fig. 2 is a flow diagram of the cache space allocation control method of embodiment two of the present invention. As shown in Fig. 2, based on the first embodiment above, step S13 includes:

Step S14: reserve a preset amount of cache space out of the shared cache space, and take the cache space remaining after the reservation as the cache space to be allocated.

Step S15: according to the determined number of queues that need shared cache space, allocate the cache space to be allocated to those queues.
Specifically, the size of the shared cache space is the size of the shared cache space of the multiple queues, e.g. 100M or any other preset value; the preset reserve may be 10M, 15M, 30M, or any other preset amount of shared cache space, and the cache space remaining after the reservation is taken as the cache space to be allocated. According to the determined number of queues that need shared cache space, the cache space to be allocated is divided among them. For example, if the acquired shared cache space is 100M, the reserved amount is 15M, and the number of queues that need shared cache space is 5, then the 85M remaining after subtracting 15M from 100M is the cache space to be allocated; this 85M is divided evenly among the 5 queues, each receiving 17M. The formula is Q_shared_per_queue = (Q_shared - Q_reserved) / N, where Q_shared is the total shared cache space of the multiple queues, N is the number of queues that need shared cache space, Q_reserved is the preset reserved amount, and Q_shared_per_queue is the shared cache space allocated to each queue.

In this embodiment, by reserving a preset amount of shared cache space, other queues still have shared cache space available when the total shared cache space would otherwise be fully occupied by one or more queues that need shared cache space; the shared cache space of the multiple queues can be allocated more flexibly, improving the user experience.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method described in the embodiments of the present invention.
Fig. 3 is a flow diagram of the cache space allocation control method of embodiment three of the present invention. As shown in Fig. 3, based on the first embodiment above, before step S11 the method further includes:

Step S16: allocate exclusive cache space to each queue.
Specifically, acquire the total number of queues, the exclusive cache space to be allocated, each queue's priority among the multiple queues, and the allocation coefficient corresponding to each priority. For example, the number of queues is 4: queue A, queue B, queue C, and queue D; the total exclusive cache space of the multiple queues is 200M; the priority order is queue A, queue B, queue C, queue D; each queue has a corresponding allocation coefficient. The exclusive cache space is allocated to the queues according to the acquired priority order, the total number of queues, and the coefficients: high-priority and low-priority queues may receive different amounts of exclusive cache, or the exclusive cache space may be divided evenly among the queues. For an even split the formula may be Q_exclusive_per_queue = Q_exclusive / n, where Q_exclusive_per_queue is the exclusive cache space allocated to a queue, Q_exclusive is the total exclusive cache space of the multiple queues, and n is the number of queues. When high- and low-priority queues receive different amounts, the formula may be Q_exclusive_per_queue = (Q_exclusive / n) * c, where c is the allocation constant corresponding to the priority. Any other applicable calculation may also be used to obtain each queue's exclusive cache space; the priority is set dynamically according to each queue's traffic, that is, a queue's priority may differ at different times. For example, with Q_exclusive = 200M and n = 4, an even split gives each queue 200M / 4 = 50M. With allocation constants of 1.5, 1.2, 0.8, and 0.5 for queues A, B, C, and D respectively, queue A receives (200M / 4) * 1.5 = 75M, queue B (200M / 4) * 1.2 = 60M, queue C (200M / 4) * 0.8 = 40M, and queue D (200M / 4) * 0.5 = 25M.
In this embodiment, preferably, the exclusive cache space may be allocated to the queues according to the acquired priority order, with high-priority and low-priority queues receiving different amounts, that is, according to the formula Q_exclusive_per_queue = (Q_exclusive / n) * c.

In this embodiment, preferably, after the exclusive cache space has been allocated to the queues, it is analyzed whether each queue's allocated exclusive cache space satisfies that queue's execution needs; when it does not, the queue is determined to be a queue that needs to be allocated shared cache space.

In this embodiment, by reasonably allocating the exclusive cache space to the queues according to the allocation constants corresponding to the preset priorities, the exclusive cache space is allocated according to each queue's traffic; this improves the flexibility of exclusive cache allocation, lets the exclusive cache space be used better and more reasonably, and thereby raises its utilization.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method described in the embodiments of the present invention.
Fig. 4 is a flow diagram of the cache space allocation control method of embodiment four of the present invention. As shown in Fig. 4, based on the first embodiment above, after step S13 the method further includes:

Step S17: analyze whether the shared cache space allocated to each queue that needs shared cache space is occupied.
Specifically, after the shared cache space has been allocated to the queues that need it according to their determined number, it is analyzed whether each such queue's allocated shared cache space is occupied, that is, whether the data packets stored in the queue's shared cache space have been scheduled out of the queue.

Step S18: when a queue's allocated shared cache space is not occupied, determine that the queue no longer needs shared cache space, and decrement the determined number of queues that need shared cache space by one.
Specifically, when a queue that was allocated shared cache space is found to have its share unoccupied, the queue is determined to no longer need shared cache space, and the determined number of queues that need shared cache space is decremented by one. For example, taking even allocation as an example: the acquired shared cache space is 100M and the determined number of queues that need shared cache space is 4, so the 100M is divided evenly among the 4 queues, each receiving 25M; the formula is Q_shared_per_queue = Q_shared / N, where Q_shared is the shared cache space, N is the number of queues that need shared cache space, and Q_shared_per_queue is the shared cache space allocated to each queue. When the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's shared cache space is no longer occupied; the queue is then determined to no longer need shared cache space and the determined count is decremented by one, so the number of queues needing shared cache space becomes 3.
Step S19: according to the decremented number of queues, allocate the shared cache space among the queues that still need it.

Specifically, for example, the determined number of queues that need shared cache space is 4; when the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's share is no longer occupied, the queue is determined to no longer need shared cache space, and the count is decremented to 3; the 100M of shared cache space is then divided evenly among the 3 remaining queues: Q_shared_per_queue = Q_shared / N = 100M / 3 = 33.33M.
In this embodiment, when a queue's allocated shared cache space is found unoccupied, the determined number of queues that need shared cache space is decremented by one, and according to the decremented count the shared cache space is reallocated among the queues that still need it. By dynamically monitoring the number of such queues and reallocating the shared cache space promptly, the flexibility of shared cache allocation improves, the cache is used better and more reasonably, and cache space utilization rises.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method described in the embodiments of the present invention.
Fig. 5 is a flow diagram of the cache space allocation control method of embodiment five of the present invention. As shown in Fig. 5, based on the second embodiment above, after step S15 the method further includes:

Step S20: analyze whether the shared cache space allocated to each queue that needs shared cache space is occupied.
Specifically, a preset amount of cache space is reserved out of the shared cache space and the remainder is taken as the cache space to be allocated; after this has been divided among the queues that need shared cache space according to their determined number, it is analyzed whether each such queue's allocated shared cache space is occupied, that is, whether the data packets stored in the queue's shared cache space have been scheduled out of the queue.

Step S21: when a queue's allocated shared cache space is not occupied, determine that the queue no longer needs shared cache space, and decrement the determined number of queues that need shared cache space by one.
Specifically, when a queue that was allocated shared cache space is found to have its share unoccupied, the queue is determined to no longer need shared cache space, and the determined number of queues that need shared cache space is decremented by one. For example, taking even allocation as an example: the acquired shared cache space is 100M, the determined number of queues that need shared cache space is 5, and the preset reserve is 15M; the 85M remaining after subtracting 15M from 100M is the cache space to be allocated, and it is divided evenly among the 5 queues, each receiving 17M. The formula is Q_shared_per_queue = (Q_shared - Q_reserved) / N, where Q_shared is the total shared cache space of the multiple queues, N is the number of queues that need shared cache space, Q_reserved is the preset reserved amount, and Q_shared_per_queue is each queue's share. When the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's share is no longer occupied; the queue is then determined to no longer need shared cache space and the count is decremented by one, so the number of queues needing shared cache space becomes 4.
Step S22: according to the decremented number of queues, allocate the cache space to be allocated among the queues that still need shared cache space.

Specifically, for example, the determined number of queues that need shared cache space is 5; when the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's share is no longer occupied, the queue is determined to no longer need shared cache space, and the count is decremented to 4; the 85M of cache space to be allocated is then divided evenly among the 4 remaining queues: Q_shared_per_queue = (Q_shared - Q_reserved) / N = (100M - 15M) / 4 = 21.25M.
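Embodiment five combines the reserve of embodiment two with the dynamic queue count of embodiment four: per-queue share = (Q_shared - Q_reserved) / N, recomputed whenever N changes. A small worked check (illustrative names, sizes in megabytes):

```python
def share_with_reserve(q_shared_total_mb, q_reserved_mb, n_active_queues):
    """Per-queue share = (Q_shared - Q_reserved) / N for the currently active queues."""
    if n_active_queues <= 0:
        raise ValueError("need at least one active queue")
    return (q_shared_total_mb - q_reserved_mb) / n_active_queues

# 100M total with a 15M reserve: 5 active queues, then one queue's share is released.
share_before = share_with_reserve(100, 15, 5)
share_after = share_with_reserve(100, 15, 4)
```

`share_before` is 17.0 and `share_after` is 21.25, that is, (100M - 15M) / 4 per remaining queue.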
In this embodiment, when a queue's allocated shared cache space is found unoccupied, the determined number of queues that need the cache space to be allocated is decremented by one, and the cache space to be allocated is redistributed among the remaining queues according to the decremented count. By dynamically monitoring the number of such queues and reallocating the cache space to be allocated promptly, the flexibility of shared cache allocation improves, the cache is used better and more reasonably, and cache space utilization rises.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method described in the embodiments of the present invention. Fig. 6 is a schematic diagram of the composition of the cache space allocation control apparatus of an embodiment of the present invention. As shown in Fig. 6, the apparatus includes an analysis module 10, a processing module 20, and an allocation module 30. The analysis module 10 is configured to analyze whether, among the queues, there are queues that need to be allocated shared cache space, obtain a first analysis result, and send the first analysis result to the processing module.
The processing module 20 is configured to determine the number of queues that need to be allocated shared cache space when the first analysis result sent by the analysis module 10 indicates that such queues exist;

the allocation module 30 is configured to allocate the shared cache space to the queues that need it, according to the number of such queues determined by the processing module 20.
Specifically, the analysis module 10 analyzes in real time or periodically whether each queue needs to be allocated shared cache space.

Specifically, when queues that need the shared cache space exist, the processing module 20 determines their number; for example, when 3 such queues are found, the number of queues that need the shared cache space is 3.
Specifically, the allocation module 30 acquires the size of the shared cache space, that is, the total shared cache space of the multiple queues, which may for example be 100M or any other preset value; the allocation module 30 evenly divides the acquired shared cache space among the queues that need it according to the number determined by the processing module 20. For example, if the shared cache space acquired by the allocation module 30 is 100M and the processing module 20 determines that 4 queues need shared cache space, the allocation module 30 divides the 100M evenly among the 4 queues, each receiving 25M; the formula is Q_shared_per_queue = Q_shared / N, where Q_shared is the total shared cache of the multiple queues, N is the number of queues that need shared cache space, and Q_shared_per_queue is the shared cache space allocated to each such queue. In other embodiments of the present invention, the allocation module 30 may instead divide the shared cache space among the queues according to the priority order of the queues and preset parameters corresponding to each priority.
By having the allocation module 30 allocate shared cache space to each queue that needs it according to the number determined by the processing module 20, and reallocate the shared cache space as that number changes, cache space is adjusted and allocated automatically according to network traffic, hardware resources are saved, hardware implementation is favored, and cache space utilization improves.
Preferably, the processing module 20 is further configured to reserve a preset amount of cache space out of the shared cache space and take the cache space remaining after the reservation as the cache space to be allocated; the allocation module 30 is further configured to allocate the cache space to be allocated, acquired by the processing module 20, to the queues that need shared cache space according to their determined number.
Specifically, the allocation module 30 acquires the size of the shared cache space, that is, the shared cache space allocated for the multiple queues, e.g. 100M or any other preset value; the processing module 20 reserves a preset amount of shared cache space, which may be 10M, 15M, 30M, or any other preset amount, and takes the cache space remaining after the reservation as the cache space to be allocated. The allocation module 30 allocates the cache space to be allocated to the queues that need shared cache space according to their determined number. For example, the shared cache space acquired by the allocation module 30 is 100M, the amount reserved by the processing module 20 is 15M, and the number of queues that need shared cache space is 5; the 85M remaining after subtracting 15M from 100M is the cache space to be allocated, and the allocation module 30 divides this 85M evenly among the 5 queues, each receiving 17M. The formula is Q_shared_per_queue = (Q_shared - Q_reserved) / N, where Q_shared is the total shared cache space of the multiple queues, N is the number of queues that need shared cache space, Q_reserved is the preset reserved amount, and Q_shared_per_queue is the shared cache space allocated to each queue.
By having the processing module 20 reserve a preset amount of shared cache space, other queues still have shared cache space available when the total shared cache space would otherwise be fully occupied by one or more queues that need shared cache space; the shared cache space of the multiple queues can be allocated more flexibly, improving the user experience.
Preferably, the allocation module 30 is further configured to allocate exclusive cache space to each queue.
Specifically, the processing module 20 acquires the total number of queues, the exclusive cache space to be allocated, each queue's priority among the multiple queues, and the allocation coefficient corresponding to each priority. For example, the number of queues is 4: queue A, queue B, queue C, and queue D; the total exclusive cache space of the multiple queues is 200M; the priority order is queue A, queue B, queue C, queue D; each queue has a corresponding allocation coefficient. The allocation module 30 allocates the exclusive cache space to the queues according to the acquired priority order, the total number of queues, and the coefficients. Depending on the priority order, high-priority and low-priority queues may receive different amounts of exclusive cache, or the allocation module 30 may divide the exclusive cache space evenly among the queues. For an even split the formula may be Q_exclusive_per_queue = Q_exclusive / n, where Q_exclusive_per_queue is the exclusive cache space allocated to a queue, Q_exclusive is the total exclusive cache space of the multiple queues, and n is the number of queues. When high- and low-priority queues receive different amounts, the formula may be Q_exclusive_per_queue = (Q_exclusive / n) * c, where c is the allocation constant corresponding to the priority. Any other applicable calculation may also be used; the priority is set dynamically according to each queue's traffic, that is, a queue's priority may differ at different times. For example, with Q_exclusive = 200M and n = 4, an even split gives each queue 200M / 4 = 50M; with allocation constants of 1.5, 1.2, 0.8, and 0.5 for queues A, B, C, and D respectively, queue A receives (200M / 4) * 1.5 = 75M, queue B (200M / 4) * 1.2 = 60M, queue C (200M / 4) * 0.8 = 40M, and queue D (200M / 4) * 0.5 = 25M.
In this embodiment, preferably, the allocation module 30 may allocate the exclusive cache space to the queues according to the acquired priority order, giving high-priority and low-priority queues different amounts, that is, according to the formula Q_exclusive_per_queue = (Q_exclusive / n) * c. By having the allocation module 30 reasonably allocate the exclusive cache space according to the allocation constants corresponding to the preset priorities, the exclusive cache space is allocated according to each queue's traffic, improving the flexibility of exclusive cache allocation, letting the exclusive cache space be used better and more reasonably, and thereby raising its utilization.
Preferably, the analysis module 10 is further configured to analyze whether the exclusive cache space allocated to a queue satisfies the queue's execution needs, obtain a second analysis result, and send the second analysis result to the processing module 20;

the processing module 20 is further configured to determine that the queue is a queue that needs to be allocated shared cache space when the second analysis result sent by the analysis module 10 indicates that the allocated exclusive cache space does not satisfy the queue's execution needs.
Preferably, the analysis module 10 is further configured to analyze whether the shared cache space allocated to a queue that needs shared cache space is occupied, obtain a third analysis result, and send the third analysis result to the processing module;

the processing module 20 is further configured to determine, when the third analysis result sent by the analysis module 10 indicates that such a queue's allocated shared cache space is not occupied, that the queue no longer needs shared cache space, and to decrement the determined number of queues that need shared cache space by one;

the allocation module 30 is further configured to allocate the shared cache space to the queues that still need it, according to the decremented number of queues from the processing module 20.
Specifically, after the shared cache space has been allocated to the queues that need it according to their determined number, the analysis module 10 analyzes whether each such queue's allocated shared cache space is occupied, that is, whether the data packets stored in the queue's shared cache space have been scheduled out of the queue. Specifically, when a queue's allocated shared cache space is not occupied, the analysis module 10 determines that the queue no longer needs shared cache space, and the processing module 20 decrements the determined count by one. For example, taking even allocation as an example: the shared cache space acquired by the allocation module 30 is 100M and the processing module 20 determines that 4 queues need shared cache space, so the allocation module 30 divides the 100M evenly among the 4 queues, each receiving 25M; the formula is Q_shared_per_queue = Q_shared / N, where Q_shared is the shared cache space, N is the number of queues that need shared cache space, and Q_shared_per_queue is each queue's share. When the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's share is no longer occupied; the processing module 20 then determines that the queue no longer needs shared cache space and decrements the count by one, so the number of queues needing shared cache space becomes 3.
Specifically, for example, the number of queues that the processing module 20 determines to need shared cache space is 4; when the data packets stored in some queue's shared cache space are scheduled out of the queue, that queue's share is no longer occupied, the processing module 20 determines that the queue no longer needs shared cache space and decrements the count to 3, and the allocation module 30 divides the 100M of shared cache space evenly among the 3 remaining queues: Q_shared_per_queue = Q_shared / N = 100M / 3 = 33.33M.
When a queue's allocated shared cache space is found unoccupied, the count determined by the processing module 20 is decremented by one and, according to the decremented count, the allocation module 30 reallocates the shared cache space among the queues that still need it. Dynamically monitoring the number of such queues and reallocating the shared cache space promptly improves the flexibility of shared cache allocation, lets the cache be used better and more reasonably, and raises cache space utilization.
In practical applications, the analysis module 10, the processing module 20, and the allocation module 30 can each be implemented by a Central Processing Unit (CPU), Digital Signal Processor (DSP), or Field Programmable Gate Array (FPGA) in the cache space allocation control apparatus.
Those skilled in the art should understand that embodiments of the present invention can be provided as a method, an apparatus, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, apparatuses, and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above are only implementations of the embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the embodiments of the present invention, and such improvements and refinements should also be regarded as falling within the protection scope of the embodiments of the present invention.

Industrial Applicability
In the embodiments of the present invention, shared cache space is allocated to each queue that needs it according to the determined number of such queues, and is reallocated as that number changes. This realizes automatic adjustment and allocation of cache space according to network traffic, saves hardware resources, is favorable for hardware implementation, and improves cache space utilization.

Claims

Claims
1. A cache space allocation control method, the method comprising:

analyzing whether, among the queues, there are queues that need to be allocated shared cache space; when queues that need to be allocated shared cache space exist, determining the number of queues that need to be allocated shared cache space;

according to the determined number of queues that need to be allocated shared cache space, allocating the shared cache space to the queues that need it.
2. The cache space allocation control method according to claim 1, wherein allocating the shared cache space to the queues that need it according to the determined number of such queues comprises:

reserving a preset amount of cache space out of the shared cache space, and taking the cache space remaining after the reservation as the cache space to be allocated;

according to the determined number of queues that need shared cache space, allocating the cache space to be allocated to those queues.
3. The cache space allocation control method according to claim 1 or 2, wherein, before analyzing whether there are queues that need to be allocated shared cache space, the method further comprises:

allocating exclusive cache space to each queue.
4. The cache space allocation control method according to claim 3, wherein, after allocating the exclusive cache space to each queue, the method further comprises:

analyzing whether the exclusive cache space allocated to each queue satisfies that queue's execution needs;

when a queue's allocated exclusive cache space does not satisfy its execution needs, determining that the queue is a queue that needs to be allocated shared cache space.
5. The cache space allocation control method according to claim 1 or 2, wherein, after allocating the shared cache space to the queues that need it, the method further comprises:

analyzing whether the shared cache space allocated to each queue that needs shared cache space is occupied;

when a queue's allocated shared cache space is not occupied, determining that the queue no longer needs shared cache space, decrementing the determined number of queues that need shared cache space by one, and, according to the decremented number, allocating the shared cache space among the queues that still need it.
6. A cache space allocation control apparatus, the apparatus comprising an analysis module, a processing module, and an allocation module, wherein:

the analysis module is configured to analyze whether, among the queues, there are queues that need to be allocated shared cache space, obtain a first analysis result, and send the first analysis result to the processing module;

the processing module is configured to determine the number of queues that need to be allocated shared cache space when the first analysis result sent by the analysis module indicates that such queues exist;

the allocation module is configured to allocate the shared cache space to the queues that need it, according to the number of such queues determined by the processing module.
7. The cache space allocation control apparatus according to claim 6, wherein:

the processing module is further configured to reserve a preset amount of cache space out of the shared cache space and take the cache space remaining after the reservation as the cache space to be allocated;

the allocation module is further configured to allocate the cache space to be allocated, acquired by the processing module, to the queues that need shared cache space according to their determined number.
8. The cache space allocation control apparatus according to claim 6 or 7, wherein the allocation module is further configured to allocate exclusive cache space to each queue.
9. The cache space allocation control apparatus according to claim 8, wherein:

the analysis module is further configured to analyze whether the exclusive cache space allocated to a queue satisfies the queue's execution needs, obtain a second analysis result, and send the second analysis result to the processing module;

the processing module is further configured to determine that the queue is a queue that needs to be allocated shared cache space when the second analysis result sent by the analysis module indicates that the allocated exclusive cache space does not satisfy the queue's execution needs.
10. The cache space allocation control apparatus according to claim 6 or 7, wherein:

the analysis module is further configured to analyze whether the shared cache space allocated to a queue that needs shared cache space is occupied, obtain a third analysis result, and send the third analysis result to the processing module;

the processing module is further configured to determine, when the third analysis result sent by the analysis module indicates that such a queue's allocated shared cache space is not occupied, that the queue no longer needs shared cache space, and to decrement the determined number of queues that need shared cache space by one;

the allocation module is further configured to allocate the shared cache space to the queues that still need it, according to the decremented number of queues from the processing module.
11. A computer storage medium, the computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the cache space allocation control method according to any one of claims 1 to 5.
PCT/CN2014/078751 WO2014173356A1 (zh) 2013-08-26 2014-05-29 Cache space allocation control method, apparatus and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310377185.3A 2013-08-26 Method and device for controlling allocation of cache space for multiple queues
CN201310377185.3 2013-08-26

Publications (1)

Publication Number Publication Date
WO2014173356A1 true WO2014173356A1 (zh) 2014-10-30

Family

ID=51791086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/078751 WO2014173356A1 (zh) 2014-05-29 Cache space allocation control method, apparatus and computer storage medium

Country Status (2)

Country Link
CN (1) CN104426790B (zh)
WO (1) WO2014173356A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106330770A (zh) * 2015-06-29 2017-01-11 深圳市中兴微电子技术有限公司 Shared cache allocation method and device
CN107347039B (zh) * 2016-05-05 2020-02-21 深圳市中兴微电子技术有限公司 Method and device for managing shared cache space
CN107818056B (zh) * 2016-09-14 2021-09-07 华为技术有限公司 Queue management method and device
CN108092787B (zh) * 2016-11-21 2020-04-14 中国移动通信有限公司研究院 Cache adjustment method, network controller and system
CN106681830B (zh) * 2016-12-21 2019-11-29 深圳先进技术研究院 Task cache space monitoring method and device
CN106776043A (zh) * 2017-01-06 2017-05-31 郑州云海信息技术有限公司 Method and device for allocating cache quotas to clients on a file basis
CN109428829B (zh) * 2017-08-24 2023-04-07 中兴通讯股份有限公司 Multi-queue cache management method, device and storage medium
CN110007867B (zh) * 2019-04-11 2022-08-12 苏州浪潮智能科技有限公司 Cache space allocation method, device, equipment and storage medium
CN112000294A (zh) * 2020-08-26 2020-11-27 北京浪潮数据技术有限公司 IO queue depth adjustment method and device, and related components

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1881937A (zh) * 2005-05-02 2006-12-20 美国博通公司 Method and device for dynamically allocating storage space to multiple queues
CN101609432A (zh) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 Shared cache management system and method
CN102347891A (zh) * 2010-08-06 2012-02-08 高通创锐讯通讯科技(上海)有限公司 Method for using a shared cache

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN1878144A (zh) * 2006-07-14 2006-12-13 华为技术有限公司 Multi-queue flow control method
CN101998505B (zh) * 2009-08-12 2013-06-12 中兴通讯股份有限公司 HSDPA data caching method and mobile terminal
CN102088395B (zh) * 2009-12-02 2014-03-19 杭州华三通信技术有限公司 Method and device for adjusting media data buffering
US8566532B2 (en) * 2010-06-23 2013-10-22 International Business Machines Corporation Management of multipurpose command queues in a multilevel cache hierarchy
CN102916903B (zh) * 2012-10-25 2015-04-08 华为技术有限公司 Cache adjustment method and device

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN1881937A (zh) * 2005-05-02 2006-12-20 美国博通公司 Method and device for dynamically allocating storage space to multiple queues
CN101609432A (zh) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 Shared cache management system and method
CN102347891A (zh) * 2010-08-06 2012-02-08 高通创锐讯通讯科技(上海)有限公司 Method for using a shared cache

Also Published As

Publication number Publication date
CN104426790A (zh) 2015-03-18
CN104426790B (zh) 2019-02-26

Similar Documents

Publication Publication Date Title
WO2014173356A1 (zh) Cache space allocation control method, apparatus and computer storage medium
WO2021174735A1 (zh) Dynamic resource regulation method and system for guaranteeing the latency SLO of latency-sensitive applications
JP3872716B2 (ja) Packet output control device
US10097478B2 (en) Controlling fair bandwidth allocation efficiently
TWI629599B (zh) IO port scheduling method for virtual disks and scheduling device therefor
WO2017000673A1 (zh) Shared cache allocation method and device, and computer storage medium
US20130212594A1 (en) Method of optimizing performance of hierarchical multi-core processor and multi-core processor system for performing the method
US8588242B1 (en) Deficit round robin scheduling using multiplication factors
CN106464733B (zh) Method and device for adjusting virtual resources in cloud computing
CN107347039B (zh) Method and device for managing shared cache space
WO2013078588A1 (zh) Virtual machine memory adjustment method and device
RU2643666C2 (ru) Method and device for controlling virtual output queue authorization, and computer storage medium
CN114390000B (zh) TSN traffic scheduling method based on enqueue shaping, and related equipment
WO2021115196A1 (zh) DPDK data encryption processing method, device, and network equipment
CN106716368B (zh) 用于应用的网络分类
WO2016082603A1 (zh) Scheduler and dynamic multiplexing method for a scheduler
US10708195B2 (en) Predictive scheduler
Kogan et al. Balancing work and size with bounded buffers
CN106201721B (zh) Dynamic memory adjustment method and system based on virtualization technology
KR20120055946A (ko) Packet scheduling method and apparatus based on fair bandwidth allocation
Iqbal et al. Instant queue occupancy used for automatic traffic scheduling in data center networks
WO2016000502A1 (zh) Resource allocation method and device, and computer storage medium
Behnke et al. Towards a real-time IoT: Approaches for incoming packet processing in cyber–physical systems
US20200081889A1 (en) Dynamic block intervals for pre-processing work items to be processed by processing elements
CN109144664B (zh) Virtual machine dynamic migration method based on differences in users' quality-of-service requirements

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14788551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14788551

Country of ref document: EP

Kind code of ref document: A1